id
stringlengths 10
10
| title
stringlengths 7
231
| abstract
stringlengths 3
2.43k
| authors
stringlengths 5
21.5k
| published_date
stringlengths 20
20
| link
stringlengths 33
34
| markdown
stringlengths 133
1.92M
|
---|---|---|---|---|---|---|
2302.07497 | OGLE-2017-BLG-1038: A Possible Brown-dwarf Binary Revealed by Spitzer
Microlensing Parallax | We report the analysis of microlensing event OGLE-2017-BLG-1038, observed by
the Optical Gravitational Lensing Experiment, Korean Microlensing Telescope
Network, and Spitzer telescopes. The event is caused by a giant source star in
the Galactic Bulge passing over a large resonant binary lens caustic. The
availability of space-based data allows the full set of physical parameters to
be calculated. However, there exists an eightfold degeneracy in the parallax
measurement. The four best solutions correspond to very-low-mass binaries near
($M_1 = 170^{+40}_{-50} M_J$ and $M_2 = 110^{+20}_{-30} M_J$), or well below
($M_1 = 22.5^{+0.7}_{-0.4} M_J$ and $M_2 = 13.3^{+0.4}_{-0.3} M_J$) the
boundary between stars and brown dwarfs. A conventional analysis, with scaled
uncertainties for Spitzer data, implies a very-low-mass brown dwarf binary lens
at a distance of 2 kpc. Compensating for systematic Spitzer errors using a
Gaussian process model suggests that a higher mass M-dwarf binary at 6 kpc is
equally likely. A Bayesian comparison based on a galactic model favors the
larger-mass solutions. We demonstrate how this degeneracy can be resolved
within the next ten years through infrared adaptive-optics imaging with a 40 m
class telescope. | Amber Malpas, Michael D. Albrow, Jennifer C. Yee, Andrew Gould, Andrzej Udalski, Antonio Herrera Martin, Spitzer Team, :, Charles A. Beichman, Geoffery Bryden, Sebastiano Calchi Novati, Sean Carey, Calen B. Henderson, B. Scott Gaudi, Yossi Shvartzvald, Wei Zhu, KMTNet Collaboration, :, Sang-Mok Cha, Sun-Ju Chung, Cheongho Han, Kyu-Ha Hwang, Youn Kil Jung, Dong-Jin Kim, Hyoun-Woo Kim, Seung-Lee Kim, Chung-Uk Lee, Dong-Joo Lee, Yongseok Lee, Byeong-Gon Park, Richard W. Pogge, Yoon-Hyun Ryu, In-Gu Shin, Weicheng Zang, OGLE Collaboration, :, Patryk Iwanek, Szymon Kozlowski, Przemek Mróz, Pawel Pietrukowicz, Radoslaw Poleski, Krzysztof A. Rybicki, Jan Skowron, Igor Soszyński, Michal K. Szymański, Krzysztof Ulaczyk | 2023-02-15T07:07:37Z | http://arxiv.org/abs/2302.07497v1 | # OGLE-2017-BLG-1038: A Possible Brown-dwarf Binary Revealed by Spitzer Microlensing Parallax
###### Abstract
We report the analysis of microlensing event OGLE-2017-BLG-1038, observed by the Optical Gravitational Lensing Experiment, Korean Microlensing Telescope Network, and Spitzer telescopes. The event is caused by a giant source star in the Galactic Bulge passing over a large resonant binary lens caustic. The availability of space-based data allows the full set of physical parameters to be calculated. However, there exists an eightfold degeneracy in the parallax measurement. The four best solutions correspond to very-low-mass binaries near (\(M_{1}=170^{+40}_{-50}M_{J}\) and \(M_{2}=110^{+20}_{-30}M_{J}\)), or well below (\(M_{1}=22.5^{+0.7}_{-0.4}M_{J}\) and \(M_{2}=13.3^{+0.4}_{-0.3}M_{J}\)) the boundary between stars and brown dwarfs. A conventional analysis, with scaled uncertainties for Spitzer data, implies a very-low-mass brown dwarf binary lens at a distance of 2 kpc. Compensating for systematic Spitzer errors using a Gaussian process model suggests that a higher mass M-dwarf binary at 6 kpc is equally likely. A Bayesian comparison based on a galactic model favors the larger-mass solutions. We demonstrate how this degeneracy can be resolved within the next ten years through infrared adaptive-optics imaging with a 40 m class telescope.
binaries: general -- brown dwarfs -- gravitational lensing: micro
## 1 Introduction
Microlensing is a phenomenon in which the path of light emitted from a distant star (the source) is bent by a curve in space-time, caused by a massive object (the lens). If the source is approximately behind the lens, as seen by an observer, it brightens as unresolved images of the source are formed about the Einstein ring that has angular radius
\[\theta_{\rm E}=\sqrt{\frac{4GM}{c^{2}}\left(\frac{\rm au}{D_{\rm L}}-\frac{ \rm au}{D_{\rm S}}\right)}=\sqrt{\kappa M\pi_{\rm rel}}, \tag{1}\]
where \(\pi_{\rm rel}={\rm au}(D_{\rm L}^{-1}-D_{\rm S}^{-1})\), \(M\) is the mass of the lens system, \(D_{\rm L}\) and \(D_{\rm S}\) are the distance to the lens and source, respectively, and \(\kappa=4G/(c^{2}{\rm au})\sim 8.14\,{\rm mas}/M_{\odot}\).
For transient alignments, where the closest angular separation of the source and lens is on the order of \(\theta_{\rm E}\) or smaller, photometric microlensing events can be observed as increasing and decreasing apparent brightness of the combination of the source star and unresolved neighbors, including the lens. Because only the source light is magnified, the luminosity of the lens system does not directly contribute to the event detection rate. As a result, microlensing is uniquely sensitive to the detection of low-mass, dim lenses such as brown dwarfs (BD; for example, Gould et al., 2009, Shvartzvald et al., 2016, and Han et al., 2020) and unbound planetary-mass objects (for example, Mroz et al., 2020 and Mroz et al., 2019) as proposed by Paczynski (1986).
A limitation of the microlensing method is that, for most microlensing events, the light-curve model leaves a degeneracy between the mass and distance of the lens. This degeneracy can in principle be resolved either by measuring two other parameters (the Einstein radius \(\theta_{\rm E}\), and the microlens parallax \(\pi_{\rm E}\)) or by separately observing the lens and source some years after the event in high-resolution images. While \(\theta_{\rm E}\) has been measured for most planetary and binary events published to date, \(\pi_{\rm E}\) has not.
For events with an extremely dim lens, proper-motion measurement via late time imaging is not feasible at typical lens distances, given current observing capabilities. Breaking the mass-distance degeneracy for very faint lens systems thus requires a measurement of the microlens parallax. The spatial separation between observers required to detect parallax at a single epoch depends on characteristics of the microlensing event, such as the distance to the lens system and duration of the event. Because of the large separation between Earth and the Spitzer Space Telescope (located more than 1 au distant from Earth), microlensing observations from Spitzer, in conjunction with those from Earth, provide a reliable means of measuring parallax. This uniquely wide separation is what motivated the Spitzer microlensing project (Yee et al., 2015).
Microlensing has been used to discover 34 BDs from beyond the local regime (Chung et al., 2019). So far, this extended population has demonstrated unusual dynamics, such as an unexpected number of counter-rotating BDs (Chung et al., 2019; Shvartzvald et al., 2019, 2017). It is unclear to what degree these extreme kinematics are representative of the population as a whole.
BDs are stellar-like objects that are not massive enough to maintain a sufficient core temperature for main-sequence hydrogen fusion. Though the more massive BDs are capable of lithium fusion, and most BDs are capable of deuterium fusion, these processes do not provide sufficient heat to stop BDs from gradually cooling as they radiate the heat generated during their formation. As a result, they are very faint and become fainter as they age. Deuterium fusion occurs in objects with masses of approximately \(>13\,M_{J}\). This is often adopted as a criterion to distinguish BDs from planets; objects below this mass are planets, be they bound to a stellar object or free floating. However this mass definition is sometimes in conflict with the formation definition: BDs form like stars and planets form in circumstellar disks.
All but five of the microlensing BDs have been detected as binary systems. The number of BDs detected in binaries makes up an artificially high proportion of the total number of detections because binary events have more easily detected finite-source effects and therefore are more likely to have their associated masses calculated. Some of these have member masses at about the deuterium fusion limit (Choi et al., 2013; Han et al., 2017; Albrow et al., 2018), supporting the arguments of Grether & Lineweaver (2006) and Chabrier et al. (2014) for a mass overlap between the gas-giant planet and BD regimes. Deuterium fusion has become an insufficient metric for classification between BD and gas-giant planets. These populations have distinct formation histories, which, though difficult to infer, provide a more meaningful way to separate them in the mass-overlap region.
The upper BD cutoff is defined by sustained hydrogen fusion. Studies evaluating the hydrogen burning limit are summarised in Table 5 of Dieterich et al. (2018), from which we deduce that the BD upper limit is in the range of (\(\sim 70-95\,M_{J}\)). This variance has a large dependence on chemical composition (e.g., Chabrier & Baraffe, 1997). Forbes & Loeb (2019) investigate the idea of over-massive BDs. These are theoretically formed through Roche lobe overflow. The result is that, with only the mass information to draw from, this cutoff is vague.
Little is known about the very low mass end of the stellar initial mass functions (IMF). The empirical IMFs of Kroupa (2001), Chabrier (2005), Thies & Kroupa (2007); Thies & Kroupa (2008), and Kroupa et al. (2013) show disparity with the theoretical IMFs deduced from analytical descriptions of pre-stellar-cloud core distributions (Padoan & Nordlund, 2002; Hennebelle & Chabrier, 2008; Hennebelle & Chabrier, 2009) at the very-low-mass end, approximately between \(84\,M_{J}\) and
(\(0.08\,M_{\odot}\) and \(0.2\,M_{\odot}\)). Empirical IMFs usually require assumptions about age and metalicity in order to determine the IMF from an observed luminosity function. Observationally, measuring a mass function across the entire stellar mass range is challenging because sampling the upper mass range requires massive star clusters, and sampling the lower mass range requires nearby clusters. With the closest massive clusters at distances of a few kiloparsecs, observing both ends of the mass function in one star cluster is not currently possible photometrically (Elmegreen, 2009). Wegg et al. (2017) shows one way in which microlensing surveys can be used to probe the IMF of the inner Milky Way, although this method used an existing dynamical model to infer the masses from the timescales (\(t_{\rm E}\)) of \(\sim 4000\) events and therefore is not purely empirical. The timescales considered were \(2\,\)days \(<t_{\rm E}<200\,\)days, which relates to the mass via \(t_{\rm E}^{2}\propto M\).
Currently, photometric surveys are only capable of probing relatively bright and very local populations of BDs. For example, Rosell et al. (2019) quote a distance limit in their Dark Energy Survey catalog of "beyond \(400\,\)pc". This selection bias in observability provides a limited view of BDs, in distance, mass and age. Further detections of very-low-mass objects in binary systems, will help to clarify our understanding of the dynamical properties of BD populations and the low-mass end of the IMF, because such systems are likely to have been formed as part of the very-low-mass end of the IMF, not like planets in a circumstellar disk.
The following sections in this paper describe our analysis of microlensing event OGLE-2017-BLG-1038 and how we determined this event to be a BD binary. SS2 describes the observations made of this event, and the data-reduction methods used. SS3 outlines our analysis of the ground-based data and resulting conclusions about source star characteristics. SS4 details our analysis of the space-based, Spitzer data and our final modeling results. The corresponding physical parameters for our most likely models are calculated in SS5. In SS6 we compare the relative probabilities of our best model solutions and then we discuss, in SS7, how different assumptions of the galactic model, as well as selection effects, may influence these probabilities.
## 2 Data Collection and Reduction
OGLE-2017-BLG-1038 is located at (R.A., decl.)\({}_{\rm J2000}=(17:58:36.55,\,-27:18:58.4)\), \((l,b)=(2.8536,-1.6382)^{\circ}\). It was first identified as a microlensing event candidate by the Optical Gravitational Lensing Experiment early warning system (OGLE; Udalski et al., 1994), on 2017 June 3, from their ongoing survey (mostly in \(I\) band) using the \(1.3\,\)m Warsaw telescope in the Las Campanas Observatory in Chile. Repeated OGLE observations of the event took place at an interval of mostly 1 day.
The Korean Microlensing Telescope Network (KMTNet; Kim et al., 2016) also discovered this event as KMT-2017-BLG-0363 and observed it in the \(V\) and \(I\) bands. OGLE-2017-BLG-1038 was observed in two overlapping KMTNet search fields (BLG03 and BLG43), from each of the three KMTNet telescopes: Cerro Tololo Inter-American Observatory (KMT-C), South African Astronomical Observatory (KMT-S), and Siding Springs Observatory (KMT-A). This resulted in a cadence of \(\sim 15\) minutes between successive observations. The KMTNet observations were also primarily made in the \(I\) band. However, occasional _V_-band observations were made to provide color information. Therefore, 12 sets of KMTNet light curves were obtained for this event.
The end of the event was also observed by the Spitzer Space Telescope Infrared Array Camera (IRAC; Fazio et al., 2004) instrument at an approximately 1 day cadence. While both the KMTNet and OGLE observations were made as part of regular survey operations, the Spitzer observations were scheduled for this event specifically as part of a program to enable space-parallax measurements for microlensing events (Calchi Novati et al., 2015; Yee et al., 2015). This event was selected for Spitzer observations on 2017 June 13 (HJD' = 7918.11) and met the objective criteria on 2017 June 19 (HJD' = 7923.95). Both of these selections took place before the binary nature of the event was recognized, i.e., when it was still believed to be a point lens. Members of the Spitzer Team first noticed that the event was anomalous on 2017 June 20 (HJD' 7925.04).
Kinematic measurements from the source star in this event, as well as surrounding field stars, were obtained from Gaia Early Data Release 3 (Gaia Collaboration et al., 2020, 2016).
The ground-based data were reduced using difference imaging (Tomaney and Crotts, 1996; Alard and Lupton, 1998) procedures. The OGLE images were reduced with their custom difference image procedures (see Wozniak, 2000). The KMTNet light curves were extracted from the images using pyDIA (Albrow, 2017) software, and the Spitzer light curve was extracted by the methods detailed in Calchi Novati et al. (2015).
## 3 Ground-Based Analysis
The light curve of this event (see Figure 1) has a triple-peaked perturbation over a 5 day period (2017 June 22-27) with the three peaks showing smoothed curves, indicative of a resolved source crossing a caustic. Caustics are features of a multiple-lens system. Therefore,
we began our modeling with a binary-lens model, which we ultimately found was sufficient to describe the light curves for this event.
The binary-lens model is parameterized by (\(s\), \(q\), \(\rho\), \(u_{0}\), \(\alpha\), \(t_{0}\), \(t_{\rm E}\)), where \(s\) is the angular separation of the two lens masses in units of \(\theta_{\rm E}\), \(q\) is the mass ratio of the lens objects, \(\rho\) is the source angular radius in units of \(\theta_{\rm E}\), \(u_{0}\) is the closest line-of-sight point of approach to the lens center of mass made by the source in its relative trajectory (again in units of \(\theta_{\rm E}\)), and \(t_{0}\) is the time at which this happens (\(|u_{0}|=u\,(t_{0})\), where \({\bf u}(t_{i})\) is the position of the source, projected onto the lens plane, at a given time, (\(t\)), \(\alpha\) is the angle of the projected rectilinear source trajectory relative to an axis that passes through the lens masses, and \(t_{\rm E}\) is the Einstein radius crossing time (the time the source takes to travel an angular distance of \(\theta_{\rm E}\)). For simplification, the motions in these models were considered from the reference frame of the lens system. This meant that, for modeling purposes, the relative velocities of any of the bodies involved were attributed to the "source velocity".
Our analysis of the ground-based light curves began by performing a grid search over a fixed resolution on \(s\), \(q\), \(u_{0}\), and \(\alpha\), using point-source approximations away from the caustics, for their computational speed, and convolved magnification maps in high-magnification regions, where finite-source effects were significant. The other model parameters were fitted by \(\chi^{2}\) minimization with \(\rho\) values found by interpolating between grid points with discrete convolutions. These calculations used a modified version of the Microlensing Observations Rapid Search for Exoplanets code (McDougall & Albrow, 2016).
The best 20 grid solution regions were further investigated using the Emcee sampler (Foreman-Mackey et al., 2013). For this process we used the more accurate Image Centered Inverse RAy Shooting (ICIRAS) (Bennett, 2010) or contour integration (Bozza, 2010; Bozza et al., 2018) methods to calculate the model magnification in regions close to caustics, and the hexadecapole approximation (Pejcha & Heyrovsky, 2009; Gould, 2008) otherwise. A fixed limb-darkening coefficient (\(\Gamma=0.53\))1 was applied to the source in these calculations. Two of the regions converged to the same, and significantly most likely solution, while the next most likely solution had a \(\Delta\chi^{2}\) of \(\sim 110\,000\), before renormalization. The geometry of this static, ground-based solution is shown in Figure 2, and the magnification curve, with ground-based data, is shown in Figure 1. The fitted model parameters are displayed in Table 1 as the Static model. The solution corresponds to a source passing over the edges of a large resonant caustic. We note that this solution corresponds to small negative blending for three of the data sources, though this is a normal occurrence for microlensing photometry in a very crowded bulge field (Park et al., 2004), especially for dim lenses. Table 1 shows \(F_{\rm B}/F_{\rm S}\) for the OGLE source, which is within \(2\,\sigma\) of being positive.
Figure 1: Magnification curves resulting from the fitted static binary-lens model.
Figure 2: Lens system and caustic geometry resulting from the \((s,q)=(1.0,1.0)\) grid seed, with a projected source trajectory, for the static binary-lens model, fitted to the ground-based data. Colored circles show the source position at the times of the data points, where the colors correspond to those specified in Figures 1 and 4, and the circle size depicts the source size.
The source fluxes for each data set, were found from a linear fit;
\[F_{i}=A_{i}\times F_{\rm S}+F_{\rm B}, \tag{2}\]
where \(F_{\rm S}\) is the source-star flux, \(F_{\rm B}\) is the blended flux2, \(A_{i}\) is the magnification at time \(t_{i}\), and \(F_{i}\) is the observed total flux at time \(t_{i}\). This solution to the static model was used to renormalize the ground-based data uncertainties (see Yee et al., 2012), and the solution was then allowed to reconverge.
Footnote 2: The blended flux is made up of the nonlensed contributions to the light-curve flux measurements, from light sources near the line of sight. Sometimes the largest contributor to this flux component is the lens star, though this is rarely the case.
### Lens Orbital Motion or Ground-Based Parallax?
Although the peaks of the light curve are well fitted by this static-lens, rectilinear-source model, there is a region between dates 7915-7922 where the model systematically underpredicts the data (Figure 3). In Figure 4 we show the cumulative \(\chi^{2}\) as a function of time for each individual data set. All curves show significant jumps near 7915-7922, indicating that there is a real missing feature in our static model. Higher-order effects are required for the model to provide a good description of these data.
Common high-order effects in microlensing light curves are orbital parallax (motion of Earth during an event) and orbital motion of the binary-lens system. A known degeneracy exists between these. Suspecting the significance of one or both of these higher-order effects, we added them to the generative model, both collectively and separately. We approximated the orbital motion of the lens objects by allowing \(\alpha\) and \(s\) to vary linearly with time, adding the model parameters \(\dot{\alpha}\) and \(\dot{s}\). Modeling the parallax effect requires the introduction of two new parameters, \((\pi_{\rm E,N},\pi_{\rm E,E})\), which are components of the vector \(\boldsymbol{\pi}_{\rm E}\), where \(\|\boldsymbol{\pi}_{\rm E}\|=\frac{\pi_{\rm rel}}{\theta_{\rm E}}\), and its direction is that of the lens-source relative proper motion. The introduction of measurable parallax breaks the reflected symmetry of the source trajectory about the lens axis; a trajectory above the lens axis is not equivalent to a trajectory below the lens axis (except in the limit that the source lies exactly on the ecliptic). We therefore modeled both positive and negative \(u_{0}\) solutions in which parallax was considered. For those solutions with both parallax and lens orbital motion, we calculate \(\beta\) (the ratio of the projected kinetic to potential energy of the lens; An et al., 2002; Dong et al., 2009), where values less than unity indicate a lens system consistent with a bound orbit;
\[\beta=\frac{2(\rm au)^{2}}{c^{2}}\frac{\pi_{\rm E}}{\theta_{\rm E}}\frac{\left[ \left(\frac{1}{\delta}\frac{ds}{dt}\right)^{2}+\left(\frac{da}{dt}\right)^{2} \right]s^{3}}{\left[\pi_{\rm E}+\left(\frac{\pi_{\rm S}}{\theta_{\rm E}} \right)\right]^{3}}. \tag{3}\]
In our investigations of the significance of these two higher-order effects (Table 1), we find that, alone, lens orbital motion describes the static model discrepancies better than parallax. Including both higher-order effects yields only a minor \(\chi^{2}\) reduction compared with the purely lens-orbital-motion model, and the lens-orbital
Figure 4: Cumulative \(\chi^{2}\) plot for the renormalized static model.
Figure 3: 7915-7922 HJD crop of the magnification model from the best, static fit (solid black lines) and the corresponding ground-based light-curve data with renormalized errors. The data show a clear trend above the fit line in this region. The dotted black lines show the lens-orbital-motion-inclusive magnification model used in the next step of this event analysis. Outside of this crop region the two models are visually indistinguishable. The data colors correspond to those specified in Figures 1 and 4
motion parameters change very little. (The low \(\beta\) values for these models show that the implied orbits are bound.) Conversely, the posteriors of the parallax model change drastically when lens orbital motion is added. We therefore conclude lens orbital motion is well constrained and sufficient to describe the deviation on the static model from 7915-7922. This model is illustrated by the dotted lines in Figure 3.
### Source Color
Color-magnitude Diagrams (CMDs) were created for each KMTNet observation site and field with \(I\) and \(V\) data (KMTC-03; Figure 5, KMTC-43, KMTS-03, KMTS-43, KMTA-03, and KMTA-43). We use the normal KMT practice of adopting magnitude zero points of \(I_{ZP}=28\) and \(V_{ZP}=28.65\). The source-star fluxes, obtained from fitting the magnification model to each light curve, were used to find the source star's position on the corresponding CMDs. The source fluxes for the highest likelihood solution (ground based) are given in Table 2.
The red clump in each CMD was centroid fitted, and acted as a calibration for obtaining the intrinsic colors and magnitudes of the field. The galactic bulge red clump can be used to calibrate the CMD because its intrinsic color and magnitude are known to high precision. The intrinsic color of the red clump is \((V-I)_{\rm RC,0}=1.06\)(Bensby et al., 2011). The intrinsic I-magnitude of the red-clump was found by interpolating the extinction correction table from (Nataf et al., 2013) for the target's galactic longitude (\(l=2.85^{\circ},\,b=-1.64^{\circ}\)); \(I_{\rm RC,0}=14.35\pm 0.04\). Assuming that the source is obscured by the same amount of dust as the average red clump star in this field, \((V-I)_{\rm RC,0}\) and \(I_{\rm RC,0}\) provide an absolute color and magnitude calibration to the CMDs.
Using the mean calibrated color and magnitude, the intrinsic magnitude and color of the source was found to be \((I_{0},(V-I)_{0})=(14.01\pm 0.05,1.11\pm 0.04)\), averaged over all six CMDs. These values are very similar for each of the possible solutions for the final model.
This source color information was also used to infer the Spitzer source flux and a color-color relation between KMTC-03 and Spitzer using the method of Calchi Novati et al. (2015b). The expected Spitzer Source flux is \(F_{\rm S,L}=56.1\pm 1.7\) and the optical-infrared source color is \((I-L)_{\rm S}=-4.43\pm 0.03\), with an \(L\)-magnitude zero point of 25.
## 4 Inclusion of Satellite Data
Having a Spitzer light curve for this event meant that, despite there being very inconclusive orbital parallax signals in the ground-based data, parallax could still be measured (Refsdal, 1966). In this section we describe our analysis of the space-based Spitzer data using typical error renormalization methods, discuss concerns over systematics errors in the data, and present an alternate approach to coping with such systematics.
### Satellite Parallax Degeneracies
Figure 6 shows the raw Spitzer data and a corresponding magnification curve from estimating \(F_{\rm S}=56.1\) (as is suggested by the color comparisons of SS3.2), \(F_{\rm B}=0\), and adopting the ground-based model. In this figure, we can see a clear, decreasing signal that has \(\Delta F>30\) Spitzer flux units. The Spitzer data are inconsistent with very small parallax, as the shape of the magnification curve is not well represented by the static ground-based model, and no alternative values of \(F_{\rm S}\) and \(F_{\rm B}\) could bring them into agreement. At the time of the first Spitzer observation, the ground-based light curve is still exiting the cusp while the Spitzer data are clearly not. This is strong evidence for a parallax effect. At the same time, the required magnification change as seen from Spitzer (\(\Delta A\sim 1.6\)) indicates that the parallax cannot be too large.
When viewed from Spitzer, the angular source trajectory across the lens plane is offset by a vector \((\Delta\beta,\Delta\tau)/\theta_{\rm E}\), in directions (perpendicular, parallel) to \(\mathbf{D}_{\perp}\), the separation between Spitzer and Earth pro
Figure 5: Color-magnitude diagram from the KMTC-03 field with the fitted centroid of the red clump and the source position indicated by the red “+” and blue “+”, respectively.
jected onto the lens plane. This vector is related to the parallax measurement, but can be more useful in understanding the parallax likelihood space in comparison with the caustic diagram representation of the event. The two parameters \((\Delta\beta,\Delta\tau)\) can be mapped onto \(\pi_{\rm E,E}\) and \(\pi_{\rm E,N}\), via \(\mathbf{\pi}_{\rm E}=\frac{\rm au}{D_{\perp}}\left(\Delta\tau,\Delta \beta\right).\) The parallel offset is simply
\[\Delta\tau=\frac{t_{0,{\rm Spitzer}}-t_{0,{\rm Earth}}}{t_{\rm E}}. \tag{4}\]
In the case of a single lens, the perpendicular offset suffers from a four-fold satellite parallax degeneracy,
\[\Delta\beta=\pm u_{0,{\rm Spitzer}}-\pm u_{0,{\rm Earth}}, \tag{5}\]
due to the exact circular symmetry of the magnification field about the lens (Refsdal, 1966), as illustrated in Gould (1994). (The sign convention we adopt here is that a positive value of \(u_{0}\) indicates that, during its projected trajectory, the source approaches the lens center of mass on its right hand side.) In general, this fourfold degeneracy usually reduces to twofold with the addition of a second lens body, as the resulting caustic features break the symmetry of the magnification field. However, for binary-lens events in which the trajectory runs approximately parallel to the lens axis (such as the current case), trajectories reflected about the lens axis result in similar magnification curves, in which case the four-fold degeneracy is retained (Zhu et al., 2015).
A grid-search approach was used to determine the most likely parallax-solution regions. With the inclusion of space-based data, the two parallax parameters (\(\pi_{\rm E,N}\) and \(\pi_{\rm E,E}\)) were added to the model.
When performing the parallax grid search, the ground-based model parameters (including lens orbital motion) were fixed, and a maximum-likelihood search
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & Static & \multicolumn{2}{c}{Parallax} & \multicolumn{2}{c}{LOM} & \multicolumn{2}{c}{Parallax + LOM} \\ \cline{2-6} \(\hat{u}_{0}\) & - & + & - & - & + & - \\ \hline \(s\) & \(0.9833^{+0.0006}_{-0.0005}\) & \(0.9757^{+0.0003}_{-0.0009}\) & \(0.9978^{+0.0011}_{-0.0005}\) & \(0.9932^{+0.0002}_{-0.0009}\) & \(0.9916^{+0.010}_{-0.0009}\) & \(0.9888^{+0.015}_{-0.0004}\) \\ \(q\) & \(0.621\pm 0.002\) & \(0.616^{+0.003}_{-0.002}\) & \(0.590^{+0.002}_{-0.003}\) & \(0.607\pm 0.003\) & \(0.609^{+0.004}_{-0.003}\) & \(0.615^{+0.002}_{-0.005}\) \\ \(\log_{10}\rho\) & \(-1.6027^{+0.0006}_{-0.0011}\) & \(-1.6094^{+0.0011}_{-0.0008}\) & \(-1.5818^{+0.0017}_{-0.0008}\) & \(-1.5842^{+0.0004}_{-0.0012}\) & \(-1.5859^{+0.0018}_{-0.0008}\) & \(-1.5911^{+0.0026}_{-0.0004}\) \\ \(u_{0}\) & \(-0.5693^{+0.0008}_{-0.0006}\) & \(0.468^{+0.003}_{-0.009}\) & \(-0.430^{+0.012}_{-0.005}\) & \(-0.551^{+0.0004}_{-0.0012}\) & \(0.552\pm 0.005\) & \(-0.647^{+0.029}_{-0.004}\) \\ \(\alpha\) & \(-2.9702^{+0.0005}_{-0.0007}\) & \(2.845^{+0.003}_{-0.011}\) & \(-2.853^{+0.011}_{-0.004}\) & \(-2.9710^{+0.0007}_{-0.0006}\) & \(2.965^{+0.007}_{-0.006}\) & \(-3.064^{+0.020}_{-0.004}\) \\ \(t_{0}\) & \(7926.900^{+0.007}_{-0.006}\) & \(7924.47^{+0.07}_{-0.16}\) & \(7927.33^{+0.06}_{-0.04}\) & \(7927.00^{+0.003}_{-0.0011}\) & \(7926.83^{+0.105}_{-0.105}\) & \(7927.37^{+0.03}_{-0.22}\) \\ \(t_{\rm E}\) & \(11.852^{+0.115}_{-0.18}\) & \(13.57^{+0.10}_{-0.07}\) & \(10.55^{+0.02}_{-0.08}\) & \(11.855^{+0.023}_{-0.007}\) & \(12.02^{+0.15}_{-0.10}\) & \(12.33^{+0.11}_{-0.16}\) \\ \hline \(\pi_{\rm EN}\) & 0 & \(-11.5^{+0.3}_{-1.0}\) & \(12.6^{+1.2}_{-0.5}\) & 0 & \(-0.6^{+0.6}_{-0.5}\) & \(-9.6^{+0.1}_{-0.4}\) \\ \(\pi_{\rm E,E}\) & 0 & \(10.7\pm 0.5\) & \(-10.0^{+0.2}_{-0.8}\) & 0 & \(1.1^{+1.1}_{-0.7}\) & \(1.9^{+0.8}_{-0.7}\) \\ \(\hat{s}\) & 0 & 0 & 0 & \(0.30\pm 0.04\) & \(0.31^{+0.08}_{-0.04}\) & \(0.24^{+0.07}_{-0.04}\) \\ \(\hat{\alpha}\) & 0 & 0 & 0 & \(-1.27\pm 0.04\) & \(1.20^{+0.06}_{-0.05}\) & \(-1.57^{+0.10}_{-0.06}\) \\ \hline \(\beta\) & & & & 0.13 & 0.01 \\ \(\chi^{2}_{min}\) & 12592.83 & 11946.44 & 12047.65 & 11468.38 & 11466.26 & 11441.77 \\ \(\Delta\chi^{2}_{min}\) & 0 & -646.40 & -545.19 & -1124.46 & -1126.57 & -1151.07 \\ \(N\) & & & & 12607 & & \\ \hline \(I_{\rm S,OGLE}\) & 16.4 & & & & \\ \(F_{\rm B,OGLE}/F_{\rm S,OGLE}\) & -0.0075 \(\pm\) 0.0041 & & & & \\ \hline \end{tabular} Note. – Those solutions indicated to by “LOM” refer to the models in which lens orbital motion was included. The source magnitude uses a zero point of \(I_{ZP}=28\). N is the total number of light-curve data points. Solutions with \(\beta<1\) are consistent with a bound orbit, but can only be calculated for models including both lens orbital motion and parallax.
\end{table}
Table 1: Comparison of the Highest Likelihood Fit Parameters for Binary-lens Models with and without the Higher-order Effects of Parallax and Lens Orbital Motion, Fit to Ground-based Data, with Renormalized Errors
\begin{table}
\begin{tabular}{c c c c} \hline \hline Source & \(F_{\rm S,\it I}\) & \(F_{\rm S,\it V}\) & \(F_{\rm S,\it L}\) \\ \hline KMTC-03 & 52595.17 & 5000.67 & \\ KMTC-43 & 34761.72 & 5074.91 & \\ KMTS-03 & 50862.79 & 4125.91 & \\ KMTS-43 & 52803.33 & 4511.05 & \\ KMTA-03 & 40084.08 & 4813.62 & \\ KMTA-43 & 38048.48 & 4800.27 & \\ OGLE & 43537.75 & & & \\ _Spitzer_ & & & 56.09 \\ \hline \end{tabular} Note. – These values were calculated using an orbiting, binary-lens model, for each of the ground-based sources. The Spitzer source flux is an estimate based on comparative CMDs between the Spitzer field and the KMTC-03 field.
\end{table}
Table 2: Source fluxes, for Each Observation Source and Band
was performed for the Spitzer light curve over a large range of discrete \(\pi_{\mathrm{E},N}\) and \(\pi_{\mathrm{E},E}\) values.
This grid search indicated that there were four solution regions for the given ground-based model, with the two outer regions having much higher likelihoods (i.e, lower \(\chi^{2}\)) than the two inner regions (Figure 7a). These four solutions regions represent the \(\pm u_{0,\mathrm{Spitzer}}\) degenerate trajectories relating to two distinct solution families. We refer to these families as close (c) and wide (w). The four solutions regions result from only \(-u_{0,Earth}\) and indicate that, including the \(+u_{0,Earth}\) trajectory, we have an eightfold degeneracy for this particular geometry.
Because the Spitzer data only cover the falling part of the light curve and cover no caustic feature, the light curve alone does not contribute very strong constraints on the parallax measurement. We have thus implemented in the modeling an additional \(\chi^{2}\) penalty term that weighted the fit toward a source-flux ratio (between KMT-C03 and Spitzer \(L\)) matching that inferred by the calculated \((I-L)_{0}\) source color, found in SS3.2. This color-constraint (Shin et al., 2017) term was of the form
\[\chi^{2}_{constraint}=\frac{\left(2.5\log_{10}\left(\frac{\left(\frac{F_{L}}{P_ {L}}\right)_{model}}{\left(\frac{F_{L}}{P_{L}}\right)_{constraint}\right)} \right)^{2}}{\sigma^{2}_{constraint}}. \tag{6}\]
The constraint changed the likelihood space of the parallax model. The four solutions-regions from the unconstrained grid, remained as features in the constrained grid. However, the close set of solutions have more comparable likelihoods to the wide set than in the unconstrained grid.
When comparing Figure 7a and Figure 7b, the reason for the four lobes in the likelihood space becomes apparent. For this event, \(\Delta\beta\) approximately aligns with \(\pi_{\mathrm{E},N}\) and \(\Delta\tau\) with \(\pi_{\mathrm{E},E}\). Simplistically, changing \(\Delta\tau\) moves the Spitzer-data nodes backward or forward in time along the Spitzer trajectory, whereas \(\Delta\beta\) shifts the "parallel" space-based trajectory closer to or farther away from the ground-based trajectory. The lobes and connective contours in Figure 7a result from solutions for which the Spitzer data hug the leftmost cusps of the caustic of Figure 7b.
Figure 8 shows a more restricted view of the ground-based trajectory for this set of solutions, and the caustics at key epochs in the light curve, which change over the course of the event due to the orbital motion of the lenses.
Within each wide or close set, the pairs are the previously predicted \(\pm u_{0,\mathrm{Spitzer}}\) degenerate solutions. A further four degenerate solutions are obtained by reflecting all trajectories in Figure 7b (ground and Spitzer) about the lens axis.
The eight degenerate solution regions were further investigated using emcee with both ground-based and Spitzer data, renormalized errors, and both parallax and lens orbital motion included in the model. All model parameters were left free to evolve for all instances. Model parameters for the resulting solutions are given in Table 3. They are all somewhat similar in likelihood with an overall range in \(\Delta\chi^{2}\leq 87\). The best solution found was the c -/+ geometry. All close solutions were favored over the wide by a margin of \(\chi^{2}_{w}-\chi^{2}_{c}\geq 8.96\). The nonfavored close solutions have a range \(12.48<\Delta\chi^{2}<28.06\).
### Spitzer Systematic Errors
Before we can have faith in these Spitzer parallax measurements, we must first address concerns of systematics in the Spitzer light curve.
Yee et al. (2021), Gould et al. (2020), Hirao et al. (2020), and Zang et al. (2020) include detailed investi
Figure 6: Raw Spitzer light curve and model light curve resulting from the static binary-lens model (no parallax), fitted to the ground-based data, and transforming to the Spitzer flux system assuming \(F_{\mathrm{L}}\equiv F_{\mathrm{S},L}A\), where \(F_{\mathrm{S},L}=56.1\) and \((I-L)_{\mathrm{S}}=-7.4\). The ground-based observations have also been scaled to the Spitzer flux system. The residuals between the model and the data are depicted in black for ground-based data and red for _Spitzer_ data. These show a dramatic difference for the \(t<7935\) data.
gations into Spitzer systematics. These investigations point to poorly determined positions of nearby blend stars in combination with the seasonal rotation of the Spitzer camera. This has resulted in variable blended levels (\(F_{\rm B}\)) seen over timescales on the order of tens of days. These works conclude that Spitzer systematics are at the level of \(\sim 1\) Spitzer flux unit where, for a typical event, \(F_{\rm B}\approx 3\). Concerns have been raised for previous events (Zhu et al., 2017; Koshimoto and Bennett, 2019) where the flux levels were \(F_{\rm S}<5\) and thus \(F_{\rm S}\sim F_{\rm B}\), in which case systematics on the order of 1 could be considered fractionally significant.
We now consider whether systematics in the Spitzer data are significant for this event. The Spitzer magnification curve has a bump between \(t=7936\) and \(t=7941\) (corresponding to \(\Delta F\simeq 5\) Spitzer flux units; see Figure 6) that is not produced by any of our best generative model solutions that incorporate satellite parallax (Section 4.1). This implies a systematic error and demonstrates the scale to which we can expect them in this specific Spitzer data set; a few flux units over timescales of around 5 days. This is a higher \(\Delta F\) perturbation than is expected for Spitzer systematic on a smooth curve (typically \(\Delta F\simeq 1\) Spitzer flux units)
The parallax terms in the model are sensitive to small contiguous perturbations in the data, especially for those data after \(t=7955\), where flux changes of a few units change the shape of the slope enough to result in different parallax measurements, which affect the resulting physical solutions.
Figure 8: Caustic diagram with projected ground-based trajectory for the c -/+ solution. The ground-based data points are represented by colored circles on the trajectories, where the colors correspond to the observation site and field, as specified in Figures 1 and 4. The caustics are depicted here at three instances corresponding to the start of the “problem region” (\(7915=t_{0}-1.01t_{\rm E}\)), \(t_{0}\) (\(7926.91\)), and the time of the first Spitzer data point (\(7931.47=t_{0}+0.38t_{\rm E}\))). These epochs are represented on the grounds based trajectory with colors matching their corresponding caustics.
Figure 7: _Left_: contour maps demonstrating the results of the parallax grid searches over discretely varied \(\pi_{{\rm E},E}\) and \(\pi_{{\rm E},N}\) for the \(-u_{0,Earth}\) configuration, including only the Spitzer \(\chi^{2}\) components. The dashed contours show the \(\chi^{2}\) landscape without a color constraint and the solid lines with the constraint. Note that the \(\pi_{{\rm E},N}\)-axis of this figure is reversed from the usual orientation so that the two figures approximately align. _Right_: caustic diagram with projected ground-based and Spitzer-based trajectories (black and red, respectively). The four Spitzer trajectories are the result of minimization from the local \(\chi^{2}\) minima from the left figure, with all modeling parameters free to evolve. The data points are represented by colored circles on the trajectories, where the colors correspond to the observation site and field, as specified in Figures 1 and 4. The caustics change with the orbiting of the lens bodies and are depicted here at the instances of the first and last Spitzer data points, specifically for the c -/+ solution (all four \(u_{0,Earth}<0\) solutions looks very diagrammatically similar to the one shown). These epochs are represented on grounds-based trajectory (also from the c -/+ solution) with colours matching their corresponding caustics.
For this event, we have a Spitzer source flux much larger than the expected blend flux, a light curve with clearly and significantly decreasing flux, and baseline observations. Therefore, we would not ordinarily expect systematics to play a major role in this case. However, this event is somewhat sensitive to systematics in the baseline and shows evidence of similar systematics elsewhere in the light curve. We are therefore cautious of the effects systematic error in the Spitzer data may have on our conclusions.
### Modeling Spitzer Errors
In an attempt to properly consider the apparent systematic errors in the Spitzer data, we have included in our model an error-bar renormalization parameter and two Gaussian process (GP) parameters.
Gaussian processes were first introduced in microlensing event analysis by Li et al. (2019). In this paper they used a GP to model source variability, rather than systematics, as well as a traditional inflated-error-bar scaling method. The GP method achieved better results in their case, as evidenced by the residuals in their Figure 1. However, they adopt their inflated-error-bar scaling model due to multiple practical and theoretical concerns. The practical issues they raise are how to cope with different blending effects between observation sources and how to perform error re-scaling. The blending issue is not relevant in our case because we only apply a GP model on the Spitzer data set. The theoretical issues they raise are in regards to choice of GP kernel and the possibility of degeneracies between the microlensing and GP parameters, for which they saw no evidence in their posterior distributions. We also saw no evidence of degeneracies between microlensing and GP parameters in our posterior distributions. In regards to the choice of GP kernel we tested both the exponential (described below) and Matern 3/2 kernels and found no significant difference between the results. We did not test the kernel used in Li et al. (2019) as it is meant for modeling quasi-periodic variations.
The degenerate solutions of Section 4.1 have reduced \(\chi^{2}\) values that imply that the Spitzer flux uncertainties have been underestimated by factors of between 2 and 5 times before renormalization. Because these factors change for each solution we include a multiplicative Spitzer error renormalization as a free parameter and consequently the likelihood must change to include the penalty
\[\ln P_{\rm S}=-N\ln S,\]
where \(S\) is the Spitzer error scaling factor and \(N\) refers here to the number of Spitzer data points.
Simultaneously we included an exponential GP model to fit the systematic features in the Spitzer light curve using the Celerite package (Foreman-Mackey et al., 2017). This replaces the vector of data variances with a data covariance matrix,
\[K_{nm}=\sigma_{n}^{2}\delta_{nm}+k(t_{n},t_{m}).\]
We use a GP kernel
\[k(\tau_{nm})=a\exp(-c\tau_{nm}),\]
where \(\tau_{nm}=|t_{n}-t_{m}|\), and \(a\) and \(c\) are the GP model parameters.
The GP likelihood is then
\[\ln P_{GP}=-\frac{1}{2}\mathbf{r}^{T}\mathbf{K}^{-1}\mathbf{r}-\frac{1}{2}\ln \det\mathbf{K}-\frac{N}{2}\ln 2\pi,\]
where \(\mathbf{r}\) is the vector of (data - model) residuals.
The results of this modeling are displayed in Table 4. We find that inclusion of these three new model parameters has little effect on the microlensing parameters of all eight solutions, although the spread of likelihood values between solutions does change. With the GP parameters included in the model, our best solution is no longer the c -/+ but the c +/-, although, by a very small margin. The light curve for this model is shown in Figure 9. The full family of close solutions are all similarly likely, \(-2\Delta\ln\mathcal{L}<2.3\), where we consider \(-2\Delta\ln\mathcal{L}\) as an effective \(\Delta\chi^{2}\).
### Model Comparison
When we compare the likelihoods using the standard analysis approach (Table 3) and GP analysis (Table 4), both favour the close solutions. For the close solutions, the range of \(\Delta\chi^{2}<28\) using the standard approach and \(\Delta\chi^{2}_{\rm eff}<3\) using GP.3 The physical properties \(M_{tot}\) and \(D_{\rm L}\) of all four close solutions are in agreement between the standard and GP approaches to within \(1.2\,\sigma\). Therefore, the physical interpretation is the same in both cases.
Footnote 3: Here we refer to effective \(\Delta\chi^{2}\) values, which are calculated via \(\Delta\chi^{2}_{eff}=-2\Delta\ln\mathcal{L}\). This is in order to provide an equivalent scale to the regular \(\Delta\chi^{2}\) values we use to aproraise solutions in the standard approach. For the standard approach \(\Delta\chi^{2}=-2\Delta\ln\mathcal{L}\) because the extra likelihood components are the same for all solutions. We use these two parameters to compare spreads between methods, as they are on the same scale, but we do not directly compare solutions between methods, as these values are not equivalent.
This is not true of all the degenerate wide-family solutions. While the large-parallax solutions remain most disfavoured between approaches, with matching \(M_{tot}\)
and \(D_{\rm L}\) values (masses at or below the deuterium fusion limit, 2 kpc away), the small-parallax solutions tell a different story. Using the standard approach, all wide solutions are disfavoured by a \(\Delta\chi^{2}>37\). However, using the GP approach the small-parallax solutions have \(\Delta\chi^{2}_{\rm eff}\) values of 7 and 15, within the \(\Delta\chi^{2}\) range of close solutions using the standard approach. The physical interpretation is also different for these two solutions. The physical properties \(M_{tot}\) and \(D_{\rm L}\) differ between approaches by \(<4\sigma\).
Our interpretation is that the physical solutions are not equally sensitive to systematic errors in the Spitzer data. The posteriors of \(\pi_{\rm E,E}\) are wider using the GP approach than the standard approach, especially the wide solutions for which the extrapolated trajectories do not cross caustics. It appears that the parallax measurement (particularly \(\pi_{\rm E,E}\)) is proportionally more affected for smaller parallax solutions, making them more sensitive to systematic errors, but that the affect this has on the close solutions is limited by the nearness to a caustic crossing, which has a dominating effect on the likelihood space. Whether these conclusion are true in general is an interesting thought for future work.
While inflating error bars may be the correct approach for accommodating noise in data that is approximately Gaussian, it is appropriate to use a correlated noise approach where there are obvious systematic trends. The apparent perturbations in our Spitzer data are not represented by any of our best model solutions and therefore show that the errors in this data set are clearly correlated on time scales of a few days. However, the importance of using a correlated noise approach varies for our different solutions families and we believe that the importance of such modelling in other Spitzer events would also be dependent on many event-specific-properties.
Whether or not we consider the expense of a GP approach necessary, in our case, depends on the \(\Delta\chi^{2}_{\rm eff}\) ranges we are prepared to accept. If we accept solutions at the \(\Delta\chi^{2}_{\rm eff}\lesssim 90\) level, all eight degenerate solutions are valid, whether or not a GP is included. However, at the \(\Delta\chi^{2}_{\rm eff}\lesssim 50\) level, we would reject the w -/+ and w +/- solutions using the standard approach. Using the GP approach, we would accept all of these solutions, with w -/- and w +/+ converging into significantly different physical lens compositions.
## 5 Physical parameters
### Angular Einstein Radius
There exist empirical relations for determining the angular size of a star from its intrinsic color and magnitude. According to Kervella & Fouque (2008), the most appropriate of these relations for non-M-type giants are those found in Nordgren et al. (2002) and van Belle (1999).4 We use the Nordgren et al. (2002) surface brightness relation, specifically for non-variable giant stars (their Equation 12),
Footnote 4: Depending on the selection of surface brightness relation, the implied \(\theta_{\rm E}\) differs by around 8%, in this case.
\[\log_{10}(2\theta_{*})=0.5522+0.246\left(V-K\right)_{0}-V_{0}/5. \tag{7}\]
Using the empirical color-color relations of Bessell & Brett (1988) for giant stars we find the \((V-K)_{0}\) equivalent of the intrinsic source color, \((V-I)_{0}\), that was calculated from the CMDs, \((V-K)_{{\rm S},O}=2.57\pm 0.09\). The solutions for the models including higher-order effects have effectively identical \(\theta_{*}=7.6\pm 0.5\,\mu\)as.
\(\theta_{\rm E}\) was calculated using the fitted \(\rho\) value for each solution, where \(\theta_{\rm E}=\theta_{*}/\rho\). The light-curve data provided good coverage of the caustic crossing and therefore \(\rho\)
Figure 9: Spitzer magnification curve for the c +/- solution. The blue lines show the fitted magnification curve for 100 samples from the posterior with the GP effects shown. The red line is the magnification curve matching the parameters in Table 4. The error-bar scaling in this figure corresponds to the red line (\(S_{\rm Spitzer}=2.49\)), and the size of the error bars is not necessarily the scaling used for each of the blue samples. Left: shows the magnification curve over the same time period as Figure 1. Right: covers only the Spitzer data set.
was well constrained, and almost identical, in our models. The calculated value of \(\theta_{\rm E}\) for all solutions is
\[\theta_{\rm E}=0.29\pm 0.02\,{\rm mas}.\]
Knowing \(\theta_{\rm E}\) gives an angular scale to the geometric models.
### Mass, Distance and Separation
The intrinsic I-band magnitude of the source star was previously calculated by comparing its fitted \(I\)-band magnitude to the mean red-clump magnitude on a CMD. By assuming the intrinsic red-clump magnitude and that the source star is at the distance of the average red-clump star in the CMD field, we find \(D_{\rm S}=7.85\pm 0.06\,{\rm kpc}\).
With values for \(D_{\rm S}\), \(\theta_{\rm E}\) and \(\pi_{\rm E}\), the degeneracy in Equation 1 is broken, and the mass and distances can be calculated for each solution. Given the fitted parameters \(\pi_{\rm E,E}\) and \(\pi_{\rm E,N}\), \(\theta_{E}\), and \(D_{\rm S}\), the distance to the lens was found, using
\[\frac{1}{D_{\rm L}[{\rm kpc}]}=\pi_{\rm rel}[{\rm mas}]+\frac{1}{D_{\rm S}[{ \rm kpc}]},\]
where \(\pi_{\rm rel}=\theta_{\rm E}\pi_{\rm E}\). Knowing the distance to the lens system and \(\theta_{\rm E}\) in angular units, the lens geometry can be calculated in absolute terms. The masses for each of the lens components, their projected separations, and the distances to the lens system are given in Tables 3 and 4. All large-parallax solutions, and both small-parallax wide solutions, are consistent with BD binary lenses of varying masses. However, the small-parallax close solutions are consistent with M-dwarf binaries, where the mass of the smaller of the binary objects (\(m_{2}=110^{+20}_{-30}\,M_{J}\)) is very near the BD upper cut-off (\(\sim 70-95\,M_{J}\)) and therefore may or may not be large enough for hydrogen fusion, depending mostly on its chemical composition.
### Proper Motion and Velocity
The relative lens-source heliocentric proper motion was determined via
\[\mathbf{\mu}_{rel,hel}=\frac{\theta_{\rm E}}{t_{\rm E}}\hat{\mathbf{\pi}}_{\rm E}+\frac{\pi_{\rm rel}}{{\rm au}}\mathbf{v}_{ \oplus,\perp}, \tag{8}\]
for each solution, where \(\mathbf{v}_{\oplus,\perp}\) is the projected velocity of Earth at \(t_{0}\), parallel to the lens plane, \(\mathbf{v}_{\oplus,\perp}(N,E)=(-0.104,29.296)\,{\rm km\,s}^{-1}\), and \(\hat{\mathbf{\pi}}_{\rm E}=\mathbf{\pi}_{\rm E}/\left|\pi_{ \rm E}\right|\) is a unit vector in the microlensing parallax direction.
The \(\mu_{rel,hel}\) values for each solution are shown in Tables 3 and 4. All of the degenerate solutions have high relative proper motions, \(\mu_{rel}>8\,{\rm mas\,yr}^{-1}\). A proper motion of \(\mu_{rel}\lesssim 10\,{\rm mas\,yr}^{-1}\) does not innately give any information on the location of the lens i.e. disk vs. bulge. However, if one adds knowledge of the source proper motion, \(\mu_{\rm S}\), then \(\mu_{rel}=8\,{\rm mas\,yr}^{-1}\) may give such information. For example, if \(\mu_{\rm S}\) were at the center of the bulge distribution then a bulge lens would be very unlikely because a proper motion of \(8\,{\rm mas\,yr}^{-1}\) from the centroid is extreme compared with the bulge dispersion of \(\sigma(l,b)=(3.0,2.5)\,{\rm mas\,yr}^{-1}\). Therefore this hypothetical case would favor a disk lens.
The source star for this event was observed by Gaia (EDR3 4063557344313009920), and hence its heliocentric proper motion is precisely measured as
Figure 10: Proper motions of the lens solutions and source star. For context, we also include contour representations of the disk (blue) and bulge (red) distributions. The bulge contours are from histograms of the red-clump stars from Gaia EDR3, selecting stars within a \(0.2^{o}\) radius cone centred on the lens. The distribution of red-clump stars is the results of a Gaussian fit to the red clump on the field’s CMD. The innermost thicker line of the red-clump distribution contains approximately 68% of the population samples. The outermost thicker line contains approximately 95% of the population samples. The blue contours depict the theoretical distribution of the disk stars used in our galactic model. The solid ellipses correspond to the 1 and 2 \(\sigma\) proper-motion dispersions of disk stars at \(D=6\) kpc. The dotted ellipses show the same for disk stars at \(D=2.3\) kpc.
\(\mu_{\rm S,\it hel}(N,E)=(-5.7,-7.7)\pm(0.2,0.3)\,{\rm mas\,yr^{-1}}\), 5 relative to quasars in the distant universe. The source is \(\sim 1\sigma\) due west of the centroid (see Figure 10). This means that a bulge lens is more easily accommodated, provided that direction of \(\mu_{\it rel}\) is roughly east. Similarly, the \(\mu_{\it rel}\) direction most consistent with a disk lens is northeast, although this direction is also very plausible for a bulge lens.
Footnote 5: Here we have doubled the published errors, as recommended by Rybizki et al. (2021).
The heliocentric lens proper motion is calculated via
\[\mu_{\rm L,\it hel}=\mu_{\rm S,\it hel}+\mu_{\it rel,\it hel}. \tag{9}\]
The unexpected outcome of our \(\mu_{\rm L}\) calculations is that none of the eight degenerate solutions align well with the disk or bulge dispersions, as shown in Figure 10. However, this demonstrates a misleading aspect of proper motion comparisons in that closer objects have higher proper- motions given the same tangential velocity.
The lens proper motion relates to the heliocentric lens velocity via
\[v_{\rm L,\it hel}=4.74\times D_{\rm L}\mu_{\rm L,\it hel}, \tag{10}\]
where distance is expressed in kiloparsecs, \(\mu_{\rm L,\it hel}\) is in miliarcseconds per year, and 4.74 is a conversion factor so that \(v_{\rm L,\it hel}\) is in kilometers per second. These physical parameters for each solution can also be found in Tables 3 and 4.
From Figure 10 we can see that the source is a fairly kinematically typical bulge star, lying on the 1\(\sigma\) contour of the Gaia field bulge dispersion.
Comparisons of the lens velocities, from each of the eight degenerate solutions, with disk and bulge dispersions from Gaia EDR3 are shown in Figure 11. These empirical dispersions are used for demonstrative purposes only. All eight lens solutions have unusual velocities when compared to typical disk stars, with the w +/+ and both +/- lens solutions rotating about the galactic center more slowly than typical disk stars, the w -/- and c +/+ counterrotating, and the -/+ and c -/- solutions seemingly moving through the disk, with large \(b\) velocities. The solutions are all less exceptional when compared with bulge kinematics, although only the small-parallax solutions have distances that allow for the lens to be a bulge member according to current galactic density models (e.g., Han & Gould, 2003). The velocities of the w -/-, c +/+, w +/-, and c +/- solutions also appear consistent with the retrograde microlensing group.
## 6 Solution Probabilities
The somewhat uncommon physical parameters compel us to look at our solution probabilities more cautiously and holistically than a purely likelihood-based comparison. One problem with the likelihood calculation is that, formally, it relies on the assumption that our data are Gaussian distributed, with accurate uncertainties. Practically, this is never true for microlensing photometry. However, for this analysis, we apply Bayes theorem as though they were Gaussian.
The probability of a system having the solution-specific proper motion or velocity, mass, and distance is also an important factor. We therefore calculate the probability factor \(\ln z\) that determines the relative detection probability of each solution given a galactic model, with a bias to incorporate their relative light-curve-fit likelihoods.
We compute the galactic probability (Equation 15 of Gould (2020)) using a modified version of the Galactic Bayesian code described in Herrera-Martin et al. (2020). This model is based on the stellar Milky Way density model of Han & Gould (2003) and mass functions of Chabrier (2003) using the prescription of Dominik (2006).
There is a common wisdom in microlensing analysis that small-parallax events are more probable than their large-parallax degenerate counterparts. This is known as the Rich argument, as detailed in Calchi Novati et al. (2015). For single-lens events and binary-lens events for which the lens axis and source trajectory are approximately parallel (as in this case), if the true parallax solution is the smaller parallax solution it will always generate a large-parallax degenerate counterpart. The reverse, however, is not always true. The ratio of these probabilities (Rich factor) is implicitly accounted for in our galactic models (Gould, 2020).
Our calculated \(-2\Delta\ln z\) values for each degenerate solution, and both error approaches, are displayed in Tables 3 and 4. Kass & Raftery (1995) interpret \(-2\Delta\ln z<(2.3,4.6,9.2)\) as ("substantial","strong", "decisive") evidence favoring one solution over another. By their metric, we would decisively consider the c -/+ the best solution given a standard Spitzer error approach. Using this approach, the probability of the c -/+ solution, compared with the next most probable solution (c -/-), is \(-2\Delta\ln z\)=13.48. However, using the GP error approach with the Kass & Raftery (1995) interpretation, the evidence supporting the equally favoured, small-parallax, close solutions (c -/-, c +/+; \(-2\Delta\ln z\)=0.01) over the c -/+ is "strong" (\(-2\Delta\ln z\)=8.9).
At the low galactic latitude of our event, and especially given the calculated distances to the lens of the
large-parallax solutions, one would expect lens bodies to be members of the galactic disk. However, at a distance of \(\sim 6\)kpc (as in the c -/- and c +/+ cases), it is possible that the lens is a member of the bulge population. Our galactic modeling of c +/+ showed that it is on the order of 100 times more likely to be a member of the bulge than the disk, whereas for c -/+ this was more like 1400 times more likely to be a member of the disk than the bulge. Currently, our galactic model most highly disfavours the counter-rotating BD solutions, with disk-like distances (c +/- and w +/- with \(-2\Delta\ln z\), without a light-cure likelihood bias, of 24.45 and 34.12, respectively).
It is worth noting that \(\ln z\) is based on a galactic model and therefore implicitly favors solutions matching our expectation of kinematic, mass, and density dispersions. Even the kinematic dispersions displayed in Figures 10 and 11 are informed by mostly bright stars and may not be truly representative of the dispersions of much dimmer objects, of which we know very little. Some healthy skepticism needs to exist around the model's completeness, especially considering the high proportion of microlensing BDs with unusual proper motions.
To determine how representative these retrograde detections are of the BD population as a whole, we must must first have a good understanding of the innate selection biases in microlensing events, for or against these extreme proper motions. However, if we were to down-weight the light-curve likelihood based on the knowledge that our errors are not Gaussian, we will generally favour the low parallax solutions.
## 7 Discussion
In our analysis of event OGLE-2017-BLG-1038, we fit a binary lens model including higher order effects: lens orbital motion and parallax. We include space-based data from Spitzer and model systematic errors in these data. We have a resulting eightfold solution degeneracy in this event. These solutions have total lens masses ranging from \(0.027-0.27M_{\odot}\). We also included in our probability comparison a galactic probability for each lens configuration. After these processes we find that our most probable solutions are the c +/+ and c -/-, both with masses of \(m_{1}\simeq 170\,M_{J}\) and \(m_{2}=110^{+20}_{-30}M_{J}\) (0.16 and 0.11\(\,M_{\odot}\)), separated by 1.7 au, at a distance of 6.0 kpc. The companion masses for these solutions are near the upper limit for BDs (the hydrogen burning limit). The lens systems for the c +/+ and c -/- solutions have tangential velocities of \(v_{\rm L,hel}(l,b)=(-358,-126)\,{\rm km\,s^{-1}}\) and \(v_{\rm L,hel}(l,b)=(9,113)\,{\rm km\,s^{-1}}\), respectively.
The c -/- solution has a minutely higher galactic probability than c +/+ with \(-2\Delta\ln\mathcal{L}=1.09\). They are
Figure 11: Heliocentric velocity of the most likely lens star solutions. As previously, we also include contour representations of the bulge (red) and disk (blue) distributions. The bulge contours are from histograms of the red-clump stars from Gaia EDR3. The red-clump distribution was selected as in Figure 10. Due to the unreliable nature of Gaia distances obtained from parallax, especially at large distances, \(D_{\rm RC}=D_{\rm S}=7.85\) kpc was used to estimate the red-clump velocities from the Gaia proper motion measurements. The outermost thicker line contains approximately 95% of the population samples. The blue contours depict the theoretical distribution of disk stars, used in our galactic model. The solid ellipses correspond to the 1 and 2 \(\sigma\) velocity dispersions. Left: The small-parallax solutions and both distributions. Right: Only the disk distribution, with the large-parallax solutions, as these solutions are at distances not compatible with a lens belonging to the bulge.
equally likely when considered in the context of both the light-curve fit and the galactic model.
Favouring these solutions over the large-parallax, close-family solutions (\(m_{1}\simeq 22.5\) and \(m_{2}\simeq 13.7\); \(D_{\rm L}=2.33\); \(v_{\rm L,hel}(l,b)=(-11,88)\,{\rm km\,s^{-1}}\) and \(v_{\rm L,hel}(l,b)=(-174,-21)\,{\rm km\,s^{-1}}\) for c -/+ and c +/-, respectively) relies on our being confident in the galactic model for very-low-mass objects. Evidence from other microlensing events suggest that we do not understand the kinematic structure of BDs at distances of \(D<4\,{\rm kpc}\). To date, three BD systems have been discovered using microlensing that appear to be counterrotating with respect to the disk (Chung et al., 2019; Shvartzvald et al., 2019, 2017). These microlensing members lie very much in the plane of the disk and explanations for their characteristics, which we consider here, are that they are members of the disk with extreme motions; they are halo members with a coincidental disk alignment; they are members of a counterrotating population of very-low-mass objects (as suggested by Shvartzvald et al., 2019); or, they are evidence of an oversimplified galactic model. The physical parameters of the lens of this event raise the question as to whether or not OGLE-2017-BLG-1038 is another member of this group.
One explanation for extreme kinematics for a low-mass disk lens is that the disk may have a larger velocity dispersion for lower mass objects. If we assume that the lens was born in a cluster, it may have received a kick from an interaction with a star, and a binary will have a higher scattering cross section for such an interaction. Cluster dissolution has been extensively modeled (e.g., Hurley et al., 2005; Wang et al., 2016). Most stars are believed to come from open clusters, however expulsion velocities from open clusters are small compared to Galactic rotation. For example, the open cluster simulations of Jorgensen & Church (2020) show most stars escaping with velocities \(<10\,{\rm km\,s^{-1}}\), relative to their parent cluster. It is therefore very unlikely that such an escapee would be travelling \(\sim 100\,{\rm km\,s^{-1}}\), or more, opposed to the disk. For globular clusters, higher mass objects preferentially wind up in tight binaries, whose members can be expelled at very high velocities (Hut et al., 1992a, b), but such expulsions are likely to account for a tiny fraction of all stars. This appears an unlikely origin for these counterrotating low-mass objects.
Another aspect of the galactic model that may be misunderstood is the bulge density model. We propose that a mass dependent spatial cutoff could explain the observed abundance of counterrotating BDs. If we consider that the bulge extends further for lower mass objects, then at \(D<4\,{\rm kpc}\) the mass independent model would significantly underrepresent lower mass objects belonging to the bulge population and therefore having extreme (when compared to neighbouring disk stars) kinematics. Density models are fit to observational data and therefore are specifically fit to objects much larger than our inferred lens and those of the aforementioned retrograde BDs.
Another explanation may be that the lens is a halo star. Halo stars are known to have a much larger velocity dispersion, and their mean galactic rotation is much smaller than the disk (Du et al., 2018; Posti et al., 2018). While this large velocity dispersion could explain the kinematics of the other retrograde BD stars, it is a leap to make that assumption here, when it is not unlikely that the lens belongs to the bulge.
Are these retrograde BD detections the first members of a new class of object? At this stage, the characterization of these events as an independent population is speculative. Their existence as a discrete population affects the way we view the galactic probability of this solution, because such a population is not represented in the galactic model. Even if a misunderstood selection effect or aspect of the galactic model is responsible for their overabundance in detection, such an effect is not included in our current probability calculations. More needs to be known about this retrograde group before the significance of this solution can be truly understood.
The analysis of more low-mass lens events will provide new insights into the very-low-mass end of the mass function and its density and kinematics. There is little observational evidence to constrain any of these distributions at present. It is always possible that low-mass BDs are far more numerous than currently known and are currently represented by our galactic model.
Whatever the case, for low-mass lenses, we believe that selection of a solution based on typical disk kinematic arguments is unlikely to be valid. The same reasoning leads us to believe that we cannot categorically claim this lens as a member of either a bulge, halo, or retrograde BD population. A more complex consideration of selection biases and possible population dynamics (beyond the scope of this paper) would be required.
A more empirical means of confirming the small parallax configuration would be to observe the lens photometrically. The hydrogen burning host and likely hydrogen-burning companion, corresponding to the small-parallax, close-family solutions, are bright enough to be visible at their implied lens distances (\(D_{\rm L}=6\) kpc). Given the relative proper motions of these solutions (\(\mu_{rel,hel}=9.0\,{\rm mas\,yr^{-1}}\)), we could expect the separation of source and lens to be sufficient for them to be resolved with the advent of infrared adaptive optics imaging from the coming generation of 40 m class
telescopes. This is not true of the solutions near the planet-BD boundary, which are too dim to be resolved, no matter the angular separation between source and lens.
We expect first light for Multi-AO Imaging Camera for Deep Observations (MICADO) on the 39 m European Extremely Large Telescope (EELT) to be 2030. Kim et al. (2021) have argued, by scaling the work of Bowler et al. (2015) with the Keck coronograph, that an EELT coronograph could achieve \(\Delta K=11\) contrast at 77 mas. By 2030 the angular separation of the lens and source will be \(\sim 115\) mas. Using the mass-luminosity function of Just et al. (2015) and the previously calculated source-star K magnitude, we estimate \(\Delta K=9.2\) between the source star and the primary lens body for the M-dwarf solutions (c -/- and c +/+). Therefore the composition of this lens, be it BD or M-dwarf, can be verified with astrometric follow-up at the expected first light of MICADO on EELT.
## 8 Summary
In this paper we report our analysis of microlensing event OGLE-2017-BLG-1038, with data from KMTNet, OGLE, and Spitzer. Ground-based data show the event is due to a giant source passing across a fold and cusp of a resonant caustic, due to a rotating binary lens. The analysis of the combined Spitzer, KMT, and OGLE light-curve data resulted in eight degenerate satellite-parallax solutions. With a GP model fit to the Spitzer data to account for systematic effects, the best solutions are the four belonging to the close family. Of these solutions the small- parallax solutions both have masses of \(M_{1}\simeq 170^{+40}_{-50}M_{J}\) (an M-dwarf) and \(m_{2}=110^{+20}_{-30}\,M_{J}\) (at the BD/M-dwarf cutoff). The large-parallax solutions are both comprised of a BD binary with \(m_{1}=22\pm 2\,M_{J}\) and \(m_{2}=14\pm 1\,M_{J}\). Inclusion of a detection probability based on a galactic model favors the small-parallax solutions. However, this approach to appraising solutions may be biased by an incomplete description of the distribution of very-low-mass objects in the galaxy and should not rule out solutions with similar light-curve-fit likelihoods. Late-time imaging could be used to reject these low-mass BD solutions, since an M dwarf should be visible given sufficient lens-source separation, but a low-mass BD binary will not.
This research has made use of the KMTNet system operated by the Korea Astronomy and Space Science Institute (KASI) and the data were obtained at three host sites of CTIO in Chile, SAAO in South Africa, and SSO in Australia.
Work by Cheongho Han was supported by the grants of National Research Foundation of Korea (2020R1A4A2002885 and 2019R1A2C2085965).
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
Software: matplotlib (Hunter, 2007), numpy (Harris et al., 2020), scipy (Virtanen et al., 2020)
|
2306.15564 | Parity doublet model for baryon octets: diquark classifications and mass
hierarchy based on the quark-line diagram | We construct $ {\rm SU(3)}_{\rm L} \otimes {\rm SU(3)}_{\rm R}$ invariant
parity doublet models within the linear realization of the chiral symmetry.
Describing baryons as the superposition of linear representations should be
useful description for transitions toward the chiral restoration. The major
problem in the construction is that there are much more chiral representations
for baryons than in the two-flavor cases. To reduce the number of possible
baryon fields, we introduce a hierarchy between representations with good or
bad diquarks (called soft and hard baryon representations, respectively). We
use $(3,\bar3)+(\bar3,3)$ and $(8,1)+(1,8)$ as soft to construct a chiral
invariant Lagrangian, while the $(3,6)+(6,3)$ representations are assumed to be
integrated out, leaving some effective interactions. The mass splitting
associated with the strange quark mass is analyzed in the first and second
order in the meson fields $M$ in $(3,\bar3)+(\bar3,3)$ representations. We
found that the chiral $ {\rm SU(3)}_L \otimes {\rm SU(3)}_R$ constraints are
far more restrictive than the $ {\rm SU(3)}_V$ constraints used in conventional
models for baryons. After extensive analyses within $(3,\bar3)+(\bar3,3)$ and
$(8,1)+(1,8)$ models, we found that models in the first order of $M$ do not
reproduce the mass hierarchy correctly, although the {\GO} is satisfied. In the
second order, the masses of the positive parity channels are reproduced well up
to the first radial excitations, while some problem in the mass ordering
remains in a negative parity channel. Apparently the baryon dynamics is not
well-saturated by just $(3,\bar3)+(\bar3,3)$ and $(8,1)+(1,8)$ representations,
as indicated by the necessity of terms higher order in $M$. | Takuya Minamikawa, Bikai Gao, Toru kojo, Masayasu Harada | 2023-06-27T15:42:02Z | http://arxiv.org/abs/2306.15564v1 | # Parity doublet model for baryon octets:
###### Abstract
We construct \(\mathrm{SU(3)}_{\mathrm{i}}\otimes\mathrm{SU(3)}_{\mathrm{R}}\) invariant parity doublet models within the linear realization of the chiral symmetry. Describing baryons as the superposition of linear representations should be useful description for transitions toward the chiral restoration. The major problem in the construction is that there are much more chiral representations for baryons than in the two-flavor cases. To reduce the number of possible baryon fields, we introduce a hierarchy between representations with good or bad diquarks (called soft and hard baryon representations, respectively). We use \((3,\bar{3})+(\bar{3},3)\) and \((8,1)+(1,8)\) as soft to construct a chiral invariant Lagrangian, while the \((3,6)+(6,3)\) representations are assumed to be integrated out, leaving some effective interactions. The mass splitting associated with the strange quark mass is analyzed in the first and second order in the meson fields \(M\) in \((3,\bar{3})+(\bar{3},3)\) representations. We found that the chiral \(\mathrm{SU(3)}_{L}\otimes\mathrm{SU(3)}_{R}\) constraints are far more restrictive than the \(\mathrm{SU(3)}_{V}\) constraints used in conventional models for baryons. After extensive analyses within \((3,\bar{3})+(\bar{3},3)\) and \((8,1)+(1,8)\) models, we found that models in the first order of \(M\) do not reproduce the mass hierarchy correctly, although the Gell-Mann-Okubo mass relation is satisfied. In the second order, the masses of the positive parity channels are reproduced well up to the first radial excitations, while some problem in the mass ordering remains in a negative parity channel. Apparently the baryon dynamics is not well-saturated by just \((3,\bar{3})+(\bar{3},3)\) and \((8,1)+(1,8)\) representations, as indicated by the necessity of terms higher order in \(M\).
## I Introduction
Chiral symmetry in quantum chromodynamics (QCD) is the key symmetry to describe the low energy hadron dynamics. Although the chiral symmetry is spontaneously broken by the formation of chiral condensates [1; 2; 3], the chiral symmetry in the underlying theory leaves a number of constraints on the low energy dynamics [4; 5; 6; 7]. Effective Lagrangians for hadrons are constructed by grouping a set of fields in a chiral invariant way, modulo small explicit breaking associated with the current quark masses.
The most general construction of chiral Lagrangian is based on the nonlinear realization of the chiral symmetry [8; 9] in which fields transform nonlinearly under chiral transformations. The great advantage of this construction is that pions accompany space-time derivatives appearing in powers of \(\sim\partial/\Lambda_{\chi}\), where \(\Lambda_{\chi}\) is the typical chiral symmetry breaking scale related to the pion decay constant \(f_{\pi}\) as \(\Lambda_{\chi}\sim 4\pi f_{\pi}\)[10], which leads to the low-energy constants of the chiral perturbation theory to be \(\sim\mathcal{O}(10^{-3})\)[11; 12]. This power counting greatly systematizes the construction of effective Lagrangians.
While the nonlinear realization has the advantage in generality and systematics, it also has the disadvantage when we try to address the physics at energies near or greater than \(\sim\Lambda_{\chi}\). One simple way of improving the description is to manifestly include massive degrees of freedom. The problem also occurs when we consider the chiral restoration in extreme environments; there, the denominators of the derivatives, \(\sim\Lambda_{\chi}\), become small, invalidating the derivative expansion with only pions.
A model of linear realization is less general but more suitable when we describe QCD in extreme environments (e.g., neutron stars [13]) with partial restoration of chiral symmetry. Near chiral restored region the hadron spectra should recover chiral multiplets, e.g., \((\sigma,\vec{\pi})\). Implementing candidates of chiral multiplets from the beginning should simplify our descriptions; we do not have to dynamically generate relevant degrees of freedom.
In this work we consider a model of baryons in linear realization of chiral symmetry, aiming at its application to dense QCD. We include the parity doublet structure which allows us to introduce the chiral invariant mass [14; 15; 16; 17; 18; 19; 20]. For increasing baryon densities, the existence of such mass has large impacts on the density dependence of baryon masses as well as baryon-meson couplings. Previously we have analyzed models of two-flavors [21; 22; 23; 24], but in this work we extend the model to the three-flavor case. This is necessary to analyze dense baryonic matter with hyperons.
The extension from two-flavors to three-flavors, however, drastically complicates the construction of chiral Lagrangian for baryons since there are so many possible representations. Combining three quarks in linear chiral representations, one can create several representa
tions for baryons. For two-flavors, we start with quarks in \((2_{L},1_{R})\) and \((1_{L},2_{R})\), then the three products yield \((2_{L},1_{R})\), \((4_{L},1_{R})\), \((3_{L},2_{R})\) and \(L\leftrightarrow R\). When we include only nucleons, we may focus on \((2_{L},1_{R})\) and \((1_{L},2_{R})\), and the number of fields is managable. For three flavors, we start with quarks in \((3_{L},1_{R})\) and \((1_{L},3_{R})\), and find much more representations for their products. Although there are several studies of baryons based on the models including possible chiral representations of baryons [25; 26; 27; 28; 29; 30; 31; 32; 33], to the best of our knowledge, for three-flavors, the construction of linearly realized chiral Lagrangian for baryons has not been established.
In order to keep the number of representations tractable, in this work we introduce dynamical assumptions based on the quark dynamics. We assume that baryons in representations including "good diquarks", the representations \((\bar{3}_{L},1_{R})\) or \((1_{L},\bar{3}_{R})\), are lighter than those including "bad diquarks", the representations \((6_{L},1_{R})\) or \((1_{L},6_{R})\)[34; 35; 36; 37]. In this paper we call baryon representations including good diquarks _soft baryons_, and those with bad diquark _hard baryons_. In this paper the hard baryons are integrated out and do not manifestly appear, but the consequence of such integration can be traced in effective vertices including high powers of \(M\) (Fig. 1).
Based on this idea we build the chiral Lagrangian for soft baryon fields in \((3_{L},\bar{3}_{R})\), \((8_{L},1_{R})\) with \(L\leftrightarrow R\). We include mesonic fields and the parity doublet structure in a chiral invariant way. Both the spectra of positive and negative baryons as well as the first radial excitations are studied. As usual in the linear realization, we do not have good rationals to restrict the power of mesonic fields, so we examine how important higher order terms are.
The remarkable and unexpected finding in our construction is that the chiral symmetry and the above dynamic assumption give very strong impacts on baryon masses, especially the SU(3) flavor breaking due to the strange quark mass. For example, for models including only \((3_{L},\bar{3}_{R})\) and \((\bar{3}_{L},3_{R})\), the usual baryon mass ordering based on the number of strange quarks does not hold, at least at the order of meson fields we have worked on. We then add \((8_{L},1_{R})\) and \((1_{L},8_{R})\) representations, finding them to be insufficient. To improve the description of spectra, we are forced to increase powers of mesonic fields up to the second order of Yukawa interactions.
We try to reproduce the ground and first radially excited states for positive and negative baryon octets. Our modeling works for positive parity baryons, but for negative baryons, some of mass ordering related to the strange quark appears to be inconsistent with the picture based on the constituent quark models [38]. This situation persists even after our extensive survey for parameter space.
Some comments are in order for comparison with the previous studies. The textbook example of the octet mass formula [39] is based on the SU(3) symmetry with the explicit breaking as perturbation, but the underlying Hamiltonian does not have the chiral symmetry. There are some previous studies for the parity doublet model including hyperons [25; 26; 27; 28; 29; 30; 31; 32; 33]. For example, in Ref.[30], current quark masses are incorporated into a parity doublet model based on the SU(3)\({}_{\rm L}\times\)SU(3)\({}_{\rm R}\) chiral symmetry, and the pion-nucleon \(\Sigma_{\pi N}\) and kaon-nucleon \(\Sigma_{KN}\) terms are studied. In Ref. [31], explicit breaking is effectively introduced into the masses without explicit forms of the Lagrangian terms to study the difference of behavior in hot matter. However, to the best of our knowledge, there
Figure 1: Higher-order quark exchange diagrams. (1) is just combination of the first-order interaction without the bad diquarks. (2) has three quark exchanges (three meson fields) through the bad diquarks, while (3) has two quark exchanges (two meson fields). In (3), there is a mixing between naive and mirror representations.
is no analysis of mass spectra of baryons including hyperons in a chiral invariant model.
This paper is structured as follows. In Sec.II, the chiral representations of \((3_{L},\bar{3}_{R})+(\bar{3}_{L},3_{R})\) and \((8_{L},1_{R})+(1_{L},8_{R})\) for octet baryons are defined. In Sec.III, we study a Lagrangian up to the first order of Yukawa interactions, and found that the mass hierarchy of the baryon octet cannot be reproduced. In Sec.IV, we classify hadronic effective interactions based on quark diagrams. Then, in Sec.V, we construct the second-order Yukawa-type interactions which should be induced by integrating out hard baryons. In Sec.VI, we perform numerical fit of baryon spectra. Sec.VII is devoted to the summary.
## II Chiral representation for hadron
In three-flavor chiral symmetry \(\mathrm{SU(3)_{L}\times SU(3)_{R}}\), quark fields are defined as the fundamental representations, namely left-handed \((q_{\mathrm{L}})^{l}\sim(3,1)\) and right-handed \((q_{\mathrm{R}})^{r}\sim(1,3)\), with upper indices \(l,r=1,2,3=u,d,s\). The antiquark fields are defined as the dual representations \((\bar{q}_{\mathrm{L}})_{l}\sim(3,1)\) and \((\bar{q}_{\mathrm{R}})_{r}\sim(1,3)\) with lower indices \(l,r\). The scalar meson field is defined as \((M)_{r}^{l}\sim(q_{\mathrm{L}})^{l}\otimes(\bar{q}_{\mathrm{R}})_{r}\sim(3, \bar{3})\) in this paper.
Since baryons consist of three valence quarks, the baryon fields are related with the tensor products of three quark fields. We define the left-handed baryon field as a product of a spectator left-handed quark and left- or right-handed diquark, while the right-handed baryon has a right-handed spectator quark. Taking irreducible decomposition, the left-handed baryon can be expressed as the following representations
\[q_{\mathrm{L}}\otimes(q_{\mathrm{L}}\otimes q_{\mathrm{L}}+q_{ \mathrm{R}}\otimes q_{\mathrm{R}})\sim\] \[(1,1)+(8,1)+(8,1)+(10,1)+(3,\bar{3})+(3,6)\,. \tag{1}\]
The octet baryons are included in \((3,\bar{3})\), \((8,1)\), and \((3,6)\), which can be illustrated as in Fig.2. The representations \((3,\bar{3})\) and \((8,1)\) contain flavor-antisymmetric diquarks \(\sim\bar{3}\) which is called "good" diquark, while \((3,6)\) contains flavor-symmetric diquark \(\sim 6\) called "bad" diquark. We call baryon representations including good diquarks _soft baryons_, and those with bad diquark _hard baryons_. In this paper the hard baryons are integrated out and do not manifestly appear, but the consequence of such integration can be traced in effective vertices including high powers of \(M\). The detailed discussions are given in Sec.V.
The baryon fields denoted as \(\psi\) and \(\chi\) are related with the quark fields as follows: For example, the left-handed baryons have the relations,
\[(\psi_{\mathrm{L}})^{l[r_{1}r_{2}]} \sim(q_{\mathrm{L}})^{l}\otimes(q_{\mathrm{R}})^{[r_{1}}\otimes (q_{\mathrm{R}})^{r_{2}]}\,, \tag{2}\] \[(\chi_{\mathrm{L}})^{l_{1}[2_{l}3]} \sim(q_{\mathrm{L}})^{l_{1}}\otimes(q_{\mathrm{L}})^{l_{2}} \otimes(q_{\mathrm{L}})^{l_{3}]}\,, \tag{3}\]
where \([\,\cdot\,]\) implies that two indices in the bracket are antisymmetrized. The relations can be rewritten as
\[(\psi_{\mathrm{L}})^{l}_{r}\sim\varepsilon_{r_{1}r_{2}}(q_{\mathrm{L}})^{l} \otimes(q_{\mathrm{R}})^{r_{1}}\otimes(q_{\mathrm{R}})^{r_{2}}\,, \tag{4}\]
\[(\chi_{\mathrm{L}})^{l}_{r}\sim\varepsilon_{l^{\prime}l_{1}l_{2}}(q_{\mathrm{L} })^{l}\otimes(q_{\mathrm{L}})^{l_{1}}\otimes(q_{\mathrm{L}})^{l_{2}}\,, \tag{5}\]
where \(\varepsilon_{ijk}\) is the totally asymmetric tensor. For these baryon fields, upper indices are interpreted as the ones of quarks, and lower indices are as the ones of good diquarks. For example, \((\psi_{\mathrm{L}})^{l[r_{1}r_{2}]}\) consists of a left-handed quark with upper index \(l\) and two antisymmetrized right-handed quarks with upper indices \(r_{1}\) and \(r_{2}\), while \((\psi_{\mathrm{L}})^{l}_{r}\) consists of a left-handed quark with upper index \(l\) and a scalar right-handed diquark (\(\bar{3}\) representation) with lower index \(r\). The baryon fields with three indices and the ones with two indices are equivalent through the following relations,
\[(\psi_{\mathrm{L}})^{l}_{r} =\frac{1}{2}\varepsilon_{rr_{1}r_{2}}(\psi_{\mathrm{L}})^{l[r_{1} r_{2}]}\,, \tag{6}\] \[(\psi_{\mathrm{L}})^{l[r_{1}r_{2}]} =\varepsilon^{rr_{1}r_{2}}(\psi_{\mathrm{L}})^{l}_{r}\,, \tag{7}\]
and there are also the same relations for \(\chi\). We call the baryon fields with three indices "three-index notation" (Eqs.(2)-(3)), and the ones with two indices "two-index notation" (Eqs.(4)-(5)) in this paper.
The two-index notation is often used to calculate as usual, because it is directly related with the adjoint representation matrices as
\[(\chi)^{i}_{j} \sim\begin{bmatrix}\frac{1}{\sqrt{2}}\Sigma^{0}+\frac{1}{\sqrt{6} }\Lambda&\Sigma^{+}&p\\ \Sigma^{-}&-\frac{1}{\sqrt{2}}\Sigma^{0}+\frac{1}{\sqrt{6}}\Lambda&n\\ \Xi^{-}&\Xi^{0}&-\frac{2}{\sqrt{6}}\Lambda\end{bmatrix}\,, \tag{8}\] \[(\psi)^{i}_{j} \sim\frac{1}{\sqrt{3}}\Lambda_{0}\] \[+\begin{bmatrix}\frac{1}{\sqrt{2}}\Sigma^{0}+\frac{1}{\sqrt{6}} \Lambda&\Sigma^{+}&p\\ \Sigma^{-}&-\frac{1}{\sqrt{2}}\Sigma^{0}+\frac{1}{\sqrt{6}}\Lambda&n\\ \Xi^{-}&\Xi^{0}&-\frac{2}{\sqrt{6}}\Lambda\end{bmatrix}\,, \tag{9}\]
for left-handed and right-handed respectively. To distinguish \(\psi\) and \(\chi\) explicitly, we treat a flavor of a baryon as an index of \(\psi\) or \(\chi\) fields, e.g., \(\psi_{3}^{1}=\psi_{p}\), \(\chi_{3}^{1}=\chi_{p}\), \(\psi_{2}^{3}=\psi_{\Xi^{0}}\), \(\psi_{3}^{3}=\psi_{\Lambda\circ}/\sqrt{3}-2\psi_{\Lambda}/\sqrt{6}\), and so on. For simplicity, we define the isospin vectors as
\[\psi_{N} \equiv(\psi_{p},\psi_{n}) \tag{10}\] \[\psi_{\Sigma} \equiv(\psi_{\Sigma^{-}},\psi_{\Sigma^{0}},\psi_{\Sigma^{+}})\] (11) \[\psi_{\Xi} \equiv(\psi_{\Xi^{-}},\psi_{\Xi^{0}})\,. \tag{12}\]
We also define the same notations for \(\chi\).
In the three-index notation, it is easy to distinguish the antiquark (\(\bar{3}\)) and the diquark (also \(\bar{3}\)), because it has a one-to-one correspondence between the indices of the baryon field and the ones of the quark fields, as in Eqs.(2)-(3). In addition, we can easily see the charge of \(\mathrm{U(1)_{A}}\) symmetry in the three-index notation. For example, if one wants to make a contraction of the baryon field \(\psi_{\mathrm{R}}\sim(\bar{3},3)\) and the meson field \(M\sim(3,\bar{3})\) as
\[(\psi_{\mathrm{R}})^{r}_{l}(M)^{l}_{r}=\mathrm{tr}(\psi_{\mathrm{R}}M)\,, \tag{13}\]
which is invariant under \(\mathrm{SU(3)_{L}\times SU(3)_{R}}\) but not invariant under \(\mathrm{U(1)_{A}}\), because there are three left-handed
quarks but there are no left-handed antiquarks. The transformation property is the same as
\[(\psi_{\rm R})^{r}_{l}(M)^{l}_{r}\sim(q_{\rm R})^{r}\varepsilon_{ll_{1}l_{1}l_{2}} (q_{\rm L})^{l_{1}}(q_{\rm L})^{l_{2}}(q_{\rm L})^{l}(\bar{q}_{\rm R})_{r}\,, \tag{14}\]
where the left and right components are SU(3) singlet but the left handed one has a finite U(1)\({}_{\rm A}\) charge. We emphasize that such term actually appears in the form of a specific combination with other terms (Eq.(44) in Sec.V.4). This property also leads to a correspondence between the quark diagrams and the hadronic effective interactions, as will be explained in Sec.IV.
Next, let us define the parity doubling partners \(\psi^{\rm mir}\) and \(\chi^{\rm mir}\) for \(\psi\) and \(\chi\) respectively as the opposite assigns for the chiral representations ("mirror" assignment) as
\[(\psi_{\rm L})^{l}_{r}\sim(3,\bar{3})\,\qquad(\psi^{\rm mir}_{\rm L })^{r}_{l}\sim(\bar{3},3)\, \tag{15}\] \[(\chi_{\rm L})^{l_{1}}_{l_{2}}\sim(8,1)\,\qquad(\chi^{\rm mir}_{\rm L })^{r_{1}}_{r_{2}}\sim(1,8)\,, \tag{16}\]
and these fields have opposite parity respectively as,
\[\psi^{\rm~{}parity}_{~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{ }~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{ }~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{ }~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{ }~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{ }~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} }~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} ~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} }~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{} }~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~ {}~
which is called the Gell-Mann-Okubo mass relation for octet baryons.
In a naive quark mass counting, the Gell-Mann-Okubo mass relation is satisfied by assuming \(M_{u}\simeq M_{d}\), \(m_{N}\sim 3M_{u}\), \(m_{\Xi}\sim M_{u}+2M_{s}\), \(m_{\Lambda}\sim 2M_{u}+M_{s}\), and \(m_{\Sigma}\sim 2M_{u}+M_{s}\), where \(M_{q}\) (\(q=u,d,s\)) are the constituent quark masses. These estimates hold for typical constituent quark models. On the other hand, these quark counting is sufficient but not necessary conditions; the Gell-Mann-Okubo mass relation is a weaker condition than that deduced from the quark counting.
### Model 1: only \((3,3)+(\bar{3},3)\)
First we consider a model including only the \((3,\bar{3})+(\bar{3},3)\) representations for octet baryons, \(\psi\), and the \((3,\bar{3})\) representation of mesons, \(M\), which generates the chiral variant mass of baryons through the spontaneous chiral symmetry breaking. The chiral invariant term for Yukawa interactions at the first order in \(M\) is
\[\mathcal{L}^{\text{model}(1)}=-g\big{[}\varepsilon^{l_{1}l_{2}l_{ 3}}\varepsilon_{r_{1}r_{2}r_{3}}(\bar{\psi}_{\text{L}})^{l_{1}}_{r_{1}}(M^{ \dagger})^{l_{2}}_{r_{2}}(\psi_{\text{R}})^{l_{3}}_{l_{3}}\] \[\qquad\qquad+\varepsilon_{l_{1}l_{2}l_{3}}\varepsilon^{r_{1}r_{2 }r_{3}}(\bar{\psi}_{\text{R}})^{l_{1}}_{r_{1}}(M)^{l_{2}}_{r_{2}}(\psi_{\text{L }})^{l_{3}}_{r_{3}}\big{]}\,, \tag{26}\]
where \(\varepsilon_{ijk}\) is the totally asymmetric tensor.
Equation (26) can be represented graphically in Fig.3. In this model, \(\sigma_{s}\) (\(\propto(M)^{\bar{3}}_{3}\), or \(\propto\langle\bar{s}s\rangle\)) contributes only to the \(\Sigma\) baryons. This implies that the \(\Xi\) baryons must be degenerated with the nucleons in this model, and therefore, this model cannot reproduce the octet baryon masses correctly.
It should be instructive to mention the difference from the two-flavor case where we have only nucleons for which we use \((2_{L},1_{R})\) and \((1_{L},2_{R})\) representations. The \((2_{L},1_{R})\) representations may be constructed as \(q_{\text{L}}(q_{\text{L}})^{2}\) or \(q_{\text{L}}(q_{\text{R}})^{2}\). Here good diquarks are the SU(2) singlet, but in the context of three-flavor, these diquarks are \(\bar{3}\) representation in SU(3). To construct baryon octet analogous to nucleons in two-flavor models, we need to add more representations in SU(3).
### Model 2: \((3,3)+(\bar{3},3)\) and \((8,1)+(1,8)\)
Next we add the representation \((8,1)+(1,8)\), \(\chi\), to models of \((3,\bar{3})+(\bar{3},3)\). We emphasize that, at the first (and second) orders in \(M\), there are no Yukawa interactions that couple \(\chi_{L}\) and \(\chi_{R}\) fields. This is because the \(\chi\) contains three valence quarks with all left-handed or right-handed so that Yukawa interactions with \(\chi\) should include three quark exchanges that flip the chirality of all three quarks. In other words, since U(1)\({}_{\text{A}}\)-charges for \(\chi_{\text{L}}\), \(\chi_{\text{R}}\), \(M\) are \(-3\), \(3\), and \(-2\) respectively, a U(1)\({}_{\text{A}}\) symmetric term cannot be constructed unless we consider the cubic orders, \(M^{3}\) or \((M^{\dagger})^{3}\).
There are, however, the first order Yukawa interactions between \(\psi\) and \(\chi\). The simplest Lagrangian, at the leading order in \(M\), is
\[\mathcal{L}^{\text{model}(2)}=\mathcal{L}^{\text{model}(1)}-g^{ \prime}\operatorname{tr}\left[\bar{\chi}_{\text{L}}M\psi_{\text{R}}+\bar{\chi} _{\text{R}}M^{\dagger}\psi_{\text{L}}+\text{h.c.}\right]. \tag{27}\]
This additional interaction \(\operatorname{tr}\bigl{(}\bar{\chi}_{\text{R}}M^{\dagger}\psi_{\text{L}}\bigr{)}\) can be interpreted as in Fig.4. The strange quark contributes to \(\Xi\) baryons through this interaction yields splitting between \(\Sigma\) and \(\Xi\).
This model still contains problems in reproducing the spectra of octet baryons. To see this, let us calculate the mass eigenvalues for the ground-state octet baryons in this model by taking the VEV as \(\langle M\rangle=\text{diag}(\alpha,\beta,\gamma)\) with \(\alpha=\beta\) as before. We note that \(\alpha\), \(\beta\), and \(\gamma\) correspond to the contribution from \(\langle\bar{u}u\rangle\), \(\langle\bar{d}d\rangle\), and \(\langle\bar{s}s\rangle\), and \(\alpha=\beta\) is assured by the isospin symmetry. According to the linear sigma model, when the pion and kaon decay constants are \(f_{\pi}\approx 93\,\text{MeV}\) and \(f_{K}\approx 110\,\text{MeV}\), its value is \(\langle M\rangle\propto\text{diag}(f_{\pi},\,f_{\pi},\,2f_{K}-f_{\pi})\approx \text{diag}(93,\,93,\,127)\,\text{MeV}\). Using the VEV of \(M\), the lagrangian can be decomposed as
\[\mathcal{L}^{\text{model}(2)}=-\left(\bar{\psi}_{N}\;\;\bar{\chi} _{N}\right)\hat{M}_{N}\begin{pmatrix}\psi_{N}\\ \chi_{N}\end{pmatrix}-\left(\bar{\psi}_{\Sigma}\;\;\bar{\chi}_{\Sigma}\right) \hat{M}_{\Sigma}\begin{pmatrix}\psi_{\Sigma}\\ \chi_{\Sigma}\end{pmatrix}\] \[-\left(\bar{\psi}_{\Xi}\;\;\bar{\chi}_{\Xi}\right)\hat{M}_{\Xi} \begin{pmatrix}\psi_{\Xi}\\ \chi_{\Xi}\end{pmatrix}+\text{(terms for $\Lambda$ baryons)}\, \tag{28}\]
where \(\hat{M}_{N}\), \(\hat{M}_{\Sigma}\) and \(\hat{M}_{\Xi}\) are \(2\times 2\) mass matrices given by
\[\hat{M}_{N}=\begin{pmatrix}-g\alpha&h\alpha\\ h\alpha&0\end{pmatrix}\,\] \[\hat{M}_{\Sigma}=\begin{pmatrix}-g\gamma&h\alpha\\ h\alpha&0\end{pmatrix}\, \tag{29}\] \[\hat{M}_{\Xi}=\begin{pmatrix}-g\alpha&h\gamma\\ h\gamma&0\end{pmatrix}\.\]
The strange quark contributions for \(\Sigma\) baryons, which enters the diagonal components in the mass matrix, corresponds to Fig.3(b). The one for \(\Xi\) baryons, which enters the off-diagonal components, corresponds to Fig.4(c). The mass eigenvalues for the ground-state octet members can be written as
\[m[N]=m(|g\alpha|,|h\alpha|)\,, \tag{30}\] \[m[\Sigma]=m(|g\gamma|,|h\alpha|)\,,\] (31) \[m[\Xi]=m(|g\alpha|,|h\gamma|)\,, \tag{32}\]
where \(m(x,y)\equiv\sqrt{(x/2)^{2}+y^{2}}-x/2\) is an eigenvalue of the matrix \(\begin{pmatrix}x&y\\ y&0\end{pmatrix}\).
Here we note \(|g\alpha|<|g\gamma|\) and \(\partial_{x}m(x,y)<0\); this means that this model leads to \(m[N]>m[\Sigma]\). Therefore, somewhat counter-intuitively, this model cannot reproduce the octet baryon masses correctly.
Note that, when we neglect the mixing with the singlet \(\Lambda\) baryon, the mass term is expressed as
\[-\left(\bar{\psi}_{\Lambda}\ \ \bar{\chi}_{\Lambda}\right)\hat{M}_{\Lambda} \begin{pmatrix}\psi_{\Lambda}\\ \chi_{\Lambda}\end{pmatrix}\, \tag{33}\]
with
\[\hat{M}_{\Lambda}=\begin{pmatrix}-\frac{g}{3}\left(4\alpha-\gamma\right)& \frac{h}{3}\left(\alpha+2\gamma\right)\\ \frac{h}{3}\left(\alpha+2\gamma\right)&0\end{pmatrix}. \tag{34}\]
From this, the mass of the octet \(\Lambda\) baryon can be calculated as
\[m[\Lambda]=m\big{(}|g(4\alpha-\gamma)|/3,\,|h(\alpha+2\gamma)|/3\big{)}\,. \tag{35}\]
We stress that the mass matrices of octet baryons satisfy the Gell-Mann-Okubo mass relation as
\[\frac{1}{2}\left[\hat{M}_{N}+\hat{M}_{\Xi}\right]=\frac{1}{4}\left[3\hat{M}_{ \Lambda}+\hat{M}_{\Sigma}\right]. \tag{36}\]
(See Sec.VI.2 for detail.) However, the mass eigenvalues can satisfy the Gell-Mann-Okubo mass relation only up to first order of strange quark breaking \(\mathcal{O}(\gamma-\alpha)\).
To summarize this section, we found that simple models based on the baryon octet with good diquarks do not reproduce the baryon octet masses correctly, unless we go beyond the lowest order in \(M\). We need to add more representations including bad diquarks or go to higher orders in \(M\).
## IV Quark diagram and chiral Yukawa interaction
In the previous section we have found that the simplest version of the \((3,\bar{3})+(\bar{3},3)\) and \((8,1)+(1,8)\) model does not work well. We have to go beyond the leading order, in other word, we need to include two or more \(M\) fields in Yukawa interaction terms. Since there are many possible terms for Yukawa interactions at higher order, we need to set up some rules for systematic treatments.
In this paper, we propose to use quark diagrams to classify Yukawa interactions. Quark fields in baryon and meson fields are connected to manifestly conserve the quantum numbers. At the first order of \(M\) (the first order Yukawa interactions), there are only two types of chiral Yukawa interaction. Here, although the first order Yukawa terms were treated in the last section, we repeat the analysis of the graph in terms of meson-diquark and meson-spectator couplings. The coupling constants are expressed as \(g_{1,2}^{\rm a}\) and \(g_{1,2}^{\rm s}\), respectively. Here, the subscript \(1,2\) refers to two baryon fields in the parity doublet model for a given representation, \(\psi_{1,2}\) and \(\chi_{1,2}\). In the next section (Sec.V), we will deal with "second-order" Yukawa interactions, which include two quark exchanges (two meson fields).
### Correspondence between quark diagrams and hadronic effective interactions
We explain how to find the hadronic effective interaction corresponding to a given quark diagram. To find the correspondence, the three-index notation (Eqs.(2)-(3)) for baryons is more convenient than the two-index notation. The two-index notation is useful for notational simplicity, though.
As shown in Fig.5, one draws a quark diagram in which each pair of \(\bar{q}_{i}\) and \(q_{i}\) (\(i=\rm L,R\)) is connected
Figure 4: Yukawa couplings between \((3,\bar{3})\) and \((1,8)\) baryon fields.
Figure 3: Yukawa couplings between \((3,\bar{3})\) and \((\bar{3},3)\) baryon fields.
though a quark line. Along quark lines, charges in the \(\mathrm{U}(3)_{\mathrm{L}}\times\mathrm{U}(3)_{\mathrm{R}}\) symmetry are conserved. Baryonic fields with different chirality are connected by inserting mesonic fields.
According to our dynamical assumption based on diquarks, the chirality flipping processes involving bad diquarks are assumed to be suppressed. Integrating such intermediate states involves at least the second order in \(M\). The second-order contribution to the baryon mass is about \(\sim\langle M\rangle^{2}/\big{(}M_{\mathrm{hard}}-M_{\mathrm{soft}}\big{)}\) where \(\langle M\rangle\) is the VEV of the meson field and \(M_{\mathrm{hard}}\) and \(M_{\mathrm{soft}}\) are the masses for hard- and soft-baryons, respectively. The mass scale \(M_{\mathrm{hard}}-M_{\mathrm{soft}}\) is the order of \(M_{\Delta}-M_{N}\sim 300\) MeV. The major assumption in this paper is that, the approximation \(\langle M\rangle/\big{(}M_{\mathrm{hard}}-M_{\mathrm{soft}}\big{)}\ll 1\), which should become increasing valid toward the chiral restoration with \(\langle M\rangle\to 0\), also sheds light on baryons in the vacuum. Under this assumption the second-order contributions in \(\langle M\rangle\) are suppressed compared with the first-order.
Meanwhile, the direct coupling between soft baryons (baryons with good diquarks) yields soft intermediate states which cannot be treated perturbatively; the Hamiltonian for soft baryonic fields must be fully diagonalized. The full diagonalization involves iterations of soft baryon graphs; to avoid the double counting, from the list of higher order terms in \(M\) we must pick up terms in which only hard baryons (baryons with bad-diquark) appear in the intermediate states. In the following we dictate how to organize interactions between \(\psi\) and \(\chi\) fields.
### "First-order" Yukawa interaction
We begin with the first order Yukawa interactions. For soft baryon fields there are only two possible processes:
* _Coupling to diquarks_-- The Yukawa interaction couples to a quark \(q_{\mathrm{R}}\) in \((3_{L},\bar{3}_{R})\) representation, \(\psi_{\mathrm{L}}\sim q_{\mathrm{L}}[q_{\mathrm{R}}q_{\mathrm{R}}]\). In other words, the matrix \(M\) couples to one of quarks forming a good diquark. After the chiral flipping, the \((\bar{3}_{L},\bar{3}_{R})\) representation, \(\psi_{\mathrm{R}}\sim[q_{\mathrm{L}}q_{\mathrm{L}}]q_{\mathrm{R}}\) is formed. This case was discussed in Sec. III.2. The same is true after exchanges of \(L\) and \(R\).
* _Coupling to a spectator quark_-- A spectator quark \(q_{\mathrm{L}}\) in \((3_{L},\bar{3}_{R})\) representation, \(\psi_{\mathrm{L}}\sim q_{\mathrm{L}}[q_{\mathrm{R}}q_{\mathrm{R}}]\), couples to \(M\) and flips the chirality. The resulting representation is \((1_{L},8_{R})\), \(\psi_{\mathrm{R}}\sim q_{\mathrm{R}}[q_{\mathrm{R}}q_{\mathrm{R}}]\). This case was discussed in Sec. III.3. The same is true after exchanges of \(L\) and \(R\).
The same arguments are applied to the mirror representations. Below we write down the effective Lagrangian for these couplings.
#### iii.2.1 Diquark interaction (\(g_{1,2}^{\mathrm{a}}\))
The first-order chiral Yukawa interaction corresponding to a diagram in Fig.6 is written as
\[(\bar{\psi}_{\mathrm{R}})_{r_{1}[l_{1}l_{2}]}(M)^{l_{2}}_{r_{2}}(\psi_{ \mathrm{L}})^{l_{1}[r_{1}r_{2}]}\,. \tag{37}\]
In the two-index notation, this is equivalent to
\[\varepsilon_{l_{1}l_{2}l_{3}}\varepsilon^{r_{1}r_{2}r_{3}}(\bar{\psi}_{ \mathrm{R}})^{l_{3}}_{r_{1}}(M)^{l_{2}}_{r_{2}}(\psi_{\mathrm{L}})^{l_{1}}_{r_ {3}}\,, \tag{38}\]
which was treated in Sec.III.2. This expression is also equivalent to the following contribution:
\[\mathrm{tr}\big{(}\bar{\psi}M\psi\big{)}-\mathrm{tr}\left[\,\bar {\psi}\psi\big{(}\,\mathrm{tr}(M)-M\big{)}\,\right]\] \[+\mathrm{tr}\big{(}\bar{\psi}\big{)}\,\mathrm{tr}(M)\,\mathrm{tr}( \psi)-\mathrm{tr}\big{(}\bar{\psi}\big{)}\,\mathrm{tr}(M\psi)-\mathrm{tr} \big{(}\bar{\psi}M\big{)}\,\mathrm{tr}(\psi)\,. \tag{39}\]
The traced baryonic fields \(\mathrm{tr}\,\psi\) or \(\mathrm{tr}\,\bar{\psi}\) represent the \(\Lambda_{0}\) (flavor-singlet \(\Lambda\) baryon). Terms without \(\Lambda_{0}\) are summarized in the first line of Eq.(39) which takes the same form as Eq.(20), so that the flavor-octet baryons satisfy the Gell-Mann-Okubo mass relation.
In the following sections, the coupling constants of the Yukawa interaction of the form given in Eq. (38) for naive representation is denoted as \(g_{1}^{\mathrm{a}}\) and that for mirror representation is as \(g_{2}^{\mathrm{a}}\).
#### iii.2.2 Spectator interaction (\(g_{\mathrm{i},2}^{\mathrm{a}}\))
Figure 7 shows that one of three quarks \(q_{L}\) included in the \((8_{L},1_{R})\) representation flips its chirality to \(q_{R}\). The
Figure 5: Correspondence between a quark diagram and a hadronic effective interaction.
Figure 6: First-order chiral Yukawa interaction between \((\bar{3},3)\) and \((3,\bar{3})\).
corresponding effective interaction is written as
\[(\bar{\chi}_{\rm R})_{r[r_{1}r_{2}]}(M^{\dagger})^{\tau}_{l}(\psi_{\rm L})^{l[r_ {1}r_{2}]}\,. \tag{40}\]
In the two-index notation, this can be written as
\[\mathrm{tr}\big{(}\bar{\chi}_{\rm R}M^{\dagger}\psi_{\rm L}\big{)}\;. \tag{41}\]
We note that this term generates the contributions to the masses of octet baryons which satisfy the Gell-Mann-Okubo mass relation as shown in Eq. (36). We would like to stress that, even if any pair of quarks in \(\chi\) forms a diquark, as seen in Fig.7, the corresponding effective interaction is expressed by the term given in Eq. (40). This property is because of the traceless property of \(\chi\), or equivalently,
\[(\chi)^{i[jk]}+(\chi)^{j[ki]}+(\chi)^{k[ij]}=0\,. \tag{42}\]
In the following sections, the coupling constants of the Yukawa interaction connecting \(\chi\) and \(\psi\) are denoted as \(g_{1}^{\rm s}\) and \(g_{2}^{\rm s}\).
## V Integrating out baryons including a bad diquark: second-order Yukawa interaction
In this section, we construct a minimal set of the second-order Yukawa interactions based on the quark diagram introduced in the previous section.
We omit the flavor singlet \(\Lambda\) baryons for simplicity, which may be heavier than the octet baryons due to the U(1)\({}_{\rm A}\) anomaly. (See Sec.V.5 for the singlet baryons.)
As mentioned in Sec.IV.1, it is important to omit terms which generate soft intermediate states. In this section we carefully pick out terms yielding only hard intermediate states.
### Classification of the processes: overview
We consider two sets of representations \(\psi_{1,2}\) and \(\chi_{1,2}\) for the parity doublet and examine how to combine them to generate the second order in \(M\). Single insertion of a meson line flips the chirality and change the chiral representation of baryon fields. As we have mentioned, we have to remove graphs which are simply iterations of the first order graph. For this purpose, the representations generated by the chirality flipping process must belong to hard baryons which include a bad diquark. As we stated in the previous section, the transition from soft to hard baryon intermediate states effectively introduce a factor \(\langle M\rangle/\big{(}M_{\rm hard}-M_{\rm soft}\big{)}\) as an expansion parameter.
In this paper we focus on the Yukawa interactions concerning the scalar and pseudoscalar mesons only. Then, from the structure of the Dirac spinor, the chirality of the baryon must flip at the interaction point. As we will show below, this is impossible without mirror representations. Thanks to the availability of the mirror representations in our framework, Yukawa interaction terms can be made SU(3) chiral invariant by using the mirror representation for one of baryon fields.
### Transition \(\psi_{1,2}\to\psi_{2,1}\)
Let us consider the transition between the same chiral representations. First we examine \(\psi_{1,2}\to\psi_{2,1}\). There are three possible processes.
#### v.2.1 Double spectator-meson interactions (\(\psi\)-\(\psi\))
A spectator quark \(q_{\rm R}\) of \(\psi_{\rm R}\sim q_{\rm R}[q_{\rm L}q_{\rm L}]\) flips the chirality twice (Fig.8). In this process hard baryons do not appear in the intermediate states, we must omit them to avoid the double counting.
Figure 8: Yukawa coupling between \((\bar{3},3)\) and \((\bar{3},3)\) where a spectator quark flips the chirality.
Figure 7: First-order chiral Yukawa interactions connecting from \((8_{L},1_{R})\) to \((\bar{3}_{L},3_{R})\) representations. Although there are three patterns, all of them correspond to the same effective interaction, due to the traceless of \(\chi\).
#### iv.2.2 Spectator-meson and diquark-meson interactions (\(h_{1}\), \(\psi\)-\(\psi\))
Both a spectator quark and a quark forming a diquark flip the chirality once. The chirality flipping in a diquark destroys a good diquark in the initial state and generates a hard baryon in the intermediate states. (Fig.9). This interaction can be written as
\[(\bar{\psi}_{\rm R})_{r_{1}[l_{1}l^{\prime}]}(M)^{l_{1}}_{r_{2}}(M^{\dagger})^ {r_{1}}_{l_{2}}(\psi^{\rm mir}_{\rm L})^{r_{2}[l_{2}l^{\prime}]}. \tag{43}\]
In terms of the two-index notation, this is written as
\[{\rm tr}\big{(}\bar{\psi}_{\rm R}M^{\dagger}M\psi^{\rm mir}_{\rm L}\big{)}-{ \rm tr}\big{(}\bar{\psi}_{\rm R}M^{\dagger}\big{)}\,{\rm tr}\big{(}M\psi^{\rm mir }_{\rm L}\big{)}\,. \tag{44}\]
The first term \({\rm tr}\big{(}\bar{\psi}_{\rm R}M^{\dagger}M\psi^{\rm mir}_{\rm L}\big{)}\) satisfies the Gell-Mann-Okubo mass relation, while the second term \({\rm tr}\big{(}\bar{\psi}_{\rm R}M^{\dagger}\big{)}\,{\rm tr}\big{(}M\psi^{ \rm mir}_{\rm L}\big{)}\) breaks it, because it is not the form of Eq.(20). However, when the flavor singlet \(\Lambda\) is omitted, this term contributes only to the octet member of \(\Lambda\) baryon and the breaking contribution is proportional to \((\gamma-\alpha)^{2}\). Therefore, this interaction satisfies the Gell-Mann-Okubo mass relation up to first-order of strange quark mass perturbation. We should stress that this term is possible only when there exists a mirror representation \(\psi^{\rm mir}_{\rm L}\).
#### iv.2.3 Double meson insertions into a single quark in a diquark (\(h_{2}\), \(\psi\)-\(\psi\))
Figure 10 shows the diagram with double chirality flipping in a quark belonging to a diquark. The intermediate states are hard. This interaction can be written as
\[(\bar{\psi}_{\rm R})_{r[l_{1}l]}(MM^{\dagger})^{l_{1}}_{l_{2}}(\psi^{\rm mir}_ {\rm L})^{r[l_{2}l]}\, \tag{45}\]
or in the index contracted notation,
\[{\rm tr}\big{[}\bar{\psi}_{\rm R}\psi^{\rm mir}_{\rm L}({\rm tr}\big{(}MM^{ \dagger}\big{)}-MM^{\dagger})\big{]}\, \tag{46}\]
which satisfies the Gell-Mann-Okubo mass relation.
### Transition \(\chi_{1,2}\to\chi_{2,1}\)
Next, we examine \(\chi_{1,2}\to\chi_{2,1}\). There are three possible processes. The processes are similar to the \(\psi_{1,2}\to\psi_{2,1}\) transitions but the microphysics is not quite identical.
#### iv.2.1 Double spectator-meson interactions (\(\chi\)-\(\chi\))
As in the \(\psi\) cases, double meson insertions into a spectator quark \(q_{\rm R}\) of \(\chi_{\rm R}\sim q_{\rm R}[q_{\rm R}q_{\rm R}]\) (Fig.11) does not contain any hard baryons and we must omit them to avoid the double counting.
#### iv.2.2 Double meson insertions into a single quark in a diquark (\(\chi\)-\(\chi\))
Figure 12 shows the diagram with double chirality flipping in a quark belonging to a diquark. The intermediate states are hard. The difference between the upper and
Figure 11: Yukawa coupling between \((1,8)\) and \((1,8)\) where a spectator quark flips the chirality.
Figure 10: Yukawa coupling between \((\bar{3},3)\) and \((\bar{3},3)\) where a spectator quark in the good-diquark flips the chirality.
Figure 9: Yukawa coupling between \((\bar{3},3)\) and \((\bar{3},3)\) where a spectator quark and a quark in the good-diquark flip the chirality.
lower panels are the constituents forming the diquark. In the former this interaction can be written as
\[(\bar{\chi}_{\rm R})_{r_{1}[r_{2}r_{3}]}(M^{\dagger}M)^{r_{3}}_{r_{4}}(\chi^{\rm mir }_{\rm L})^{r_{1}[r_{2}r_{4}]}\, \tag{47}\]
or in the index contracted notation,
\[{\rm tr}\big{[}\bar{\chi}_{\rm R}\chi^{\rm mir}_{\rm L}\big{(}{\rm tr}\big{(}M^ {\dagger}M\big{)}-M^{\dagger}M\big{)}\big{]}. \tag{48}\]
In the latter, there is reformation of a diquark. The corresponding interaction term can be written as
\[(\bar{\chi}_{\rm R})_{r_{1}[r_{2}r_{3}]}(M^{\dagger}M)^{r_{3}}_{r_{4}}(\chi^{ \rm mir}_{\rm L})^{r_{2}[r_{1}r_{4}]}\, \tag{49}\]
or in the index contracted notation,
\[{\rm tr}\big{(}\bar{\chi}_{\rm R}M^{\dagger}M\chi^{\rm mir}_{\rm L}\big{)}-{ \rm tr}\big{[}\bar{\chi}_{\rm R}\chi^{\rm mir}_{\rm L}({\rm tr}\big{(}M^{ \dagger}M\big{)}-M^{\dagger}M)\big{]}. \tag{50}\]
Both terms separately satisfy the Gell-Mann-Okubo mass relation.
### Transition \(\psi_{1,2}\to\chi_{1,2}\) or \(\chi_{2,1}\)
Finally we examine the off-diagonal transitions between different chiral representations, the \(\psi_{1,2}\to\chi_{2,1}\) processes. It turns out that the only nonzero processes are two meson insertions to quarks belonging to good diquarks. Figure 13 shows two diagrams, but they can be expressed by a single term in the Lagrangian,
\[(\bar{\psi}_{\rm R})_{r[l_{1}l_{2}]}(M)^{l_{1}}_{r_{1}}(M)^{l_{2}}_{r_{2}}( \chi^{\rm mir}_{\rm L})^{r[r_{1}r_{2}]}\, \tag{51}\]
or
\[(\bar{\psi}_{\rm R})_{r[l_{1}l_{2}]}(M)^{l_{1}}_{r_{1}}(M)^{l_{2}}_{r_{2}}( \chi^{\rm mir}_{\rm L})^{r_{1}[rr_{2}]}. \tag{52}\]
Eq.(51) and Eq.(52) are equivalent due to the traceless of \(\chi\) given in Eq. (42). This can be also written as
\[{\rm tr}\Big{(}\bar{\psi}_{\rm R}\chi^{\rm mir}_{\rm L}\hat{O}\Big{)}\, \tag{53}\]
where
\[\hat{O}^{r_{3}}_{l_{3}}\equiv\varepsilon_{l_{1}l_{2}l_{3}}\varepsilon^{r_{1} r_{2}r_{3}}(M)^{l_{1}}_{r_{1}}(M)^{l_{2}}_{r_{2}}. \tag{54}\]
From the expression in Eq. (53), one can easily see that this term satisfies the Gell-Mann-Okubo mass relation.
### Singlet \(\Lambda\) baryon
In this paper, we omit the contribution from the flavor singlet \(\Lambda\) baryons, \(\Lambda_{0}\). In the quark model, wave functions of three quarks in a flavor singlet baryon are totally antisymmetric in the flavor space as well as in the color space. Since the spin wave functions cannot be totally antisymmetric, the space part of the wave functions should be in the excited level. This implies that \(\Lambda_{0}\) cannot be a groundstate.
In the present model a pair of \(\Lambda_{0}\)s is included in the \((3,\bar{3})\) and \((\bar{3},3)\) representations, which may mix with some \(\Lambda\) baryons of the octet members when the flavor symmetry breaking is included. However, we note that \(\Lambda_{0}\) baryon appears from chiral singlet \(\Lambda\) baryons of \((1,1)\) representation, for which there exists a chiral symmetic mass term given by
\[-m_{\Lambda}(\bar{\Lambda}^{(1,1)}_{\rm L}\Lambda^{(1,1)}_{\rm R}+\bar{\Lambda }^{(1,1)}_{\rm R}\Lambda^{(1,1)}_{\rm L})\, \tag{55}\]
corresponding to the quark diagram including U(1)\({}_{\rm A}\) anomaly as in Fig.14. The \(\Lambda\) baryon of \((1,1)\) representation can be made heavy even before the spontaneous chiral symmetry breaking. When the chiral symmetry is spontaneously broken, this mixes with the flavor singlet \(\Lambda\) baryon belonging to \((3,\bar{3})\) and \((\bar{3},3)\) representations. Thus, we naturally expect that the flavor singlet \(\Lambda\) baryons are heavier than the flavor-octet \(\Lambda\) baryons.
## VI Numerical fit to mass spectra
### Model
The entire Lagrangian which we use in this work consists of the following sectors:
\[{\cal L}_{\rm total}={\cal L}_{\rm kin}+{\cal L}_{\rm CIM}+{\cal L}_{\rm Yukawa }+{\cal L}_{\rm 2nd}\,. \tag{56}\]
The kinetic term is just the ordinal one for \(\psi\), \(\psi^{\rm mir}\), \(\chi\), and \(\chi^{\rm mir}\). The chiral invariant mass terms are expressed as
\[{\cal L}_{\rm CIM}=-m_{0}(\bar{\psi}\gamma_{5}\psi^{\rm mir})-m_{0}(\bar{\chi} \gamma_{5}\chi^{\rm mir})+{\rm h.c.}\,, \tag{57}\]
Figure 14: Anomalous interaction between the chiral singlet baryons \(\Lambda\sim(1,1)\).
Figure 13: Although there are two patterns, they correspond to the same effective interaction.
where we suppose the chiral invariant masses for \(\psi\) and \(\chi\) are the same for simplicity.
The first-order Yukawa interactions are given by
\[\mathcal{L}_{\rm Yukawa}=\] \[-g_{1}^{s}\big{[}-\varepsilon_{r_{1}r_{2}r_{3}}\varepsilon^{l_{1} l_{2}l_{3}}(\bar{\psi}_{\rm L})^{r_{1}}_{l_{1}}(M^{\dagger})^{r_{2}}_{l_{2}}( \psi_{\rm R})^{r_{3}}_{l_{3}}+{\rm h.c.}\big{]}\] \[-g_{2}^{s}\big{[}-\varepsilon_{l_{1}l_{2}l_{3}}\varepsilon^{r_{1} r_{2}r_{3}}(\bar{\psi}_{\rm L}^{\rm mir})^{l_{1}}_{r_{1}}(M)^{l_{2}}_{r_{2}}( \psi_{\rm R}^{\rm mir})^{l_{3}}_{r_{3}}+{\rm h.c.}\big{]}\] \[-g_{1}^{s}\big{[}{\rm tr}(\bar{\psi}_{\rm L}M\chi_{\rm R}+\bar{ \psi}_{\rm R}M^{\dagger}\chi_{\rm L}+{\rm h.c.})\big{]}\] \[-g_{2}^{s}\big{[}{\rm tr}(\bar{\psi}_{\rm L}^{\rm mir}M^{\dagger} \chi_{\rm R}^{\rm mir}+\bar{\psi}_{\rm R}^{\rm mir}M\chi_{\rm L}^{\rm mir}+{\rm h.c.})\big{]}\,. \tag{58}\]
The second-order terms introduced in the previous section are summarized as
\[\mathcal{L}_{\rm 2nd}=-\frac{g_{1}^{\rm d}}{f_{\pi}}\big{[}\,{\rm tr }\Big{(}\bar{\psi}_{\rm L}\chi_{\rm R}^{\rm mir}\hat{O}^{\dagger}\Big{)}-{\rm tr }\Big{(}\bar{\psi}_{\rm R}\chi_{\rm L}^{\rm mir}\hat{O}\Big{)}+{\rm h.c.}\big{]}\] \[-\frac{g_{2}^{\rm d}}{f_{\pi}}\big{[}\,{\rm tr}\Big{(}\bar{\chi}_ {\rm L}\psi_{\rm R}^{\rm mir}\hat{O}\Big{)}-{\rm tr}\Big{(}\bar{\chi}_{\rm R} \psi_{\rm L}^{\rm mir}\hat{O}^{\dagger}\Big{)}+{\rm h.c.}\big{]}\] \[-\frac{h_{1}}{f_{\pi}}\big{\{}\,{\rm tr}\Big{(}\bar{\psi}_{\rm L }MM^{\dagger}\psi_{\rm R}^{\rm mir}\Big{)}-{\rm tr}\big{(}\bar{\psi}_{\rm L}M \big{)}\,{\rm tr}\big{(}M^{\dagger}\psi_{\rm R}^{\rm mir}\big{)}\] \[\qquad\qquad+{\rm h.c.}\big{\}}\] \[-\frac{h_{3}}{f_{\pi}}\big{\{}\,{\rm tr}\big{[}\bar{\chi}_{\rm L }\chi_{\rm R}^{\rm mir}({\rm tr}\big{(}M^{\dagger}M\big{)}-MM^{\dagger})\big{]}\] \[\qquad\qquad-{\rm tr}\big{[}\bar{\chi}_{\rm R}\chi_{\rm L}^{\rm mir }({\rm tr}\big{(}M^{\dagger}M\big{)}-M^{\dagger}M\big{)}\big{]}\] \[\qquad\qquad+{\rm h.c.}\big{\}}\] \[-\frac{h_{4}}{f_{\pi}}\big{\{}\,-{\rm tr}\big{(}\bar{\chi}_{\rm L }MM^{\dagger}\chi_{\rm R}^{\rm mir}\big{)}\] \[\qquad\qquad+{\rm h.c.}\big{\}}\] \[+{\rm h.c.}\big{\}}\] \[+{\rm tr}\big{[}\bar{\chi}_{\rm L}\chi_{\rm R}^{\rm mir}({\rm tr} \big{(}MM^{\dagger}\big{)}-MM^{\dagger})\big{]}\] \[+{\rm tr}\big{(}\bar{\chi}_{\rm R}M^{\dagger}M\chi_{\rm L}^{\rm mir }\big{)}\] \[-{\rm tr}\big{[}\bar{\chi}_{\rm R}\chi_{\rm L}^{\rm mir}({\rm tr} \big{(}M^{\dagger}M\big{)}-M^{\dagger}M\big{)}\big{]}\] \[+{\rm h.c.}\big{\}}\, \tag{59}\]
with \(\hat{O}\) defined in Eq. (54). We take the mean field approximation for the scalar meson \(\langle M\rangle={\rm diag}(\alpha,\beta,\gamma)\), assuming the isospin symmetry \(\alpha=\beta\). It is convenient to introduce a unified notation for the chiral representations of baryons as \(\Psi_{i}=(\psi_{i},\chi_{i},\gamma_{5}\psi_{i}^{\rm mir},\gamma_{5}\chi_{i}^{ \rm mir})^{T}\) (\(i=N,\Lambda,\Sigma,\Xi\)). By using this, mass terms of baryons are written as
\[\tilde{\mathcal{L}}=-\sum_{i=N,\Lambda,\Sigma,\Xi}\bar{\Psi}_{i}\hat{M}_{i}\Psi _{i}\, \tag{60}\]
where the mass matrices \(\hat{M}_{i}\) (\(i=N,\Lambda,\Sigma,\Xi\)) is defined as
\[\hat{M}(x^{a},x^{b},x^{\rm d},x^{\rm h1},x^{\rm h23},x^{\rm h4})\equiv\] \[\begin{pmatrix}g_{1}^{a}x^{a}&g_{1}^{s}x^{s}&m_{0}+\frac{h_{1}}{f_ {\pi}}x^{\rm h1}+\frac{h_{2}}{f_{\pi}}x^{\rm h23}&\frac{g_{1}^{4}}{f_{\pi}}x^{ \rm d}\\ 0&\frac{g_{2}^{\rm d}}{f_{\pi}}x^{\rm d}&m_{0}+\frac{h_{3}}{f_{\pi}}x^{\rm h23}+ \frac{h_{4}}{f_{\pi}}x^{\rm h4}\\ &&g_{2}^{\rm g}x^{\rm a}&g_{2}^{\rm s}x^{\rm s}\\ &&0\end{pmatrix} \tag{61}\]
with \(x^{a},\cdots,\,x^{h_{4}}\) defined in Table 1. We note that the matrix \(\hat{M}\) is symmetric, so we omitted some components in Eq. (61).
Diagonalizing the \(4\times 4\) matrix \(\hat{M}_{i}\) in Eq. (61), we obtain four mass eigenvalues, \(m_{i}^{\rm g.s.}\), \(m_{i}^{(1)}\), \(-m_{i}^{(2)}\) and \(-m_{i}^{(3)}\), and the corresponding mass eigenstates, \(B_{i}^{\rm g.s.}\), \(B_{i}^{(1)}\), \(\gamma_{5}B_{i}^{(2)}\) and \(\gamma_{5}B_{i}^{(3)}\), where \(B_{i}^{(2)}\) and \(B_{i}^{(3)}\) are negative parity baryons. As a result, the mass term is rewritten as
\[\tilde{\mathcal{L}}= -\sum_{i}\big{[}m_{i}^{\rm g.s.}\bar{B}_{i}^{\rm g.s.}B_{i}^{\rm g.s.}+m_{i}^{(1)}\bar{B}_{i}^{(1)}B_{i}^{(1)}\] \[\quad+m_{i}^{(2)}\bar{B}_{i}^{(2)}B_{i}^{(2)}+m_{i}^{(3)}\bar{B}_{i}^ {(3)}B_{i}^{(3)}\big{]}. \tag{62}\]
### Gell-Mann-Okubo mass relation for mass matrices
As seen in Sec.V.1, all interactions except the \(h_{1}\) term satisfy the Gell-Mann-Okubo mass relation. The breaking term is proportional to \(\epsilon^{2}\), where \(\epsilon\) is defined in the VEV of the meson field as \(\langle M\rangle={\rm diag}(\alpha,\alpha,\alpha+\epsilon)\). Therefore, assuming \(\epsilon\ll\alpha\), the Gell-Mann-Okubo mass relations among the mass matrices for \(N\), \(\Lambda\), \(\Sigma\), and \(\Xi\) are approximately satisfied as
\[\frac{\hat{M}_{N}+\hat{M}_{\Xi}}{2}-\frac{3\hat{M}_{\Lambda}+\hat{M}_{\Sigma}}{4}= \mathcal{O}\left(\left(\epsilon/\alpha\right)^{2}\right). \tag{63}\]
Let \(\hat{D}_{i}\) be the diagonalized matrices of \(\hat{M}_{i}\), and let \(\hat{U}_{N}\) be the unitary matrix which diagonalizes \(\hat{M}_{N}\). Then the perturbations for \(\epsilon\) of the mass eigenvalues are
\[\hat{D}_{f}=\hat{D}_{N}+{\rm diag}\left[\hat{U}_{N}^{\dagger}\frac{d\hat{M}_{f}}{d \epsilon}\bigg{|}_{\epsilon=\hat{U}_{N}}\right]\epsilon+\mathcal{O}(\epsilon^{2}) \tag{64}\]
where \(f=\Lambda,\Sigma,\Xi\), and \({\rm diag}\big{[}\hat{X}\big{]}\) is the diagonal part of \(\hat{X}\). Therefore, Eq.(63) implies that the mass eigenvalues in this model can satisfy the Gell-Mann-Okubo mass relation for small \(\epsilon\) as
\[\frac{\hat{D}_{N}+\hat{D}_{\Xi}}{2}-\frac{3\hat{D}_{\Lambda}+\hat{D}_{\Sigma}}{4}= \mathcal{O}\left(\left(\epsilon/\alpha\right)^{2}\right). \tag{65}\]
This argument implies that the Gell-Mann-Okubo mass relation is satisfied for small \(\epsilon/\alpha\) in this model. However, as seen in the previous sections, it is rather difficult
### Traces of mass matrices
In this section, we would like to note that there are non-trivial relations among traces of the mass matrices. The explicit forms of traces are shown as follows:1
Footnote 1: We note that these matrices satisfy the Gell-Mann–Okubo mass relation as
\[\frac{\mathrm{tr}(\hat{M}_{N})+\mathrm{tr}(\hat{M}_{\mathbb{Z}})}{2}=\frac{3 \mathrm{tr}(\hat{M}_{\Lambda})+\mathrm{tr}(\hat{M}_{\Sigma})}{4}\,. \tag{66}\]
\[\mathrm{tr}(\hat{M}_{\Sigma}) =(g_{1}^{\mathrm{a}}+g_{2}^{\mathrm{a}})\gamma=\mathrm{tr}(\hat{M }_{N})\frac{\gamma}{\alpha}\, \tag{67}\] \[\mathrm{tr}(\hat{M}_{\Xi}) =(g_{1}^{\mathrm{a}}+g_{2}^{\mathrm{a}})\alpha=\mathrm{tr}(\hat{M }_{N})\,\] (68) \[\mathrm{tr}(\hat{M}_{\Lambda}) =(g_{1}^{\mathrm{a}}+g_{2}^{\mathrm{a}})\frac{4\alpha-\gamma}{3} =\mathrm{tr}(\hat{M}_{N})\frac{(4\alpha-\gamma)/3}{\alpha}. \tag{69}\]
We determine the VEVs of the meson field \(M\) from the decay constants of pion and kaon as
\[\alpha=f_{\pi}\,\quad\gamma=2f_{K}-f_{\pi}. \tag{70}\]
In Table 2, the input values of \(f_{\pi}\) and \(f_{K}\) are shown together with the determined values of \(\alpha\) and \(\gamma\). As for the baryon masses, we use the values listed in Table 3 picked up from the PDG table [40]. The value of the \(\mathrm{tr}\big{[}\hat{M}_{N}\big{]}\) is determined as
\[\mathrm{tr}(\hat{M}_{N}) =(939+1440-1530-1650)\,\mathrm{MeV}\] \[=-801\,\mathrm{MeV}\,. \tag{71}\]
Then, the trace values for the other flavors are also determined as the following:
\[\mathrm{tr}(\hat{M}_{\Sigma})=\mathrm{tr}(\hat{M}_{N})\frac{\gamma}{\alpha}=1 094\,\mathrm{MeV}\, \tag{72}\]
\[\mathrm{tr}(\hat{M}_{\Xi})=-801\,\mathrm{MeV}\, \tag{73}\]
\[\mathrm{tr}(\hat{M}_{\Xi})=\mathrm{tr}(\hat{M}_{N})\frac{(4\alpha- \gamma)/3}{\alpha}=-703\,\mathrm{MeV}. \tag{74}\]
We note that not all of four masses in a given baryon flavor, except \(N\) and \(\Lambda\), are well-established as can be seen from Table 3. The trace \(\mathrm{tr}\Big{(}\hat{M}_{\Lambda}\Big{)}\) with the experimental values
\[(1116+1600-1674-1800)\,\mathrm{MeV}=-758\,\mathrm{MeV}. \tag{75}\]
This value is close to the value of \(-703\,\mathrm{MeV}\) in Eq. (74), with \(\mathrm{tr}\Big{(}\hat{M}_{N}\Big{)}\) and \(\langle M\rangle\) as inputs. The agreement seems reasonably good. Actually the octet \(\Lambda\) mass has ambiguities related to the mixing with the singlet, so the saturation of the equality including only the octet \(\Lambda\) implies the mixing between the singlet and octet is not very large.
On the other hand, the trace of \(\Sigma\) and \(\Xi\) masses is not established experimentally. Hence we need extra discussions about the usage of the trace formula. This is presented in the next section.
### Numerical results
In this subsection, we numerically fit the model parameters to known mass spectra of light baryons, and also give some predictions.
Using twelve mass values in Table 3, we fit the ten Yukawa couplings \(g_{1}^{a}\), \(g_{2}^{a}\), \(g_{1}^{s}\), \(g_{2}^{s}\), \(g_{1}^{d}\), \(g_{2}^{d}\), \(h_{1}\), \(h_{2}\), \(h_{3}\) and \(h_{4}\) by minimizing the following function:
\[f_{\mathrm{min}}=\sum_{i=1}^{12}\left(\frac{m_{i}^{\mathrm{theory}}-m_{i}^{ \mathrm{input}}}{\delta m_{i}}\right)^{2}\, \tag{76}\]
where errors \(\delta m_{i}\) are taken as \(\delta m_{i}=10\,\mathrm{MeV}\) for the ground-state baryons and \(\delta m_{i}=100\,\mathrm{MeV}\) for the excited baryons. The difference in \(\delta m_{i}\) is used since the masses for excited states generally contain more errors.
We select certain sets of parameters which provide reasonably good fit satisfying \(f_{\mathrm{min}}<1\). Since there still remain many sets of parameters, we further restrict parameters by requiring
\[\sum_{i}|\Delta_{\mathrm{GO},i}|<100\,\mathrm{MeV}\, \tag{77}\]
w
\begin{table}
\begin{tabular}{c|c} \hline \hline \(f_{\pi}\) & 93 MeV \\ \(f_{K}\) & 110 MeV \\ \(\alpha\) & \(f_{\pi}(=93\,\mathrm{MeV})\) \\ \(\gamma\) & \(2f_{K}-f_{\pi}(=127\,\mathrm{MeV})\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Physical inputs of the decay constants for pion and kaon [40], and the VEV of the meson field \(\langle M\rangle=\mathrm{diag}(\alpha,\beta,\gamma)\) with assuming isospin symmetry \(\alpha=\beta\).
\begin{table}
\begin{tabular}{c
where
\[\Delta_{{\rm GO},i}\equiv\frac{m_{i}[N]+m_{i}[\Xi]}{2}-\frac{3m_{i}[\Lambda]+m_{i }[\Sigma]}{4}\, \tag{79}\]
with \(i\) indicating the octet members. We summarize the results of fitting in Fig.15 by showing masses of baryons together with the \(\Delta_{{\rm GO},i}\). In this figure, black lines with error bars show the inputs listed in Table 3, and blue points show the best fitted values of masses and \(\Delta_{{\rm GO},i}\).
Figure 15 shows that positive baryons with the ground and first excited states are reproduced well. Meanwhile, in the negative parity channel, the mass hierarchy between \(\Sigma\) and \(\Xi\) states look at odd with the counting based on the strange quark mass. Although this hierarchy is not rejected by the experiments, we regard it as a problem from the view of naturalness. 2
Footnote 2: We note that there is a two-fold ambiguity in identifying \(\Xi\) baryons with negative parity, which we call \(\Xi(-)\) below. For some parameter choice, we found two solutions in which one of \(\Xi(-)\) is identified as an octet-member with \(N(1535)\) and another with \(N(1650)\). In Fig. 15, we show one solution which provides smaller value of \(|\Delta_{{\rm GO},i}|\).
We study the mixing structure of the ground-state nucleon, \(N(939)\). In the present analysis, the mass eigenstate for \(N(939)\) is expressed as
\[B_{N}^{\rm g.s.}=c_{\psi}\psi+c_{\chi}\chi+c_{\psi}^{\rm mir}\psi^{\rm mir}+c_ {\chi}^{\rm mir}\chi^{\rm mir}, \tag{80}\]
where \(c_{\psi}\), \(c_{\chi}\), \(c_{\psi}^{\rm mir}\) and \(c_{\chi}^{\rm mir}\) show the ratio of each wave function included in the ground-state nucleon with the normalization of \(|c_{\psi}|^{2}+|c_{\chi}|+|c_{\psi}^{\rm mir}|^{2}+|c_{\chi}^{\rm mir}|^{2}=1\). For clarifying the mixing structure, we define the "naive-mirror ratio" as
\[|c_{\psi}|^{2}+|c_{\chi}|^{2}(=1-|c_{\psi}^{\rm mir}|^{2}-|c_{\chi}^{\rm mir}|^{ 2})\, \tag{81}\]
and "\(\psi\)-\(\chi\) ratio" as
\[|c_{\psi}|^{2}+|c_{\psi}^{\rm mir}|^{2}(=1-|c_{\chi}|^{2}-|c_{\chi}^{\rm mir}|^ {2})\, \tag{82}\]
The results of mixing structure are summarized in Fig.16 and the couplings used are summarized in Table.4. It is remarkable that, in most cases, the "\(\psi\)-\(\chi\) ratio" is around 50%, which implies that the ground-state nucleon is provided by the maximally mixed state of \((3_{L}\,,\,\bar{3}_{R})+(\bar{3}_{L}\,,\,3_{R})\) and \((1_{L}\,,\,8_{R})+(8_{L}\,,\,1_{R})\) representations. We still note that, for \(m_{0}=100\,\)MeV, there are some solutions for which \((1_{L}\,,\,8_{R})+(8_{L}\,,\,1_{R})\) representation is dominated.
## VII Summary and discussion
We proposed a systematic way to construct models of baryons based on the chiral U(3)\({}_{L}\times\)U(3)\({}_{R}\) symmetry. The symmetry constraints are far stronger than in models assuming only the SU(3) flavor symmetry, and the chiral Yukawa interactions appear in very specific ways. We assume that chiral representations \((3_{L}\,,\,6_{R})\) and \((6_{L}\,,\,3_{R})\) with a bad diquark are heavier than \((3_{L}\,,\,\bar{3}_{R})\), \((\bar{3}_{L}\,,\,3_{R})\), \((1_{L}\,,\,8_{R})\) and \((8_{L}\,,\,1_{R})\). We showed that the inclusion of the first order Yukawa interactions for the four representations satisfies the Gell-Mann-Okubo mass relation, but cannot reproduce the mass ordering of octet members of the ground states; the quark graphs convincingly explain why, at the first order, the strange quark mass does not appear to reproduce the correct mass ordering. Then, we expanded our systematic analyses to the second-order Yukawa interactions, and showed that the mass ordering problem is cured for the ground state of positive parity baryons. The state is found to be a maximally mixed state of \((3_{L}\,,\,\bar{3}_{R})+(\bar{3}_{L}\,,\,3_{R})\) and \((1_{L}\,,\,8_{R})+(8_{L}\,,\,1_{R})\) representations. The results imply that the quark diagrams are very useful to constrain the possible types of Yukawa interactions.
In the present analyses up to the second order, while the mass ordering in the positive parity ground state is reproduced correctly, in the negative parity we found the unnatural mass ordering of the ground state; the mass of \(\Sigma\) including a single strange quark is heavier than \(\Xi\) with two strange quarks. Although such ordering is not fully excluded because these two negative parity states have not been confirmed experimentally, we feel unlikely that \(\Sigma\) is heavier than \(\Xi\). After extensive parameter searches, we have reached somewhat unexpected conclusion that the second order Yukawa interactions are not sufficient. This sort of difficulties has not been manifest within the analyses for two-flavor models. Further studies are mandatory.
This work is partially motivated from the hope to saturate the \(U(3)_{\rm L}\times U(3)_{\rm R}\) dynamics of baryons within a few chiral representations, as done for mesons \((\pi,\sigma,\rho,a_{1})\) by Weinberg. We introduced a hierarchy based on good and bad diquarks to pick up chiral representations which are supposed to be important, but our analyses indicate the necessity to include at least the second order of Yukawa interactions; the descriptions based on \(U(3)_{\rm L}\times U(3)_{\rm R}\) chiral representations are much more involved than those
\begin{table}
\begin{tabular}{c||c|c|c|c} & \multicolumn{3}{c||}{Mass inputs for octet members [MeV]} \\ \hline \hline \(J^{P}\) & \(N\) & \(\Lambda\) & \(\Sigma\) & \(\Xi\) \\ \hline \(m_{1}:1/2^{+}\)(G.S.) & \(N(939)\): 939 & \(\Lambda(1116)\): 1116 & \(\Sigma(1193)\): 1193 & \(\Xi(1318)\): 1318 \\ \(m_{2}:1/2^{+}\) & \(N(1440)\): 1440 & \(\Lambda(1600)\): 1600 & \(\Sigma(1660)\): 1660 & \(\Xi(?)\): \\ \(m_{3}:1/2^{-}\) & \(N(1535)\): 1530 & \(\Lambda(1670)\): 1674 & \(\Sigma(?)\): & \(\Xi(?)\): \\ \(m_{4}:1/2^{-}\) & \(N(1650)\): 1650 & \(\Lambda(1800)\): 1800 & \(\Sigma(1750)\): 1750 & \(\Xi(?)\): \\ \hline \end{tabular}
\end{table}
Table 3: Physical inputs for the four octet masses.
requiring only the \(SU(3)\)-flavor symmetry. Clearly our analyses need improvement. Several possibilities are in order:
i) It is possible that the classification of chiral representations based on good and bad diquarks is not very effective. If this is the case, we need to explicitly include several additional chiral representations, such as \((3_{L}\,,\,6_{R})\) and \((6_{L}\,,\,3_{R})\). The necessity to include baryons with bad diquarks raises questions whether we should manifestly include the decuplet baryons such as \(\Delta\) and the interactions with the octet baryons. This would greatly increase the number of couplings at the tree level. On the other hand, it is possible that including massive resonances at the leading order reduces the importance of Yukawa interactions at higher orders.
ii) Another possibility is that the linear realization,
\begin{table}
\begin{tabular}{l||c c|c c c|c c} & 100-A & 100-B & 800-A & 800-B & 800-C & 1400-A & 1400-B \\ \hline \hline \(m_{0}\) [MeV] & 100 & 100 & 800 & 800 & 800 & 1400 & 1400 \\ \hline \(g_{1}^{a}\) (\(\psi\)-\(\psi\)) & -12.63 & -8.77 & -3.98 & -8.7 & 3.69 & -1.86 & -3.59 \\ \(g_{2}^{a}\) (\(\psi^{\rm mir}\)-\(\psi^{\rm mir}\)) & 3.35 & -0.04 & -5.26 & -0.75 & -12.46 & -7.13 & -5.5 \\ \(g_{1}^{s}\) (\(\psi\)-\(\chi\)) & 7.57 & 9.56 & 2.12 & 7.95 & 7.44 & 5.79 & 0.57 \\ \(g_{2}^{s}\) (\(\psi^{\rm mir}\)-\(\chi^{\rm mir}\)) & -10.97 & -12.44 & 3.78 & 11.19 & -8.95 & 6.88 & 0.82 \\ \(g_{1}^{d}\) (\(\psi\)-\(\chi^{\rm mir}\)) & -3.83 & 0.01 & 7.09 & -1.04 & -4.78 & -6.14 & -6.58 \\ \(g_{2}^{d}\) (\(\chi\)-\(\psi^{\rm mir}\)) & -5.1 & 0.66 & -6.17 & -1.47 & -4.15 & 6.19 & 6.89 \\ \(h_{1}\) (\(\psi\)-\(\psi^{\rm mir}\)) & 5.88 & -3.38 & 7.35 & 1.33 & 1.17 & 2.35 & 4.25 \\ \(h_{2}\) (\(\psi\)-\(\psi^{\rm mir}\)) & -3.73 & 5.61 & -5.64 & -9.51 & -4.01 & -7.24 & -7.39 \\ \(h_{3}\) (\(\chi\)-\(\chi^{\rm mir}\)) & -1.6 & 3.56 & -1.65 & 1.51 & 0.78 & -2.42 & -2.36 \\ \(h_{4}\) (\(\chi\)-\(\chi^{\rm mir}\)) & -2.85 & 1.4 & 0.33 & -0.18 & -0.53 & -3.7 & -2.71 \\ \hline naive ratio of \(N(939)\) & 0.62 & 0.79 & 0.57 & 0.74 & 0.98 & 0.72 & 0.54 \\ \(\psi\) ratio of \(N(939)\) & 0.19 & 0.51 & 0.45 & 0.52 & 0.65 & 0.61 & 0.64 \\ \end{tabular}
\end{table}
Table 4: Some sample solutions, which correspond to the indicated points in Fig.16. \(g_{1,2}^{\rm s}\) or \(g_{1,2}^{\rm d}\) have values of around 5-10, which is not small. This implies the mixing between \(\psi^{\rm(mir)}\) and \(\chi^{\rm(mir)}\) is not small.
Figure 15: Numerical results of the four octet masses for the chiral invariant mass \(m_{0}=800\,{\rm MeV}\). They almost satisfy the Gell-Mann–Okubo mass relation (\(\Delta_{\rm GO}<100\,{\rm MeV}\)) and well reproduce the physical inputs. For other values of \(m_{0}\), there are solutions which satisfy the same conditions. In this model, the \(\Sigma\) baryon in the octet member of \(N(1535)\) becomes heavier than the others.
even after superposing many chiral representations, is not sufficient to explain baryons in vacuum. If we are indeed required to include infinite number of the Nambu-Goldstone (NG) bosons around baryons, the non-linear realization is a more natural choice for baryons in vacuum, although the description near the chiral restoration should become more complicated.
iii) The extreme limit of infinite number of NG bosons around a baryon leads to the description of a baryon in the chiral soliton models. Here a coherent pion cloud represents the baryon charge at the core in the same way as electric fields around an electron allow us to infer the existence of the electric charge. If including many NG bosons are indeed essential, the physical baryons would be hardly saturated by a few chiral representations.
###### Acknowledgements.
The work of T.M., B.G., and M.H. was supported in part by JSPS KAKENHI Grant No. 20K03927. T.M. and B.G. were also supported by JST SPRING, Grant No. JPMJSP2125. T.M. and B.G. would like to take this opportunity to thank the "Interdisciplinary Frontier Next-Generation Researcher Program of the Tokai Higher Education and Research System." T.K. was supported by JSPS KAKENHI Grant No. 23K03377 and by the Graduate Program on Physics for the Universe (GP-PU) at Tohoku university.
|
2308.03872 | Leading all-loop quantum contribution to the effective potential in the
inflationary cosmology | In this paper, we have constructed quantum effective potentials and used them
to study slow-roll inflationary cosmology. We derived the generalised RG
equation for the effective potential in the leading logarithmic approximation
and applied it to evaluate the potentials of the $T^2$ and $T^4$-models, which
are often used in modern models of slow-roll inflation. We found that while the
one-loop correction strongly affects the potential, breaking its original
symmetry, the contribution of higher loops smoothes the behaviour of the
potential. However, unlike the $\phi^4$-case, we found that the effective
potentials preserve spontaneous symmetry breaking when summing all the leading
corrections. We calculated the spectral indices $n_s$ and $r$ for the effective
potentials of both models and found that they are consistent with the
observational data for a wide range of parameters of the models. | D. I. Kazakov, R. M. Iakhibbaev, D. M. Tolkachev | 2023-08-07T18:47:17Z | http://arxiv.org/abs/2308.03872v2 | ###### Abstract
###### Abstract
In this paper, we have constructed quantum effective potentials and used them to study slow-roll inflationary cosmology. We derived the generalised RG equation for the effective potential in the leading logarithmic approximation and applied it to evaluate the potentials of the \(T^{2}\) and \(T^{4}\)-models, which are often used in modern models of slow-roll inflation. We found that while the one-loop correction strongly affects the potential, breaking its original symmetry, the contribution of higher loops smoothes the behaviour of the potential. However, unlike the \(\phi^{4}\)-case, we found that the effective potentials preserve spontaneous symmetry breaking when summing all the leading corrections. We calculated the spectral indices \(n_{s}\) and \(r\) for the effective potentials of both models and found that they are consistent with the observational data for a wide range of parameters of the models.
**Leading all-loop quantum contribution to the effective potential in the inflationary**
**cosmology**
**D.I. Kazakov\({}^{1,2}\), R. M. Iakhibbaev\({}^{1}\) and D.M. Tolkachev\({}^{1,3}\)**
\({}^{1}\)_Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna, Russia, 141980_
\({}^{2}\)_Moscow Institute of Physics and Technology, Dolgoprudny, Russia, 141701_
\({}^{3}\)_Stepanov Institute of Physics, Minsk, Belarus, 220072_
## Introduction
The inflationary model based on the assumption of the existence of an accelerated expansion stage in the early Universe, became one of the foundations of modern cosmology [1, 2]. This area is rapidly developing because it explains the observed flatness, homogeneity and isotropy of the observable Universe. The advantage of the inflationary theory is also related to its successful explanation of the peculiarities of the CMB spectrum [3].
The most common way to realise the accelerated expansion is to use a scalar field (inflaton or inflationary field) with self-interaction and to consider the solution of the field equations in slow-roll regime. Obtaining an inflaton potential is a rather difficult problem, since at present the observational data give a wide scope for consideration of a large number of potentials [4]. Recently, \(\alpha\)-attractor type models in inflationary cosmology have aroused considerable interest, mainly because they are justified in supergravity and
satisfy observational data. [5, 6, 7]. Therefore, their (or their possible modifications) detailed study in the context of inflation is quite an interesting task [6].
In quantum theory, the classical inflaton potential acquires quantum loop corrections. Such corrections, first considered in the paper by Coleman and Weinberg [8], can change the form of the potential and lead to the appearance of a new vacuum, which means spontaneous symmetry breaking takes place. The inflationary model based on the \(\phi^{4}\) theory with a one-loop quantum correction was considered in [9]. Within the framework of this model, the old theory of inflation with tunneling [1] and then the new inflation in models with slow rolling [9] was formulated. Note, however, that taking into account only one-loop corrections is not always justified. As was shown in the original work [8], summing the leading terms with the renormalization group significantly changes the behavior of the potential. Hence, the construction of an effective potential that takes into account all leading corrections is a current task.
However, the all-loop effective potentials are not often used in cosmology. Existing literature (see e.g. [10, 11]) deals with effective potentials in renormalisable cases when the formalism of the usual renormalisation group formalism is applicable. At the same time, the most popular cosmological potentials from the point of view of quantum field theory, are non-renormalizable. This leads to all the problems typical of non - renormalizable theories such as infinite arbitrariness in normalisation of counter terms, absence of standard renormalization group (RG) equations for summation of leading corrections, scheme dependence of results of calculations, etc. However, if one considers only the leading quantum corrections (the leading logarithmic approximation), these problems can be avoided. Indeed, the leading divergences (they are also the leading orders of \(\log\phi^{2}\)) are universal, they do not depend on the arbitrariness of the subtraction procedure (they are not scheme-dependent). Moreover, this property is independent of the renormalizability or non-renormalizability of the theory. Thus, one can try to calculate the main quantum corrections to the effective potential and apply the obtained expression for the description of inflation, ignoring the main problems of non-renormalizable theories so far.
In the present paper we take into account the leading quantum corrections to the classical potential. Considering an arbitrary scalar potential, we derive a generalised RG equation which sums all leading logarithmic corrections to the potential. This way, we derive an effective potential and based on it we analyse the cosmological parameters that are obtained from this modified potential. In the following, we consider the potential for the so-called \(T\)-model [7], which can be represented as
\[V=\tanh^{2n}(\phi/(\sqrt{6\alpha M_{Pl}^{2}})), \tag{1}\]
where \(\phi\) is the inflaton field, \(M_{Pl}\) is Planck's mass, but \(\alpha\) is a free parameter, \(n=1,2\). A qualitative analysis of the behaviour of quantum corrections to such potentials was carried out in [12], and in this paper we will be able to check their conclusions about flat asymptotic behaviour of the one-loop corrected potential.
Based on the obtained numerical solutions of the RG equation, effective potentials and cosmological parameters, we can understand at under parameters the effective potential satisfies the observational data.
## 1 Single-field model of slow-roll inflationary scenario: setup and observables
The starting point in inflationary cosmology with a single field is the scalar-tensor theory, with the action that can be presented as [4, 13]:
\[S=\int d^{4}x\sqrt{-g}\left[\frac{M_{pl}^{2}}{2}R+\frac{1}{2}g^{\mu\nu}\partial _{\mu}\phi\partial_{\nu}\phi-V(\phi)\right], \tag{2}\]
where \(R\) is a scalar curvature, \(M_{Pl}=8\pi G\) is Planck's mass, \(\phi\) is the inflation scalar field, and \(V(\phi)\) is self-interaction term. In the Friedman-Lemetre-Robertson-Walker metric, the equations of motion for the above model can be written in the form [14]
\[3M_{pl}^{2}H^{2}=\frac{\dot{\phi}^{2}}{2}+V, \tag{3}\]
\[-2M_{pl}^{2}\dot{H}=\phi^{2}, \tag{4}\]
\[\ddot{\phi}+3H\dot{\phi}+\frac{\partial V}{\partial\phi}=0. \tag{5}\]
Here \(H=\dot{a}/a\) is Hubble's constant and \(a(t)\) is a scale factor.
To describe the evolution of the background, it is useful to introduce cosmological parameters of the Hubble flow:
\[\epsilon_{n}=\frac{d\log\epsilon_{n-1}}{dN_{e}}, \tag{6}\]
here \(\epsilon_{0}=H_{0}/H\), with \(H_{0}\) being initial Hubble's constant, \(N_{e}\) is the ratio of the scale factor at the end of inflation to the factor at its beginning: \(N_{e}=\ln(a/a_{0})\) (also \(N_{e}\) is called the number of \(e\)-foldings). Cosmological parameters are responsible for the nature of inflation and reflect the shape of the inflation potential. Inflation is an accelerated expansion, and the condition for such an expansion can be written as \(\epsilon_{1}<1\), and the end of inflation corresponds to \(\epsilon_{1}=1\). Under the condition of correctness of the slow roll approximation, we can consider that \(\epsilon_{n}\ll 1\), so that the following estimation of the cosmological parameter becomes valid:
\[\epsilon_{1}\simeq\frac{1}{2}\left(\frac{V^{\prime}}{V}\right)^{2}=\frac{1}{2 M_{pl}}\left(\frac{d\phi}{dN_{e}}\right)^{2}. \tag{7}\]
The condition \(\epsilon=1\) with the help of (7) can be rewritten in terms of the Klein-Gordon-Fock equation [4]:
\[\frac{1}{M_{pl}}\left(\frac{d\phi}{dN_{e}}\right)^{2}=\frac{d\ln V}{dN_{e}}. \tag{8}\]
This equation can be integrated to give an explicit expression for the number of \(e\)-folds [4]:
\[N_{e}-N_{0}=-\frac{1}{M_{pl}^{2}}\int_{\phi_{0}}^{\phi}\frac{V(x)}{V_{x}(x)}dx. \tag{9}\]
Based on observational evidence, we can also say that the lower limit for expansion of the Universe should be a number of 50 to 60 \(e\)-folds. The observable characteristics of inflation are the spectral indices: CMB tilt of scalar perturbations [4, 7]
\[n_{s}=1-2\epsilon_{1}+\epsilon_{2}, \tag{10}\]
the tilt of tensor perturbation [4]:
\[n_{t}=-2\epsilon_{1}. \tag{11}\]
The CMB tensor-to-scalar ratio is [4]:
\[r=16\epsilon_{1}. \tag{12}\]
According to the relic radiation data obtained from the BICEP/PLANCK [15, 16], the observed values for these quantities are as follows:
\[r<0.036,\ n_{s}=0.9649\pm 0.0042. \tag{13}\]
The \(n_{s}\)-\(r\) plot generally allows one to discriminate models and compare them with observational data.
In this paper we consider the T-model potential, which we will rewrite as follows [6, 17]
\[V(\phi)=g\tanh^{2n}\left(\phi\omega/\sqrt{6\alpha}\right), \tag{14}\]
where \(\omega=M_{Pl}^{-1}\), \(g\) is an appropriate inflation scale, and \(\alpha\) we put equal to unity. In the following, we will restrict ourselves to two cases of T-model: \(n=1\) (for convenience, we will call it the \(T^{2}\)-model) and \(n=2\) (\(T^{4}\)-model). According to the above (8), above for cosmological parameters, one can easily find the exact analytical formula for the dependence of \(\phi\) on \(N_{e}\) for T-models [4]:
\[\phi^{(n)}(N_{e})=\frac{\sqrt{6}}{2\omega}\operatorname{arccosh}\left(\sqrt{1+ 4/3n^{2}}+4/3\ n\Delta N\right), \tag{15}\]
where \(\Delta N=N_{f}-N_{e}\), and \(N_{f}\) can be found from the inflation ending condition \(\epsilon_{1}=1\). Due to the simple form of the \(\phi(N)\) dependence the corresponding spectral parameters of the Hubble flow can be obtained. Thus, the T-model is quite convenient, since all parameters in it are represented in analytical form. In a general form, the functions and spectral parameters for T-models in arbitrary form are presented in ref. [4].
Effective potential in scalar field theory with
arbitrary potential
The effective potential is defined as part of the effective action without derivatives. The direct way to find the effective potential \(V_{eff}(\phi)\) by perturbation theory is to calculate the sum of one-particle irreducible vacuum diagrams obtained using Feynman's rules derived from the shifted action \(S[\phi+\widehat{\phi}]\), where \(\phi\) is the classical field obeying the equation of motion and \(\widehat{\phi}(x)\) is the quantum field over which integration is performed [18]. This means that one has to consider the 1PI vacuum diagrams with propagators containing an infinite number of insertions \(v_{2}(\phi)\equiv\frac{d^{2}V(\phi)}{d\phi^{2}}\), which act like a mass depending on the field \(\phi\): \(m^{2}(\phi)=gv_{2}(\phi)\). The vertices are also obtained from the expansion of the potential \(V(\phi+\widehat{\phi})\) by the quantum field \(\widehat{\phi}\). After that, the effective potential is constructed as a perturbation expansion by the coupling constant \(g\)
\[V_{eff}=g\sum_{n=0}^{\infty}(-g)^{n}V_{n}, \tag{16}\]
where \(V_{0}=V\) is the initial classical potential.
We choose a dimensional regularisation to control UV divergence in loop integrals, taking the dimension of spacetime \(D=4-2\epsilon\). Proceeding in this way, in the one-loop case, we obtain the quantum correction [19]:
\[V_{1}=\frac{1}{16\pi^{2}}\frac{1}{4}\frac{v_{2}^{2}}{\epsilon}\left(\frac{ \mu^{2}}{m^{2}}\right)^{\epsilon}\rightarrow\frac{1}{16\pi^{2}}\frac{v_{2}^{2 }}{4}\left(\frac{1}{\epsilon}+\log\frac{\mu^{2}}{m^{2}}\right),\quad m^{2}=gv_ {2}(\phi). \tag{17}\]
Further, the singular part \(\sim 1/\epsilon\) is removed by introducing of the UV counterterms, and the finite part \(\sim\log(gv_{2}(\phi))\) gives the contribution to the effective potential. Note that the coefficient of the pole \(1/\epsilon\) exactly coincides with the coefficient of the logarithm. This property is also preserved in higher loops: the coefficient of the leading pole \(1/\epsilon^{n}\) coincides with the coefficient of the leading logarithm \(\log^{n}(gv_{2}(\phi))\). Hence, the task is to find the coefficients of the leading poles, which, in turn, due to the special features of the \({\cal R}\)-operation is determined by one-loop diagrams [19].
The next step is to obtain recurrence relations connecting the leading divergences in subsequent loops. They obviously follow from the local structure of \({\cal R}\)-operations [19]. Denoting the singular part of the effective potential (coefficient at the leading pole \(1/\epsilon^{n}\)) in the n-th order of perturbation theory by \(\Delta V_{n}\), one can obtain the following recurrence relation [19]
\[n\Delta V_{n}=\frac{1}{4}\sum_{k=0}^{n-1}D_{2}\Delta V_{k}D_{2}\Delta V_{n-1- k},\ \ \ n\geq 1,\ \ \Delta V_{0}=V_{0}, \tag{18}\]
where \(D_{2}\) is the second derivative by the field \(\phi\). This recurrence relation allows one to compute all leading divergences \(\Delta V_{n}\) for an arbitrary potential \(V\) in an algebraic way without computing diagrams.
To sum up the leading divergences (or, what is the same, the leading logarithms in the finite part), it is convenient to pass to the differential equation for the sum of series
\[\Sigma(z,\phi)=\sum_{n=0}^{\infty}(-z)^{n}\Delta V_{n}(\phi), \tag{19}\]
where \(z=g/\epsilon\). Indeed, multiplying Eq.(18) by the factor \((-z)^{n}\) and summing over n from \(n=2\) to \(\infty\), we obtain the differential equation for the function \(\Sigma(z,\phi)\),
\[\frac{d\Sigma}{dz}=-\frac{1}{4}(D_{2}\Sigma)^{2},\ \ \Sigma(0,\phi)=V_{0}(\phi). \tag{20}\]
This is the desired generalised RG equation for the effective potential in the approximation of leading logarithms. For renormalizable, interaction it reduces to the usual RG equation [19]. To receive an effective potential in the solution of this equation, one should replace the pole by \(\epsilon\) with the corresponding logarithm:
\[V_{eff}(g,\phi)=\Sigma(z,\phi)|_{z\rightarrow-\frac{g}{16\pi^{2}}\log gv_{2}/ \mu^{2}}. \tag{21}\]
Recall that here \(v_{2}(\phi)\equiv\frac{d^{2}V_{0}(\phi)}{d\phi^{2}}\).
Note that Eq.(20) is a partial derivative equation, and the function \(\Sigma(z,\phi)\) depends on two variables: \(z\) and \(\phi\). In general, it is usually more convenient to reduce this function to dimensionless variables. In some special cases, (20) can be reduced to an ordinary differential equation, but still one has to use numerical methods to solve it [19].
## 3 Inflation and effective potentials
### \(T^{2}\)-model
Consider the theory with the potential (14) for \(n=1\), which corresponds to the \(T^{2}\)-model:
\[V=g\tanh^{2}\left(\frac{\phi\omega}{\sqrt{6}}\right). \tag{22}\]
Further, to simplify equation (20) it is convenient to represent the function \(\Sigma\) in dimensionless variables \(x=z\omega^{4}\) and \(y=\tanh^{2}(\phi\omega/\sqrt{6})\), because the loop decomposition of this function can be represented as polynomials on tangents. Then
\[\Sigma(z\omega^{4},\tanh^{2}(\phi\omega/\sqrt{6}))\equiv S(x,y).\]
The obtained function is in some way an analogue of an arbitrary function \(F(\tanh(\frac{\phi}{\sqrt{6}}))\) in the theory of conformal chaotic inflation [7]. But in our case this function is restricted by the RG equation (20) and it contains an additional regularisation parameter \(\mu^{2}\).
Renaming the variables and functions in the original equation (20), the generalised RG-equation for such a potential can be written as:
\[\frac{\partial}{\partial x}S(x,y)=-\frac{1}{4}\left(\omega^{2}\frac{\partial^{2} }{\partial\phi^{2}}S(x,y(\phi))\right)^{2}. \tag{23}\]
Such change of variables considerably simplifies the form of the equation, so that for the \(T^{2}\)-model the generalised RG-equation, after the transformation from the derivative over \(\phi\) to the derivative over \(y\), has the following form:
\[\frac{1}{36}(y-1)^{2}\left((1-3y)S_{y}-2(y-1)yS_{yy}\right)^{2}=-S_{x}. \tag{24}\]
For brevity, the lower indices are introduced here to denote the corresponding derivative of the function \(S\). The initial and boundary conditions for the equation can be defined as:
\[S(0,y)=y,\ S(x,1)=1,\ S_{x}(x,1)=0, \tag{25}\]
because the potential in the limit \(g=0\) must coincide with the classical potential, as stated in (20) and as \(\phi\rightarrow\pm\infty\) must reach a constant. The exact solution of Eq. (24) cannot be represented analytically: it is a complex two-dimensional surface, and to find an effective potential, we have to perform a substitution \(z\rightarrow-\frac{g}{16\pi^{2}}\log{gv_{2}}/{\mu^{2}}\), and in the case of the \(T^{2}\)-model [19]
\[v_{2}=-\frac{1}{3}\omega^{2}\left(\cosh\left(\sqrt{\frac{2}{3}}\omega\phi \right)-2\right)\mbox{sech}^{4}\left(\frac{\omega\phi}{\sqrt{6}}\right). \tag{26}\]
That is, this substitution is equivalent to the choice of a particular curve on a two-dimensional surface given by the solution of Eq. (24). The numerical solution of the equation can be carried out using Euler's backward differentiation method [20] or built-in methods in Wolfram Mathematica.
The initial potential has flat asymptotics, so it is often used in models of chaotic inflation, as it was indicated earlier due to the attractor properties of solutions (solutions of Friedmann equations do not depend on initial conditions and shrink into a stable limit cycle in the phase space). The solutions of the generalised RG-equation have the same asymptotics but can modify the behaviour of the potential in inflationary cosmological models leading to different slow roll scenarios and creating a different dynamical picture of solutions. More specifically, one can study the form of the total quantum effective potential at different parameters \(g,\mu^{2}\). And since they are related, it is possible to restrict them by modifying one of them, say, fixing 1\(g\ \sim 1\), and changing \(\mu\) to satisfy the conditions of
convergence. The convergence conditions of the original series for the potential (19) are represented by the following expressions [19]:
\[\log g\frac{\omega^{2}(2-\cosh(\phi\omega))}{\mu^{2}\cosh^{4}(\phi\omega)}>1,\ \frac{g \omega^{4}}{16\pi^{2}}<1, \tag{27}\]
they set the bounds of applicability of the approximation of the leading logs for the effective potential.
We compare the behaviour of the resummed potential with a one-loop contribution which has the following form
\[V_{1loop}(\phi)=V(\phi)+\frac{g^{2}}{16\pi^{2}}\frac{1}{4}v_{2}^{2}\log\left( \frac{gv_{2}}{\mu^{2}}\right). \tag{28}\]
In fact, the \(T^{2}\)-model (\(T^{4}\)-model either) is called an attractor because the phase portrait of the Friedman equations (3-5) represents a limit cycle (or attractor), i.e., the
Figure 1: Comparison of the classical potential (red dashed line), the one-loop correction (green solid line), and the RG effective potential (blue solid line) at \(g\sim 1,\ \mu^{2}\sim 10^{-2}\) (upper plot) and \(g\sim 1,\ \mu^{2}\sim 10\) (lower plot)
general dynamics of this system does not depend on the initial conditions. So introducing additional corrections to this potential can affect the dynamical properties, and there can appear the Hopf bifurcation picture [21].
From Fig.1 one can see that the one-loop potential strongly modifies the classical part of \(V(\phi)\) causing a maximum (or minimum at \(\mu^{2}>1\)) at \(\phi=0\), while the all-loop quantum effective potential smoothes the perturbation from the first loop contribution. One can conclude from this observation that the quantum effective potential is more stable in contrast to the one-loop-corrected potential. We also note that for small \(\mu\), a situation may occur where the contributions of all loops are unable to flatten the maximum near \(\phi=0\).
The relevant potentials at different \(\mu\) are shown in Figure 2. Spontaneous symmetry breaking with a maximum near zero is observed and, starting from certain extremely small \(\mu^{2}\sim 10^{-80}\), additional minima can arise which correspond to a false vacuum.
The varying behaviour of the potential can find its application in slow rolling. It is convenient to depict the whole set of cosmological parameters to demonstrate how inflation changes for different values of \(\mu^{2}\). Using the formulas from the first chapter and numerical calculations, we plot the value of \(\epsilon_{1}\) as a function of \(\phi\). Based on these plots, one can conclude that the occurrence of extra minima for some values of \(\mu^{2}\) does not strongly affect the slow roll behaviour, although, as one can see, the smaller \(\mu^{2}\), the shorter is the inflation (e.g., for the classical \(T^{2}\)-model the inflation ends at \(\phi_{end}\omega\simeq 1.208\), while for \(\mu^{2}\sim 10^{-3}\) the inflation ends at \(\phi_{end}\omega\simeq 1.061\)). Thus, the slow rolling is realised within the observational data for rather wide bounds on \(\mu^{2}\).
Interestingly, for the all-loop effective potential with \(\mu^{2}\leq 10^{-5}\), the situation arises when slow-roll inflation can become eternal (for the one-loop potential perpetual inflation occurs already with \(\mu^{2}\leq 10^{-1}\)) [22, 23]. This is due to spontaneous symmetry breaking and the fact that the maximum of the effective potential is less than unity in our notation. That is, the behaviour of the potential deteriorates when \(\mu^{2}\) decreases, since false vacua emerge. The existence of false vacua leads to the problem of phase transitions in the early Universe [24], although on the other hand, these features can change the power spectrum of gravitational waves at certain scales, which may be useful for explaining primordial black holes production [25]. The question of how exactly tunnelling through potential barriers can occur was studied in the literature [24, 26], and is beyond the scope of our consideration (as well as the analysis of the postinflationary reheating phase in these models). A qualitative study of the reheating phase for corrected \(T\)-model potential was made in [12].
### \(T^{4}\)-model
Similar to the case of the \(T^{2}\)-model, we can consider \(T^{4}\)-model with the potential
\[V=g\tanh^{4}(\frac{\phi\omega}{\sqrt{6}}). \tag{29}\]
For this potential, one can also write down a generalised RG equation with the same initial and boundary conditions and then construct the effective potential. The initial differential equation (20) can be written in the form
\[\frac{\partial}{\partial z}\Sigma(z\omega^{4},\tanh^{4}(\phi\omega/\sqrt{6})) =-\frac{1}{4}\left(\frac{\partial^{2}}{\partial\phi^{2}}\Sigma(z\omega^{4}, \tanh^{4}(\phi\omega/\sqrt{6}))\right)^{2} \tag{30}\]
Figure 3: Plots for the Hubble flow parameter \(\epsilon_{1}\) In Fig. 2(a) the blue line — \(\mu^{2}\sim 10^{-2}\), green line — \(\mu^{2}\sim 10^{-8}\), yellow line — \(\mu^{2}\sim 10^{-10}\), at the 2(b) purple line \(\mu^{2}\sim 10^{-16}\), orange one \(\mu^{2}\sim 10^{-56}\). The red dashed line corresponds to the classical potential and the grey line indicates the end of inflation
After changing the variables to dimensionless \(x=z\omega^{4}\), \(y=\tanh^{4}(\phi\omega/\sqrt{6})\) and the desired function as \(\Sigma(z\omega^{4},\tanh^{4}(\phi\omega/\sqrt{6}))=S(x,y)\) the differential generalised RG-equation, after passing from the derivative of \(\phi\) to the derivative on \(y\), will have the following form:
\[\frac{1}{9}\left(y^{1/2}-1\right)^{2}y\left(\left(5y^{1/2}-3\right)S_{y}+4y \left(y^{1/2}-1\right)S_{yy}\right)^{2}=-S_{x}, \tag{31}\]
with the boundary and initial conditions in the same form as for the \(T^{2}\)-model:
\[S(0,y)=y,\ S(x,1)=1,\ S_{x}(x,1)=0. \tag{32}\]
Equation (31) looks more complicated than in the case of the \(T^{2}\) model but it is numerically analysable. To find an effective potential from (31), we still need to substitute \(z\rightarrow-\frac{g}{16\pi^{2}}\log{gv_{2}}/\mu^{2}\), and in the case of the \(T^{4}\) model
\[v_{2}=-\frac{2}{3}\omega^{2}\left(\cosh\left(\sqrt{\frac{2}{3}}\omega\phi \right)-4\right)\tanh^{2}\left(\frac{\omega\phi}{\sqrt{6}}\right)\text{sech}^ {4}\left(\frac{\omega\phi}{\sqrt{6}}\right). \tag{33}\]
Figure 4 shows a comparison of the classical potential, the one-loop-corrected potential, with the all loop effective potential for various \(\mu^{2}\). Here we also show spontaneous symmetry breaking for which the contribution from the first loop is responsible, and still the quantum effective potential is characterised by a smoother behaviour. Unlike the previous case the effective potential for the \(T^{4}\)-model turns to zero at the minimum, though from a certain value of \(\mu^{2}\) there is also symmetry breaking leading to the appearance of the maximum of the effective potential at \(\phi=0\), which has the same meaning of instability
Figure 4: Classical potential \(\tanh^{4}\) (red dashed line), single-loop (green solid line) and full effective (blue solid line) potentials at \(\mu^{2}\sim 10^{-80}\), the purple solid line corresponds to the quantum effective potential at \(\mu^{2}\sim 10^{-80}\)
as in the case of the \(T^{2}\)-model. Also as can be seen from the figure 4, the \(T^{4}\) potential is much more stable to quantum corrections and almost does not change in a large range of values of the \(\mu^{2}\) parameter. Thus, for estimations of the all-loop effective potential, one can use the classical \(T^{4}\) potential with a one-loop quantum correction.
The Fig. 5 shows the cosmological parameter \(\epsilon_{1}\) for several effective potentials. It can be seen that irrespective of the value of the parameter \(\mu^{2}\), the slow rolling is completed (if we do not speak about too small values of the parameter \(\mu^{2}\), where the occurrence of false vacua occurs).
Equations for the effective potentials of \(T^{2n}\)-models can be derived, but it turns out that they are not fundamentally different from the case of the full RG-summed potential of the \(T^{4}\)-model.
## Conclusion
In this paper, we have constructed quantum effective potentials and used them to study inflationary cosmology with slow roll. We derived the generalised RG equation for the effective potential in the leading logarithmic approximation and applied it to evaluate the potentials of the \(T^{2}\) and \(T^{4}\)-models, which are often used in modern cosmology of slow roll inflation. We found that while the one-loop correction strongly affects the potential breaking its original symmetry, the contribution of higher loops smoothes the behaviour of the potential. However, unlike the \(\phi^{4}\)-case [8], we found that the effective potentials
Figure 5: Plots for the cosmological parameter \(\epsilon_{1}\). The red dashed line corresponds to the classical potential, the blue line all-loop effective potential — \(\mu^{2}\sim 10^{-2}\), the green \(\mu^{2}\sim 10^{-10}\), the purple \(\mu^{2}\sim 10^{-16}\), the orange \(\mu^{2}\sim 10^{-56}\). The grey line represents the end of inflation
preserve spontaneous symmetry breaking when summing all the leading corrections (1). This may lead to effects related to the decay of the metastable vacuum.
As shown in [12], quantum one-loop corrections do not deviate the asymptotic values of the effective potential at large fields and can change the form of the potential only at small values of the inflaton field. However, taking into account all leading logarithms by means of the RG-formalism (or generalised RG) may lead to a significant change of the asymptotics [8, 19]. In the case of \(T\)-models of \(\alpha\)-attractors, we have found that the asymptotics of the potential remains unchanged when all-loop corrections are taken into account, thus confirming the conclusions of [12].
In this work, we also constructed and studied the Hubble flow parameters reflecting the basic properties of slow-roll inflation. It was found that for many values of the dimensional regularization parameter \(\mu\) the inflation can become eternal. In some region of parameters the all-loop quantum effective potential only slightly differs from the classical one and leads to a similar slow-roll behavior. Indeed, our calculations of cosmological inflation parameters fit the diagram of the \(n_{s}-r_{s}\) plot. In Fig. 6, we combined the data for the indices \(n_{s}\) and \(r_{s}\) for the effective potentials of both models in the purple region. The diagram reveals that all effective potentials are consistent with the observational data [16]. The purple region on the plot corresponds to all allowed values of the parameter \(\mu^{2}\) for the potential. It can be seen that the effective potentials based on \(T^{2}\) or \(T^{4}\) do not differ significant but are still distinguishable. The \(T^{2}\) and \(T^{4}\) with one-loop corrections can also enter this region, but a fine tuning of the \(\mu\) parameter is required (for example, \(\mu^{2}\simeq 2\) for the \(T^{2}\)-model in our units).
The approach used in this paper allows one to calculate the effective potentials for other cosmological models of inflation. It can be used to constrain or to generalize some inflationary models in the slow-roll approximation.
## Acknowledgments
The authors are grateful to A.Starobinsky and S. Ketov for the statement of the problem and for D.Gorbunov for numerous useful discussions of inflationary scenarios. Financial support from the Russia Scientific Foundation Grant # 21-12-00129 is cordially acknowledged.
|
2307.04998 | Selective Sampling and Imitation Learning via Online Regression | We consider the problem of Imitation Learning (IL) by actively querying noisy
expert for feedback. While imitation learning has been empirically successful,
much of prior work assumes access to noiseless expert feedback which is not
practical in many applications. In fact, when one only has access to noisy
expert feedback, algorithms that rely on purely offline data (non-interactive
IL) can be shown to need a prohibitively large number of samples to be
successful. In contrast, in this work, we provide an interactive algorithm for
IL that uses selective sampling to actively query the noisy expert for
feedback. Our contributions are twofold: First, we provide a new selective
sampling algorithm that works with general function classes and multiple
actions, and obtains the best-known bounds for the regret and the number of
queries. Next, we extend this analysis to the problem of IL with noisy expert
feedback and provide a new IL algorithm that makes limited queries.
Our algorithm for selective sampling leverages function approximation, and
relies on an online regression oracle w.r.t.~the given model class to predict
actions, and to decide whether to query the expert for its label. On the
theoretical side, the regret bound of our algorithm is upper bounded by the
regret of the online regression oracle, while the query complexity additionally
depends on the eluder dimension of the model class. We complement this with a
lower bound that demonstrates that our results are tight. We extend our
selective sampling algorithm for IL with general function approximation and
provide bounds on both the regret and the number of queries made to the noisy
expert. A key novelty here is that our regret and query complexity bounds only
depend on the number of times the optimal policy (and not the noisy expert, or
the learner) go to states that have a small margin. | Ayush Sekhari, Karthik Sridharan, Wen Sun, Runzhe Wu | 2023-07-11T03:32:20Z | http://arxiv.org/abs/2307.04998v1 | # Selective Sampling and Imitation Learning via Online Regression+
###### Abstract
We consider the problem of Imitation Learning (IL) by actively querying noisy expert for feedback. While imitation learning has been empirically successful, much of prior work assumes access to noiseless expert feedback which is not practical in many applications. In fact, when one only has access to noisy expert feedback, algorithms that rely on purely offline data (non-interactive IL) can be shown to need a prohibitively large number of samples to be successful. In contrast, in this work, we provide an interactive algorithm for IL that uses selective sampling to actively query the noisy expert for feedback. Our contributions are twofold: First, we provide a new selective sampling algorithm that works with general function classes and multiple actions, and obtains the best-known bounds for the regret and the number of queries. Next, we extend this analysis to the problem of IL with noisy expert feedback and provide a new IL algorithm that makes limited queries.
Our algorithm for selective sampling leverages function approximation, and relies on an online regression oracle w.r.t. the given model class to predict actions, and to decide whether to query the expert for its label. On the theoretical side, the regret bound of our algorithm is upper bounded by the regret of the online regression oracle, while the query complexity additionally depends on the eluder dimension of the model class. We complement this with a lower bound that demonstrates that our results are tight. We extend our selective sampling algorithm for IL with general function approximation and provide bounds on both the regret and the number of queries made to the noisy expert. A key novelty here is that our regret and query complexity bounds only depend on the number of times the optimal policy (and not the noisy expert, or the learner) go to states that have a small margin.
## 1 Introduction
From the classic supervised learning setting to the more complex problems like interactive Imitation Learning (IL) (Ross et al., 2011), high-quality labels or supervision is often expensive and hard to obtain. Thus, one wishes to develop learning algorithms that do not require a label for every data sample presented during the learning process. Active learning or selective sampling is a learning paradigm that is designed to reduce query complexity by only querying for labels at selected data points, and has been extensively studied in both theory and in practice (Agarwal, 2013; Dekel et al., 2012; Hanneke and Yang, 2021; Zhu and Nowak, 2022; Cesa-Bianchi et al., 2005; Hanneke and Yang, 2015).
In this work, we study selective sampling and its application to interactive Imitation Learning (Ross et al., 2011). Our goal is to design algorithms that can leverage general function approximation and online regression oracles to achieve small regret on predicting the correct labels, and at the same time minimize the number of expert queries made (query complexity). Towards this goal, we first study selective sampling which is
an online active learning framework, and provide regret and query complexity bounds for general function classes (used to model the experts). Our key results in selective sampling are obtained by developing a connection between the regret of the online regression oracles and the regret of predicting the correct labels. Additionally, we bound the query complexity using the eluder dimension (Russo and Van Roy, 2013) of the underlying function class used to model the expert. We complement our results with a lower bound indicating that a dependence on an eluder dimension like complexity measure is unavoidable in the query complexity in the worst case. In particular, we provide lower bounds in terms of the star number of the function class--a quantity closely related to the eluder dimension. Our new selective sampling algorithm, called SAGE, can operate under fairly general modeling assumptions, loss functions, and allows for multiple labels (i.e., multi-class classification).
We then extend our selective sampling algorithm to the interactive IL framework proposed by Ross et al. (2011) to reduce the query complexity. While the DAgger algorithm proposed by Ross et al. (2011) has been extensively used in various robotics applications (e.g., Ross et al. (2013); Pan et al. (2018)), it often requires a large number of expert queries. There have been some efforts on reducing the expert query complexity by leveraging ideas from active learning (e.g., Laskey et al. (2016); Brantley et al. (2020)), however, these prior attempts do not have theoretical guarantees on bounding expert's query complexity. In this work, we provide the first provably correct algorithm for interactive IL with general function classes, called RAVIOLI, which not only achieves strong regret bounds in terms of maximizing the underlying reward functions, but also enjoys a small query complexity. Furthermore, we note that RAVIOLI operates under significantly weaker assumptions as compared to the prior works, like Ross et al. (2011), on interactive IL. In particular, we only assume access to a noisy expert, as compared to the prior works that assume that the expert is noiseless. In fact, for the noisy setting, we show that one can not even hope to learn from purely offline expert demonstrations unless one has exponentially in horizon \(H\) many samples. Such a strong separation does not hold in the noiseless setting.
Our bounds depend on the margin of the noisy expert, which intuitively quantifies the confidence level of the expert. In particular, the margin is large for states where the expert is very confident in terms of providing the correct labels, while on the other hand, the margin is small on the states where the expert is less confident and subsequently provides more noisy labels as feedback. Such kind of margin condition was missing in prior works, like Ross et al. (2011), which assumes that the expert can provide confident labels everywhere. Additionally, we note that our margin assumption is quite mild as we only assume that the expert has a large margin under the states that could be visited by the noiseless expert (however, the states visited by the learner, or by following the noisy expert, may not have a small margin).
We then extend our results to the multiple expert setting where the learner has access to \(M\) many experts/teachers who may have different expertise at different parts of the state space. In particular, there is no expert who can singlehandedly perform well on the underlying environment, but an aggregation of their policies can lead to good performance. Such an assumption holds in various applications and has been recently explored in continuous control tasks like robotics and discrete tasks like chess and Minigrid (Beliaev et al., 2022). For illustration, consider the game of chess, where we can easily find experts that have non-overlapping skills, e.g. some experts may have expertise on how to open the game, and other experts may have expertise in endgames. In this case, while no single expert can perform well throughout the game, an aggregation of their policies can lead to a good strategy that we wish to compete with.
Similar to the single expert setting, we model the expertise of the experts in multiple expert setting using the concept of margins. Different experts have different margin functions, capturing the fact that experts may have different expertise at different parts of the state space. Prior work from Cheng et al. (2020) also considers multiple experts in IL and provides meaningful regret bounds, however, their assumption on the experts is much stronger than us: they assume that for any state, there at least exists one expert who can
achieve high reward-to-go if the expert took over the control starting from this state till the end of the episode. Furthermore, Cheng et al. (2020) considers the setting where one can also query for the reward signals, whereas we do not require access to any reward signals. We complement our theory by providing experiments that demonstrate that our IL algorithms outperform the prior baselines of Cheng et al. (2020) on a classic control benchmark (cartpole) with neural networks as function approximation.
## 2 Contributions and Overview of Results
### Selective Sampling
Online selective sampling models the interaction between a learner and an adversary over \(T\) rounds. At the beginning of each round of the interaction, the adversary presents a context \(x_{t}\) to the learner. After receiving the context, the learner makes a prediction \(\hat{y}_{t}\in[K]\), where \(K\) denotes the number of actions. Then, the learner needs to make a choice of whether or not to query an _expert_ who is assumed to have some knowledge about the true label for all the presented contexts. The experts knowledge about the true label is modeled via the ground truth modeling function \(f^{\star}\), which is assumed to belong to a given function class \(\mathcal{F}\) but is unknown to the learner. If the learner decides to query for the label, then the expert will return a noisy label \(y_{t}\) sampled using \(f^{\star}\). If the learner does not query, then the learner does not receive any feedback in this round. The learner makes an update based on the latest information it has, and moves on to the next round of the interaction. The goal of the learner is to compete with the expert policy \(\pi^{\star}\), that is defined using the experts model \(f^{\star}\). In the selective sampling setting, we are concerned with two things: the total regret of the learner w.r.t. the policy \(\pi^{\star}\), and the number of expert queries that the learner makes. Our key contributions are as follows:
* We provide a new selective sampling algorithm (Algorithm 1) that relies on an online regression oracle w.r.t. \(\mathcal{F}\) (where \(\mathcal{F}\) is the given model class) to make predictions and to decide whether to query for labels. Our algorithm can handle multiple actions, adversarial contexts, arbitrary model class \(\mathcal{F}\), and fairly general modeling assumptions (that we discuss in more detail in Section 3), and enjoys the following regret bound and query complexity: \[\mathrm{Reg}_{T}=\bar{\mathcal{O}}\left(\inf_{\varepsilon}\left\{\varepsilon T _{\varepsilon}+\frac{\mathrm{Reg}(\mathcal{F};T)}{\varepsilon}\right\}\right) \quad\text{and}\quad N_{T}=\bar{\mathcal{O}}\left(\inf_{\varepsilon}\left\{T _{\varepsilon}+\frac{\mathrm{Reg}(\mathcal{F};T)\cdot\mathfrak{E}(\mathcal{F},\varepsilon;f^{\star})}{\varepsilon^{2}}\right\}\right)\right)\] (1) where \(\mathrm{Reg}(\mathcal{F};T)\) denotes the regret bound for the online regression oracle on \(\mathcal{F}\), \(\mathfrak{E}(\mathcal{F},\varepsilon;f^{\star})\) denotes the eluder dimension of \(\mathcal{F}\), and \(T_{\varepsilon}\) denotes the number of rounds at which the margin of the experts predictions is smaller than \(\varepsilon\) (the exact notion of margin is defined in Section 3).
* We show via a lower bound that, without additional assumptions, the dependence on the eluder dimension in the query complexity bound (1) is unavoidable if we desire a regret bound of the form (1), even when \(T_{\varepsilon}=0\). The details are located in Section 3.2.
* For the stochastic setting, where the context \(\{x_{t}\}_{t\leq T}\) are sampled i.i.d. from a fixed unknown distribution, we provide an alternate algorithm (Algorithm 4) that enjoys the same regret bound as (1) but whose query complexity scales with the disagreement coefficient of \(\mathcal{F}\) instead of the eluder dimension (Theorem 2). Since the disagreement coefficient is always smaller than the eluder dimension, Theorem 2 yields an improvement in the query complexity.
* Then, in Section 3.3, we show how to extend our selective sampling algorithm when the learner only receives bandit feedback, i.e. on every query the learner receives a binary feedback signal on whether the chosen action \(\widehat{y}_{t}\) is identical to \(y_{t}\). Our algorithm given in Algorithm 2 is based on _Inverse Gap Weighting_ exploration strategy of Foster and Rakhlin (2020); Abe and Long (1999), and enjoys a
best-of-both-worlds style regret and query complexity bound. On the regret side, the provided instance dependent (\(\varepsilon\)-dependent) bound is \(O\Big{(}T_{\varepsilon}\mathrm{Reg}\big{(}\mathcal{F};T\big{)}+\frac{\mathrm{ Reg}\big{(}\mathcal{F};T\big{)}^{2}\cdot\mathfrak{E}\big{(}\mathcal{F},\varepsilon; \underline{f}^{\star}\big{)}}{\varepsilon^{2}}\Big{)}\) and scales with the eluder dimension. However, in the worst case, the regret bound is also bounded by \(O(\sqrt{KT\mathrm{Reg}\big{(}\mathcal{F};T\big{)}})\), and thus our algorithm enjoys the best of the two guarantees.
* In Section 5, we show how to extend our selective sampling algorithm when the learner can query \(M\) different experts on each round. Here, we do not assume that any of the experts is single-handedly optimal for the entire context space, but that there exist aggregation functions of these experts' predictions that perform well in practice, and with which we compete.
### Imitation Learning
We then move to the more challenging Imitation Learning (IL) setting, where the learner operates in an episodic finite horizon Markov Decision Process (MDP), and can query a noisy expert for feedback (i.e. the expert action) on the states that it visits. The interaction proceeds in \(T\) episodes of length \(H\) each. In episode \(t\), at each time step \(h\in[H]\) and on the state \(x_{t,h}\), the learner chooses an action \(\widehat{y}_{t,h}\) and transitions to state \(x_{t,h+1}\). However, the learner does not receive any reward signal. Instead, the learner can actively choose to query an _expert_ who has some knowledge about the correct action to be taken on \(x_{t,h}\), and gives back noisy feedback \(y_{t,h}\) about this action. Similar to the selective sampling setting, the experts knowledge about the true label is modeled via the ground truth modeling function \(f_{h}^{\star}\), which is assumed to belong to a given function class \(\mathcal{F}_{h}\) but is unknown to the learner. The goal of the learner is to compete with the optimal policy \(\pi^{\star}\) of the (noiseless) expert. Our key contributions in IL are:
* In Section 4, we first demonstrate an exponential separation in terms of task horizon \(H\) in the sample complexity, for learning via offline expert demonstration only vs interactive querying of experts, when the feedback from the expert is noisy.
* We then provide a general IL algorithm (in Algorithm 3) that relies on online regression oracles w.r.t. \(\{\mathcal{F}_{h}\}_{h\leq H}\) to predict actions, and to decide whether to query for labels. Similar to the selective sampling setting, the regret bound for our algorithm scales with the regret of the online regression oracles, and the query complexity bound has an additional dependence on the eluder dimension. Furthermore, our algorithm can handle multiple actions, adversarially changing dynamics, arbitrary model class \(\mathcal{F}\), and fairly general modeling assumptions.
* A key difference from our results in selective sampling is that the term \(T_{\varepsilon}\) that appears in our regret and query complexity bounds in IL denote the number of time steps in which the expert policy \(\pi^{\star}\) has a small margin (instead of the number of time steps when the learner's policy has a small margin). In fact, the learner and the expert trajectories could be completely different from each other, and we only pay in the margin term if the expert trajectory at that time step would have a low margin. See Section 4 for the exact definition of margin.
* In Section 5, we provide extensions to our algorithm when the learner can query \(M\) experts at each round. Similar to selective sampling setting, we do not assume that any of the experts is singlehandedly optimal for the entire state space, but that there exist aggregation functions of these experts' predictions that perform well in practice, and with which we compete.
* In Section 6, we evaluate our IL algorithm on the Cartpole environment, with single and multiple experts. We found that our algorithm can match the performance of passive querying algorithms while making a significantly lesser number of expert queries.
## 3 Selective Sampling
In the problem of selective sampling, on every round \(t\), nature (or an adversary) produces a context \(x_{t}\). The learner then receives this context and predicts a label \(\overline{y}_{t}\in[K]\) for that context. The learner also computes a query condition \(Z_{t}\in\{0,1\}\) for that context. If \(Z_{t}=1\), the learner requests for label \(y_{t}\in[K]\) corresponding to the \(x_{t}\), and if not, the learner receives no feedback on the label for that round. Let \(\mathcal{F}\) be a model class such that each model \(f\in\mathcal{F}\) maps contexts \(x\) to scores \(f(x)\in\mathbb{R}^{K}\). In this work we assume that while contexts can be chosen arbitrarily, the label \(y_{t}\) corresponding to a context \(x_{t}\) is drawn from a distribution over labels specified by the score \(f^{\star}(x_{t})\) where \(f^{\star}\in\mathcal{F}\) is a fixed model unknown to the learner. We assume that a link function \(\phi:\mathbb{R}^{K}\mapsto\Delta(K)\) maps scores to distributions and assume that the noisy label \(y_{t}\) is sampled as
\[y_{t}\sim\phi(f^{\star}(x_{t})). \tag{2}\]
In this work, we assume that the link function \(\phi(v)=\nabla\Phi(v)\) for some \(\Phi:\mathbb{R}^{K}\mapsto\mathbb{R}\) (see Agarwal (2013) for more details) which satisfies the following assumption.
**Assumption 1**.: _The function \(\Phi\) is \(\lambda\)-strongly-convex and \(\gamma\)-smooth, i.e. for all \(u,u^{\prime}\in\mathbb{R}^{K}\),_
\[\frac{\lambda}{2}\|u^{\prime}-u\|_{2}^{2}\leq\Phi(u^{\prime})-\Phi(u)-(\nabla \Phi(u),u^{\prime}-u)\leq\frac{\gamma}{2}\|u^{\prime}-u\|_{2}^{2}.\]
Our main contribution in this section is a selective sampling algorithm that uses online non-parametric regression w.r.t. the model class \(\mathcal{F}\) as a black box. Specifically, define the loss function corresponding to the link function \(\phi\) as \(\ell_{\phi}(v,y)=\Phi(v)-v[y]\) where \(v\in\mathbb{R}^{K}\) and \(y\in[K]\). We assume that the learner has access to an online regression oracle for the loss \(\ell_{\phi}\) (which is a convex loss) w.r.t. the class \(\mathcal{F}\), that for any sequence \(\{(x_{1},y_{1}),\ldots,(x_{T},y_{T})\}\) guarantees the regret bound
\[\sum_{s=1}^{T}\ell_{\phi}(f_{s}(x_{s}),y_{s})-\inf_{f\in\mathcal{F}}\sum_{s=1} ^{T}\ell_{\phi}(f(x_{s}),y_{s})\leq\operatorname{Reg}^{\ell_{\phi}}(\mathcal{ F};T). \tag{3}\]
When \(\phi\) is identity (under which the models in \(\mathcal{F}\) directly map to distributions over the labels), then \(\ell_{\phi}\) denotes the standard square loss, and we need a bound on the regret w.r.t. the square loss, denoted by \(\operatorname{Reg}^{\text{sq}}(\mathcal{F};T)\). When \(\phi\) is the Boltzman distribution mapping (given by \(\Phi\) being the softmax function) then \(\ell_{\phi}\) is the logistic loss, and we need an online logistic regression oracle for \(\mathcal{F}\). Minimax rates for the regret bound in (3) are well known:
* _Square-loss regression:_Rakhlin and Sridharan (2014) characterized the minimax rates for online square loss regression in terms of the offset sequential Rademacher complexity of \(\mathcal{F}\), which for example, leads to regret bound \(\operatorname{Reg}^{\text{sq}}(\mathcal{F};T)=O(\log|\mathcal{F}|)\) for finite function classes \(\mathcal{F}\), and \(\operatorname{Reg}^{\text{sq}}(\mathcal{F};T)=O(d\log(T))\) when \(\mathcal{F}\) is a \(d\)-dimensional linear class. More examples can be found in Rakhlin and Sridharan (2014, Section 4). We refer the readers to Krishnamurthy et al. (2017); Foster et al. (2018) for efficient implementations.
* _Logistic-loss regression:_ When \(\mathcal{F}\) is finite, we have the regret bound \(\operatorname{Reg}(\mathcal{F};T)\leq O(\log|\mathcal{F}|)\)(Cesa-Bianchi and Lugosi, 2006, Chapter 9). For learning linear predictors, there exists efficient improper learner with regret bound \(\operatorname{Reg}(\mathcal{F};T)\leq O(d\log|T|)\)(Foster et al., 2018). More examples can be found in Foster et al. (2018, Section 7) and Rakhlin and Sridharan (2015).
When one deals with complex model classes \(\mathcal{F}\) such that the labeling concept class corresponding to \(\mathcal{F}\) could possibly have infinite VC dimension (like it is typically the case), then one needs to naturally rely on
a margin-based analysis (Tsybakov, 2004; Shalev-Shwartz and Ben-David, 2014; Dekel et al., 2012). For \(p\in\mathbb{R}^{\bar{K}}\), we use the following well-known notion of margin for multiclass settings1:
Footnote 1: Throughout the paper, we assume that the ties in \(\operatorname{argmax}\) or \(\operatorname{argmin}\) are broken arbitrarily, but consistently.
\[\mathtt{Margin}(p)=\phi(p)[k^{*}]-\max_{k^{*}\neq k^{*}}\phi(p)[k^{\prime}] \qquad\text{where}\quad k^{*}\in\operatorname*{argmax}_{k}\phi(p)[k], \tag{4}\]
A key quantity that appears in our results is the number of \(x_{t}\)'s that fall within an \(\varepsilon\) margin region,
\[T_{\varepsilon}=\sum_{t=1}^{T}\mathbf{1}\{\mathtt{Margin}(f^{*}(x_{t}))\leq \varepsilon\}.\]
\(T_{\varepsilon}\) denotes the number of times where even the Bayes optimal classifier is confused about the correct label on \(x_{t}\), and has confidence less than \(\varepsilon\). The algorithm relies on an online regression oracle mentioned above to produce the predictor \(f_{t}\) at every round. The predicted label \(\widehat{y}_{t}=\mathtt{SelectAction}(f_{t}(x_{t}))=\operatorname*{argmax}_{k} \phi(f_{t}(x_{t}))[k]\) is picked based on the score \(f_{t}(x_{t})\) (where \(\widehat{y}_{t}\) is the label with the largest score). The learner updates the regression oracle on only those rounds in which it makes a query. Our main algorithm for selective sampling is provided in Algorithm 1.2
Footnote 2: Unless explicitly specified, the action set is given by \(\mathcal{A}=[K]=\{1,\ldots,K\}\) where \(K\geq 2\).
```
0: Parameters \(\delta,\gamma,\lambda,T\), function class \(\mathcal{F}\), and online regression oracle Oracle w.r.t \(\ell_{\phi}\).
1: Set \(\Psi_{\delta}^{\ell_{\phi}}(\mathcal{F},T)=\frac{4}{\lambda}\mathrm{Reg}^{ \ell_{\phi}}(\mathcal{F};T)+\frac{112}{\lambda^{2}}\log(4\log^{2}(T)/\delta)\), Compute \(f_{1}\leftarrow\mathsf{Oracle}_{1}(\varnothing)\).
2:for\(t=1\) to \(T\)do
3: Nature chooses \(x_{t}\).
4: Learner plays the action \(\widehat{y}_{t}=\mathtt{SelectAction}(f_{t}(x_{t}))\).
5: Learner computes \[\Delta_{t}(x_{t})\coloneqq\max_{f\in\mathcal{F}}\|f(x_{t})-f_{t}(x_{t})\| \quad\text{s.t.}\quad\sum_{s=1}^{t-1}Z_{s}\|f(x_{s})-f_{s}(x_{s})\|^{2}\leq \Psi_{\delta}^{\ell_{\phi}}(\mathcal{F},T).\] (5)
6: Learner decides whether to query: \(Z_{t}=\mathbf{1}\{\mathtt{Margin}(f_{t}(x_{t}))\leq 2\gamma\Delta_{t}(x_{t})\}\).
7:if\(Z_{t}=1\)then
8: Learner queries the label \(y_{t}\) on \(x_{t}\).
9:\(f_{t+1}\leftarrow\mathsf{Oracle}_{t}(\{x_{t},y_{t}\})\).
10:else
11:\(f_{t+1}\gets f_{t}\).
```
**Algorithm 1** Selective \(\mathbf{SAmplinG}\) with \(\mathtt{Expert}\) Feedback (\(\mathbf{SAGE}\))
Our goal in this work is twofold: Firstly, we would like Algorithm 3 to have a low regret w.r.t. the optimal model \(f^{*}\), defined as
\[\mathrm{Reg}_{T}=\sum_{t=1}^{T}\mathbf{1}\{\widehat{y}_{t}\neq y_{t}\}-\sum_{t =1}^{T}\mathbf{1}\{\mathtt{SelectAction}(f^{*}(x_{t}))\neq y_{t}\}\]
Simultaneously, we also aim to make as few label queries \(N_{T}=\sum_{t=1}^{T}Z_{t}\) as possible. Before delving into our results, we first recall the following variant of eluder-dimension (Russo and Van Roy, 2013; Foster et al., 2020; Zhu and Nowak, 2022).
**Definition 1** (Scale-sensitive eluder dimension (normed version)).: _Fix any \(f^{*}\in\mathcal{F}\), and define \(\mathfrak{E}(\mathcal{F},\beta;f^{*})\) to be the length of the longest sequence of contexts \(x_{1},x_{2},\ldots x_{m}\) such that for all \(i\), there exists \(f_{i}\in\mathcal{F}\) such that_
\[\|f_{i}(x_{i})-f^{*}(x_{i})\|>\beta,\quad\text{and}\quad\sum_{j<i}\|f_{i}(x_{j} )-f^{*}(x_{j})\|^{2}\leq\beta^{2}.\]
_The value function eluder dimension is defined as \(\mathfrak{E}(\mathcal{F},\beta^{\prime};f^{*})=\sup_{\beta\geq\beta^{\prime}} \widetilde{\mathfrak{E}}(\mathcal{F},\beta;f^{*})\)._
Bounds on the eluder dimension for various function classes are well known, e.g. when \(\mathcal{F}\) is finite, \(\mathfrak{E}(\mathcal{F},\beta^{\prime};f^{*})\leq|\mathcal{F}|-1\), and when \(\mathcal{F}\) is the set of \(d\)-dimensional function with bounded norm, then \(\mathfrak{E}(\mathcal{F},\beta^{\prime};f^{*})=O(d)\). We refer the reader to Russo and Van Roy (2013); Mou et al. (2020); Li et al. (2022) for more examples. The following theorem is our main result for selective sampling:
**Theorem 1**.: _Let \(\delta\in(0,1)\). Under the modeling assumptions above (in (2), (3) and (4)), with probability at least \(1-\delta\), Algorithm 1 obtains the regret bound_
\[\mathrm{Reg}_{T}=\widetilde{\mathcal{O}}\!\left(\inf_{\varepsilon}\!\left\{ \varepsilon T_{\varepsilon}+\frac{\gamma^{2}}{\lambda\varepsilon}\mathrm{ Reg}^{\ell_{\phi}}(\mathcal{F};T)+\frac{\gamma^{2}}{\lambda^{2} \varepsilon}\log(1/\delta)\right\}\right),\]
_while simultaneously the total number of label queries made is bounded by:_
\[N_{T}=\widetilde{\mathcal{O}}\!\left(\inf_{\varepsilon}\!\left\{T_{ \varepsilon}+\frac{\gamma^{2}}{\lambda\varepsilon^{2}}\cdot\mathrm{Reg}^{ \ell_{\phi}}(\mathcal{F};T)\cdot\mathfrak{E}(\mathcal{F},\varepsilon/4\gamma; f^{*})+\frac{\gamma^{2}}{\lambda^{2}\varepsilon^{2}}\log(1/\delta)\right\}\right).\]
A few points are in order:
* It must be noted that for most settings we consider, as an example if model class \(\mathcal{F}\) is finite, one typically has that \(\mathrm{Reg}(\mathcal{F};T)\leq\log|\mathcal{F}|\). Thus, in the case where one has a hard margin condition i.e. \(T_{\varepsilon_{0}}=0\) for some \(\varepsilon_{0}>0\), we get \(\mathrm{Reg}_{T}\leq O\left(\frac{\log|\mathcal{F}|}{\varepsilon_{0}}\right)\) and \(N_{T}\leq O\left(\frac{\mathfrak{E}(\mathcal{F},\varepsilon;f^{*})\log| \mathcal{F}|}{\varepsilon_{0}^{2}}\right)\).
* Our regret bound does not depend on the eluder dimension. However, the query complexity bound has a dependence on eluder dimension. Thus, for function classes for which the eluder dimension is large, the regret bound is still optimal while the number of label queries may be large.
Theorem 1 Proof Sketch for Binary Actions and Square Loss.Let \(\mathcal{A}=\{1,2\}\), and the link function \(\phi(z)=z\) corresponding to square-loss \(\ell_{\phi}=(v-y)^{2}/2\); here \(\lambda=\gamma=1\).
Let \(\bar{\mathcal{F}}\subseteq\{\mathcal{X}\mapsto[-1,1]\}\) be a function class, and \(\bar{f}^{*}\in\mathcal{F}\). We assume that for any context \(x\), the label \(y\) is drawn according to the distribution \(\Pr(y_{t}=2)={}^{1+\bar{f}^{*}(x)}\!\!/_{2}\). Using \(\bar{\mathcal{F}}\), we can define the score function class \(\mathcal{F}=\{f_{\bar{f}}\mid\bar{f}\in\bar{\mathcal{F}}\}\) where \(f(x)=\frac{1}{2}(1-\bar{f}(x),1+\bar{f}(x))^{\top}\in[0,1]^{2}\), and additionally define \(f^{*}=f_{\bar{f}^{*}}\). Clearly, the Bayes optimal predictor that chooses the action with the largest score is given by \(\mathtt{SelectAction}(f^{*}(x))=1+\mathrm{sign}(\bar{f}^{*}(x))\). Furthermore, \(\mathtt{Margin}(f^{*}(x))\coloneqq|\Pr(y=2\mid x)-\Pr(y=1\mid x)|=|\bar{f}^{*} (x)|\) which implies that \(T_{\varepsilon}=\sum_{t=1}^{T}\mathbf{1}\{|\bar{f}^{*}(x_{t})|\leq\varepsilon\}\). Finally, the oracle in (3) reduces to a square-loss online regression oracle, which implies that with probability at least \(1-\delta\), for all \(t\leq T\),
\[\sum_{s=1}^{t}Z_{s}(\bar{f}_{s}(x_{s})-\bar{f}^{*}(x_{s}))^{2}\lesssim\sum_{s=1 }^{t}Z_{s}(\bar{f}_{s}(x_{s})-y_{s})^{2}-\sum_{s=1}^{t}Z_{s}(\bar{f}^{*}(x_{s} )-y_{s})^{2}\lesssim\mathrm{Reg}^{\mathrm{sq}}(\bar{\mathcal{F}};T)+\log(\nicefrac {{T}}{{\delta}}), \tag{6}\]
The above implies that \(\bar{f}^{*}\) satisfies the constraints in (5) with the right choice of constants, \(\lambda\), and \(\gamma\), and thus \(|\bar{f}_{t}(x_{t})-\bar{f}^{*}(x_{t})|\leq\Delta_{t}(x_{t})\) (see Lemma 10 for proof). However, since the query condition in
Algorithm 1 is \(Z_{t}=\mathbf{1}\{|\bar{f}_{t}(x_{t})|\leq\Delta_{t}(x_{t})\}\), we have that if \(Z_{t}=0\), then \(|\bar{f}_{t}(x_{t})|>\Delta_{t}(x_{t})\) which implies that \(\text{sign}(\bar{f}^{*}(x_{t}))\neq\text{sign}(\bar{f}_{t}(x_{t}))\). Thus,
\[\sum_{s=1}^{t}\bar{Z}_{s}\mathbf{1}\{\text{sign}(\bar{f}^{*}(x_{t}))=\text{ sign}(\bar{f}_{t}(x_{t}))\}=0. \tag{7}\]
_Regret bound._ Using the fact that \(y_{t}\sim 1+\text{Ber}\big{(}\nicefrac{{1}{{\varepsilon}}}{{\varepsilon}}(x_{t}) /2\big{)}\), \(\widehat{y}_{t}=\text{SelectAction}\big{(}f_{t}(x_{t})\big{)}=1+\text{sign}(\bar{ f}_{t}(x_{t})\big{)}\), we have
\[\text{Reg}_{T} =\sum_{t=1}^{T}\Pr(\widehat{y}_{t}\neq y_{t})-\Pr(\text{SelectAction }(f^{*}(x_{t}))\neq y_{t})\] \[\leq\sum_{t=1}^{T}\mathbf{1}\{\text{sign}(\bar{f}_{t}(x_{t}))\neq \text{sign}(\bar{f}^{*}(x_{t}))\}\cdot|2\Pr(y_{t}=1)-1|\] \[=\sum_{t=1}^{T}\mathbf{1}\{\text{sign}(\bar{f}_{t}(x_{t}))\neq \text{sign}(\bar{f}^{*}(x_{t}))\}\cdot|\bar{f}^{*}(x_{t})|\]
The right hand side above can be split and upper bound via the following three terms:
\[\text{Reg}_{T} \leq\varepsilon\sum_{t=1}^{T}\mathbf{1}\{|\bar{f}^{*}(x_{t})|\leq \varepsilon\}+\sum_{t=1}^{T}Z_{t}\mathbf{1}\{\text{sign}(\bar{f}_{t}(x_{t})) \neq\text{sign}(\bar{f}^{*}(x_{t})),|\bar{f}^{*}(x_{t})|>\varepsilon\}\cdot| \bar{f}^{*}(x_{t})|\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{t=1}^{T}\bar{Z}_{t} \mathbf{1}\{\text{sign}(\bar{f}_{t}(x_{t}))\neq\text{sign}(\bar{f}^{*}(x_{t}) )\}\cdot|\bar{f}^{*}(x_{t})|.\] \[=\varepsilon T_{\varepsilon}+\underbrace{\sum_{t=1}^{T}Z_{t} \mathbf{1}\{\text{sign}(\bar{f}_{t}(x_{t}))+\text{sign}(\bar{f}^{*}(x_{t})),| \bar{f}^{*}(x_{t})|>\varepsilon\}\cdot|\bar{f}^{*}(x_{t})|}_{:=\mathbb{T}_{A}},\]
where the first term is \(T_{\varepsilon}\), and the last term is zero due to (7). The term \(\mathbb{T}_{A}\) denotes the regret for the rounds in which the learner queries for the label, and the margin for \(\bar{f}^{*}(x_{t})\) is larger than \(\varepsilon\). We note that
\[\mathbb{T}_{A}\leq\sum_{t=1}^{T}Z_{t}\mathbf{1}\{|\bar{f}^{*}(x_{t})-\bar{f}_{ t}(x_{t})|>\varepsilon\}\cdot|\bar{f}^{*}(x_{t})-\bar{f}_{t}(x_{t})|\]
where the inequality holds because \(|\bar{f}^{*}(x_{t})-\bar{f}_{t}(x_{t})|\geq|\bar{f}^{*}(x_{t})|\) since they have opposite signs. Using the fact that \(\mathbf{1}\{a\geq b\}\leq\nicefrac{{a}}{{b}}\) for all \(a,b\geq 0\), and the bound in (6), we get
\[\mathbb{T}_{A}\leq\frac{1}{\varepsilon}\sum_{t=1}^{T}Z_{t}\big{(}\bar{f}^{*}(x _{t})-\bar{f}_{t}(x_{t})\big{)}^{2}\lesssim\frac{1}{\varepsilon}\text{Reg}^{ \text{sq}}(\mathcal{F};T)+\frac{1}{\varepsilon}\log(\nicefrac{{T}}{{\delta}}),\]
Gathering all the terms, we get
\[\text{Reg}_{T}=\widetilde{\mathcal{O}}\bigg{(}\varepsilon T_{\varepsilon}+ \frac{1}{\varepsilon}\text{Reg}^{\text{sq}}(\mathcal{F};T)+\frac{1}{\varepsilon }\log(\nicefrac{{1}}{{\delta}})\bigg{)}.\]
_Query complexity._ Plugging in the query rule, and splitting as in the regret bound, we get
\[N_{T}=\sum_{t=1}^{T}Z_{t}=\sum_{t=1}^{T}\mathbf{1}\{|\bar{f}_{t}(x_{t})|\leq \Delta_{t}(x_{t})\}\]
\[\leq\underbrace{\sum_{t=1}^{T}\mathbf{1}\{|\bar{f}^{*}(x_{t})|\leq \varepsilon\}}_{=T_{\varepsilon}}+\underbrace{\sum_{t=1}^{T}\mathbf{1}\{|\bar{ f}_{t}(x_{t})|\leq\Delta_{t}(x_{t}),|\bar{f}^{*}(x_{t})|>\varepsilon,\Delta_{t}(x_{t}) \leq\varepsilon/3\}}_{:=T_{C}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad
**Theorem 2**.: _Let \(\delta\in(0,1)\), and consider the modeling assumptions in (2), (3) and (4). Furthermore, suppose that \(x_{t}\) is sampled i.i.d. from \(\mu\), where \(\mu\) is a fixed distribution. Then, with probability at least \(1-\delta\), Algorithm 4 obtains the bounds3_
Footnote 3: In the rest of the paper, the notation \(\widetilde{\mathcal{O}}\) hides additive \(\log(1/\delta)\)-factors which, for constant \(\delta\) and in all the results, are asymptotically dominated by the other terms presented in the displayed bounds.
\[\mathrm{Reg}_{T}=\widetilde{\mathcal{O}}\Bigg{(}\inf_{\varepsilon}\Bigg{\{} \varepsilon T_{\varepsilon}+\frac{\gamma^{2}}{\lambda\varepsilon}\mathrm{ Reg}^{\ell_{\phi}}(\mathcal{F};T)\Bigg{\}}\Bigg{)},\]
_while simultaneously the total number of label queries made is bounded by:_
\[N_{T}=\widetilde{\mathcal{O}}\Bigg{(}\inf_{\varepsilon}\Bigg{\{} T_{\varepsilon}+\frac{\gamma^{2}}{\lambda\varepsilon^{2}}\cdot\mathrm{ Reg}^{\ell_{\phi}}(\mathcal{F};T)\cdot\theta^{\mathrm{val}}\big{(} \mathcal{F},\nicefrac{{\varepsilon}}{{8\gamma}},\nicefrac{{\mathrm{Reg}^{ \ell_{\phi}}(\mathcal{F};T)}}{{T}};{f}^{\star}\big{)}\Bigg{\}}\Bigg{)}.\]
We note that Algorithm 4 automatically adapts to Tsybakov noise condition with respect to \(\mu\).
**Corollary 1** (Tsybakov noise condition, Tsybakov (2004)).: _Suppose there exists constants \(c,\rho\geq 0\) s.t. \(\mathrm{Pr}_{x\sim\mu}(\mathtt{Margin}(f^{*}(x))\leq\varepsilon)\leq c \varepsilon^{\rho}\) for all \(\varepsilon\in(0,1),\) and consider the same modeling assumptions as in Theorem 2. Then, with probability at least \(1-\delta\), Algorithm 4 obtains the bound_
\[\mathrm{Reg}_{T}=\widetilde{\mathcal{O}}\Bigg{(}\big{(}\mathrm{ Reg}^{\ell_{\phi}}(\mathcal{F};T)\big{)}^{\frac{\rho+1}{\rho+2}}\cdot(T)^{\frac{1}{ \rho+2}}\Bigg{)},\]
_while simultaneously the total number of label queries made is bounded by:_
\[N_{T}=\widetilde{\mathcal{O}}\Bigg{(}\big{(}\mathrm{Reg}^{\ell_{\phi}}\big{(} \mathcal{F};T\big{)}\cdot\theta^{\mathrm{val}}\big{(}\mathcal{F},\nicefrac{{ \varepsilon}}{{8\gamma}},\nicefrac{{\mathrm{Reg}^{\ell_{\phi}}(\mathcal{F};T )}}{{T}};{f}^{\star}\big{)}\big{)}^{\frac{\rho}{\rho+2}}\cdot T^{\frac{2}{ \rho+2}}\Bigg{)}.\]
_where the \(\widetilde{\mathcal{O}}(\cdot)\) notation hides poly-logarithmic factors of \(\gamma,\lambda,c,\rho\) and \(\log(T/\delta)\)._
A detailed comparison of our results with the relevant prior works is given in Appendix C.
### Lower Bounds (Binary Action Case)
We supplement the above upper bound with a lower bound in terms of the star number of \(\mathcal{F}\) (defined below). The star number is bounded from above by the eluder dimension which appears in our upper bounds (Lemma 6). While star number may not be lower bounded by eluder dimension in general, for many commonly considered classes, star number is of the same order as the eluder dimension (Foster et al., 2020). For the sake of a clean presentation, we restrict our lower bound to the binary actions case, although one can easily extend the lower bound to the multiple actions case.
**Definition 3** (scale-sensitive star number (scalar version; weak variant)).: _For any \(\zeta\in(0,1)\) and \(\beta\in(0,\zeta/2)\), define \(\mathfrak{s}^{\mathrm{val}}(\mathcal{F},\zeta,\beta)\) as the largest \(m\) for which there exists a target function \(f^{*}\in\mathcal{F}\) and sequence \(x_{1},\ldots,x_{m}\in\mathcal{X}\) such that for all \(i\in[m]\), \(|f^{*}(x_{i})|>\zeta\) and there exists some \(f_{i}\in\mathcal{F}\) such that all of the following hold:_
* \(\Sigma_{j\neq i}(f_{i}(x_{j})-f^{*}(x_{j}))^{2}<\beta^{2}\)__
* \(|f_{i}(x_{i})|>\zeta/2\) _and_ \(f_{i}(x_{i})f^{*}(x_{i})<0\)__
* \(|f_{i}(x_{i})-f^{*}(x_{i})|\leq 2\zeta\)__
The below theorem provides a lower bound on number of queries, in terms of star number for any algorithm that guarantees a non-trivial regret bound.
**Theorem 3**.: _Given a function class \(\mathcal{F}\) and some desired margin \(\zeta>0\), define \(\beta\in(0,\zeta/2)\) be the largest number such that \(\beta^{2}\leq\min\{\zeta^{2}/\mathsf{s}^{\mathrm{val}}(\mathcal{F},\zeta,\beta), \zeta^{2}/16\}\). Then, for any algorithm that guarantees regret bound of \(\mathbb{E}[\mathrm{Reg}_{T}]\leq 64\frac{\zeta T}{\mathsf{s}^{\mathrm{val}}( \mathcal{F},\zeta,\beta)}\) on all instances with margin \(\zeta/2\), there exists a distribution \(\mu\) over \(\mathcal{X}\) and a target function \(f^{*}\in\mathcal{F}\) with margin4\(\zeta\) such that the number of queries \(N_{T}\) made by the algorithm on that instance in \(T\) rounds of interaction satisfy_
Footnote 4: For the binary actions case where \(\mathcal{A}=\{1,2\}\), we note that \(\mathtt{Margin}(f(x))=|\Pr(y=2\mid f(x))-\Pr(y=1\mid f(x))|=|f(x)|\).
\[\mathbb{E}[N_{T}]=\Omega\Bigg{(}\frac{\mathsf{s}^{\mathrm{val}}(\mathcal{F}, \zeta,\beta)}{40\zeta^{2}}\Bigg{)}.\]
The above lower bound demonstrates that for any algorithm that has a sublinear regret guarantee, a dependence on an additional complexity measure like the star number (or the eluder dimension) is unavoidable in the number of queries in the worst case. This suggests that our upper bound cannot be further improved beyond the discrepancy between the star number and eluder dimension. The following corrolary illustrates the above lower bound.
**Corollary 2**.: _There exists a class \(\mathcal{F}\) with \(|\mathcal{F}|=\sqrt{T}\), and \(\mathsf{s}^{\mathrm{val}}(\mathcal{F},\zeta,\beta)=O(\sqrt{T})\) for any \(\beta=O(1)\) and \(\zeta=O(1)\), such that any algorithm that makes less than \(\sqrt{T}\) number of label queries, will have a regret of at least \(\mathbb{E}[\mathrm{Reg}_{T}]\geq\sqrt{T}\) on some instance with margin \(\zeta\)._
### Extension: Learning with Bandit Feedback
We next consider the problem of selective sampling when the learner only receives bandit feedback. In particular, on every query, instead of getting the noisy-expert action, the learner only receives a binary feedback on whether the chosen action \(\widehat{y}_{t}\) matches the action \(y_{t}\) of the noisy expert. For this bandit feedback, we propose an algorithm for selective sampling based on the Inverse Gap Weighting (IGW) scheme of Foster and Rakhlin (2020), that enjoys margin-dependent bounds. The algorithm is provided in Algorithm 2.
For simplicity, we restrict ourselves to the square loss setting, i.e. \(\phi(z)=z\), for which \(\gamma=\lambda=1\). At each round, the learner first builds a version space (line 5) containing all functions that are close to the oracle prediction \(f_{t}\) on the data observed so far; for all \(t\leq T\), the expert model \(f^{*}\in\mathcal{F}_{t}\) with high probability. The learner then constructs the candidate set \(\mathcal{A}_{t}\) of optimal actions corresponding to \(\mathcal{F}_{t}\). When \(|\mathcal{A}_{t}|>1\), the learner is not sure of the optimal action, and thus the learner makes a query. The width \(w_{t}\) essentially means the largest possible regret that we will suffer if we choose any option in \(\mathcal{A}_{t}\). The query strategy used in Algorithm 2 consists of two cases: when the estimated cumulative regret has not exceeded an \(O(\sqrt{T})\) threshold (meaning that \(\xi_{t}=0\)), the learner samples \(\widehat{y}_{t}\sim\mathrm{Uniform}(\mathcal{A}_{t})\); otherwise, the learner samples \(\widehat{y}_{t}\) according to the Inverse Gap Weighting (IGW) distribution \(p_{t}=\mathrm{IGW}_{\alpha_{t}}(f_{t}(x_{t}))\) that is given by:
\[p_{t}[y]=\begin{cases}\frac{1}{K+\alpha_{t}\big{(}f_{t}(x_{t})[y_{t}^{*}]-f_{t }(x_{t})[y]\big{)}}&\text{if}\qquad y\neq y_{t}^{*}\\ 1-\sum_{y\neq y_{t}^{*}}p_{t}[y]&\text{if}\qquad y=y_{t}^{*}\end{cases}, \tag{8}\]
where \(y_{t}^{*}=\operatorname*{argmax}f_{t}(x_{t})[y]\). Finally, the learner feeds the query feedback into the online regression oracle, obtains a new prediction \(f_{t+1}\), and proceeds to the next round. We assume that the sequence of functions \(\{f_{t}\}_{t\leq T}\) generated by the square loss regression oracle satisfies the following regret guarantee for all \(t\leq T\):
\[\sum_{s=1}^{t}Z_{s}(f_{s}(x_{s})[\widehat{y}_{s}]-\mathbb{Q}_{s})^{2}-\inf_{f \in\mathcal{F}}\sum_{s=1}^{t}Z_{s}(f(x_{s})[\widehat{y}_{s}]-\mathbb{Q}_{s})^ {2}\leq\mathrm{Reg}^{\mathrm{sq}}(\mathcal{F};T). \tag{9}\]
The following theorem establishes both worst-case and instance-dependent upper bounds for regret and query complexity for Algorithm 2.The proof is deferred to Appendix C.4.
**Theorem 4**.: _Let \(\delta\in(0,1)\), and consider the modeling assumptions in (2), (3) and (4) with the link function \(\phi(z)=z\) (corresponding to \(\ell_{\phi}\) being the square loss). Furthermore, assume that the learner only gets bandit feedback \(\mathbb{Q}_{t}=\mathbf{1}\{\widehat{y}_{t}=y_{t}\}\). Then, Algorithm 2 has the regret bounded by:_
\[\mathrm{Reg}_{T}=\widetilde{\mathcal{O}}\!\left(\min\!\left\{\sqrt{KT\mathrm{ Reg}^{\mathrm{sq}}(\mathcal{F};T)},\inf_{\varepsilon}\!\left\{\ T_{\varepsilon}\mathrm{Reg}^{\mathrm{sq}}( \mathcal{F};T)+K\!\left(\mathrm{Reg}^{\mathrm{sq}}(\mathcal{F};T)\right)^{2} \cdot\frac{\tilde{\mathfrak{E}}\left(\mathcal{F},\varepsilon/4;f^{\star} \right)}{\varepsilon}\right\}\right\}\right),\]
_while simultaneously the total number of label queries made is bounded by:_
\[N_{T}=\widetilde{\mathcal{O}}\!\left(\min\!\left\{T,\inf_{\varepsilon}\!\left\{ \ T_{\varepsilon}^{2}\mathrm{Reg}^{\mathrm{sq}}(\mathcal{F};T)/K+K\!\left( \mathrm{Reg}^{\mathrm{sq}}(\mathcal{F};T)\right)^{3}\cdot\frac{\tilde{ \mathfrak{E}}^{2}\left(\mathcal{F},\varepsilon/4;f^{\star}\right)}{ \varepsilon^{2}}\right\}\right\}\right), \tag{11}\]
_where \(\tilde{\mathfrak{E}}\) denotes the bivariate version of scale-sensitive eluder dimension given in Definition 8._
Note that contrary to Theorem 1, the above result has a dependence on the eluder dimension in both the regret and the query complexity in the respective instance-dependent bounds. Thus, when the eluder dimension is unbounded we default to the standard worst-case regret bounds that are common in the contextual bandits literature (Lattimore and Szepesvari, 2020). However, when the eluder dimension is finite, and \(T_{\varepsilon}\) is small for some value of \(\varepsilon\) (e.g. when there is a hard margin for some value of \(\varepsilon\)) then the instance-dependent regret bound can be significantly smaller, and thus more favorable. We remark that similar best-of-both-worlds style regret bounds are also well known in the literature for the simpler multi-armed bandits problem (Lattimore and Szepesvari, 2020), however the prior works in that direction do no focus on the query complexity.
Finally, note that the bandit feedback model that we considered in this section is an instance of the general contextual bandits problem considered in the prior works (Langford and Zhang, 2007; Foster et al., 2020). In particular, setting the loss \(\ell(x_{t},\widehat{y}_{t})=\mathbf{1}\{\widehat{y}_{t}=y_{t}\}\) in the general contextual bandits problem recovers our setting as a special case. However, our Algorithm 2 and the corresponding bounds in Theorem 4 can be easily extended to work for the more general contextual bandit problem without requiring any modifications to the analysis. Instance-dependent regret bounds for contextual bandits were first explored in Foster et al. (2020), however, there are a few major differences: firstly, our algorithm is designed for adversarial contexts and does not rely on epoching which makes our algorithm easier to implement and analyze. Secondly, we build our algorithm on an online regression oracles w.r.t. \(\mathcal{F}\) whereas Foster et al. (2020) rely on access to supervised learning oracles w.r.t. \(\mathcal{F}\).
#### 3.3.1 Benefits from Multiple Queries in Bandit Feedback
In various applications, e.g. in online advertising, even though the learner is restricted to bandit feedback, it can gain more information about the expert model by querying bandit feedback on multiple actions. Towards that end, we consider the selective sampling setting where at every round of interaction the learner can choose to query the expert on two actions \(\widehat{y}_{t}\) and \(\widetilde{y}_{t}\) and receive bandit feedback \(\mathbf{1}\{\widehat{y}_{t}=y_{t}\}\) and \(\mathbf{1}\{\widehat{y}_{t}=y_{t}\}\) respectively, where \(y_{t}\) is the action chosen by the noisy expert (and is not directly revealed to the learner). While the regret of the learner is computed w.r.t. \(\widehat{y}_{t}\), the action \(\widehat{y}_{t}\) is purely explorative and is only used to gather information about the expert model \(f^{\star}\).5
Footnote 5: An important difference between our multiple bandit query model and the multiple bandit query model considered in prior works in the bandit literature (e.g. in Agarwal et al. (2010)) is that the prior work accounts for both the played actions \(\widehat{y}_{t}\) and \(\widetilde{y}_{t}\) in the regret definition. On the other hand, we consider regret w.r.t. \(\widehat{y}_{t}\) only.
The setup, and our algorithm for selective sampling with multiple queries with bandit feedback is formally described in Algorithm 5 in Appendix C.5. Note that our algorithm relies on the square loss oracle that satisfies the regret guarantee given in (9). The obtained regret and query complexity bounds are as follows:
**Theorem 5** (Power of two queries in bandit feedback).: _Let \(\delta\in(0,1)\), and consider the modeling assumptions in (2), (3) and (4) with the link function \(\phi(z)=z\) (corresponding to \(\ell_{\phi}\) being the square loss). Furthermore, assume that the learner only gets bandit feedback \(\widehat{\mathbb{Q}}_{t}=\mathbf{1}\{\widehat{y}_{t}=y_{t}\}\) and \(\widehat{\mathbb{Q}}_{t}=\mathbf{1}\{\widehat{y}_{t}=y_{t}\}\) for two actions \(\widehat{y}_{t}\) and \(\widetilde{y}_{t}\). Then, Algorithm 5 (given in Appendix C.5) has the regret bounded:_
\[\mathrm{Reg}_{T}=\widetilde{\mathcal{O}}\bigg{(}\inf_{\varepsilon}\bigg{\{} \varepsilon T_{\varepsilon}+\frac{K}{\varepsilon}\mathrm{Reg}^{\mathrm{sq}}( \mathcal{F};T)\bigg{\}}\bigg{)},\]
_while simultaneously the total number of bandit queries made is bounded by:_
\[N_{T}=\widetilde{\mathcal{O}}\bigg{(}\inf_{\varepsilon}\bigg{\{}T_{ \varepsilon}+\frac{K}{\varepsilon^{2}}\cdot\mathrm{Reg}^{\mathrm{sq}}( \mathcal{F};T)\cdot\mathfrak{E}(\mathcal{F},\varepsilon/4\gamma;f^{\star}) \bigg{\}}\bigg{)}.\]
Note that, in comparison to the bounds in Theorem 4, the regret bound with access to multiple bandit queries does not scale with the eluder dimension of \(\mathcal{F}\). Furthermore, both the regret and the query complexity bounds are upper bounds by the corresponding worst-case bounds in Theorem 4, when \(\varepsilon\) is set optimally. At an intuitive level, this is because the exploration and exploitation can now be separated between actions \(\widehat{y}_{t}\) and \(\widetilde{y}_{t}\). In the single action query case, the chosen action needed to trade-off between exploration and exploration, thus we used IGW scheme, and consequently suffered from a dependence on the eluder dimension in the regret. Finally, while we only considered square loss setting (i.e. \(\phi(z)=z\)) here, extending the above results with bandit feedback for more general \(\phi\) is an interesting future research direction.
## 4 Imitation Learning (\(H>1\)) with Selective Queries to an Expert
The problem of Imitation Learning (IL) consists of learning policies in MDPs when one has access to an expert (aka the teacher) that can make suggestions on which actions to take at a given state. IL has enjoyed tremendous empirical success, and various different interaction models have been considered. In the simplest IL setting, studied under the umbrella of offline RL (Levine et al., 2020) or Behavior Cloning (Ross and Bagnell, 2010; Torabi et al., 2018), the learner is given an offline dataset of trajectories (state and action pairs) from an expert and aims to output a well-performing policy. Here, the learner is not allowed any interaction with the expert, and can only rely on the provided dataset of expert demonstrations for learning. A much stronger IL setting is the one where the learner can interact with the expert, and rely on its feedback on states that it reaches by executing its own policies.
In their seminal work, Ross et al. (2011) proposed a framework for interactive imitation learning via reduction to online learning and classification tasks. This has been extensively studied in the IL literature (e.g., Ross and Bagnell (2014); Sun et al. (2017); Cheng and Boots (2018)). The algorithm DAgger from (Ross et al., 2011) has enjoyed great empirical success. On the theoretical side, however, performance guarantees for DAgger only hold under the assumption that, when queried, the expert makes action suggestions from a very good policy \(\pi^{*}\) that we would like to compete with. However, in practice, human demonstrators are far from being optimal and suggestions from experts should be modeled as noisy suggestions that only correlate with \(\pi^{*}\). It turns out that IL where one only has access to noisy expert suggestions is drastically different from the noiseless setting. For instance, in the sequel, we show that there can be an exponential separation in terms of the dependence on horizon \(H\) in the sample complexity of learning purely from offline demonstration vs learning with online interactions.
Formally, we consider interactive IL in an episodic finite horizon Markov Decision Process (MDP), where the learner can query a noisy expert for feedback (i.e., action) on the states that it visits. The game proceeds in \(T\) episodes. In each episode \(t\), the nature picks the initial state \(x_{t,1}\) for \(h=1\); then for every time step \(h\in[H]\), the learner proposes an action \(\hat{y}_{t,h}\in[K]\) given the current state \(x_{t,h}\); then the system proceeds by selecting the next state \(x_{t;h+1}\leftarrow\mathbb{T}_{t,h}(x_{t,h},\hat{y}_{t,h})\), where \(\mathbb{T}_{t,h}:\mathcal{X}\times\mathcal{Y}\mapsto\mathcal{X}\) denotes the deterministic dynamics at timestep \(h\) of round \(t\) and is unknown to the learner. The learner then decides whether to query the expert for feedback. If the learner queries, it receives a recommended action from the expert, and otherwise the learner does not receive any additional information. The game moves on to the next time step \(h+1\), and moves to the next episode \(t+1\) when it reaches to time step \(H\) in the current episode. We now describe the expert model. With \(f^{*}_{h}\) being the underlying score function at time step \(h\), the expert feedback is sampled from a distribution \(\phi(f^{*}_{h}(x))\in\Delta(K)\), with \(\phi:\mathbb{R}^{K}\mapsto\mathbb{R}^{K}\) being some link function (e.g., \(\phi(p)[i]\propto\exp(p[i])\)). The goal of the leaner is to perform as well as the Bayes optimal policy6 defined as \(\pi^{*}_{h}(x):=\operatorname*{argmax}_{a\in[K]}\phi(f^{*}_{h}(x))\). In particular, the learner aims to find a sequence of policies \(\{\pi_{t}\}_{t\leq T}\) that have a small cumulative regret defined w.r.t. some (unknown) reward function under possibly adversarial (and unknown) transition dynamics \(\{\mathbb{T}_{t,h}\}_{h\leq H,t\leq T}\). At the same time, the learner wants to minimize the number of queries made to the expert. Formally, we consider counterfactual regret defined as
Footnote 6: Note that the comparator policy \(\pi^{*}\) reflects the experts models, and may not be the optimal policy for the underlying MDP.
\[\operatorname{Reg}_{T}=\sum_{t=1}^{T}\sum_{h=1}^{H}r(x_{t,h}^{\pi^{*}},\pi^{*} _{h}(x_{t,h}^{\pi^{*}}))-\sum_{t=1}^{T}\sum_{h=1}^{T}r(x_{t,h},\hat{y}_{t,h})\]
where \(x_{t,h}\) are the states reached by the learner corresponding to the chosen actions and the dynamics, and \(x_{t,h}^{\pi^{*}}\) denotes the states that would have been generated if we executed \(\pi^{*}\) from the beginning of the episode under the same dynamics. The query complexity \(N_{T}\) is the total number of queries to the expert across all \(H\) steps in \(T\) episodes.
Given the selective sampling results we provided in the earlier section, one may be tempted to apply them to the imitation learning problem. However, there is a caveat. A key to the reduction in Ross et al. (2011) is to apply Performance Difference Lemma (PDL) to reduce the problem of IL to online classification under the sequence of state distributions induced by the policies played by the learning algorithm. Hence, if one blindly applied this reduction, then in the margin term, one would need to account for the states that the learner visits (which could be arbitrary). Thus, for DAgger to have meaningful bounds, we would require a large margin over the entire state space. This is too much to ask for in practical applications. Consider the example of learning autonomous driving from a human driver as the expert. It is reasonable to believe that human drivers can confidently provide the right actions when they are driving themselves or are faced with situations they are more familiar with. However, assuming that the human driver is going to be confident in an unfamiliar situation (e.g., an emergency situation that is not often encountered by the human driver), is a strong assumption. Towards that end, we make a significantly weaker, and much more realistic, margin assumption that the expert has a large margin only on the state distribution induced by \(\pi^{\star}\), and not on the state distribution of the learner or the noisy expert.7 In particular, we define \(T_{\varepsilon,h}\) to denote the total number of episodes where the comparator policy \(\pi^{\star}\) visits a state with low margin at time step \(h\), i.e., \(T_{\varepsilon,h}=\sum_{t=1}^{T}\mathbbm{1}\{\texttt{Margin}(f_{h}^{\star}(x _{t,h}^{\pi^{\star}}))\leq\varepsilon\}\).
Footnote 7: The precise definition of the Margin for IL is given in the appendix.
We now proceed to our main results in this section. Learning from a noisy expert is indeed very challenging. In fact, learning from noisy expert feedback may even be statistically intractable in the non-interactive IL setting, where the learner is only limited to accessing offline noisy expert demonstrations for learning, e.g. in offline RL, Behavior Cloning, etc. The following lower bound formalizes this. In fact, the same lower bound also shows that AggreVaTe (Ross and Bagnell, 2014) style algorithms would not succeed under noisy expert feedback, AggreVaTe relies on roll-outs obtained by running the (noisy) expert suggestions.
**Proposition 1** (Lower bound for learning from non-interactive noisy demonstrations).: _There exists an MDP, for every \(h\leq H\), a function class \(\mathcal{F}_{h}\) with \(|\mathcal{F}_{h}|\leq 2^{H}\), a noisy expert whose optimal policy \(\pi^{\star}(x)=\operatorname*{argmax}_{a}\bigl{(}f_{h}^{\star}(x)[a]\bigr{)}\) for some \(f_{h}^{\star}\in\mathcal{F}_{h}\) with \(T_{\varepsilon,h}=0\) for any \(\varepsilon\leq 1/4\), such than any non-interactive algorithm needs \(\Omega(2^{H})\) many noisy expert trajectory demonstrations to learn, with probability at least \(3/4\), a policy \(\widehat{\pi}\) that is \(1/8\)-suboptimal w.r.t. \(\pi^{\star}\)._
Proposition 1 suggests that in order to learn with a reasonable sample complexity (that is polynomial in \(H\)), a learner must be able to interactively query the expert. In Algorithm 3, we provide an interactive imitation learning algorithm (with selective querying) that can learn from noisy expert feedback. A key to obtaining our result is a modified version of PDL, that we provide in Lemma 27 in the appendix, that allows us to only have the margin under the state distribution of \(\pi^{\star}\). Our result extends to the setting where transitions are picked adversarially, i.e., at time step \(h\) and episode \(t\), after seeing \(\hat{y}_{t,h}\) proposed by the learner, the nature can select \(\mathbb{T}_{t,h}\) which deterministically generates \(x_{t,h+1}\) given \(x_{t,h},\hat{y}_{t,h}\). The regret bound and query complexity bounds for Algorithm 3 are:
**Theorem 6**.: _Let \(\delta\in(0,1)\). Under the modeling assumptions above, with probability at least \(1-\delta\), Algorithm 3 obtains:_
\[\operatorname{Reg}_{T}=\widetilde{\mathcal{O}}\Biggl{(}\inf_{ \varepsilon}\biggl{\{}H\sum_{h=1}^{H}T_{\varepsilon,h}+\frac{H\gamma^{2}}{ \lambda\varepsilon^{2}}\sum_{h=1}^{H}\operatorname{Reg}^{\ell_{\phi}}( \mathcal{F}_{h};T)\biggr{\}}\Biggr{)},\]
_while simultaneously the total number of expert queries made is bounded by:_
\[N_{T}=\widetilde{\mathcal{O}}\Biggl{(}\inf_{\varepsilon}\biggl{\{}H\sum_{h=1} ^{H}T_{\varepsilon,h}+\frac{H\gamma^{2}}{\lambda\varepsilon^{2}}\sum_{h=1}^{ H}\operatorname{Reg}^{\ell_{\phi}}(\mathcal{F}_{h};T)\cdot\mathfrak{E}(\mathcal{F}_{h},\varepsilon/8\gamma;f_{h}^{\star})\biggr{\}}\Biggr{)}.\]
Since the above bound holds for any sequence of dynamics \(\{\mathbb{T}_{h,t}\}_{h\leq H,t\leq T}\), the result of Theorem 6 also holds for the stochastic IL setting where the transition dynamic is stochastic but fixed during the interaction. In particular, setting \(\mathbb{T}_{h,t}\sim\mathscr{T}_{h}\) sampled i.i.d. from a fixed stochastic dynamics \(\{\mathscr{T}_{h}\}_{h\leq H}\) recovers a similar bound for the stochastic setting. However, since the transition dynamics is fixed throughout interaction, one can hope to replace the eluder dimension in the query complexity by disagreement coefficient of the corresponding function classes by using epoching techniques similar to Section 3.1; we leave this for future research.
## 5 Imitation Learning from Multiple Experts
In Dekel et al. (2012), the problem of selective sampling from multiple experts is considered with the main motivation being that we can consider each expert as being confident (and correct) in certain states or scenarios, and we would like to learn from their joint feedback. The goal there is to perform not only as well as the best of them individually but even as well as the best combination of them. This motivation is even more lucrative for the IL setting, as we can hope to get policies that perform much better than any single expert. Continuing with the example of learning to drive from human demonstrations, we might have one human demonstrator who is an expert in highway driving, another human who is an expert in city driving, and the third one in off-road conditions. Each expert is confident in their own terrain, but we would like to learn a policy that can perform well in all terrains.
The formal model is similar to the single-expert case, but we now have \(M\) experts. For every time step \(h\leq H\), the \(m\)-th expert has an underlying ground truth model \(f_{h}^{*,m}\in\mathcal{F}_{h}^{m}\) that it uses to produce its label, i.e. for a given state \(x_{h}\) it draws its label as \(y_{h}^{m}\sim\phi(f_{h}^{*,m}(x_{h}))\), where \(\phi\) is the link function. On rounds in which the learner queries for the experts feedback, it gets back a label from each of the \(M\) experts, i.e. \(\{y_{h}^{1},\ldots,y_{h}^{M}\}\). While on every query the learner gets a different label from each expert, its objective is to perform as well as a comparator policy that is defined w.r.t. some ground truth aggregation function that we define next.
The aggregation function \(\mathscr{A}:\Delta([K])^{M}\mapsto\Delta([K])\), known to the learner, combines the recommendation of the \(M\) experts to obtain a ground truth label for the corresponding state. In particular, on a given state \(x_{h}\), the label \(y_{h}\) is samples as:
\[y_{h}\sim\mathscr{A}\Big{(}\phi(f_{h}^{\star,1}(x_{h})),\ldots,\phi(f_{h}^{ \star,M}(x_{h}))\Big{)}. \tag{13}\]
Given the aggregation function \(\mathscr{A}\) and the above label generation process, the policy \(\pi^{\star}\) that we wish to compete with in our regret bound is simply the Bayes optimal predictor given by
\[\pi^{\star}(x_{h})=\texttt{SelectAction}(\mathscr{A}(\phi(f_{h}^{\star,1}(x_{h}) ),\ldots,\phi(f_{h}^{\star,M}(x_{h})))), \tag{14}\]
where \(\texttt{SelectAction}:\Delta(K)\mapsto[K]\) is given by \(\texttt{SelectAction}(p)=\operatorname*{argmax}_{k\in[K]}p[K]\). Our main Theorem 7 below bounds the number of label queries to the experts, and regret with respect to this \(\pi^{\star}\), and is obtained using the imitation learning algorithm given in Algorithm 7 in Appendix E.2. Before we state the result, we first provide some examples of the aggregation function \(\mathscr{A}\) to illustrate the generality of our setup:
* _Random aggregation:_ Given a state \(x_{h}\), the aggregation rule chooses an expert uniformly at random and returns the label \(y_{h}\) sampled from its model. In particular, \[y_{h}\sim\phi(f_{h}^{\star,\widetilde{m}}(x_{h})),\qquad\text{ where}\qquad\widetilde{m}\sim\operatorname{Uniform}([M]).\] Here, the distribution \(\mathscr{A}\Big{(}\phi(f_{h}^{\star,1}(x_{h})),\ldots,\phi(f_{h}^{\star,M}(x_ {h}))\Big{)}=\frac{1}{M}\sum_{m=1}^{M}\phi(f^{\star,m}(x_{h}))\).
* _Majority label_: \(\mathscr{A}\) is deterministic. Given a state \(x_{t}\), the aggregation rule chooses the label \(y_{h}\in[K]\) which is the top preference for the majority of the experts. In particular, \[y_{h}=\mathscr{A}\Big{(}\phi(f_{h}^{\star,1}(x_{h})),\ldots,\phi(f_{h}^{\star, M}(x_{h}))\Big{)}=\operatorname*{argmax}_{k\in[K]}\sum_{m=1}^{M}\mathbf{1}\{k= \operatorname*{argmax}_{\widetilde{k}\in[K]}\phi(f_{h}^{\star,m}(x_{h})[ \widetilde{k}])\}.\]
* _Majority of confident experts:_ This aggregation rule is also deterministic and was first introduced in Dekel et al. (2012). Given a state \(x_{t}\), the aggregation rule chooses the label \(y_{h}\in[K]\) which is the top preference for the majority of the \(\rho\)_-confident_ experts on \(x_{h}\) i.e. the experts whose margin on \(x_{h}\) is larger than \(\rho\). In particular, \[y_{h} =\mathscr{A}\Big{(}\phi(f_{h}^{\star,1}(x_{h})),\ldots,\phi(f_{h}^ {\star,M}(x_{h}))\Big{)}\] \[=\operatorname*{argmax}_{k\in[K]}\sum_{m=1}^{M}\mathbf{1}\{k= \operatorname*{argmax}_{\widetilde{k}\in[K]}\phi(f_{h}^{\star,m}(x_{h})[ \widetilde{k}])\text{ and }\texttt{Margin}(\phi(f_{h}^{\star,m}(x_{h}))>\rho)\},\] where \(\texttt{Margin}(f_{h}^{\star,m}(x_{h})>\rho)=\max_{k_{1}}\bigl{(}\phi(f_{h}^{ \star,m}(x_{h}))[k_{1}]-\bigl{(}\max_{k_{2}\neq k_{1}}\phi(f_{h}^{\star,m}(x_{ h}))[k_{2}]\bigr{)}\bigr{)}\). This aggregation rule is useful when there may be many experts that give equal weights to the top and the second-to-top coordinates w.r.t. their respective models, and hence can not be confidently accounted for in the majority rule. Furthermore, instead of choosing the majority label, similar to Dekel et al. (2012), one can also return the label sampled according to a uniform distribution over \(\rho\)-confident experts.
Our bounds depend on a margin term \(T_{\varepsilon,h}\), that captures the number of rounds in which the Bayes optimal predictor \(\pi^{\star}\) can flip its label if our estimates of the \(M\) experts are off by at most \(\varepsilon\) (in \(\ell_{\infty}\) norm). Similar to the single expert case, we only pay in the margin term for time steps in which the counterfactual trajectory w.r.t. the policy \(\pi^{\star}\) has a small-margin. We note that while the trajectories taken by the learner or the noisy experts may go through states that have a large-margin, the margin term \(T_{\varepsilon,h}\) that appears in our bounds only accounts for time steps when the comparator policy \(\pi^{\star}\) (the optimal aggregation of expert recommendations) would go to a small-margin region, which could be much smaller. For the ease of notation, we defer the exact definition of margin, and the term \(T_{\varepsilon,h}\) to (102) in Appendix E.2, and state the main result below:
**Theorem 7**.: _Let \(\delta\in(0,1)\). Under the modeling assumptions above for the multiple experts setting, with probability at least \(1-\delta\), the imitation learning Algorithm 7 (given in the appendix) obtains:_
\[\mathrm{Reg}_{T}=\widetilde{\mathcal{O}}\!\left(\inf_{\varepsilon}\!\left\{H \sum_{h=1}^{H}T_{\varepsilon,h}+\frac{H}{\lambda\varepsilon^{2}}\sum_{m=1}^{ M}\sum_{h=1}^{H}\mathrm{Reg}^{\ell_{\phi}}(\mathcal{F}_{h}^{m};T)\right\}\right),\]
_while simultaneously the total number of label queries made is bounded by:_
\[N_{T}=\widetilde{\mathcal{O}}\!\left(\inf_{\varepsilon}\!\left\{H\sum_{h=1}^{H }T_{\varepsilon,h}+\frac{H}{\lambda\varepsilon^{2}}\sum_{h=1}^{H}\sum_{m=1}^{ M}\mathrm{Reg}^{\ell_{\phi}}(\mathcal{F}_{h}^{m};T)\cdot\mathfrak{E}(\mathcal{F}_{h}^{m },\nicefrac{{\varepsilon}}{{8}};f_{h}^{\star,m})\right\}\right).\]
In Section 6, we evaluate our IL algorithm on the Cartpole environment, with single and multiple experts. We found that our algorithm can match the performance of passive querying algorithms while making a significantly lesser number of expert queries.
### Extension: Improved Bounds for Selective Sampling
Note that setting \(H=1\) in Theorem 7 recovers a bound on the regret and query complexity for selective sampling with multiple experts. We provide a simplified algorithm for selective sampling in Algorithm 6 in Appendix E.1 for completeness. However, in selective sampling, one can improve the \(\nicefrac{{1}}{{\varepsilon^{2}}}\) term in the regret and query complexity bound under additional assumptions.
Recall that in selective sampling (\(H=1\)), given a context \(x\), the ground truth label is sampled as \(y\sim\mathscr{A}(\phi(F^{*}(x)))\) where \(F^{*}(x)=\left[f^{*,1}(x_{h})\right),\ldots,\phi(f^{*,M}(x_{h})]\). Furthermore, the policy \(\pi^{*}\) that we wish to compete with is given by \(\pi^{*}(x)=\mathtt{SelectAction}(\mathscr{A}(\phi(F^{*}(x))))\). The following theorem is an improvement over Theorem 7 when the function \(\mathscr{A}\) is \(\eta\)-Lipschitz, i.e. for any \(U,V\in\mathbb{R}^{K\times M}\) we have that \(\|\mathscr{A}(U)-\mathscr{A}(V)\|\leq\|U-V\|_{F}\).
**Theorem 8**.: _Let \(\delta\in(0,1)\). Suppose that the aggregation function \(\mathscr{A}\) is \(\eta\)-Lipschitz. Under the modeling assumptions above for selective sampling with multiple expert feedback, with probability at least \(1-\delta\), Algorithm 6 (given in Appendix E.1) obtains:_
\[\mathrm{Reg}_{T}=\widetilde{\mathcal{O}}\!\left(\inf_{\varepsilon}\!\left\{T_ {\varepsilon}+\min\{\frac{1}{\varepsilon^{2}},\frac{\lambda\eta\sqrt{M}}{ \lambda\varepsilon}\}\sum_{m=1}^{M}\mathrm{Reg}^{\ell_{\phi}}(\mathcal{F}^{m}; T)\right\}\right),\]
_while simultaneously the total number of label queries made is bounded by:_
\[N_{T}=\widetilde{\mathcal{O}}\!\left(\inf_{\varepsilon}\!\left\{T_{ \varepsilon}+\frac{180}{\lambda\varepsilon^{2}}\sum_{m=1}^{M}\mathrm{Reg}^{ \ell_{\phi}}(\mathcal{F}^{m};T)\cdot\mathfrak{E}(\mathcal{F}^{m},\nicefrac{{ \varepsilon}}{{3}};f^{\star,m})\right\}\right).\]
The proof is deferred to Appendix E.1. Extending the above improvement to the Imitation Learning (\(H>1\)) setting is an interesting direction for future research.
## 6 Experiments
We conduct experiments to verify our theory. To this end, we first introduce the simulator, _Cart Pole_(Barto et al., 1983; Brockman et al., 2016), and then explain the implementation of our algorithm and the baselines. Finally, we present the results.
Cart Pole.Cart Pole is a classical control problem, in which a pole is attached by an un-actuated joint to a cart. The goal is to balance the pole by applying force to the cart either towards the left or towards the right (so binary action). The episode is terminated once either the pole is out of balance or the cart deviates too far from the origin. A reward of 1 is obtained in each time step (however, the algorithm does not get any reward signal). The observations are four-dimensional, with the values representing the cart's position, velocity, the pole's angle, and angular velocity. The action is binary, indicating the force is either to the left or to the right.
Expert policies generation.We first generate an optimal policy \(\pi^{*}\) (that attains the maximum possible reward of 500) by policy gradient. We notice that when running the optimal policy \(\pi^{*}\), the absolute value of the cart's position only lies in \([0,2]\). Hence, to generate \(M\) experts, we first divide this interval into \(M\) sub-intervals \([a_{0},a_{1}]\),\([a_{1},a_{2}]\),...,\([a_{n-1},a_{M}]\) (\(a_{0}=0\) and \(a_{M}=2\)) by geometric progression. For the \(i\)-th expert, it plays the same action as \(\pi^{*}\) when the absolute value of the cart's position is in the interval \([a_{i-1},a_{i}]\) and plays uniformly at random outside of this interval. We find that using such generation, each expert individually cannot achieve a good performance (when \(M>1\)), while a proper combination of them can still be as strong as \(\pi^{*}\). We conduct experiment for \(M=1,2,3,\) and \(5\), respectively. Given this design of expert generation, when the cart is in the sub-interval \([a_{i-1},a_{i}]\), the only expert with non-zero margin is exactly the \(i\)-th expert.
Implementation.The algorithm is similar to Algorithm 7 but with some modification for practical purpose. First, we use a neural network (single hidden layer neural network,with 4 neurons in the hidden layer) as our function class \(\{\mathcal{F}_{h}^{m}\}_{h\leq H,m\leq M}\). Second, we specify SelectAction to pick the action of the most confident expert, i.e.,
\[\texttt{SelectAction}(f_{t,h}^{1}(x),\ldots,f_{t,h}^{M}(x))\coloneqq\text{ sign}(f_{t,h}^{\hat{i}}(x))\quad\text{where}\quad\hat{i}=\operatorname*{arg\,max}_{i \in[M]}|f_{t,h}^{i}(x)|.\]
Since we are considering binary action, we assume \(f_{t,h}^{i}(x)\in[-1,1]\), and the action space is \(\{-1,1\}\). Third, to compute \(\Delta_{t,h}^{m}\) efficiently, we apply the Lagrange multiplier to (95) to arrive at the following equivalent problem:
\[\Delta_{t,h}^{m}(x_{t,h})\coloneqq\min_{f\in\mathcal{F}_{h}^{m}} \max_{\alpha\geq 0} -\|f(x_{t,h})-f_{t,h}^{m}(x_{t,h})\|\] \[+\alpha\left(\sum_{s=1}^{t-1}Z_{s,h}\big{\|}f(x_{s,h})-f_{s,h}^{m }(x_{s,h})\big{\|}^{2}-\Psi_{\delta}^{\ell_{\phi}}(\mathcal{F}_{h}^{m},T) \right).\]
Then we treat the Lagrange multiplier \(\alpha\) as a constant, which converts the problem into the following:
\[\Delta_{t,h}^{m}(x_{t,h})\coloneqq\min_{f\in\mathcal{F}_{h}^{m}} -\|f(x_{t,h})-f_{t,h}^{m}(x_{t,h})\|+\alpha\sum_{s=1}^{t-1}Z_{s,h}\big{\|}f(x_{ s,h})-f_{s,h}^{m}(x_{s,h})\big{\|}^{2}. \tag{15}\]
The study of varying \(\alpha\) is shown in Figure 1. We found that small values (e.g., \(\alpha=1\)) mostly lead to poor performance, while the results are fairly similar for large values. In our key experiments, we choose \(\alpha=50\) when the number of experts is 1, 2 or 3, and choose 200 for 5-expert experiments. We note that since computing (15) for each time step involves repetitively fitting neural networks, which is time-consuming, we do a warm start at each round. In particular, we set the initial weights for the neural network of each round to be the weights of the trained network from the previous round. We also implemented _early stopping_ that stops the iteration if the loss does not significantly decrease for multiple consecutive iterations. The online regression oracle Oracle is instantiated as applying gradient descent for certain steps on the mean squared loss over all data collected so far, using warm start for speedup as well.
We first conduct experiments on a single expert setting. In Figure 2 we plot the curves of return and number of queries with respect to iterations for our method, and compare to DAgger (which passively makes queries at every time step; Ross and Bagnell (2014)). We note that while our algorithm does not converge to the optimal value as fast as DAgger, the number of queries made by our algorithm is significantly fewer, which means that our method is indeed balancing the speed of learning and the number of queries.
In additional to DAgger, we also compare to the following baselines:
* **Passive learning.** By passive learning, we mean running our algorithms with \(Z_{t,h}=1\), i.e., making queries whenever possible. Based on different styles of expert feedback, we divide the passive learning baselines into two: _noisy experts_ and _noiseless experts_. For the former we get the noisy label \(y_{t,h}^{m}\) for \(x_{t,h}\) (generated by \(y_{t,h}^{m}\sim\phi(f_{h}^{\star,m}(x_{t,h}))\)), and for the latter we directly get the action of the optimal policy (i.e. the action \(\pi_{h}^{\star}(x_{t,h})\)). Intuitively, noiseless feedback is more helpful than the noisy one.
* **MAMBA.** We compare our algorithm with (a slight variant of) MAMBA (Cheng et al., 2020). At each time step, it creates copies of the environment and run each expert policy on these copies, and then it selects the action of the expert policy with the highest return. For simplicity, we refer to this algorithm as MAMBA. Note that MAMBA assumes that one has access to the underlying reward function. Thus this baseline is using significantly more information than our approach.
* **Best expert.** We also compared our algorithm with the best expert policy.
The main results are shown in Figure 3. We first noticed that our algorithm outperforms passive learning
Figure 1: Learning curves of return with respect to the number of queries for different values of \(\alpha\) and different numbers of experts.
Figure 2: Learning curves of the return and the number of queries for 1 expert.
with noisy experts in all settings. Moreover, we beat the noiseless version when there is only one expert. Intuitively, getting feedback from noiseless experts is a very strong assumption and it is not surprising to see that the performance is improved with this stronger feedback. Note that our algorithm is only getting noisy labels as feedback. We also note that, despite the fact that MAMBA achieves better results than the best expert policy (in terms of the value function), it is still worse than our algorithm. Indeed, MAMBA does not even learn a policy that can solve the task when \(M\geq 2\). This is because by our construction of experts, there is no single expert that is capable of solving the task alone. Note that MAMBA performs well in the one expert case because in that case, the (single) expert can reliably solve the control task.
## 7 Conclusion
In this paper, or goal is to develop algorithms for online IL with active queries with small regret and query complexity bounds. Towards that end, we started by considering the selective sampling setting (IL with \(H=1\)), and provided a selective sampling algorithm that can work with general function classes \(\mathcal{F}\) and modeling assumptions, and relies on access to an online regression oracle w.r.t. \(\mathcal{F}\) to make its predictions (Section 3). The provided regret and query complexity bounds depend on the margin of the expert model. We then extended our selective sampling algorithm to interactive IL (Section 4). For IL, we showed that the margin term that appears in the regret and the query complexity depends on the margin of the expert on counterfactual trajectories that would have been observed on following the expert policy (that we wish to compare to), instead of the trajectories that the learner observes. Thus, if the expert always chooses actions that leads to states where it is confident (i.e. has less margin), the margin term will be smaller. We also considered extensions to bandit feedback, and learning with multiple experts.
We conclude with a discussion of future research directions:
* _Computationally efficient algorithms:_ The algorithms that we considered in this paper are not computationally efficient beyond simple function classes (linear functions). In particular, our algorithms need to perform minimization over \(\mathcal{F}\) which could be NP-hard when \(\mathcal{F}\) is non-convex. Furthermore, even when minimization over \(\mathcal{F}\) is tractable, our algorithms need to maintain a version space of feasible functions in \(\mathcal{F}\) (e.g. in (5) or (10)) in order to compute the query condition, which is also intractable without strong assumptions on \(\mathcal{F}\).
* _Learning via offline regression oracles_: The algorithms that we considered in this paper rely on access to _online_ regression oracles w.r.t. the underlying function class. While we have rigorous understanding
Figure 3: Learning curves of return with respect to the number of queries for different algorithms and numbers of experts.
of online algorithms for various classical settings e.g. linear functions, etc., provably scaling these algorithms for more complex function classes used in practice, e.g. neural networks, is still an active area of research. On the other hand, the theory and practice of offline regression w.r.t. these complex function classes is much more developed. Towards that end, it would be interesting to explore if our algorithms can be generalized to work with offline regression oracles. A potential approach is to rely on the epoching trick used in Algorithm 4 and fitting a fresh model \(\widehat{f}_{e}\) via offline regression at the beginning of each epoch, and then using it to predict \(\widehat{y}\) for all time steps in epoch \(e\). However, this would result in a dependence on the eluder dimension in both the regret and the query complexity. Improving this is an interesting future research direction.
* _IL with bandit feedback:_ In Section 3.3, we considered selective sampling with bandit feedback--where the learner only receives feedback on whether its chosen action matches the action of the noisy expert. Extending the framework of learning with bandit feedback to imitation learning, and for multiple experts, is an interesting direction for future research. Practically speaking, IL with bandit feedback has tremendous applications from online advertising to robotics.
* _Extension to unknown \(T\):_ The algorithms rely on the knowledge of \(T\) to set the query condition and the constraint set. Extending our algorithms to operate without a-priori knowledge of \(T\), e.g. by extending the standard doubling trick in interactive learning for our setting, is an interesting technical direction.
### Acknowledgements
AS thanks Sasha Rakhlin and Dylan Foster for helpful discussions. AS acknowledges support from the Simons Foundation and NSF through award DMS-2031883, as well as from the DOE through award DE-SC0022199. WS acknowledges support from NSF grant IIS-2154711. KS acknowledges support from NSF CAREER Award 1750575, and LinkedIn-Cornell grant.
|
2305.12983 | Why current rain denoising models fail on CycleGAN created rain images
in autonomous driving | One of the main tasks of an autonomous agent in a vehicle is to correctly
perceive its environment. Much of the data that needs to be processed is
collected by optical sensors such as cameras. Unfortunately, the data collected
in this way can be affected by a variety of factors, including environmental
influences such as inclement weather conditions (e.g., rain). Such noisy data
can cause autonomous agents to take wrong decisions with potentially fatal
outcomes. This paper addresses the rain image challenge by two steps: First,
rain is artificially added to a set of clear-weather condition images using a
Generative Adversarial Network (GAN). This yields good/bad weather image pairs
for training de-raining models. This artificial generation of rain images is
sufficiently realistic as in 7 out of 10 cases, human test subjects believed
the generated rain images to be real. In a second step, this paired good/bad
weather image data is used to train two rain denoising models, one based
primarily on a Convolutional Neural Network (CNN) and the other using a Vision
Transformer. This rain de-noising step showed limited performance as the
quality gain was only about 15%. This lack of performance on realistic rain
images as used in our study is likely due to current rain de-noising models
being developed for simplistic rain overlay data. Our study shows that there is
ample space for improvement of de-raining models in autonomous driving. | Michael Kranl, Hubert Ramsauer, Bernhard Knapp | 2023-05-22T12:42:32Z | http://arxiv.org/abs/2305.12983v1 | # Why current rain denoising models fail on CycleGAN
###### Abstract
One of the main tasks of an autonomous agent in a vehicle is to correctly perceive its environment. Much of the data that needs to be processed is collected by optical sensors such as cameras. Unfortunately, the data collected in this way can be affected by a variety of factors, including environmental influences such as increment weather conditions (e.g., rain). Such noisy data can cause autonomous agents to take wrong decisions with potentially fatal outcomes. This paper addresses the rain image challenge by two steps: First, rain is artificially added to a set of clear-weather condition images using a Generative Adversarial Network (GAN). This yields good/bad weather image pairs for training de-raining models. This artificial generation of rain images is sufficiently realistic as in 7 out of 10 cases, human test subjects believed the generated rain images to be real. In a second step, this paired good/bad weather image data is used to train two rain denoising models, one based primarily on a Convolutional Neural Network (CNN) and the other using a Vision Transformer. This rain de-noising step showed limited performance as the quality gain was only about 15%. This lack of performance on realistic rain images as used in our study is likely due to current rain de-noising models being developed for simplistic rain overlay data. Our study shows that there is ample space for improvement of de-raining models in autonomous driving.
## Introduction
Significant progress has been made in the development of autonomous vehicles over the last decade [1]. Much of this progress is due to the availability of increasingly powerful AI systems and models. These systems can make plausible decisions based on the data they collect, just as a human driver is constantly asked to do when driving. One of the core tasks of such an autonomous agent is therefore the correct perception of its environment. A large part of the data required to correctly control and navigate an autonomous vehicle is image data. Optical data is essential for basic tasks such as lane detection, object recognition, distance measurement or calculation of approach speeds [2]. The impeccable quality of this image data is therefore crucial to the smooth functioning of the many processing steps and inference tasks required for autonomous driving.
Unfortunately, optical sensors - mainly cameras - are also subject to a wide range of interference when collecting this data. On the one hand, such interference is caused by the capturing sensors themselves. Every camera sensor produces a certain amount of noise. However, this is due to the way current image sensors work. On the other hand, and with a much more serious impact, such image disturbances are caused by external influences from the environment. This includes adverse weather conditions, such as rain, for example.
This work focuses on the task of image enhancement and restoration using deep learning methods. To achieve this, two essential conditions must be met. First, appropriate training data must be available. Training data must represent a scene with and without the image disturbance. Second, suitable deep learning models capable of removing such disturbances are required. These models must be able to infer from the disturbed image material what the original image would have looked like without the disturbance. This work takes a new approach to the generation of training data, one that has found limited attention in the current literature.
Unlike previous work, the rainy version of a given image pair is not created by applying simple raining layers. Instead, a previously trained Generative Adversarial Network (GAN) is used to perform a style transfer on the training images. This makes it possible to create more realistic rain scenarios. This new type of training data is used to train and benchmark existing de-raining models. Their performance is re-evaluated in the light of this new type of training data.
## Methods
### Training data generation
Deep Learning (DL) models for image restoration and enhancement rely on image pairs during the training process. These image pairs must contain both the clean version of an image without any existing artefacts and a variant containing the artefacts that the model is supposed to remove. Creating a dataset of real images for this type of image interference would be a very challenging task. It would require capturing identical road traffic scenes in both clear and rainy weather. The scene itself, the perspective and the time of day would all have to be the identical. In order to achieve the goal of training DL models for rain removal, we therefore decided to use Al-based methods to generate suitable training data.
The model that was used to generate the training data for the de-raining models is called CycleGAN [3]. It uses an approach that replaces pair-based supervision with set-based supervision. To generate the required training data with CycleGAN, the implementation provided by the authors was used. In order to maintain the current version of the implementation at the time this work was done and to ensure traceability, a fork of their repository was created. It can be accessed using the subrepo in [4].
### bdd_rain dataset
Images from the Berkeley Deep Drive (BDD100K) dataset [5] were used to create the training data for CycleGAN. A python script was used to read the annotations in json format. The script is available at [4]. The images were then automatically sorted into rainy and clear. The images for the dataset of this project were taken from the training part of BDD100K. Of the 70,000 images contained there, there are 5,070 rainy images and 37,344 clear images. Although the two classes are unbalanced this does not affect the result as 1000 images from each of the two classes were randomly selected for the final dataset, forming the bdd_rain dataset used to train the CycleGAN model.
### syn_derain dataset
After generating the bdd_rain training dataset using a CycleGAN, the deraining models were created. Again, images from the BDD100K dataset were chosen as the source material. This time they were chosen, from the test part of the dataset. 1000 images with clear weather labelling were randomly selected. With these images, the versions with artificially added rain were generated in an inference run of the trained CycleGAN model.
### Evaluation of the training data
In the context of this work, it was necessary to evaluate images based on a subjective impression without a reference image being available for comparison. Specifically, the quality of the synthetically generated rain in the syn_derain dataset. Human perception is superior to any metric, no matter how sophisticated [11]. However, a single person's judgement is still biased. To avoid such a one-sided evaluation raising doubts about the objectivity of the work, the evaluation was carried out by different people with different backgrounds. A good way to get a larger audience to judge the images of artificial rain is to use a survey. A survey was designed in the form of a simple quiz. The user was given 10 images to look at in the quiz and should select for each of the images whether it is "real" or "fake".
For the quiz1, 6 random images were selected from the rain-set of the syn_derain dataset. It should be emphasised that the selection of the images from the dataset was random. No pre-selection of the images took place that could have biased the results of the survey. For these images, "fake" was taken as the correct answer. Rain-labelled images from the BDD100K dataset were also randomly selected for the remaining 4 images. For these pictures, the correct answer was "real". The outcome of the survey es presented in the results section.
Footnote 1: The user survey can be accessed in [https://forms.gle/QVX/hwdDfQAbMJtk7](https://forms.gle/QVX/hwdDfQAbMJtk7)
### Training and evaluation of de-raining models
The de-raining models used for evaluation in this work are Convolutional Neural Networks (CNNs) and transformer based. As there are not many published models in the transformer sector for this special use case, the choice fell on Restormer [6] which represents the first de-raining model evaluated in the course of this work. The main reason for selecting this model was that the authors made a conscious effort to reduce the computational complexity of the model.
The second model examined in this work is called Recurrent SE Context Aggregation Net (RESCAN) [7]. The model works mainly with CNNs and partly with Recurrent Neural Networks (RNNs).
To train both models, existing implementations provided directly by the authors were used. To perform the training of the Restormer model the implementation found in the Restormer sub-repository in [4] was used. Several changes were required to adapt the existing implementations to the needs of the current project. A relatively profound change was the adjustment of the model's batch size and patch size. This change was required to allow model training with the limited VRAM available in the used hardware setup. A further adjustment had to be made because the original model had only been trained on the Rain100H and Rain100L [8] datasets and therefore only supported the directory structure used there. The implementation was changed to allow any model name to be specified for training. Parameters used for training can be found in the repository mentioned above.
Only minor changes were required in the RESCAN implementation which can be found in the RESCAN subrepo in [4]. However, the structure of the syn_derain dataset had to be adapted for RESCAN. The image pairs do not need separate directories for the rain and norain versions of the images. Instead, the pairs must be merged into one image, which then has the dimensions 2W\(\times\)H of a single image. In this merged image, the rain version must be on the left and the clean version on the right.
In the validation process, the trained models and the images from the test dataset (_n_=94) of syn_derain were used to create predictions. The predictions were performed on the rainy images from the said test dataset. The result of the predictions was a derained image. The qualitative results compared to the clean initial image are presented in the results section.
## Results
### Training data of paired clear/rain weather images
Using 1000 clear weather images from the BDD100K dataset, artificially rainy versions of these images were created using the trained CycleGAN model for image-to-image translation. This set of image pairs was splitted into a train, test and validation subset. The test subset consisting of 94 image pairs was used later to measure the performance of the deraining models. Example results from the syn_derain dataset generated in this way are shown in fig. 1.
Figure 1: Representative synthetic rain image examples. Left side: original image from the BDD100K dataset. Right side: same image but with rain conditions artificially added by the trained CycleGAN model.
### User survey
The quiz had a total of 21 participants over a period of about 2 weeks. For the evaluation of the answers, each answer was considered separately and no grouping per picture was made. Thus, 210 independent answers were included in the evaluation. In order to visualise the results a confusion matrix is depicted in fig. 2. The calculated metrics for the survey result can be found in table 1. False Positive Rate (FPR) and accuracy have relatively low values, which shows that the participants had difficulties in deciding whether a synthetic image was real or fake. Taken together this indicates a rather high quality of the generated rain images and shows that the images are well suited for training the de-raining models.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline & \multicolumn{3}{c|}{SSIM} & \multicolumn{3}{c|}{PSNR-HVS-M} \\ \cline{2-7} & Mean & \(\sigma\) & Gain & Mean & \(\sigma\) & Gain \\ \hline Restormer & 0.79 & 0.11 & +0.16 & 20.68dB & 5.50dB & +4.49dB \\ RESCAN & 0.74 & 0.19 & +0.11 & 18.44dB & 1.71dB & +2.25dB \\ \hline Rain image & 0.63 & 0.16 & - & 16.19dB & 3.60dB & - \\ \hline \end{tabular}
\end{table}
Table 2: Quality results for 94 samples of syn_drain test dataset
Figure 3: Violin plot of de-rained images (_n_=94) quality scoring
## Discussion
Our results show that the generation of clean/rainy image pairs using a cycle GAN is a promising approach. The results of the user survey show that the image-to-image translation works well for rainy weather conditions and is sufficiently realistic for human perception. This shows that it is possible to train de-raining models with realistic data and not just use overlays with artificial rain streaks as usually done in the literature [12, 13]. Reflections from a wet road, for example, can interfere with sensor perception. This can cause object recognition to fail and the autonomous agent to overlook pedestrians or other road users.
Regarding the results of the de-raining models, the quality gains are rather small. The main reason for this is that the de-raining models used are primarily designed to remove artificial rain streak overlays. RESCAN tries to cover a wide range of rain patterns with several streak layers in order to learn as many different patterns as possible. However, as the real rain images from the BDD100K dataset show, such intense rain streaks are rarely encountered in road traffic situations. Furthermore, these streaks are usually not the main cause of visibility impairment but are also caused by reflections from wet road surfaces and spray. A quality comparison with existing work was not useful in the given context. During the research, we did not come across any projects using training data generated by CycleGAN (or other image-to-image translation models) to train image restoration models. Other authors have always used datasets with rather simple rain overlays. The authors of Restormer [6] and RESCAN [7] also used only such datasets.
Figure 4: Example results of Restormer and RESCAN Model. To the left is the reference image without rain (_norain_ column) and next to it is the version of this image that has been rained using CycleGAN (_rain_ column). The third image from the left shows the derained result of Restormer (_Restormer_ column) and the right image shows the derained result of RESCAN (_RESCAN_ column). Below each image (except the reference image), the 5 value shows the corresponding SSIM quality rating, and the \(P\) value shows the rating using PSNR-HVS-M. In the Restormer and RESCAN columns, the quality gain over the rain image is also given after the rating. The gain value is positive if the result of the respective de-raining model achieved a better-quality rating compared to the rained image.
## Conclusion
Our study shows that current rain noising models have limited performance on realistic rain images. Therefore, we believe that new and/or optimised methods are needed in order to properly remove bad weather influences from images in autonomous driving.
## Data and Software Availability
The used software, models and data sets are available in the following GitHub repository:
[https://github.com/Mickr4ne/iwire-av](https://github.com/Mickr4ne/iwire-av).
|
2307.12122 | Synthesis of Batik Motifs using a Diffusion -- Generative Adversarial
Network | Batik, a unique blend of art and craftsmanship, is a distinct artistic and
technological creation for Indonesian society. Research on batik motifs is
primarily focused on classification. However, further studies may extend to the
synthesis of batik patterns. Generative Adversarial Networks (GANs) have been
an important deep learning model for generating synthetic data, but often face
challenges in the stability and consistency of results. This research focuses
on the use of StyleGAN2-Ada and Diffusion techniques to produce realistic and
high-quality synthetic batik patterns. StyleGAN2-Ada is a variation of the GAN
model that separates the style and content aspects in an image, whereas
diffusion techniques introduce random noise into the data. In the context of
batik, StyleGAN2-Ada and Diffusion are used to produce realistic synthetic
batik patterns. This study also made adjustments to the model architecture and
used a well-curated batik dataset. The main goal is to assist batik designers
or craftsmen in producing unique and quality batik motifs with efficient
production time and costs. Based on qualitative and quantitative evaluations,
the results show that the model tested is capable of producing authentic and
quality batik patterns, with finer details and rich artistic variations. The
dataset and code can be accessed
here:https://github.com/octadion/diffusion-stylegan2-ada-pytorch | One Octadion, Novanto Yudistira, Diva Kurnianingtyas | 2023-07-22T16:42:26Z | http://arxiv.org/abs/2307.12122v1 | # Synthesis of Batik Motifs using a Diffusion - Generative Adversarial Network
###### Abstract
Batik, a unique blend of art and craftsmanship, is a distinct artistic and technological creation for Indonesian society. Research on batik motifs is primarily focused on classification. However, further studies may extend to the synthesis of batik patterns. Generative Adversarial Networks (GANs) have been an important deep learning model for generating synthetic data, but often face challenges in the stability and consistency of results. This research focuses on the use of StyleGAN2-Ada and Diffusion techniques to produce realistic and high-quality synthetic batik patterns. StyleGAN2-Ada is a variation of the GAN model that separates the style and content aspects in an image, whereas diffusion techniques introduce random noise into the data. In the context of batik, StyleGAN2-Ada and Diffusion are used to produce realistic synthetic batik patterns. This study also made adjustments to the model architecture and used a well-curated batik dataset. The main goal is to assist batik designers or craftsmen in producing unique and quality batik motifs with efficient production time and costs. Based on qualitative and quantitative evaluations, the results show that the model tested is capable of producing authentic and quality batik patterns, with finer details and rich artistic variations. The use of the Wasserstein loss function tends to produce batik motifs that are relatively new but less neat than the use of the StyleGAN2-Ada loss. The quality of the dataset also has a positive impact on the quality of the resulting batik patterns. Overall, this research contributes to the integration of Diffusion-GAN technology with traditional arts and culture, especially in the synthesis of batik motifs. However, there is still room for further development in increasing skill and accuracy in producing more detailed batik motifs. The dataset and code can be accessed here: [https://github.com/octadion/diffusion-stylegan2-ada-pytorch](https://github.com/octadion/diffusion-stylegan2-ada-pytorch)
**Keywords:** Batik, Generative Adversarial Network, Diffusion, Diffusion-GAN
## 1 Introduction
Batik, a combination of art and craftsmanship, is an artistic and technological creation unique to the Indonesian people. It has reached unparalleled levels of design, motifs, and production processes. The deeply meaningful and philosophical butik patterns continue to be explored, drawing inspiration from various customs and cultures in Indonesia. According to the Indonesian Dictionary, a motif refers to a pattern or design that forms diverse and captivating forms [1].
Research on butik motifs has primarily focused on their classification, aiming to identify specific motifs present in butik fabrics. However, further research can expand into the synthesis of butik motifs. Generating butik patterns automatically can be achieved with the help of artificial intelligence. intelligence is chosen for its flexibility and applicability in various fields, such as speech recognition, computer vision, natural language processing, and more.
Generative Adversarial Network (GAN) has become a crucial deep learning model in generating synthetic data, such as images or text. Comprising a generator and discriminator, the GAN model collaboratively produces high-quality synthetic data [2].
One challenge in using GAN models lies in their unstable and inconsistent performance during training. Convergence failure or the inability to achieve the desired accuracy level leads to inconsistent outputs. Even when provided with the same input, the model generates outputs of varying quality. This inconsistency stems from the model's struggle to learn consistent patterns in the data or when excessive noise interferes during training, particularly evident when generating complex data like images or videos. Researchers have sought to develop more stable and high-quality models, such as Wasserstein GAN, StyleGAN2, or BigGAN. In this particular case, StyleGAN was chosen due to its implementation simplicity, availability of pre-trained models, and ease of fine-tuning.
StyleGAN2, a variant of the GAN model, excels at generating realistic and high-quality images. The StyleGAN2 architecture adopts a style-based approach, allowing the generator model to separate the style and content aspects of an image. This separation enhances control over the generated image's features at different levels of detail. Nevertheless, further improvements can be made to the StyleGAN model, including the incorporation of the Diffusion technique [3].
Inspired by thermodynamic processes, the Diffusion model employs a step-by-step diffusion chain, progressively introducing random noise to the data. The model then learns to reverse this diffusion process, resulting in desired data samples generated from the initial noise [4].
In the realm of butik application, StyleGAN2 can effectively generate realistic synthetic butik patterns. To achieve optimal performance and enhance the model further, StyleGAN2 can be fine-tuned and adjusted with techniques like the aforementioned
diffusion technique. The fine-tuning process for batik involves using carefully curated datasets, encompassing collection, selection, quality determination, and motif choice. Additionally, adjusting the model's architecture, such as modifying the number of layers or kernel size, can improve its ability to learn complex batik patterns. Appropriate loss functions and the incorporation of diffusion techniques should also be considered.
Therefore, the utilization of StyleGAN2, diffusion techniques, and fine-tuning holds promise for generating intricate and realistic synthetic batik patterns of high quality. These patterns can find applications in various fields, including fashion design and other creative industries. They will assist batik designers and artisans in producing unique and top-tier batik patterns, ultimately expediting production time and reducing costs.
## 2 Related Works
Goodfellow et al 2014 [2] in their research titled "Generative Adversarial Nets", discussed a model with adversarial networks which has two main components: a generator and a discriminator. The generator tries to create data that resemble the original data distribution, while the discriminator attempts to differentiate between the original data and the data produced by the generator. The goal is to train the generator to produce data so convincing that the discriminator cannot distinguish it from the original data.
Martin Arjovsky, Soumith Chintala, and Leon Bottou 2017 [5], with their research "Wasserstein GAN", they discuss the Wasserstein GAN (WGAN) as an advancement over conventional GANs. WGAN employs the Wasserstein metric in their work to calculate the difference between the generator's distribution and the actual data distribution. Compared to typical GANs, WGAN has shown to be more stable throughout training.
With their research, Ishaan Gulrajani et al. 2017 [6] address difficulties in training GANs and present Gradient Penalty (GP) as a novel solution. It replaces the Weight Clipping approach and demonstrates that GP is more efficient at producing optimal discriminators and delivering superior outcomes across a range of data generation activities. In order to show how GP enhances performance, the paper also examines other training-related topics such overfitting and sample quality measurement.
A study by Fedus et al. (2018) [7] This research focuses on the use of Generative Adversarial Networks (GANs), experiments are carried out using variations of GANs, such as non-saturating GAN and WGAN-GP, then synthesis trials are carried out to see the effectiveness of each model, using the CelebA, CIFAR-10, and Color MNIST datasets. Based on the results that have been done, it is found that WGAN-GP has the best performance which can overcome the problem of mode collapse.
Tero Karras, Samuli Laine, and Timo Aila 2019 [3] with his research entitled, "A Style-based Generative Adversarial Network", in which this research focuses on developing GAN models with a style control approach. Their work demonstrates a more stable learning process, eliminates mode collapse, and generates high-quality images. The findings of their research will serve as a foundational methodology for designing GANs in upcoming research.
Karras et al 2020 [8] proposed an adaptive discriminator augmentation mechanism for training GANs with limited data. Their experiments demonstrated its effectiveness, even with a small number of training images. The paper discussed implications for image and video authenticity and training efficiency. This study made significant contributions to GAN development with limited data.
Ho et al 2020 [4] introduced Denoising Diffusion Probabilistic Models (DDPMs) as a new generative model that trains a model to predict the original data distribution from a reverse diffusion process. DDPMs have demonstrated impressive sample quality and have inspired various subsequent research in diffusion models.
The research on batik especially Indonesian butik motifs is limited to the classification such as [9]. Agus Eko Minarno, Moch. Chamdani Mustaqim, et al 2021 [10], in their research titled "Deep Convolutional Generative Adversarial Network Application in Batik Pattern Generator", discuss the application of the Deep Convolutional method and GANs in generating batik patterns. Their research findings demonstrate that batik patterns can be effectively produced using the GAN method. In this work we try to explore the possibility of generating various Indonesian butik motifs using diffusion model and GANs.
## 3 Research Method and Materials
### Generative Adversarial Network
The Generative Adversarial Network (GAN) is a neural network architecture used for generating synthetic data that appears to be taken from the original distribution. A GAN consists of two opposing models, which is a generator (\(G\)) and a discriminator (\(D\)) [2].
The generator is a neural network that generates synthetic data. The generator receives random noise \(z\) as input and converts it into an expected output as synthetic data \(G(z)\).
The discriminator is a neural network used to distinguish between original data and synthetic data generated by the generator. The discriminator receives input data (either original or synthetic) and outputs a score \(D(x)\) or \(D(G(z))\) indicating how convincing the data is considered to be original.
The training of the GAN involves optimizing the generator to fool the discriminator into not being able to distinguish synthetic data from original data, and optimizing the discriminator to accurately distinguish synthetic data from original data. This process is repeated until convergence.
This can be represented by the following loss function:
\[\min_{G}\max_{D}V(D,G)=\mathbb{E}_{x\sim p_{\text{data}}(x)}[\log D(x)]+ \mathbb{E}_{z\sim p_{z}(z)}[\log(1-D(G(z)))] \tag{1}\]
Where \(\mathbb{E}\) is the symbol for expectation or average, \(x\) is the data from the original distribution \(p_{\text{data}}\), \(x\) is the data from the original distribution \(p_{\text{data}}\), \(D(x)\) is the discriminator's belief that \(x\) is real, \(G(z)\) is the synthetic data generated by the generator, and \(G(z)\) is the synthetic data generated by the generator. The Discriminator (\(D\)) is trained to maximize this loss function (trying to make \(D(x)\) close to \(1\) and \(D(G(z))\)
close to 0), while the Generator (\(G\)) is trained to minimize the second part of this loss function (trying to make \(D(G(z))\) close to 1).
### Style-GAN
The Style Generative Adversarial Network (StyleGAN) is an advancement from previous GANs. Unlike its predecessors, StyleGAN can generate new images with significantly higher quality [3].
StyleGAN, like previous GANs, employs the concept of adversarial training, where the training consists of two competing networks: the generator and the discriminator. The generator is responsible for creating fake images, while the discriminator is tasked with distinguishing between real and fake images. In this process, both networks train each other, allowing the generator to create images that increasingly fool the discriminator until the generator produces images that resemble the real ones.
There are several components that distinguish StyleGAN from traditional GANs, which make the StyleGAN model superior to traditional GANs. They include: Mapping Network - a mapping concept to convert latent vectors into style vectors that govern various visual aspects, Weight Demodulation - a technique used to remove distortions in the generated images, Noise Injection - a noise injection into the generator to influence details and textures so that images are more varied.
### Diffusion Model
Diffusion models are a type of generative model used to generate new data similar to the data used in training. Fundamentally, diffusion models work by degrading the training data through repeated addition of Gaussian noise in its iterations, and then the model learns how to restore the noisy data by reversing the noise addition process. After training the model, new data can be generated just by feeding random noise into the learned noise-removal process [4].
More specifically, the diffusion model is a latent variable model that maps to the latent space using a Markov chain. This chain progressively adds noise to the data to obtain a posterior close to \(q(x_{1:T}|x_{0})\), where \(x_{1:T}\) are latent variables of the same dimension as \(x_{0}\). As illustrated in the Figure 1 below, in a process from the Markov chain, an image will be added with Gaussian noise until it becomes pure noise, and then the diffusion model's task is to learn to reverse the process as shown in the Figure 2.
Figure 1: Adding noise process in diffusion model.
### Proposed Method
#### 3.4.1 Diffusion-GAN
Diffusion-GAN is an innovative framework for Generative Adversarial Networks (GANs) that introduces a unique approach using a forward diffusion chain to generate instant noise distributed by a Gaussian mixture. Diffusion-GAN comprises three components: an adaptive diffusion process, a diffusion time step-dependent discriminator, and a generator [11].
1. The same adaptive diffusion mechanism is used to diffuse both the observed data and the created data. Each diffusion time step uses a different noise-to-data ratio. The mathematical definition of this diffusion process is as follows: \[q(x_{1}:T|x_{0}):=\prod_{t=1}^{T}q(x_{t}|x_{t-1}),\] (2) \[q(x_{t}|x_{t-1}):=N(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\sigma^{2}I).\] (3)
2. Diffusion Time Step Discriminator: Between diffused real data and diffused produced data, the discriminator is trained to distinguish. The diffusion time step has an impact on how well the discriminator performs.
3. Generator: By backpropagating through the forward diffusion chain, the generator picks up information from the feedback of the discriminator. To balance the amounts of noise and data, the diffusion chain's length is dynamically changed.
When utilizing Diffusion-GAN, an actual image or a generated image can be used as the diffusion process input. The several stages of the diffusion process gradually magnify the noise in the image. The generator and the data affect how many diffusion steps there are. The diffusion process must be differentiable in order to calculate the derivative of the output from the input. As a result, it is possible to update the generator by diffusing the gradient from the discriminator. Diffusion-GAN compares noisy versions of actual and created images that were obtained by sampling from a Gaussian mixture distribution over the diffusion steps using the time step-dependent discriminator, as opposed to typical GANs that directly compare real and generated images. Due to different noise-to-data ratios, this distribution demonstrates the property that certain components contribute more noise than others. Sampling from this distribution has two benefits: first, by addressing the problem of vanishing gradients,
Figure 2: Reverse process in diffusion model.
it stabilizes the training process; second, it enriches the dataset by producing multiple noisy copies of the same image, improving data efficiency and generator diversity.
#### 3.4.2 Architecture
Figure 3: An overview of architecture.
As per the Figure 3 above, we utilize the StyleGAN2-Ada architecture as the primary architecture, which comprises a mapping network made up of several fully connected layers. Following this, the generator consists of several components, namely a generation block for each resolution level, a style module, the progressive growing technique which starts training at a low resolution and gradually increases the resolution during the training process, and the implementation of weight demodulation to reduce emerging artifacts. Subsequently, the discriminator is composed of several components, including a discrimination block for each resolution level, downsampling, and a fully-connected layer to produce a final score indicating whether the input image is considered real or synthetic. Lastly, we modify the architecture by adding a diffusion function to inject Gaussian noise into the images generated during training, enabling the discriminator to learn to distinguish noise in real images and noise in fake images, thereby enhancing the model, where details are shown in the Figure 4 below.
#### 3.4.3 Algorithm
Below is the algorithm of Diffusion-GAN:
Figure 4: Detail of Diffusion Proccess.
```
1:\(i\gets 0\)
2:while\(i\leq\) number of training iterations do
3:Step I: Update discriminator
4: Sample minibatch of \(m\) noise samples \(\{z_{1},z_{2},\ldots,z_{m}\}\) from the distribution \(p_{z}(z)\).
5: Generate samples \(\{x_{g,1},x_{g,2},\ldots,x_{g,m}\}\) using the generator \(G\) with inputs \(\{z_{1},z_{2},\ldots,z_{m}\}\).
6: Sample minibatch of \(m\) data examples \(\{x_{1},x_{2},\ldots,x_{m}\}\) from the distribution \(p(x)\).
7: Sample \(\{t_{1},t_{2},\ldots,t_{m}\}\) uniformly with replacement from a given tepl list.
8:for\(j\in\{1,2,\ldots,m\}\)do
9: Sample \(y_{j}\sim q(y_{j}|x_{j},t_{j})\) and \(y_{g,j}\) from the distribution \(q(y_{g,j}|x_{g,j},t_{j})\).
10:endfor
11: Update discriminator by maximizing \(L_{D}=-E_{x\sim p_{data}(x)}[\log D(x)]-E_{z\sim p_{z}(z)}[\log(1-D(G(z)))]\)
12:Step II: Update generator
13: Sample minibatch of \(m\) noise samples \(\{z_{1},z_{2},\ldots,z_{m}\}\) from the distribution \(p_{z}(z)\).
14: Generate samples \(\{x_{g,1},x_{g,2},\ldots,x_{g,m}\}\) using the generator \(G\) with inputs \(\{z_{1},z_{2},\ldots,z_{m}\}\).
15: Sample \(\{t_{1},t_{2},\ldots,t_{m}\}\) with replacement from a given tepl list.
16:for\(j\in\{1,2,\ldots,m\}\)do
17: Sample \(y_{g,j}\) from the distribution \(q(y_{g,j}|x_{g,j},t_{j})\).
18:endfor
19: Update generator by minimizing \(L_{G}=E_{z\sim p_{z}(z)}[\log(1-D(G(z)))]\)
20:Step III: Update diffusion
21:if\(i\mod 4==0\)then
22: Calculate \(r_{d}=E_{y,t\sim p(y,t)}[\text{sign}(D_{\phi}(y,t)-0.5)]\) and update \(T=T+\text{sign}(r_{d}-d_{\text{target}})\cdot C\).
23: Sample the tepl list with tepl \(=[0,\ldots,0,t_{1},\ldots,t_{32}]\), where \(t_{k}\sim p_{\pi}\) for \(k\in\{1,\ldots,32\}\), and \(t\sim p_{\pi}:=\) (uniform: Discrete, priority: Discrete).
24:endif
25:\(i\gets i+1\)
26:endwhile
```
**Algorithm 1** Diffusion-GAN
The Algorithm 1 above is the algorithm used in training the Generative Adversarial Network model with the Diffusion method. In each training iteration, the following steps are performed: First, the discriminator is updated to maximize the Discriminator's loss function. First, we take \(m\) minibatch samples of noise samples \(z_{1},z_{2},\ldots,z_{m}\) from the \(p_{z}(z)\) distribution. Then, generator \(G\) is used to generate samples \(x_{g,1},x_{g,2},\ldots,x_{g,m}\) using inputs \(z_{1},z_{2},\ldots,z_{m}\). Next, we take \(m\) minibatch samples from the sample data \(x_{1},x_{2},\ldots,x_{m}\) from the \(p(x)\) distribution. The samples \(t_{1},t_{2},\ldots,t_{m}\) are taken randomly by replacement from the list of tepls given. Next,
for each \(j\) in \(1,2,\ldots,m\), sample \(y_{j}\) from the distribution \(q(y_{j}|x_{j},t_{j})\) and \(y_{g,j}\) from the distribution \(q(y_{g,j}|x_{g,j},t_{j})\). The discriminator is updated by maximizing the log likelihood of the samples using the Discriminator loss function.
Next, the Generator is updated by minimizing the Generator loss function. First, we take \(m\) minibatch samples of noise samples \(z_{1},z_{2},\ldots,z_{m}\) from the \(pz(z)\) distribution. Generator \(G\) is used to generate samples \(x_{g,1},x_{g,2},\ldots,x_{g,m}\) using the input \(z_{1},z_{2},\ldots,z_{m}\). Samples \(t_{1},t_{2},\ldots,t_{m}\) are taken with replacement from the list of tepls given. For each \(j\) in \(1,2,\ldots,m\), sample \(y_{g,j}\) from the distribution \(q(y_{g,j}|x_{g,j},t_{j})\). The generator is updated by minimizing the log likelihood of the samples using the Generator loss function.
And last, if \(i\) modulus 4 is equal to 0, then the following steps are taken. First, \(r_{d}\) is calculated by taking the average sign of the difference between \(D_{\phi}(y,t)\) and 0.5, where \(D\) is the discriminator and \(\phi\) is the internal representation of the data. The \(r_{d}\) value is used to update the \(T\) value by taking the sign of the difference between \(r_{d}\) and \(dtarget\), then multiplied by \(C\) and added to \(T\). Next, the tepl list is updated by selecting \(t\) randomly from the \(p_{\pi}\) distribution. The \(p_{\pi}\) distribution consists of a uniform discrete distribution for elements 0 and a priority discrete distribution for \(t\) elements in tepl. After the steps are completed, the initialization value is incremented by 1 and returns to step 2 to continue iteration and training repetition. This algorithm describes how discriminators and generators interact with each other in Diffusion-GAN training to produce a model capable of producing new data samples that resemble the original data.
### Data Acquisition and Preprocessing
The batik motif image data used in this research was sourced from various platforms such as the internet, Kaggle, and GitHub [12]. There are 20 types of batik motifs used in the study, namely: nitik, bali, ceplok, lereng, kawung, megamendung, parang, peakalongan, priangan, sidomukti, betawi, cendrawasih, cianis, garutan, gentongan, keraton, lasem, sekar, sidoluhur, and tambal, we chose 20 types of batik with several considerations such as data availability and dataset diversity, which is because the models require diverse data to be able to learn well. Figure 5 below shows a comparison of the data before and after preprocessing, the first image shows the size of the image which is still not the same as the other images, besides that the orientation of the image and color are still original, where in the second image the size of the image is in accordance with changes in orientation and color alteration, Next, the Figure 6 shows sample images that have been pre-processed, there are examples of Betawi, Balinese and Cendrawasih batik.
For each type of batik, images were selected with clearly visible motifs, while images with excessive noise, non-batik shapes, and unclear motifs were removed from the dataset. Subsequently, data augmentation was performed on the dataset, as GAN models require as much data as possible to achieve good performance. Given that each type of batik has a different number of data, the amount of data augmentation was adjusted based on the number of data for each type of batik. Augmentation was done by cropping, flipping horizontally and vertically, and color alteration. We aimed to balance the number of data for each type of batik to avoid overfitting, which could result in one type of batik dominating the generated results. Therefore, augmentation for each type of batik was performed with 1000 data, with the total data reaching 20.000 batik motif data. But, at first, we try to train with 10.000 data, to see how the data affects the results.
Figure 5: Comparison between unpreprocessed and preprocessed data sample: (a) Unpreprocessed Data, (b) Preprocessed Data
Figure 6: Preprocessed dataset samples : (a) Batik Betawi, (b) Batik Bali, (c) Batik Cendrawasih
## 4 Result and Discussion
### Hyperparameters
In the baseline StyleGAN model that will be used, there are several hyperparameters that can be set to achieve optimal results. In this experiment, with 256x256 resolution batik image data, the training configuration consists of the following parameters:
* ref_gpus: The number of reference GPUs used during training. The number of reference GPUs can affect the mini-batch size used during training.
* kimg: The total number of training iterations in thousands.
* mb (mini-batch): The number of samples processed in a single training iteration.
* mbstd (mini-batch standard deviation): A factor used to control the variation in the mini-batch samples.
* fmaps: The factor determining the number of feature maps used in the model. This value can affect the complexity and capacity of the model.
* lrate (learning rate): The rate at which the model's weights are updated during training.
* gamma: A factor controlling stability during training.
* ema (Exponential Moving Average): A technique used to smooth the model during training and achieve better results.
* ramp: The gradual increase value used during training.
* map: The factor determining the number of attribute maps used in the model.
Following the paper256 configuration [3], we have set the values as follows: ref_gpus=8, kimg=25000, mb=64, mbstd=8, fmaps=0.5, lrate=0.0025, gamma=1, ema=20, ramp=None, and map=8.
After hyperparameter in StyleGAN, in Diffusion there are several hyperparameters that introduced, the following parameters are dtarget which is a threshold to identify whether the current discriminator is overfitting, \(p_{\pi}\) which is the sampling distribution, and noise. in this experiment we set the dtarget=0.6, \(p_{\pi}\)=priority and noise=0.05.
### Architecture Comparison
First, we experiment with parameters as mentioned in the previous chapter and with 10,000 datasets, Table 1 shows the best results are produced by the baseline+diffusion, with the best FID value around 45.611, and KID around 0.0084. The best precision and recall results are obtained from the baseline+diffusion and the baseline that uses Wasserstein loss with a precision value of 0.269 and a recall of 0.1414.
Next, we tried to retrain with the addition of the dataset to 20,000 data, to see if there is an increase in performance and better results. Table 2 shows the best results are obtained by the baseline+diffusion with the best FID value around 29.045 and KID around 0.00643, the best precision and recall results are obtained by the baseline+diffusion with Wasserstein loss with a precision value of 0.1744 and a recall from base+wassertein loss with 0.1612. From the experiments that have been conducted, it is known that the addition of the dataset is able to improve performance with better results.
Lastly, we attempted to retrain the model using a combination of the previous 20,000 butik motif dataset and approximately 1,200 batik motifs from the ITB-mBatik dataset [13]. The ITB-mBatik dataset consists of unique, high-quality, digitally created symmetric batik patterns, which differ from the original batik motif images in the previous dataset. We trained the model with this combined dataset to see if it could generate even more unique and diverse motifs after the addition of unique motifs from the new dataset.
As seen in Table 3, the best FID score is achieved by the baseline+diffusion model with a value of approximately 33.0104, slightly higher than the previous training. Similarly, the best KID score is also obtained by the baseline+diffusion model with a value of around 0.006705. The precision and recall values are obtained by the baseline+diffusion and baseline+wloss models, with values of approximately 0.1777 and 0.16861, respectively.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & FID \(\downarrow\) & KID \(\downarrow\) & Prec \(\uparrow\) & Rec \(\uparrow\) \\ \hline base & 65.699 & 0.0153 & 0.249 & 0.0573 \\ base+Wloss & 57.123 & 0.0180 & 0.251 & **0.1414** \\ base+Diffusion & **45.611** & **0.0084** & **0.269** & 0.0785 \\ base+Wloss+Diffusion & 48.297 & 0.0135 & 0.254 & 0.1391 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experiment with 10.000 data results, **Bold** number show best score while underline shows second bests
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & FID \(\downarrow\) & KID \(\downarrow\) & Prec \(\uparrow\) & Rec \(\uparrow\) \\ \hline base & 46.799 & 0.01287 & 0.1590 & 0.1117 \\ base+Wloss & 48.781 & 0.01800 & 0.1566 & **0.1612** \\ base+Diffusion & **29.045** & **0.00643** & 0.1628 & 0.125 \\ base+Wloss+Diffusion & 36.756 & 0.01073 & **0.1744** & 0.1426 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experiment with 20.000 data results, **Bold** number show best score while underline shows second bests
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & FID \(\downarrow\) & KID \(\downarrow\) & Prec \(\uparrow\) & Rec \(\uparrow\) \\ \hline base & 57.4302 & 0.01689 & 0.1315 & 0.08601 \\ base+Wloss & 42.9192 & 0.01423 & 0.16889 & **0.16861** \\ base+Diffusion & **33.0104** & **0.006705** & **0.1777** & 0.1172 \\ base+Wloss+Diffusion & 43.4331 & 0.01550 & 0.1756 & 0.1372 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experiment with combined 20.000 data and ITB-mBatik data results,
**Bold** number show best score while underline shows second bests
### Image Results
Here are the results from the tested models. According to our evaluation, the outputs generated using Wasserstein loss exhibited more unique motifs compared to those without it. However, these motifs tended to lack organization and neatness. On the other hand, the model employing the Diffusion-GAN method demonstrated the best performance, both in terms of FID scores and the visual quality of the generated images.
Using the baseline model, the generated batik images produced by this method as shown in Figure 7 exhibit vibrant and high-quality colors, as well as neat and well-defined patterns. The batik motifs are clearly visible, and the details are accurately depicted. One of the strengths of this method is its ability to preserve the authenticity of the colors and shapes of the original batik motifs while maintaining the stability of the generated patterns. However, a limitation of this method is its limited capability to generate completely new motifs. Although the generated motifs may possess new and distinct patterns from the original data, they tend to resemble existing batik patterns.
Figure 7: Baseline results (FID 46.799).
As illustrated in the Figure 8 above, the generated sample showcase some additional new patterns, yet they still bear a resemblance to the original batik style, namely the Nitik motifs. However, it is important to note that the motifs in the samples represent fresh and distinct variations compared to the original dataset.
**Fig. 8**: Comparison between generated sample and real data: (a) Generated Sample, (b) Real Data
**Fig. 9**: Baseline results with combined datasets (FID 57.4302).
The Figure 9 above represents the outcomes of training the baseline model using a combined dataset, showcasing an enhancement in uniqueness and the emergence of new diverse patterns in the generated samples.
Using the baseline model with Wasserstein loss, the generated batik images produced by this method as shown in Figure 10 exhibit new and distinct patterns compared to the original dataset. Additionally, the arrangement of patterns forming the motifs also differs from the original data. This method excels in generating novel pattern shapes and their arrangement within the motifs. However, a limitation of this method is its reduced ability to generate neatly organized motifs, as the patterns within the motifs may appear somewhat random.
Figure 10: Baseline+Wassertein loss results (FID 48.781).
In Figure 11, the generated samples exhibit motifs that are relatively new, incorporating elements from batik styles such as Sidoluhur, Nitik, and Kawung. However, it is also apparent that the patterns in these samples appear random and lack organization.
Figure 11: Comparison between generated sample and real data: (a) Generated Sample, (b) Real Data
Figure 12: Baseline+WLoss results with combined datasets (FID 42.9192).
The results of the baseline model utilizing Wasserstein loss as shown in Figure 12 are depicted, revealing a more organized arrangement of motifs in the generated output compared to the utilization of the previous dataset.
By incorporating the diffusion technique into the baseline model, the resulting batik images as shown in Figure 13 exhibit a wide range of vibrant and diverse colors, accompanied by novel patterns that differ from the original dataset. The arrangement of these patterns forming new motifs is well-structured and orderly. This method excels in generating high-quality, fresh, and neatly organized new motifs. However, a limitation of this approach is the presence of batik motifs that still resemble the original data, as not all generated motifs attain the same level of high-quality novelty.
Figure 13: Baseline+Diffusion results (FID 29.045).
In Figure 14 shows the results portray fresh and new motifs, subtly combining elements from Betawi and Cendrawasih batik patterns, resulting in an elegantly arranged motif within the sample.
Figure 14: Comparison between generated sample and real data: (a) Generated Sample, (b) Real Data
Figure 15: Baseline+Diffusion results with combined datasets (FID 33.0104).
Figure 15 showcases the generated samples from the baseline+diffusion model. When employing the diffusion method, it is evident that the results tend to exhibit more intricate patterns compared to the model without diffusion. This is attributed to the injection of noise during training, enabling the model to gain a better understanding of image motifs.
The last approach is the baseline model with the addition of diffusion technique and the utilization of Wasserstein loss. The resulting batik images using this method as shown in Figure 16 share similarities with the previous diffusion-based approach in terms of vibrant colors and fresh new patterns. However, similar to the previous Wasserstein loss method, the generated patterns tend to be random and lack organization. This method excels in producing high-quality, innovative motifs where the patterns created are fresh and new. A drawback of this approach is the lack of orderly arrangement and the presence of randomness in the generated patterns.
## 5 Conclusion
In this paper, we have proposed a novel approach to generate the model with a diffusion-based approach. The proposed method is able to generate the model with a diffusion-based approach.
In the last model's results as depicted in the Figure 17 above, the generated samples showcase fresh and new motifs. However, the resulting patterns also lack organization and appear random. On the other hand, in the results obtained using the combined dataset, as observed in Figure 18 below, the motifs in the images appear more organized compared to before.
**Fig. 17**: Comparison between generated sample and real data: (a) Generated Sample, (b) Real Data
**Fig. 18**: Baseline+Diffusion+WLoss results with combined datasets (FID 43.4331).
### Analysis
From the tests that have been conducted, evaluation is carried out using several metrics such as FID, KID, Precision and Recall. Images are judged based on the quality and diversity or variation of the resulting new motifs. On metrics like FID, which measures quality and diversity based on the distance between the feature distributions of the original image and the resulting image, in contrast to "traditional distance", this refers to the distance between two multivariate probability distributions. In this case, the "point" is not just a single point in space, but the entire probability distribution. The Diffusion+Baseline model produces the lowest score indicating that the resulting image is of good quality where the important features of the original image can be replicated properly in the resulting image, and also the resulting image has good diversity or variation which is different from, where the FID with a low value indicates that the distribution of the resulting image features is similar to the distribution of the original image features. Meanwhile, based on visual evaluation, the Diffusion+Baseline model has the best new motif variations compared to other models, where combinations of patterns appear on motifs, even though these patterns are still taken from datasets.
In addition, the results can be analyzed that testing the model with Wassertein loss, although it shows an FID value nearly identical to that of StyleGAN2Loss, may not always yield high-quality images. This is because in Wassertein loss, the logits used to calculate the loss function are directly averaged to measure the distance between fake and original images, without the use of an activation function. This results in a wider range of variations in the produced samples, as even minor adjustments to the generator's weights and biases can lead to significant changes in the output. However, this could hinder the generator's ability to understand the dataset's intricacies, as these sudden changes could lead to an unstable learning phase.
In contrast, the StyleGAN2Loss mechanism applies a softplus activation function to the logits, which has the ability to refine its input. It's important to understand that even minor modifications in the input will correspondingly influence the output, and the same is true in reverse. This inherent characteristic contributes to a more stable training process, as slight adjustments to the generator's weights and biases won't result in a sudden decline in the quality of the generated samples. This allows the generator to achieve a more refined representation of the data [3].
The effectiveness of GAN is considerably increased by the incorporation of the diffusion approach. This is because the Diffusion approach uses a wide range of cutting-edge techniques, including Differential Augmentation [14]. Differential Augmentation offers a data-efficient method that increases the learning capacity of the model by modifying the data and samples generated before they are submitted to the discriminator. By adding changes to the dataset, differential augmentation can also assist avoid overfitting, diversify the training data, and improve training stability.
Additionally, there is a novel adaptive diffusion method that employs the same adaptive diffusion method to diffuse both observed and generated data. Every stage of the diffusion process uses a different noise-to-data ratio. As a result, the model is better equipped to manage noise changes, which improves its capacity to produce original and lifelike samples. Additionally, the discriminator is taught to distinguish between diffused created data and diffused real data at every stage of the diffusion process.
Due to the environment this produces, the model's ability to adjust to different noise levels is much improved.
## 5 Conclusion
In the research that has been done, we have explored the use of the Diffusion-GAN method to synthesize batik motifs, we have conducted qualitative and quantitative tests on the results. Using the FID metric, we evaluate the results of the batik images produced by the model. The results obtained by the model show a value of around 29.045, indicating a good ability of the model to produce high-quality and diverse batik images
Besides that, we have also made a comparison between the Diffusion-GAN model and other approaches. In the experiments that have been carried out, we found that the Diffusion-GAN method is able to improve performance in producing batik motifs where high quality, diverse results are obtained and maintain the aesthetic appeal of the batik motifs themselves.
To achieve this result, we explored the use of loss in this study, such as Wassertein Loss and StyleGAN2Loss which use Non-Saturating loss, the results show Wassertein Loss produces patterns that tend to have new shapes, but the motifs tend to be not neatly arranged, compared to StyleGAN2Loss which produces neat motifs.
In addition, the dataset used in this research played a significant role in the success of the proposed method. The batik dataset we collected and utilized was diverse, encompassing various motif types and color variations. This diversity helped our model learn and emulate the distinctive characteristics of batik motifs more effectively. Moreover, the high quality of the dataset had a positive impact on the quality of the synthesized batik motifs.
Overall, this study contributes to the integration of Diffusion-GAN technology with traditional art and culture, particularly in the synthesis of batik motifs. The proposed method demonstrates its superiority in generating high-quality and authentic batik motifs. However, there is room for further development, such as improving the finesse and accuracy in generating finer batik motifs. With this research, we hope to inspire further advancements in the synthesis of batik motifs using machine learning-based approaches, ultimately supporting the sustainability and development of batik art and culture.
## Declarations
* Funding This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
* Availability of data and materials The datasets generated and/or analysed during the current study are available in [https://github.com/octadion/diffusion-stylegan2-ada-pytorch](https://github.com/octadion/diffusion-stylegan2-ada-pytorch)
* Competing interests The authors declare that they have no competing interests
* Authors' contributions OO performed acquisition, analysis, interpretation of data, creation of new software used in the work, drafting the work, and substantively revised it. NY has made supervision, conception, design of the work, analysis, draft of the work, and substantively revised it;. DK supervised the works, has drafted the work, and substantively revised it. All authors read and approved the final manuscript
* Consent for publication Not applicable
* Acknowledgements Authors would thank Intelligent System Laboratory of Faculty of Computer Science, Brawijaya University and AI Center of Brawijaya University for providing high performance computing server.
|
2302.06951 | Few-shot learning approaches for classifying low resource domain
specific software requirements | With the advent of strong pre-trained natural language processing models like
BERT, DeBERTa, MiniLM, T5, the data requirement for industries to fine-tune
these models to their niche use cases has drastically reduced (typically to a
few hundred annotated samples for achieving a reasonable performance). However,
the availability of even a few hundred annotated samples may not always be
guaranteed in low resource domains like automotive, which often limits the
usage of such deep learning models in an industrial setting. In this paper we
aim to address the challenge of fine-tuning such pre-trained models with only a
few annotated samples, also known as Few-shot learning. Our experiments focus
on evaluating the performance of a diverse set of algorithms and methodologies
to achieve the task of classifying BOSCH automotive domain textual software
requirements into 3 categories, while utilizing only 15 annotated samples per
category for fine-tuning. We find that while SciBERT and DeBERTa based models
tend to be the most accurate at 15 training samples, their performance
improvement scales minimally as the number of annotated samples is increased to
50 in comparison to Siamese and T5 based models. | Anmol Nayak, Hari Prasad Timmapathini, Vidhya Murali, Atul Anil Gohad | 2023-02-14T10:19:23Z | http://arxiv.org/abs/2302.06951v1 | # Few-shot learning approaches for classifying low resource domain specific software requirements
###### Abstract
With the advent of strong pre-trained natural language processing models like BERT, DeBERTa, MiniLM, T5, the data requirement for industries to fine-tune these models to their niche use cases has drastically reduced (typically to a few hundred annotated samples for achieving a reasonable performance). However, the availability of even a few hundred annotated samples may not always be guaranteed in low resource domains like automotive, which often limits the usage of such deep learning models in an industrial setting. In this paper we aim to address the challenge of fine-tuning such pre-trained models with only a few annotated samples, also known as Few-shot learning. Our experiments focus on evaluating the performance of a diverse set of algorithms and methodologies to achieve the task of classifying BOSCH automotive domain textual software requirements into 3 categories, while utilizing only 15 annotated samples per category for fine-tuning. We find that while SciBERT and DeBERTa based models tend to be the most accurate at 15 training samples, their performance improvement scales minimally as the number of annotated samples is increased to 50 in comparison to Siamese and T5 based models.
Few-shot learning, Requirements classification, Contrastive learning, Natural Language Processing
## I Introduction
Few-shot learning (FSL) is a crucial area of research in machine learning that focuses on developing algorithms that can learn to recognize patterns with very limited training data. The aim of FSL is to equip machines with the ability to generalize from limited experiences, similar to the way human beings can recognize new patterns with only a few examples. The challenge of FSL lies in the fact that deep neural networks, which are commonly used in computer vision and natural language processing (NLP), require large amounts of training data to perform well. This has motivated the development of innovative approaches that can effectively learn from few examples and generalize to new, unseen data. The exact number of examples considered "few" can vary depending on the task and the complexity of the model. In general, FSL in NLP is often considered to involve learning from less than a hundred examples per class.
There have been many research papers in recent years that apply FSL in the field of NLP. These papers aim to address the challenge of learning from limited data in NLP tasks such as text classification, named entity recognition, and machine translation. One common approach in these papers is to use meta-learning, where the model is trained on a variety of tasks with limited data and then fine-tuned on a specific task with few examples. Another approach is to use transfer learning, where pre-trained language models are fine-tuned on a few examples of the target task. A comprehensive survey of these papers showed the effectiveness of FSL in NLP and provides insights into the design of models and algorithms that can perform well with limited data [1].
Further, there have been several research papers that have explored the use of data augmentation techniques for FSL in NLP. Data augmentation involves generating synthetic data from existing data, which can be used to increase the size of the training set and improve model performance. A comprehensive survey of these techniques showed that this can be achieved with rule based techniques like replacing named entities with synonyms or similar expressions, model based techniques like back translation or generative approaches [2].
Recent advancements in pre-trained transformer models such as BERT [3], DeBERTa [4], RoBERTa [5], MiniLM [6], T5 [7] have lead to state-of-the art performance on several NLP tasks of the General Language Understanding Evaluation (GLUE) benchmark [8]. This has enabled the use of such models for various downstream tasks with minimal domain specific fine-tuning. However, domain specific annotated data is often very limited and difficult to generate in low resource domains like automotive as it requires a domain experts understanding to perform annotation. While fine-tuning tends to give good performance when the number of annotated training samples are in hundreds or more, the performance of such pre-trained models falls drastically when the number of annotated samples are very few due to the following reasons:
* Difficulty in learning the semantics of a niche domain as they are significantly different from the pre-training data that was used to train these models.
* Complexity and Uniqueness of the domain specific tasks in comparison to the pre-training tasks of these models. |
2305.05227 | Privacy in Speech Technology | Speech technology for communication, accessing information and services has
rapidly improved in quality. It is convenient and appealing because speech is
the primary mode of communication for humans. Such technology however also
presents proven threats to privacy. Speech is a tool for communication and it
will thus inherently contain private information. Importantly, it however also
contains a wealth of side information, such as information related to health,
emotions, affiliations, and relationships, all of which are private. Exposing
such private information can lead to serious threats such as price gouging,
harassment, extortion, and stalking. This paper is a tutorial on privacy issues
related to speech technology, modeling their threats, approaches for protecting
users' privacy, measuring the performance of privacy-protecting methods,
perception of privacy as well as societal and legal consequences. In addition
to a tutorial overview, it also presents lines for further development where
improvements are most urgently needed. | Tom Bäckström | 2023-05-09T07:41:36Z | http://arxiv.org/abs/2305.05227v2 | # Privacy in Speech Technology
###### Abstract
Speech technology for communication, accessing information and services has rapidly improved in quality. It is convenient and appealing because speech is the primary mode of communication for humans. Such technology however also presents proven threats to privacy. Speech is a tool for communication and it will thus inherently contain private information. Importantly, it however also contains a wealth of side information, such as information related to health, emotions, affiliations, and relationships, all of which are private. Exposing such private information can lead to serious threats such as price gouging, harassment, extortion, and stalking. This paper is a tutorial on privacy issues related to speech technology, modeling their threats, approaches for protecting users' privacy, measuring the performance of privacy-protecting methods, perception of privacy as well as societal and legal consequences. In addition to a tutorial overview, it also presents lines for further development where improvements are most urgently needed.
speech technology, privacy and security, machine learning
## I Introduction
Speech is a mode of communication and thus inherently contains a wealth of information (see table I). Communicating information already known by the receiver is pointless, and efficient communication will thus mainly contain information that is _not_ widely known or which is private. In addition, speech contains also a wide range of side information like the state of health and emotions, as well as physical, psychological, and social identity, most of which is private information. Finally, speaking is a dynamic interaction between two or more speakers. Dialogues thus also contain information about the relationship between participants such as their level of familiarity, affiliation, intimacy, relative hierarchy, shared interests, and history. From an information content perspective, we can thus expect that _privacy is a multifaceted issue in all areas of speech technology_.
Studying and improving privacy in speech technology is important because breaches in privacy can have serious consequences. Table II lists examples of threats and exploits. The list is incomplete and we can expect new threats to be discovered. The generic solution template is however typically always the same: _Minimize transmission and storage of as well as access to sensitive information which is irrelevant to the service that the users want_. It is then "merely" a question of _how_ such minimization is achieved, _what_ information is relevant, and _how_ to determine what the user wants (provide information and enable control of services and threats).
With respect to threats, media attention is often focused on _breaches which have large economic consequences_[1, 2, 3]. While such breaches are important by themselves, the media attention introduces unfortunate biases in two ways. First, a breach is a worst-case event whereas threats, which have not yet led to a breach event, can already have a large impact. For example, users may choose not to use systems that threaten their privacy; known weaknesses and even a lax attitude of the service provider toward privacy can therefore
have an effect on the adoption, retention, and sales of products and services. Importantly, users can avoid systems that are _perceived_ threatening, even when there is no actual threat. To maintain users' trust, it is therefore important to both uphold the users' actual privacy, but also design systems to actively and frequently communicate the level of privacy, including known threats, their current status, and measures taken to protect against them.
Second, while a single breach in a large service can have large economic, psychological, societal, and legal consequences, small crimes are so common that their joint effect is comparable or larger in size [4]. With speech technology, such "small" crimes include stalking, extortion, harassment, humiliation, and inappropriate advances (see table II). While the economic damage of a single such incident can be small, when combined, their joint psychological and societal effect is potentially large and their prevalence makes them a considerable threat also economically.
This paper focuses on the two primary uses of speech technology, telecommunication, and human-computer interfaces. With respect to telecommunication, approaches to ethical, legal, and technological questions related to privacy in telecommunication over landlines are well-established, and open discussions are primarily related to how the existing regulation and oversight should be extended to cover also mobile telecommunication and voice-over-Internet protocols (VoIP) [5, 6, 7]. The primary focus of this paper is thus human-computer interfaces where a computer processes speech signals (see fig. 3), but threats to communication applications are also considered where applicable. Focus is here however limited to the acoustic speech signal also known as the _voice_ since natural language processing is a distinct and largely independent field that warrants its own treatment (e.g. [8]).
Privacy is closely related to challenges in security and it is often difficult to distinguish between them. Here we strike the balance by considering _privacy scenarios where an agent has legitimate access to some private speech data of the user, but uses it for purposes contradicting with, or gains access beyond, the users' expectations or preferences_. For example, a voice interface can be used to control home automation, but if the service provider shares that voice data also to advertisers against the users' preferences, then it is a violation of privacy (see fig. 4). Consequently, security concerns such as identity spoofing, deep fakes as well as attacks on devices or networks are also excluded here, as these threats are more related to security rather than privacy.
This paper presents a tutorial overview of privacy in speech technology that covers a wide range of threats, methodologies, and algorithms. Analysis of threats however demonstrates that while attack surfaces have great variety, the threat models are similar (see section II). Protection against those threats have four distinct categories of approaches, removing side information, improving overall system performance, limiting access to private message and limiting access to reproduced audio (see section III). To quantify of extent of protections we further need methods for objective evaluation (see section IV). While objective evaluation quantifies the actual level of privacy, users' impressions of privacy do not necessarily follow the objective level. For the best user experience, therefore, we need to quantify and understand users' perceptions, experiences, and preferences related to privacy, as well as design systems accordingly (see section V). The range of applications and content types where privacy-preserving processing methods are needed is vast and section VI presents a brief overview thereof. Finally, threats to and breaches in privacy have considerable consequences on both societal, economic, and individual levels, which has motivated governments to increasingly regulate the use of technology (see section VII).
To the author's knowledge, this is the first wider tutorial overview of privacy in speech technology. Recent works in the field however include an technical overview [9], as well as a popular-science review [10] of this area, and several doctoral theses have their respective summaries, e.g. [11, 12, 13, 14]. Privacy has been extensively discussed in other areas of science, such as the neighboring field of natural language processing [8], in statistical theory of privacy [15], in social sciences [16] and psychology [17]. There is even an excellent and thorough meta-study of all areas of science which discuss privacy [18].
## II Threats
### _Exposure_
Speech information can be _exposed_ in two forms (see fig. 1): First, when a _private message is transmitted_, it is a threat to privacy when contrary to the preferences of the user, it is used in an unexpected way or transmitted to a third party. Second, since speech contains a wide range of private information (see table I) which is bundled in the acoustic signal in complicated ways, it is challenging to extract only the desired message. Typically speech information thus always contains consequential side-information, bundled into the private message. It is a threat to privacy when such _private side-information is transmitted_ to an undesired recipient alongside the private message. Per definition, we here assume that the legitimate recipient can be trusted with the side information and that any undesirable use of it is labeled as an undesirable service.
The difference between the two forms of information makes an important difference in approaches to mitigation. When a private message is exposed, then our only available solutions are to either reduce the accuracy of, or to limit access to the private message for example through cryptography
Fig. 1: The high-level _threat model_ of speech interaction, where a private message is sent through a channel to the legitimate recipient, but consequential side-information is bundled to that message. It is a threat to privacy when an undesired recipient gains access to that private message or side information (marked by red arrows and exclamation marks).
(see sections III-C2, III-D and IV-B). However when side-information is exposed, as additional methods we can also use signal processing to better remove, replace or distort such side-information (see section III-A).
Observe that the threat exposure does not depend on the _route of the information_, as long as the content arriving to the undesired recipient is the same. The attack surface or channel which leaks information to the undesired recipient however does have a large impact on the choice of mitigation (see section II-E). The _type of undesired recipient_, be it a service, device, or person, has an effect mainly on its potential _ability to extract and use_ private information in a nefarious way. The main difference in the _type of sender_, person, device, or service, is their different _ability to remove_ side-information from the speech signal prior to transmission.
### _Inference of Attributes and Identity_
Threats to the speakers' privacy can be categorized into two varieties; _property/attribute inference_ and _re-identification_. The difference is that given a single known speaker, through property inference we can associate new private information or attributes to that speaker. In comparison, by analyzing (anonymized) speech data of an unknown speaker, we can assign the identity to the most likely speaker within a database of a multitude of speakers. The only difference is thus that in property inference we assign attributes to a single speaker, whereas in re-identification we look at many speakers and assign attributes to one (or a subset) of them. However, if we treat the physical identity (like the name on the passport) as a property or attribute of the speaker, then re-identification means that we have been able to infer a property of the speaker. Re-identification is, in this sense, one particular type of property inference.
### _Attacker Scenarios_
_Attacks_ can further be classified according to the amount of information available for the attacker, such as information about the speakers and about the trained models. Access to the training data as well as speech samples from the specific user (known as _enrollment data_) are particularly useful for the attacker. For example, in the anonymization task of the VoicePrivacy 2022 challenge, the objective was to replace (pseudonymize) speaker identity but retain all other speech characteristics such as linguistic information. The anonymized sentences are known as _trial_ utterances. The attack scenarios were then classified as [19]:
1. _Unprotected_: no anonymization is performed; attackers have access to original trial and enrollment data.
2. _Ignorant attacker_: Trial data is anonymized, but attackers are unaware of it, hence they use original data for enrollment.
3. _Lazy-informed_: Both trial and enrollment data are available to attackers, but anonymized with different pseudo-speaker.
4. _Semi-informed_: As in lazy-informed, attackers can also train their model with the same anonymization system but different pseudo-speakers.
With increasing information, obviously, the attacker's task becomes easier and the accuracy of the attacker's models improves. Further, such attack scenarios can then be devised according to the specific use case. For example, if the task is to anonymize emotional or health status, we can define different attack scenarios depending on the extent to which the attacker has information about categories of emotions and health status used in anonymization.
### _Attack Model_
Figure 2 illustrates an attack model for measuring the extent of privacy [20]. We assume that there is some private data, which is anonymized such that it can be used in a trusted task. The attacker has access to the anonymized data, which is here called "public data". Observe that the public data is not necessarily openly available for anyone to see, but it only indicates that the data is sufficiently freely available that the attacker has access to it. The attacker has also access to some other data about speakers, found from some other source (found data), which helps in extracting private information. Again, the term "found data" is a loose term and denotes any data that is available to the attacker.
### _Attack Surfaces_
We categorize scenarios according to the attack surface from where information is extracted (see fig. 3); a channel between cloud services (section II-E1), a user interface to the edge device (section II-E2) or to the cloud service (section II-E3), from the local network (section II-E4), the acoustic pathway (section II-E5), or through the shared user interface of the local device (section II-E6). We consider only cases where a piece of technology is receiving or transmitting information and exclude human-to-human communication without devices. We also assume that devices and network connections are secured such that only authorized services can communicate with them. The threats we focus on are thus illustrated with numbered, solid red lines in fig. 3.
#### Iii-D1 Cloud Leak
Suppose a user accesses a remote cloud service through an edge device (see fig. 4). The cloud service thus has legitimate access to the private message of the user. The cloud server however can then use that private
Fig. 2: An _attack model_ for the evaluation of privacy-preserving anonymization, where private data is anonymized to remove private information, and the anonymized data is shared publicly. An attacker uses any available (found) data and anonymized public data to infer private information contrary to the users’ preferences. Anonymized data flow is indicated by dashed lines and the attack by red lines and an exclamation mark.
message or bundled side information for some other purpose than that requested by the user, extract more information than anticipated, combine it with other information, or share information with a third party. For example, the cloud server of a voice assistant could inappropriately share information with an advertiser.
#### Iii-B2 False Activation
Devices with speech interfaces can hear all conversations in the same acoustic space as the device and therefore need mechanisms to determine which utterances are intended for the speech interface. A popular approach is to use a specific utterance, known as a wake word, to start all interactions with the interface [21]. The wake word is then like a rudimentary password, which prevents the interface from activating when speech is not directed to the device. Unfortunately, designing wake word detectors is non-trivial, and they will occasionally make mistakes. They might sometimes miss a wake word when it is spoken (false negative) or mistake some other unrelated sound as a wake word (false positive). While false negatives are annoying for the user when the service does not activate, false positives potentially present serious threats to privacy. In some famous cases, speech interfaces have activated from sounds on the television to buy unwanted items at the users' behest, and users' private conversations have been leaked to third parties [22, 23].
#### Iii-B3 Cloud Access
Figure 6 illustrates the threat where a user Alice accesses a cloud service through an edge device, and where some private information of Alice is stored. A second user Eve can then access the same cloud service through another device and potentially gain access to the stored private information, contrary to Alice's preferences. This attack surface is similar to the Cloud Leak scenario, with the main difference being the recipient, which is here a person, whereas, in the Cloud Leak scenario, it is an automated agent.
An example of this threat is medical records, which can be collected from patients to a centralized database. A researcher with authorization to access the database can then potentially extract private information beyond the expected. [24]
#### Iii-B4 Wasm Authentication
The audio quality of speech pickup as well as the usability of voice interfaces can be improved by using all available connected devices with microphones that reside in the same room or acoustic space [25, 26, 27]. Such collaboration can be realized with acoustic sensor networks, where several independent devices simultaneously pick up speech and those channels are combined to obtain a high-quality signal (see fig. 7) [11, 13]. A sensitive question is however authorization; Which devices are allowed to share information with each other? One approach is to assume that devices which are in the same room or acoustic space can hear the same signal [13, 28]. Presence in the same space is thus already an implicit authorization to participate in a joint signal pickup. To protect privacy, it is then necessary to determine which devices reside in the same acoustic space. Devices in a different room can belong to the same company or family, and they can be connected to the same network, but still, they are outside the sphere of the current discussion. Without proper authorization mechanisms, devices outside the room could then gain access to private speech inside the room.
#### Iii-B5 Speech Interface and Discussion Leaks
Figure 8 illustrates a scenario where a device or person overhears a discussion between a user Alice and a local device. Similarly, fig. 9 illustrates a discussion between two persons, overheard by a local device. The defining property of both scenarios is the leak in the acoustic pathway, which necessitates that the eavesdropper is physically present in the same acoustic space as the private communication. The difference between these leaks and False Activation (section II-E2 and fig. 5) is that here the eavesdropper is collecting data inappropriately, whereas in False Activation an incorrect functionality is activated.
There are four distinct cases to consider depending on
Fig. 4: Threat scenario “_Cloud Leak_”, where a user Alice accesses a (primary) cloud service using an edge device, but the information is shared to a secondary service contrary to preferences (red arrow and exclamation mark).
Fig. 5: Threat scenario “_False Activation_”, where a user Alice accesses an edge device, but the information is shared to a cloud service contrary to preferences (red arrow and exclamation mark).
Fig. 3: Threats to the privacy of a user Alice when speaking with another person Bob or a local device, but where information is leaked or shared through the acoustic pathways, the edge device, the network or the cloud service, to another person Eve, device or service, contrary to the preferences of Alice (red arrows). Each threat is marked with the number of the corresponding part of section II-E. Dotted lines marked with “\(X\)” indicate threats outside the current scope and are not considered here.
Fig. 6: Threat scenario “_Cloud Access_”, where user Alice uses a cloud service through an edge device, where it is potentially stored and shared with user Eve, contrary to Alice’s preferences (red arrow and exclamation mark).
whose speech is heard, Alice (or Bob) or the local device, and who is the eavesdropper, Eve, or the other local device. In the case where Eve overhears Alice's speech, we assume that people have a learned awareness of which other people are present in the same room and adjust their speech accordingly; Then either Alice does not mind that Eve overhears her speech (inconsequential information), Alice can change her speech style to a whisper, or change content, such that Eve does not hear anything private (reduced information transmission) or she can go to a different room to continue the interaction in private (modified acoustic channel).
#### Ii-B6 Shared Device
Many practical scenarios include _multiple users_ sharing one or several devices. For example, a family can share a smart speaker or television, an office meeting room can have smart devices and customer service points can have a phone shared across duty officers. Each of these devices can collect private information about the user(s) over time and can potentially share it with other users (see fig. 10). Notably, this scenario highlights that leak happens over a distance in _time_, whereas threats occurring over a _spatial_ distance often receive the majority of the attention.
This threat model clearly demonstrates that devices and services used by multiple users need to employ access control and authorization management if they store any private information. It is also obviously related to many security threats - unauthorized access to devices should be prevented - but those are outside the scope of this work.
#### Ii-B7 Levels of Trust
In the Cloud Leak-scenario in fig. 4, observe that it represents a sequence of automated agents - edge, primary and secondary cloud services - where it would be desirable that with each step in the sequence, the amount of information shared is reduced (see fig. 11). Such minimization of the potential attack surface is sensible from a risk management perspective. However, at the same time, we can interpret this as _levels of trust_. The user can exert direct control over the local acoustic space and the edge device, and thus correspondingly we can expect that the user has the highest level of trust in the edge device. With each step further in the sequence, the level of control will diminish and similarly, the level of trust decreases.
Trust is clearly a concept that depends on at least the psychological, social, and cultural context, making it hard to define. One possible definition of trust is as follows. If user Alice has observed an agent for some time, they can form a prediction of how the agent will behave. If the probability distribution of the prediction is narrow, then Alice has high confidence in that prediction. We then say that Alice has trust in the agent. We thus define that _a prediction of high confidence is equal to trust_. If the prediction is also that the agent behaves in a beneficial way for Alice, then they _perceive_ the agent _as trustworthy_. This is however dependent on Alice's ability to predict actions. If the agent actually behaves beneficially, then the agent is _objectively trustworthy_.
## III Protections
### _Reducing Side Information_
The primary approach for reducing private side information in speech signals is signal processing. It has two opposing objectives corresponding respectively to _utility_ and _privacy_; 1) the _trusted task_ of processing or analysis, where some category of information is extracted for a legitimate purpose and
Fig. 8: Threat scenario “_Speech Interface Leak_”, where a user Alice accesses an edge device (potentially including also a remote service), but another local person Eve, or device in the same room or acoustic space overhears the speech interaction contrary to preferences (red arrow and exclamation mark).
Fig. 10: Threat scenario “_Shared Device_”, where user Alice uses an edge device, where information is stored, and shared with another user Eve, contrary to Alice’s preferences (red arrow and exclamation mark).
Fig. 7: Threat scenario “_WASN Authentication_”, where a user uses a service through a distributed sensor network inside a room, but another device, outside the room in the same network, joins the distributed sensor network and shares information contrary to preferences (red arrow and exclamation mark).
2) protection against the _threat task_, where a nefarious operator tries to extract private information beyond the legitimate objective. Quality of the trusted task typically follows classic speech processing methodology, while protection against the threat task can be achieved by removing, replacing, or distorting private information. The central question is how information for the trusted task is separated from other private information.
The most common approach for protecting against this scenario is to limit information flow from the local edge device to the cloud server (see fig. 11). Such limitation or reduction in information flow can be implemented by removing, replacing, or distorting private content as in fig. 11(a). At the extreme, the edge device can be entirely disconnected from the cloud or network, and process information only locally, as in fig. 11(b) [29]. In doing so, we then need to assume that the local device has sufficient capacity and software to complete the requested tasks, that it is not compromised, and that the restriction of information flow is sufficient to ensure that private information is not leaked.
The two main approaches for removing private information are based on an _information bottleneck_ or an _adversarial model_. Information bottlenecks are based on squeezing the desired information through a metaphorical bottleneck, such that there is no room for side-information to pass through [30]. Often such models follow an autoencoder structure, which is trained to reconstruct the input signal from the transmitted signal (see section III-A1).
Adversarial models (which should not be confused with generative adversarial models) are in turn used in the training of machine learning models to make sure that no private information can be deduced from the transmitted information. It is based on modeling both the trusted and threat tasks in parallel, but the transmitted message is optimized so that the trusted task succeeds and the threat task fails (see section III-A2). A generalization of the information bottleneck idea is to _disentangle_ speech into multiple independent data streams. This allows applications to cherry-pick the private information that should be transmitted case by case (see section III-A3).
#### Iii-A1 Information Bottleneck
To distill only the private message and discard any side-information, with the information bottleneck approach, speech information is passed through a bottleneck so tight, that only the legitimate message can pass through [31, 32] (see fig. 12). This approach thus provides protection against an attacker who has access to the output of the bottleneck. The challenges are to design a bottleneck that is sufficiently tight such that private side information is discarded and a model structure and training methodology which optimizes the quality of the legitimate message.
Information content at the bottleneck can be reduced by either reducing the signal rank (reducing the dimensionality of data passed through the bottleneck, e.g. [33]) or by quantizing and coding the signal at a low bitrate (e.g. [9, 34]). The trade-off between the accuracy of reconstruction and bottleneck entropy then determines the extent of privacy (see section IV).
In parallel, when using a machine learning model, the model has to be trained for the best trade-off between utility and privacy. Typically this entails minimizing the loss in the accuracy of the private message. For example, if the task is to extract text content with an automatic speech recognizer, then the error rate of that recognizer should be minimized. Similarly, in applications where the output is played to human listeners, we should use both objective quality evaluation methods like POLQA and PESQ [35, 36], as well as subjective listening tests like P.800 and MUSHRA [37, 38].
A frequently used approach to implementing a bottleneck is an autoencoder structure, consisting of an encoder, bottleneck, and a decoder, where the objective is to reconstruct the input signal from the bottleneck output [33, 39, 40] (see fig. 13). However, to be privacy-preserving, the bottleneck should be sufficiently tight that the original speech signal cannot be perfectly reconstructed to resemble the original in their waveform. This makes it challenging to design a loss function because the output does not sufficiently resemble the input. One solution akin to representation learning is to feed the output of the autoencoder again to the encoder and compare the bottleneck features [41, 42].
Fig. 11: Two alternative protections against the “_Cloud Leak_” scenario, where the transmission of private information to the cloud is either (a) reduced or (b) prevented.
Fig. 12: Training of a privacy-preserving speech analysis method with the _information bottleneck_ principle. The point of attack is indicated by the red arrow and exclamation mark.
Fig. 13: Training of a privacy-preserving an _autoencoder_ structure as an example of the information bottleneck principle. The point of attack is indicated by the red arrow and exclamation mark.
#### Iii-A2 Adversarial Approach
An alternative to reducing the size of the information bottleneck is to use an adversarial model during training, such that side-information in the bottleneck is minimized [43, 44]. Figure 14 illustrates the model structure, where the trusted and threat tasks correspond respectively to utility and privacy, which in turn respectively correspond to the extraction of the private message and side-information. The threat task is independently optimized to extract private side information from the bottleneck output. While keeping the threat task fixed, the encoder and the trusted tasks are jointly optimized to maximize the accuracy of the private message _and_ to minimize the accuracy of private side information. Adversarial training thus has a mixture of maximizing and minimizing the accuracy of private side information. This forces the encoder to minimize private side information passed through the bottleneck, since the threat task tries to maximize the extraction of private side information.
An advantage of an adversarial configuration is that the designer can specifically choose which type of threat and information category is removed from the data. The system can also be optimized end-to-end, such that all components are jointly optimized for the best performance. This can lead to an efficient model in terms of the trade-off between computational cost and output quality.
The principal issue with adversarial training is that it does not give any theoretical guarantees or measures of privacy. The best it can do is to demonstrate the extent to which the chosen adversarial model was unable to extract private side information from the chosen category of private information. We can thus expect that some other attacker with a better model could still extract private side-information _and_ that other categories of private information are potentially present in the bottleneck output.
#### Iii-A3 Disentanglement
Information bottlenecks can be generalized to encompass multiple bottleneck channels, where each bottleneck represents a _disentangled_ representation (see fig. 15). That is, each channel represents a distinct and independent attribute of the signal such that the extent of anonymization can be cherry-picked per channel as per use-case e.g. [45, 46, 47].
The main challenge with disentanglement is to define the constraints with which information is funneled to the corresponding channel. For example, the model can be trained to match specific channels with labels in the dataset (e.g. [46, 47]). Alternatively, using representation learning, we can constrain channels based on objective criteria such as time-scope, i.e. the length of time over which data is integrated (e.g. [48, 49, 41]).
### _Improving Performance_
In many use cases, like False Activation, the actual culprit is the inadequate performance of the service. For example, if a wake word detector is incorrectly triggered (false positive), then the system will start to listen to a conversation when it was not supposed to, thus breaching privacy. Since the inadequate wake word detector is thus causing a privacy breach, then the best solution is to improve the wake word detector. The treatment thus addresses the cause rather than the symptoms thereof.
This approach has however two principal challenges. First, improving performance often requires an increase in computational power and other resource consumption. This is not only financially costly but has also an environmental penalty [50]. In particular, balancing the computational load is easier on a cloud server, and thus existing resources can be more efficiently used. Second, even with improved performance, speech interfaces will always have occasional errors. That is, the design of privacy-preserving speech technology must include multiple layers of protection, especially when it comes to operations with large consequences [51]. Say, when the wake word detector is activated, then before any actions which would potentially breach privacy, the system could require that the speaker is in the same room, or the speaker identity is verified, or an extra confirmation step "_Are you sure?_" or similar. The intrusiveness of such additional protections should then reflect the severity of the potential breach such that the protections are not perceived as overly obtrusive and decrease the utility of the service.
Improving performance can also mean that the system requires additional functionalities. For example, in the case a second local device overhears Alice's interaction with Bob (Discussion Leak) or the primary device (Speech Interface Leak), the required privacy protection is that the secondary device either 1) is aware that it is not the intended recipient of speech (speech analysis), or 2) notifies the user of its presence
Fig. 14: Training of a privacy-preserving speech analysis method with an _adversarial_ approach. The trusted task is authorized to extract some information, while the threat task (drawn in red) is extracting some other private information. During training, the trusted task is competing with the adversarial threat task, such that in the encoder block, all private information is removed. During inference, the threat task can be ignored.
Fig. 15: Privacy-preserving speech processing through _disentanglement_, where speech is decomposed into independent streams of information, each representing a distinct category of information and where the level of anonymization can be individually chosen for each category. The point of attack is indicated by the red arrow and exclamation mark.
(user-interface design). This highlights the importance of trusting the devices present in the acoustic space. The secondary local device _can hear_ everything spoken in the local space. The user thus has to have a high level of trust in the device and service provider that it is sufficiently _competent_ to know when it is part of a conversation, that it is sufficiently _benevolent_ to protect the users' privacy as well as notify the user of potentially privacy-infringing activity (cf. [52, 53]).
### _Limiting Access to Private Messages_
Every speech application involves the transmission of necessary, legitimate message, which is likely to be private. Since transmission of such private information is thus unavoidable, it exposes the user to threats to their privacy. Insofar we need the speech application, threats can be mitigated by 1) _reducing_ the accuracy of information by adding noise, 2) making information inaccessible by e.g. _encryption_, or 3) choosing not to send anything but apply the processing only locally, known as _edge processing_ (see section III-A).
#### Iv-C1 Reducing Accuracy and Differential Privacy
Many applications require only population averages rather than information about specific individuals. For example, a call center could plausible need to know the distribution of genders, but not need to know the gender identity of any specific customer. This information can be extracted with a simple stochastic scheme known from the area of differential privacy [54]. Namely, the customer would first flip a coin to determine whether they should lie or tell the truth. When lying, the customer then flips a coin again to determine which gender to report (here we use only binary gender categories for brevity). This gives a 75 % likelihood that the true identity was reported and 25 % likelihood that it was false [55]. In other words, this approach corresponds to distorting the signal, or to adding noise to the signal to protect privacy (see fig. 16).
The customer can then always claim that they were lying, such that this system gives _plausible deniability_. Simultaneously, given a sufficiently large sample, we can then always estimate the true distribution from the noisy sample. Observe that while this approach gives plausible deniability, information is still statistically correlated with the true information. Given multiple noisy pieces of information, it may then still be possible to recover accurate, private information about the speaker. It is thus paramount to quantify the extent of protection with measures of differential privacy (see section IV-B).
The above example readily generalizes to any categorical information like political or religious affiliation, but also to continuous parameters like the age of the speaker or other biometric characterizations. Heuristically, reducing the accuracy of continuous parameters thus becomes similar to additive noise, which can be measured by the signal-to-noise ratio.
#### Iv-C2 Cryptography and Secure Computing Systems
If the transmitted information is encrypted, then it cannot be used for malicious purposes without access to the key. Surprisingly, however, it is possible to process encrypted information when using _homomorphic encryption_, such that the encrypted result of the computation can be returned and opened [56, 57, 24] (see fig. 17). This can be applied for example in the extraction of spectral features of speech or speech recognition in the cloud, without plain text access to the speech signal [58, 59]. The edge device would then encrypt the speech signal, and send the encrypted data to the cloud which extracts encrypted information, and returns it to the edge device, which can open the encrypted result. While homomorphic encryption in principle provides a beautiful solution to privacy, it often comes at a prohibitively great cost in computational cost. The number of operations performed and bits transmitted increase exponentially with the complexity of the original problem.
Another approach is _secure multiparty computation_ (MPC), where multiple parties can compute a joint function without revealing anything to each other [60]. MPC provides a much smaller overhead in computations and communication, while it can simultaneously be shown that the unlinkability, irreversibility, and renewability of biometric information are granted [61]. It can be applied for example in speaker recognition in the cloud, such that the users' speaker model is not revealed to the cloud and the recognition model is not revealed to the user [62].
#### Iv-C3 Distributed learning
A majority of advanced speech processing today uses machine learning, which has to be
Fig. 16: Protecting privacy by adding noise to private data, following the idea of _differential privacy_.
Fig. 17: Protecting privacy by _encrypting_ communication and processing. The gray area indicates domain which is encrypted and red, double arrows marked with exclamation mark indicate the protected stream of data.
Fig. 18: _Federated learning_ as an example of distributed learning, where devices train models locally and transmit only model updates to the cloud. Red dashed arrows marked with exclamation marks indicate the reduced flow of information.
trained using large databases of speech. The best quality data correspond closely to scenarios where the services are used. Recording users' interactions with their devices is then attractive for training improved models since it corresponds exactly to the use case. This presents a considerable threat to privacy because such unrestricted recording could capture a wide range of private information, including all interactions with the device but potentially also any and all speech in the vicinity of the device.
Distributed learning is an approach to training models without the need for centralized data collection [63]. Models are trained on local edge devices and only model updates are shared between the nodes and/or with a central server (see fig. 18). Since raw data then never leaves the edge device, this corresponds to a privacy protection approach where information flow is restricted (cf. section III-C1).
Federated learning is one of the flavors of distributed learning, where a central, cloud server collects, merges, and redistributes model updates. It has been used for example in speaker and emotion recognition, language modeling, as well as for unsupervised estimation of microphone clusters in sensor networks [64, 65, 66, 67, 68, 69, 70, 71].
Overall, distributed learning is a promising approach, even if it has several challenges. First, constructing, training, and testing systems architectures is much more complicated than regular machine learning. Second, model updates are model-specific and cannot easily be reused if the model structure is updated. The learning accumulated during training is then effectively lost every time the model is updated, which also jeopardizes fair comparison of competing approaches. Third, even if distributed learning does improve privacy, it is not a guarantee for privacy, since model updates can also contain private information [72].
### _Limiting Access to Reproduced Audio_
When a user is interacting with a speech interface, then the spoken answer of the speech interface can contain private information. This private information can be overheard by other users in the same acoustic space and that presents a threat to privacy (see fig. 8 and section II-E5). In theory, it would be possible to identify and track users in the same acoustic space and communicate private information only when it does not pose a threat to privacy (see fig. 19). This places great trust in the local device and in its ability to track and identify authorization levels of the people who are present in the acoustic space. Such systems have however not yet been widely published.
Figure 20 presents another solution, which uses constructive and destructive interference between loudspeakers to create _sound zones_ where the private information is, respectively, intelligible or distorted (see e.g. [73, 74, 75]). By choosing the spatial location where speech is intelligible and assuming we know where the target user is located, we can thus limit access to private information only to the target speaker. Note that sound zones with destructive interference to receive a partial observation of the private message, but it is (hopefully) distorted to the extent that it is unintelligible. This approach is thus another method that uses distortion of the private message to preserve privacy (cf. section III-A). The central challenges of this approach are to make the constructive sound zone large enough that it allows for small head movements, and to make the destructive interference uniform everywhere else, such that there are no isolated points with constructive interference outside the desired sound zone. A benefit of this approach is however that it requires tracking of only the target listener, which is, while difficult, still much simpler than tracking all the people in the room.
## IV Evaluating Privacy
To evaluate the performance of any privacy-preserving methods, we need performance measures for both utility and privacy, corresponding respectively to the trusted and threat tasks (see fig. 1). The performance measures of a trusted task are defined by the application, they follow the typical procedures of conventional speech processing methodology [76, 77] and thus need not be discussed further here. Only need to note that there is often a trade-off between measures of utility and privacy, such that it is pointless to evaluate only one, but proper experiments should always evaluate the trade-off. The objective is then to define performance measures for measuring the extent of privacy provided by the method.
### _Objective Metrics_
Metrics applicable to the above attack model include:
* The _equal error rate (EER)_ considers an attacker which makes decisions by applying a threshold to a scoring function, where the threshold is chosen such that false
Fig. 19: Protection against a reproduction leak by _authorization tracking_ of users, where the device keeps track of people present in the room such that private information is not shared with unauthorized listeners (red dashed line and exclamation mark).
Fig. 20: Protection against a reproduction leak by using _sound zones_, where constructive interference between loudspeakers is used to retain intelligibility for user Alice, and destructive interference distorts it for user Eve (red dashed lines and exclamation mark).
positives and false negatives are equal [20, 78]. An increase in EER means that the attacker has made more errors and privacy has improved.
* The _application-independent log-likelihood-ratio cost function \(C_{llr}^{\min}\)_ generalizes the EER by considering optimal thresholds over all possible prior probabilities and all possible error cost functions [20, 79].
* _Linkability_ is defined as the (log-)likelihood that two datasets are instances from the same or different origin [80].
* _Mutual information_ can be used to quantify how much new information about a speaker we gain from a new dataset [81].
Note that all above metrics share two notable weaknesses and these are shared with the differential privacy approach (below, section IV-B). First, they do not protect against future attacks. This is particularly evident for the EER, which measures the performance of one implementation of an attacker. It is clear that a larger, more advanced model could make a more powerful attack. This applies also to the likelihood-based measures, where the definitions of metrics are model-independent, but rely which rely on case-specific models of probability distributions. When those statistical models are improved, then new weaknesses may be discovered.
Second, each of the above metrics is related to single observations. In contrast, speech is a continuous flow of information that gives a series of observations. With each observation, we can reduce the confidence intervals as long as different instances have different probability distributions. Repeated observations will thus, with all certainty, breach privacy for all distinguishable characteristics, when the number of observations is sufficiently high.
A more comprehensive evaluation of objective privacy metrics is provided in [82] and the VoicePrivacy challenge gives a practical example of how to apply the metrics [19].
### _Differential privacy_
Anonymization and privacy are never absolute. Even if current methods and currently available datasets do not allow us to infer anything private from an anonymized dataset, there is no guarantee that the user would be protected also when the novel methodology is introduced in the future or when new associated datasets become available. We can therefore only make claims about _how well_ users are protected. _Differential privacy_ is such a theory and methodology for characterizing the extent of protection to privacy for the users [83, 54].
Differential privacy operates with a database with private information, such as the age of \(n\) users, from which we calculate the average age \(m_{n}\). Such population statistics can usually be treated as anonymized when the population size is sufficiently large. However, suppose a new user Alice is enrolled in the database, and the average age is updated to \(m_{n+1}\). If an attacker then gains access to the two averages, \(m_{n}\) and \(m_{n+1}\) as well as the number of users before Alice \(n\), then we can trivially find Alice's age as \(m_{n+1}(n+1)-m_{n}n\). In other words, by tracking changes in anonymized information, we were still able to reveal private information!
To protect privacy, we can however add noise to individual measurements (see section III-C1), and this gives some protection against the above demonstrated differential attack. Calculation of the population statistics thus becomes a randomized algorithm \(\mathcal{M}\). Formally, we say that the randomized algorithm \(\mathcal{M}\) gives \(\epsilon\)_-differential privacy_ if for all data sets \(D\) and \(D^{\prime}\), differing by one user, and subset of the output \(S\subseteq\mathrm{Range}\left(\mathcal{M}\right)\),
\[\ln\Pr\left[\mathcal{M}\left(D\right)\in S\right]\leq\epsilon+\ln\Pr\left[ \mathcal{M}\left(D^{\prime}\right)\in S\right]. \tag{1}\]
Here \(\Pr[\cdot]\) refers to the probability, and \(\epsilon\) is the loss of privacy. The smaller \(\epsilon\) the better privacy.
An interpretation of this definition is that _it constrains the effect of any single user on the overall log-likelihood of the output to be smaller than \(\epsilon\)_. Since log-likelihoods characterize entropy, \(\epsilon\) thus corresponds to the amount of private information available to the attacker. While differential privacy is defined using membership in a dataset as its basis, it can be applied to any attribute of the speaker. In other words, if an attacker is interested in any particular attribute of a speaker, then an algorithm with \(\epsilon\)-differential privacy will give at most \(\epsilon\) nats of information about that attribute. Here the unit nat corresponds to natural units of information, defined as the natural logarithm of likelihood, whereas bits correspond to the base-2 logarithm of the likelihood.
The benefit of differential privacy is that it provides an exact mathematical framework for analysis of the extent of privacy. It is however necessarily always based on a statistical model which approximates the underlying system. Even if differential privacy thus gives exact answers, their reliability then still always depends on the accuracy of the underlying statistical models. Nonetheless, with differential privacy, we can for example analyze the effect of having two parallel sources of information, with differential privacy of \(\epsilon_{1}\) and \(\epsilon_{2}\), respectively. Since the \(\epsilon_{k}\) values represent the loss in privacy, then the combined loss of privacy from two sources of information is at most their sum \(\epsilon_{1}+\epsilon_{2}\)[83].
Information bottlenecks (see section III-A1, [31]) have a notable parallel with differential privacy. Where information bottlenecks limit the amount of information that is allowed to pass through, eq. (1) also quantifies the amount of information leaked. The difference is that the bottleneck contains both the intended private message and leaked information, whereas eq. (1) contains only leaked information. If the bitrate of the intended private message is known, then obviously the leak size of a bottleneck can be quantified. This interpretation is however based on two implicit assumptions. First, the bottleneck has to be quantized, like in the vector-quantized variational autoencoder (VQ-VAE) approach [84], since a continuous-valued bottleneck can, _in theory_, hold an unlimited amount of information because any \(N\)-dimensional space can be mapped to a 1-dimensional scalar using space-filling curves [85]. Second, eq. (1) holds only for a single observation, whereas a time series gives repeated observations. With every new observation, we receive new information. We can expect such repeated observations to be correlated, but nevertheless, with a sufficient amount of observations, we can differentiate between any distinct distributions. In its plain
form, differential privacy thus does not give any protection when we can observe a time series for a sufficiently long time.
Methodology of differential privacy has been only recently introduced within speech technology, among others privacy-preserving speech recognition [68], emotion recognition [64] and speaker anonymization [86]. As the mathematical rigor of differential privacy has obvious advantages, it is likely that the adoption of this methodology will increase.
## V Perception and Psychology of Privacy
People tend to have a deep sense of _ownership_ of some things, both material and immaterial. In particular, people have a _feeling of ownership_ toward information about themselves [87, 88]. Such feelings are to some extent detached from the material consequences of breaches in privacy. For example, even if publishing an audio recording revealing a secret intimate relationship would not have direct economic consequences, it can cause psychological damage and harm personal relationships. The effect of threats to privacy then necessarily becomes a question of psychology and social sciences. To qualitatively or quantitatively measure such effects, we then also need subjective tests with users.
Concurrently, people have well-established social rules regarding human-to-human privacy [16]. Moreover, users also have a tendency to anthropomorphize technology, that is, treat devices and services as if they were human [89, 90], such that they will likely _assume_ that devices apply human-like social rules to privacy. Observe that such social rules are related to human-like behavior and performance. That does not reveal whether they are applicable to the super-human performance that computers can possess, such as the ability to integrate information over massive databases and to retain accurate records of events long past. How machines should then behave especially with respect to super-human capabilities, is then not only a question of user-interface design but of moral psychology [91]: We need discussions on a societal level of how automated services _should behave_ and how they are allowed to behave (see also section VII).
### _Perception and Experience of Privacy_
Irrespective of whether a system _actually_ preserves privacy or not, some users can _perceive_ the system as threatening and others can be oblivious to the threats it poses [92]. Clearly, users do not like using services they perceive as threatening. Understanding how people perceive privacy is interesting in its own right but such understanding is essential in the design of effective user interfaces.
Perception and experience of privacy can be approached in two alternative ways. We can make user studies where people interact with either machines or each other. The distinction is important in the sense that human-to-human interaction relies on social rules which are well-established and developed over a long time. We can thus expect them to be stable over time, whereas human-computer interaction is continuously evolving as people learn more. Moreover, by using studies of human-to-human interaction, we can learn effects related to human-like performance but can probably not rely on observing effects related to super-human performance. Studies of human-to-human interaction are however always a proxy if the actual target is to design human-to-computer interfaces.
Studies of human-computer interaction are thus characterizing the desired phenomenon directly. The compromise is however that people's understanding of privacy with devices might not reflect the true level of privacy, and conclusions made based on the users' opinions might then not reflect their true preferences _and_ those preferences are moving targets. As people gain more experience with and as they learn more about technology, their attitudes and perception of it change.
Despite these shortcomings, both types of experiments are essential for improving our understanding of privacy and for improving technology. For example, human-to-human studies have revealed that people experience privacy differently in different acoustic environments; a noisy cafeteria can be better at masking sounds and defending against eavesdropping than a reverberant hallway [93, 94, 12, 13] and multiple-bed patient rooms in hospitals have privacy-concerns [96]. Similarly, human-machine studies have revealed that when a chatbot actively communicates choices related to privacy it improves users' experience of privacy [97], voice interfaces with unknown features cause fear in users, and reduce retention of services [98, 99, 100] and breaches in privacy are highly detrimental to the trust in services and reduce users' willingness to use the services, but such trust can be rebuilt [101].
### _User Interface Design_
For usable privacy, it is important that information about privacy is readily provided, changes in the extent of privacy are promptly notified, and that users have control of the level of privacy [99, 101, 102, 103, 97]. In comparison, visual interfaces can use lights or icons for monitoring and tactile interfaces for controlling the level of privacy. While sound can be used to monitor system status with _sonification_[104, 105], this is not in widespread use within signaling of privacy. It is a compromise between filling the acoustic space with information and the ability and tendency of the human auditory system to block out monotonous sounds. To be effective, the sound should be perceivable when the user consciously wants to check the privacy level and changes in privacy level should evoke correspondingly and appropriately large changes in the soundscape that users consciously register such changes. Furthermore, user interfaces should give correct information about and allow controlling the privacy level [99, 10].
In any case, it is imperative that services are designed such that they _reflect the true extent of privacy_. Observe that it can well be possible to design systems that communicate an advanced level of privacy even when the system does not respect user privacy. In fact, service providers have short-term incentives to follow such approaches to design as long as they improve overall user satisfaction. However, such approaches to design are known as _dark patterns_ or _deceptive design patterns_ and they are considered unethical [106]. Through deception, such dark patterns lull users to believe they are safe when in fact they are abused. Good design of privacy
should therefore actively communicate and enable control of the true extent of privacy. Such design practice is not only ethical but also rewards service providers in the long term by improving retention of that services [101].
## VI Content Categories and Applications
To give a complete picture of the implications of privacy in speech technology, this section provides a brief discussion both on categories of private information, complementing table I as well as the application of such information. Observe that this is not a complete list (nor is table I) of the content nor of application categories. The purpose is merely to provide a characterization of work done and challenges in the research field, with accompanying references.
First, note that while all presented categories are (potentially) private information, all _sustained_ information can potentially be used to identify a speaker. For example, while the current emotional state cannot alone identify a speaker, the tendency to display emotional states can aid in identification. However, where we want to _verify_ that a speaker is who they claim to be, we can use only information which cannot be willfully changed. That is, speech style is a particular example of a property that a good voice actor can freely choose, making it "easy" to change also for fraudulent purposes.
Recognizing the speaker's identity then becomes the natural starting point for studies in privacy. We can attempt to recognize who is speaking (speaker recognition), verify whether a speaker has the claimed identity (speaker verification), clustering audio to segments with a single speaker (speaker diarization) and we can develop methods for deceiving identity (spoofing), e.g. [65, 66, 67, 107, 108, 109, 110, 101, 107, 108, 109, 111, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115]. Similarly, by voice conversion we can anonymize a speaker identity by replacing it with a random identity (anonymization) or a specific one (pseudonymization), e.g. [121, 108, 116, 117, 118, 119, 120, 121]. Speaker characterization is the natural complement of speaker idenfication [122, 82]. Such methods related to speaker identity can be applied for example to recognize the user of a device, verify a customer at a bank, or as a voice avatar in online gaming. Similarly, anonymization and pseudonymization can be used to hide the identity from the public media and gaming.
Speech recognition, as in speech-to-text, is probably the largest sub-area of speech research. With respect to privacy, it contains two obvious challenges for privacy. We can try to limit to side-information, such as eliminating all non-text information from the data stream, e.g. [123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135], or we can use natural language processing to anonymize the text content, e.g. [128, 129, 130, 131, 132].
Privacy is even more important in always-on applications like wake-word detection, i.e. when the interface is triggered by a specific keyword like "Computer" in "Computer, lights-off". This application is more sensitive exactly because it is always on as users cannot choose when data is processed [133, 134, 135, 28, 136, 137, 138, 139, 28]. The always-on characteristic is also prominent in assisted-living applications, which can "monitor people's daily exercises, consumption of calories and sleeping patterns, and to provide coaching interventions to foster positive behavior" [136]. Such ambient voice interfaces are often implemented through acoustic sensor networks which pose their own challenges [137, 137, 138, 26, 28].
Speech enhancement refers to removing background noises and distortions from the desired speech signal (private message) [76]. This task can have an impact on privacy in two ways. First, the recording environment can reveal private information about the speaker, and second, background noises, like a competing talker, can contain private information [138, 139]. Attenuating background noises and competing speakers as well as removing reverberation can thus improve privacy. In addition, while using a sensor network to capture speech can improve utility, it introduces novel threats as well [67, 13].
Speech is typically considered to be _biometric information_ in its entirety, including all information categories in table I as such categories can be used to identify a person [140, 141, 142, 78]. Some however limit the term biometric to refer only physical and behavioral characteristics. Such properties include health [143], emotions [64] and gender identity [43, 46, 134, 144], each warranting their own treatment.
## VII Legal and Societal Landscape
Privacy in speech technology has a great impact on both the individual and societal levels, as already discussed in section I. The magnitude of societal impact can be appreciated by recalling the Cambridge Analytica privacy scandal [145, 146] where private information was extracted from social media and used for targeted political advertising. In both speech technology and social media, services operate on massive user bases and involve interaction between multiple users. This exposes both areas to the same magnitude of risks. In the case of Cambridge Analytica, the most famous consequence was that it influenced election results in a large democratic country. Another prominent scandal from biometrics is the case where supposedly anonymized patient data for 10 % of Australians was released to the public, only to later be shown that individuals were readily identifiable [147]. All health data of those patients was thus made public contrary to user preferences and with unknown long-term consequences.
While we have not (yet) seen breaches related to speech technology with consequences of comparable magnitude, these parallels highlight the _potential_ effect of breaches. Scandals directly related to speech technology include eavesdropping on private persons by employees and contractors of service providers [1, 2, 148].
Section V-B also makes the point that service and technology providers have clear short-term incentives which conflict with users' preferences for privacy. We can easily find examples where users do not have real choices in protecting their privacy. For example, suppose all friends and family of a user use a particular platform for social and voice interaction even if its privacy configuration is inadequate. The individual user faces then the choice of either disconnecting from their social network or compromising privacy. This applies to all users with a sufficiently large portion of their communities participating in that platform. This is a version of the prisoner's dilemma, where no single user has any incentive to transition to a solution that would be best for everyone.
On the individual level, the examples of potential exploits in table II cause clear damage to individuals, including psychological harm in particular (e.g. [143]). Additional dangers include stalkers [149, 150, 151]. While such individual damage is "small" on the societal level, their prevalence makes their joint impact significant [4].
These examples demonstrate the inherent need for society-level regulation of speech technology with respect to privacy. Governments have already responded to this need, with the European Union spearheading the process with the General Data Protection Regulation (GDPR) [152, 153], with the State of California following soon thereafter with the California Consumer Privacy Act (CCPA) of 2018 [154]. While these laws cover only a small percentage of the global population, as cloud services typically operate globally, they need the capability to follow local laws. In many cases, it can be easiest to apply the strictest laws on all users, such that the most strict laws benefit the privacy of all users. Service providers have therefore widely adopted the requirements of GDPR and CCPA and that has likely had a large impact also on users outside the scope of these regulations.
With respect to regulation, an important consequence of the objective measures of privacy in section IV is that our tools and measurements will give as an output only _statistical characterizations_ of privacy, but they can never give absolute confidence. This is in stark contrast with the concept of _unique_ identifiability used in legal documents, such as the General Data Protection Regulation (GDPR) by the European Union [152], which does not explicitly leave room for statistical uncertainty. This is reflected for example in the Guidelines for virtual voice assistants by the European Data Protection Board, which states that: [142, page 13, SS31]
... voice data is inherently biometric personal data. As a result, when such data is processed for the purpose of uniquely identifying a natural person... the processing must have a valid legal basis...
This leaves the interpretation open. It is possible to argue that it is never possible to obtain absolute confidence in speaker identification such that the GDPR is never triggered. It is also possible to argue that all voice data contains personal information which can be used to uniquely identify a person, such that all processing must have a valid legal basis. Both interpretations lead to absurdity, which suggests that the truth must lie somewhere in the middle. In fact, the GDPR in practice requires (see [142, page 4]) that the design process of voice assistants includes a _data protection impact assessment_, where the risks and consequences are evaluated such that the designer can take appropriate precautions to preserve privacy. Authors of the GDPR are thus clearly aware that it is impossible to give absolute guarantees of privacy, but that the impact assessment (i.e. objective measures of privacy) must necessarily be based on statistical measures, even if such measures have not been defined.
While governments are in the process of regulating privacy, corporations and non-governmental organizations have also realized that proper privacy is an opportunity. For example, the Open Voice Network seeks to develop and standardize open technical standards and ethical guidelines for voice assistance [155] and the MyData Global seeks to help people and organizations to benefit from personal data in a human-centric way [156, 157]. Within the research community, the author of this paper has been involved in establishing a special interest group within the International Speech Communication Association (ISCA) devoted to "Security and Privacy in Speech Communication" [158]. It is as far as we know, the world-wide largest community focused on this topic.
## VIII Discussion and Conclusions
The quality and use of speech interfaces have increased rapidly in recent years. As with any new technology, the rapid progress has also revealed the dangers and in particular the threats to privacy it demonstrably poses. Unprotected users are exposed to threats like stalking, algorithmic stereotyping, harassment, and price gouging. Researchers, service providers, and governments thus have the impetus to protect the users, not only because it is ethical, but also because it makes for better products and long-term business.
This paper is a tutorial on privacy for speech technology. Its most notable contribution is an exhaustive categorization of threats (see fig. 3 and section II). Protections against those threats are further categorized according to whether they relate to the private message or side information. The pertinent difference is that transmitting a private message is the whole purpose of communication and there is not very much we can do to protect it other than encryption. With side-information, that is, all the other information that is bundled into a speech like health status and gender identity, we have a much larger arsenal of protections. The primary approach is however to remove as much of the side information as possible as early as possible. As the private message is all that communication that we need, all side-information should be removed to the extent it is possible. Such removal rapidly however demonstrates that paralinguistic information like speech style is often very useful in conveying the intended message. It is thus not always clear what constitutes the legitimate private message.
The first conclusion from this paper is that the range of possible threats to privacy is vast. Each agent - be it human or device - participating in an interaction as well as the acoustic pathways and network connection through which they are connected, is a potential attack surface. Any actor which can interact with the other agent or listen to the connection is a potential eavesdropper. Since we define privacy as a scenario where an agent is authorized with some access, but over-exceeds that authorization (intentionally or inadvertently), we cannot just cut connections but need more refined designs and methodologies. We thus need to dynamically adjust access according to need. Conversely, systems need to actively monitor the privacy status to determine appropriate actions.
Second, we find that privacy and ethics are largely overlapping challenges. Our ethical values govern our preferences for privacy. Most potential breaches of ethics in speech technology are based on breaching privacy. That means that we need a society-wide ethical discussion about what is allowed with respect to user privacy. Such discussions are needed
to prevent an Cambridge Analytica-style scandal for speech technology [146].
A third implication of this paper is that, while research in this field has picked up only very recently, there is already a substantial body of research available. The research is not however mature but in a phase of rapid development, and there are important sub-areas that have not yet seen much work. This makes it a fruitful area for research as we can expect important understanding to be discovered in the coming years.
Particular research questions where the author sees an urgent need for and expects to see new results include:
ConsentWhile management of acquiring informed consent has established traditions and best practices for most interface types [159], speech, audio, and ambient systems are notably unique. Namely, acoustic information is a time-varying stream. Reading out a pages-long consent form before an interaction can start is clearly much too obtrusive and unnecessarily detailed. Privacy requirements also vary over time. Consent should thus be acquired per actual need basis. In addition to being more usable, it would also make choices better connected to the actual needs, since consent is acquired only once it is actually needed.
Metrics for StreamingThe available theoretical metrics reflect privacy with respect to a finite dataset, whereas speech is an open-ended stream of data. The consequence is that, in theory, we can resolve any private attribute or identity, provided that it has a unique probability distribution and we have a sufficiently long observation. We would thus need methodologies for characterizing the effect that the length of observation has on privacy.
Metrics for Out-of-category InformationThe metrics discussed in section IV are all related to specific categories of private information and in particular, we can provide protection only to identified threats. For example, we can measure the threat to privacy related to health information, but that does not tell anything about the threat related to information about ethnic background. We thus need methods for evaluating privacy jointly with respect to _all categories_ of private information except the private message.
Future-proof MetricsMetrics are generally based on a model of the signal or the attacker. The metrics are thus subject to change when those models are improved in the future, and it will likely expose new threats. Though it is likely difficult, it would be extremely useful if we could characterize, for example, privacy threats as a function of computational complexity. Over time, improvements in efficiency reduce algorithmic complexity and hardware with better capabilities become available, this would allow characterizing how protections will survive the test of time.
Multi-user interactionPrivacy research is categorically focused on _personal_ and _user-centric_ privacy. However, speech is by definition communication between multiple agents, and when exposed, threatens _all participants simultaneously_. This is not an issue from a legal point of view, because privacy protections apply to all individual users equally. However, from an authorization and consent management perspective, this is an underappreciated issue. If user A records a discussion with user B, then both clearly have some level of ownership and privacy requirements on that recording. Another case is smart technology with multiple users, like smart TVs; even if one user has consented to data collection, that does not mean that others would agree. We do not yet have any widely accepted standard approaches for handling privacy, ownership, and consent in such multi-user scenarios.
DisentanglementIf we could disentangle all categories of speech information as in fig. 15, then it would be easy to anonymize each category to an appropriate degree. This approach thus seemingly solves all our problems. The issue is that we do not yet have sufficiently sophisticated methods to do that. The difficulty in developing disentanglement algorithms is that information categories in table I are vaguely and heuristically defined and there is significant overlap between them. We cannot even demonstrate that this would be a complete list of information categories. Without exact definitions of those categories, we have no hope of developing methods for them. An alternative approach is to use representation learning methods to create unsupervised clustering of information categories. The compromise is that we cannot guarantee that the learned representations correspond to heuristically meaningful categories. Still, since disentanglement is _the ideal_ solution, that should continue to be a central focus of research.
Perception, Experience, and Design of PrivacyMost of the speech-specific research on privacy has focused on privacy-preserving processing and systems structures. This is useful because it is the mandatory prerequisite for privacy-preserving technology. However, as discussed in section V-B, users' experience of services is to some extent independent of the objective level of privacy. We need much more user studies on how, for example, voice characteristics and word choices influence trust, how the privacy level can be monitored during interactions and how changes are notified, how the environment and content of interaction influence user experiences, etc. By improving the user experience with respect to privacy, we are likely to improve user satisfaction and retention of the overall service, while also improving the service objectively.
In conclusion, threats and breaches of privacy have significant negative consequences on individual, societal, ethical, and economic levels. While further improvements in smart technology are expected to improve the utility of the technology, it likely also introduces new threats. The protection of privacy in speech technology has thus been important already for a long time and the importance is increasing. Fortunately, research in the area has picked up speed and this tutorial presents the most important concepts, approaches, and methodology. It is however likely that fundamental results and new technologies will be introduced in the near future. This is thus an exciting time for researchers in the area.
## Acknowledgments
A majority of the content in this paper has risen directly from discussions with a long list of colleagues, who were far too many to include as co-authors. These colleagues include, but are not limited to, in alphabetical order, Chris Brzuska, Mads Graesboll Christensen, Sneha Das, Nicholas Evans, Johannes Fischer, Florin Ghido, Meeri Haataja, Ivan
Habernal, Aki Harma, Dorothea Kolossa, Michael Laakasuo, Martha Larson, Rainer Martin, Joachim Meyer, Sebastian Moller, Andreas Nautsch, Birgit Popp, Nitin Sawhney, Ingo Siegert, Stephan Sigg, Rech Silas, Isabel Trancoso, Sanna Toropainen, Emmanuel Vincent, Jennifer Williams, and Pablo Perez Zarazaga. I will remain forever grateful.
|
2306.13115 | A Model Based Framework for Testing Safety and Security in Operational
Technology Environments | Todays industrial control systems consist of tightly coupled components
allowing adversaries to exploit security attack surfaces from the information
technology side, and, thus, also get access to automation devices residing at
the operational technology level to compromise their safety functions. To
identify these concerns, we propose a model-based testing approach which we
consider a promising way to analyze the safety and security behavior of a
system under test providing means to protect its components and to increase the
quality and efficiency of the overall system. The structure of the underlying
framework is divided into four parts, according to the critical factors in
testing of operational technology environments. As a first step, this paper
describes the ingredients of the envisioned framework. A system model allows to
overview possible attack surfaces, while the foundations of testing and the
recommendation of mitigation strategies will be based on process-specific
safety and security standard procedures with the combination of existing
vulnerability databases. | Mukund Bhole, Wolfgang Kastner, Thilo Sauter | 2023-06-22T05:37:09Z | http://arxiv.org/abs/2306.13115v1 | # A Model Based Framework for Testing Safety and Security
###### Abstract
Today's industrial control systems consist of tightly coupled components allowing adversaries to exploit security attack surfaces from the information technology side, and, thus, also get access to automation devices residing at the operational technology level to compromise their safety functions. To identify these concerns, we propose a model-based testing approach which we consider a promising way to analyze the safety and security behavior of a system under test providing means to protect its components and to increase the quality and efficiency of the overall system. The structure of the underlying framework is divided into four parts, according to the critical factors in testing of operational technology environments. As a first step, this paper describes the ingredients of the envisioned framework. A system model allows to overview possible attack surfaces, while the foundations of testing and the recommendation of mitigation strategies will be based on process-specific safety and security standard procedures with the combination of existing vulnerability databases.
Industrial Control System, Operational Technology, Model-Based Testing, Safety and Security
## I Introduction
Over the years, automation systems have been developed considering mainly safety hazards. As automation technologies are nowadays are closely connected to Information Technology (IT) systems and former borders to Operational Technology (OT) get blurred, adversaries can start a chain of security attacks from the enterprise level and gain access to compromise the safety of Industrial Control Systems (ICSs) [1]. Thus, convergence with increased security flaws at the IT level opens doors for harming safety at the OT level. Investigating different methodologies is necessary to evaluate the resilience of the system against attacks on the different levels while ensuring that the system considers interdependencies of safety and security. The paper addresses the mutual dependency of engineering branches on safety and security from a model-based perspective with Model Based Systems Engineering (MBSE) and safety-security requirement engineering in mind, as these two branches of engineering are not integrated, yet. Integration of these branches can bring benefit on topics such as automated asset management, system complexity management, risk-cost assessment, multidisciplinary team management, and integrated system safety and security evaluation [13]. The proposed framework supports a methodology to define requirements and analyze the design implementation of an ICS before and after the commissioning of OT components in the system. Its purpose is to test whether the OT system meets safety and security requirements according to the underlying standards. It integrates four domains into MBSE, such as requirements/capabilities _(asset information)_, behavior _(operations/methods)_, structure/architecture _(system modelling)_, verification and validation _(system model testing)_[2]. Input data essential for our framework are asset information, communication types between assets, operations/methods types executed by assets, and policies or safety/security measures on asset. The envisioned framework shall be able to retrieve safety and security requirements and their related measures or mitigation strategies from a database. The latter concern best practices to follow and updates to be deployed [3]. The paper is structured as follows. Section II outlines the concepts and ingredients of our model-based testing framework. Section III proposes the testing framework approach in brief and sketches an application for a small use case in Section IV. Section V draws some concluding remarks and next steps.
## II Background
### _Model Based System Engineering (MBSE)_
MBSE is a formalized application of modeling to support system requirements, design analysis, verification, and validation activities starting from conceptual design and continuing throughout development, and later lifecycle phases [2]. MBSE can be beneficial in terms of reduced development cost, system quality, process, and timeline management [4].
### _Model Based Testing (MBT)_
MBT is a part of the MBSE lifecycle, which works on a deterministic system and demonstrates the implementation's behavior. The most challenging parts for MBT are the development of automated test case generation, creation of test data, and definitions of procedures to test the system. MBT has proven to increase the quality and efficiency of the system by behavioral analysis of System Under Test (SUT) models. Once these models have been ensured to reflect the system requirements, these scenarios can serve as ideal test cases in the testing phase [5].
### _Vulnerability Assessment_
The vulnerability assessment focuses on criticality, OT components' current vulnerabilities, and mitigation. The assessment includes details such as the severity of a threat and
a corresponding risk level, determined by, e.g., Common Vulnerability Scoring System (CVSS) or Common Vulnerabilities and Exposure (CVE) references, which can be retrieved from vulnerability databases. Moreover, suggestions of mitigation strategies can be part of it, such as available updates and information about patches like name, release date, and update type. Before updates are done, integrity checks need to be carried out [3].
### _Test Case Generation_
The proposed framework will address automated test case generation using state-based models, which may include Finite State Machines (FSMs), Extended Finite State Machines (EFSMs), UML State Machine Diagrams, Timed Automatas (TAs), and Markov Chain Usage Models (MCUMs). At the moment, individual benefits of these test generation techniques are investigated. FSMs can be used to generate tests under sandboxing for SUT and check if corresponding tests hold for the system and protocol implementation. EFSMs can generate tests in control and data parts of the system specification. UML can be used to generate a test for multiple processes executing simultaneously in the SUT and is also suitable for unit tests based on object-oriented components. TA are the best option for test generation in a real-time system with timing constraints and model checking tools. MCUM test generation is based on the statistics of execution of states which can be used in complex system components in the SUT [5].
### _Test Verification & Validation_
The test verification and validation (V&V) is a well-defined approach that evaluates the system throughout its life cycle to the end of product life [14]. In V&V of OT systems, we considered the most relevant safety and security standards summarized in Table I. These standards serve as a basis for the definition of a protection catalog. Based on the catalog, a rule-based system will be developed to deal with the imprecision, modeling method of human behavior, and achieving control of ICSs by executing a sequence of commands which can or cannot be modeled rigorously [11].
## III Proposed Framework
The proposed semi-automated approach to the model-based testing framework for OT environments is divided into four parts illustrated in Fig.1.
**Part-I Asset Information (Requirements/Capabilities):** The asset information can be extracted with a semi-automated approach from different sources such as data historians, electronic data sheets, Electronic Device Description Language (EDDL) artifacts, and network-based asset discovery tools. Data extracted from these sources are sketched in Table II. This information is stored in the system model schema in the tables _asset_information, methods_operations_, and _component_connections_.
**Part-II Asset Behavior (Operations/Methods):** In this part, operations or functions performed by the asset can be specified (if applicable) in the form of pseudo-code in any supported language for analyzing the asset behavior. Any attribute change from the asset components will impact the methods carrying those attributes, which must be tested. Moreover, it will be possible for developers to check whether the programming language standards are followed, for example, if IEC 61131-3 programming guidelines are met. Additionally, a user can describe the pseudo-code for the operation to be performed by the PLC. The pseudo-code can be validated against the user-defined policies based on programming language standards, such as found in the table _safety_security_measures from the database. These operations/functions are stored in the table _methods_operations_.
**Part III System Modelling (Structure/Architecture):** After collecting details of system components (i.e., asset information, methods/operations, and components connection), the next step is to build a system model from two viewpoints: (1) the System View Point is associated with the management of the sub-system along with the component attributes and
Fig. 1: Constituent Parts of the Testing Framework
holds all static information. (2) the control view point provides the perspective when the system operates and is administered. Thus, the control viewpoint includes the behavioral part of a system generated using table _methods_operations_, while the system viewpoint holds information about the individual asset components using _asset_information_. The cardinality and relation between asset components class is maintained by the table _component_connection_. The final model will be based on SysML or Automation ML [12].
**Part-IV Model Testing (Verification and Validation):** The verification and validation of the system model are performed on the rule-based system description (cf. Subsection II-E). The rule-based approach allows a tester to check whether the expected ideal condition is met, as defined in the table _test_cases_. These ideal conditions are validated against the test cases to check whether the system satisfies the required pre-condition, actions, and post-condition with the expected results. Test cases are input to the compiled model (cf. Subsection II-D). After testing, if validation fails, required mitigation measures are suggested for successful validation, and changes can be visualized in the system model.
## IV Illustrative Example
A simple use case of an ICS shall illustrate the implementation of the envisioned framework. The use case (Fig.2) consists of OT components up to level 4 of the Purdue model [15] with sensors, actuators, IO masters, PLCs, RTUs, switches, and workstations. The software side executes firmware, PLC code, SCADA, and MES. Using different techniques mentioned in Table II, we extract asset information for Part-I and Part-II of the framework. In Part-I, discovered asset type and asset name information shown in Table V are gathered using data historian and network-based asset discovery tools. For the connections of asset components, we can use network-based discovery tools, electronic data sheets, and information from EDDL consisting of connection information of the source and destination asset, and communication protocols shown in Table VII. In Part II, we extract relevant method information of assets using data historian, electronic data sheets, and EDDL data consisting of an operation name. The programming language used to run the operation (if applicable) and a description can be a pseudo code or range of values shown in Table VI. In Part III, the framework retrieves extracted system information for generating the system and control viewpoint models in SysML or Automation ML. The system viewpoint model is generated using Tables V and VII, while the control viewpoint model is generated using Tables V and VI. In Part IV, first, we generate a test case, for example, to test a Safety Instrumented System (SIS) functionality with a PLC (H07) as target. Here test attributes will be an input to automated test case generation (cf. Subsection II-D). As shown in Table IV, the pre-condition, action, post-condition, and expected result of the test will be fetched. Post-conditions and tester-defined expected results are compared to validate whether the test succeeded or failed. In this example, the expected results and post-conditions are the same, implying a successful validation. In case of a failure, mitigation policies/measures from Table III are required. As we are testing the SIS functionality of the PLC, policies related to safety are recommended (i.e., P01). If the recommended mitigations are fulfilled, then changes are reflected in the model, as shown in Part III of the framework.
## V Conclusion and Future Work
The paper attempts to present a model-based testing framework for the safety and security of OT systems. The test framework is expected to resolve the safety and security flaws in ICSs generated due to a lack of synchronization among different development teams and provide the mitigation for flaws detected based on the system model. Fulfilling the prospects of legacy and modern systems on a component level and helping to optimize the system aligned with Industry 4.0. Further implementation of the framework from design to an actual prototype will be considered important as it will involve fundamental challenges such as dynamic automation technology needs, resource optimization, modeling techniques, and adhering to safety and security standards. We plan to expand the scope of the safety and security measures/policies on a generic level. This way, we might achieve its deployment in multi-domain industries considering the specific needs of those domains, leading to a better evaluation of system components. For this step, the involvement of industry partners is necessary with the ultimate goal of developing a generic meta-model from which a system model in the framework can be derived.
## Acknowledgement
This paper was supported by TUV AUSTRIA #SafeSecLab Research Lab for Safety and Security in Industry, a research cooperation between TU Wien and TUV AUSTRIA.
|
2307.04740 | On the image of graph distance matrices | Let $G=(V,E)$ be a finite, simple, connected, combinatorial graph on $n$
vertices and let $D \in \mathbb{R}^{n \times n}$ be its graph distance matrix
$D_{ij} = d(v_i, v_j)$. Steinerberger (J. Graph Theory, 2023) empirically
observed that the linear system of equations $Dx =\mathbf{1}$, where
$\mathbf{1} = (1,1,\dots, 1)^{T}$, very frequently has a solution (even in
cases where $D$ is not invertible). The smallest nontrivial example of a graph
where the linear system is not solvable are two graphs on 7 vertices. We prove
that, in fact, counterexamples exists for all $n\geq 7$. The construction is
somewhat delicate and further suggests that such examples are perhaps rare. We
also prove that for Erd\H{o}s-R\'enyi random graphs the graph distance matrix
$D$ is invertible with high probability. We conclude with some structural
results on the Perron-Frobenius eigenvector for a distance matrix. | William Dudarov, Noah Feinberg, Raymond Guo, Ansel Goh, Andrea Ottolini, Alicia Stepin, Raghavenda Tripathi, Joia Zhang | 2023-07-10T17:52:19Z | http://arxiv.org/abs/2307.04740v1 | # On the image of graph distance matrices
###### Abstract.
Let \(G=(V,E)\) be a finite, simple, connected, combinatorial graph on \(n\) vertices and let \(D\in\mathbb{R}^{n\times n}\) be its graph distance matrix \(D_{ij}=d(v_{i},v_{j})\). Steinerberger (J. Graph Theory, 2023) empirically observed that the linear system of equations \(Dx=1\), where \(1=(1,1,\dots,1)^{T}\), very frequently has a solution (even in cases where \(D\) is not invertible). The smallest nontrivial example of a graph where the linear system is not solvable are two graphs on \(7\) vertices. We prove that, in fact, counterexamples exists for all \(n\geq 7\). The construction is somewhat delicate and further suggests that such examples are perhaps rare. We also prove that for Erdos-Renyi random graphs the graph distance matrix \(D\) is invertible with high probability. We conclude with some structural results on the Perron-Frobenius eigenvector for a distance matrix.
Key words and phrases:Graph Distance Matrix, Invertibility, Image 2020 Mathematics Subject Classification: 05C12, 05C50
## 1. Introduction
Let \(G=(V,E)\) be a finite, simple, connected, combinatorial graph on \(|V|=n\) vertices. A naturally associated matrix with \(G\) is the _graph distance matrix_\(D\in\mathbb{R}^{n\times n}\) such that \(D_{ij}=d(v_{i},v_{j})\) is the distance between the vertex \(v_{i}\) and \(v_{j}\). The matrix is symmetric, integer-valued and has zero on diagonals. The graph distance matrix has been extensively studied, we refer to the survey Aouchiche-Hansen [1]. The problem of characterizing graph distance matrices was studied in [1]. A result of Graham-Pollack [1] ensures that \(D\) is invertible when the graph is a tree. Invertibility of graph distance matrix continues to receive attention and various extension of Graham-Pollack has been obtained in recent times [1, 1, 2, 1, 3, 4]. However, one can easily construct graphs whose distance matrices are non-invertible. Thus, in general the graph distance matrix may exhibit complex behaviour.
Our motivation comes from an observation made by Steinerberger [11] who observed that for a graph distance matrix \(D\), the linear system of equations \(Dx=1\), where \(1\) is a column vector of all \(1\) entries, tends to frequently have a solution-even when \(D\) is not invertible. An illustrative piece of statistics is as follows. Among the
\[9969\] connected graphs in Mathematica 13.2 with \[\#V\leq 100,\] \[3877\] have a non-invertible distance matrix \[\operatorname{rank}(D)<n\] but only \[7\] have the property that \[1\notin\operatorname{image}(D).\]
This is certainly curious. It could be interpreted in a couple of different ways. A first natural guess would be that the graphs implemented in Mathematica are presumably more interesting than 'typical' graphs and are endowed with additional
symmetries. For instance, it is clear that if \(D\) is the distance matrix of a vertex-transitive graph (on more than \(1\) vertices) then \(Dx=1\) has a solution. Another guess would be that this is implicitly some type of statement about the equilibrium measure on finite metric spaces. For instance, it is known [10] that the eigenvector corresponding to the largest eigenvalue of \(D\) is positive (this follows from the Perron-Frobenius theorem) and very nearly constant in the sense of all the entries having a uniform lower bound. The sequence A354465 [1] in the OEIS lists the number of graphs on \(n\) vertices with \(1\notin\operatorname{image}(D)\) as
\[1,0,0,0,0,0,2,14,398,23923,\ldots\]
where the first entry corresponds to the graph on a single vertex for which \(D=(0)\). We see that the sequence is small when compared to the number of graphs but it is hard to predict a trend based on such little information. The first nontrivial counterexamples are given by two graphs on \(n=7\) vertices.
Lastly, it could also simply be a'small \(n\)' effect where the small examples behave in a way that is perhaps not entirely representative of the asymptotic behavior. It is not inconceivable to imagine that the phenomenon disappears completely once \(n\) is sufficiently large. We believe that understanding this is an interesting problem.
### Acknowledgements
This project was carried out under the umbrella of the Washington Experimental Mathematics Lab (WXML). The authors are grateful for useful conversations with Stefan Steinerberger. A.O. was supported by an AMS-Simons travel grant.
## 2. Main Results
### A plethora of examples
Notice that the sequence A354465 [1] in the OEIS lists suggests that for \(n\geq 7\) one can always find a graph on \(n\) vertices for which \(Dx=1\) does not have a solution. Here, we recall that \(D\) represents the distance matrix of the graph, and \(1\) represents a vector with all of its \(|V|\) entries that are equal to one (we often omit the explicit dependence on \(|V|\), when it is understood from the context). The main result of this section is the following.
**Theorem 1**.: _For each \(n\geq 7\), there exists a graph \(G\) on \(n\) vertices such that \(Dx=1\) does not have a solution._
Figure 1. The two smallest graphs for which \(1\notin\operatorname{image}(D)\).
Since we know that no counterexample exists for \(n<7\), the result is sharp. Our approach to find many examples of graphs for which \(Dx=\mathbb{1}\) has no solutions is to prove some structural results (of independent interests) that show how to obtain bigger examples out of smaller ones. For a careful statement of such structural results, we will need some definitions. We start with the notion of graph join.
**Definition 2**.: The graph join \(G+H\) of two graphs \(G\) and \(H\) is a graph on the vertex set \(V(G)\cup V(H)\) with edges connecting every vertex in \(G\) with every vertex in \(H\) along with the edges of graph \(G\) and \(H\).
Our structural result on the distance matrix of the graph join of two graphs is better phrased with the following definition.
**Definition 3**.: Let \(G\) be a graph with adjacency matrix \(A_{G}\). Then, define \(\widetilde{D}_{G}=2J-2I-A_{G}\).
Observe that for a graph of diameter \(2\), \(\widetilde{D}_{G}\) is the distance matrix, justifying this choice of notation. We now state the main ingredient in the proof of Theorem 1.
**Theorem 4**.: _Let \(G\) and \(H\) be a graphs and suppose that \(\widetilde{D}_{G}x=\mathbb{1}\) has no solution. Then, the distance matrix \(D\) of the graph join \(G+H\) has no solution to \(Dx=\mathbb{1}\) if and only if there exists a solution to \(\widetilde{D}_{H}x=\mathbb{1}\) such that \(\langle x,\mathbb{1}\rangle=0\)._
Figure 3. The Cartesian product of two paths.
Figure 2. The graph join of two paths.
An alternative approach to the proof of Theorem 1, that unfortunately does not allow for the same sharp conclusion (though it can be used to generate examples for infinitely many values of \(n\)) relies instead of the notion of Cartesian product.
**Definition 5**.: Given two graphs \(G=(V_{1},E_{1})\) and \(H=(V_{2},E_{2})\) their _Cartesian product_\(G\times H\) is a graph on the vertex set \(V=V_{1}\times V_{2}\) such that there is an edge between vertices \((v_{1},v_{2})\) and \((v_{1}^{\prime},v_{2}^{\prime})\) if and only if either \(v_{1}=v_{1}^{\prime}\) and \(v_{2}\) is adjacent to \(v_{2}^{\prime}\) in \(H\) or \(v_{2}=v_{2}^{\prime}\) and \(v_{1}\) is adjacent to \(v_{1}^{\prime}\) in \(G\).
**Theorem 6**.: _If \(G\) and \(H\) are graphs such that \(\mathbb{1}\) is not in the image of their distance matrices, then the Cartesian product graph \(G\times H\) also has the property that \(\mathbb{1}\) is not in the image of its distance matrix._
We note that examples for which \(Dx=\mathbb{1}\) are not so easy to construct. In addition to the numerical evidence we provided in the introduction, we are able to give a rigorous, albeit partial, explanation of why this is the case (see Lemma 18).
### Erdos-Renyi random graphs
We conclude with a result about Erdos-Renyi random graphs. We first recall their definition.
**Definition 7**.: An _Erdos-Renyi_ graph with parameters \((n,p)\) is a random graph on the labeled vertex set \(V=\{v_{1},v_{2},...,v_{n}\}\) for which there is an edge between any pair \((v_{i},v_{j})\) of vertices with independent probability \(p\).
The following theorem shows that their distance matrices are invertible with high probability. As a consequence, \(Dx=\mathbb{1}\) has a solution for Erdos-Renyi graphs with high probability, as we summarize in the following Theorem.
**Theorem 8**.: _Let \(0<p<1\) and let \(D_{n,p}\) be the (random) graph distance matrix associated of a random graph in \(G(n,p)\). Then, as \(n\to\infty\),_
\[\mathbb{P}\left(\det(D_{n,p})=0\right)\to 0.\]
It is a natural question to ask how quickly this convergence to \(0\) happens. Our approach relies heavily on recent results [20] about the invertibility of a much larger class of random matrices with discrete entries, providing some explicit bounds that are likely to be loose. We propose a conjecture, which is reminiscent of work on the probability that a matrix with random \(\pm 1\) Rademacher entries is singular, we refer to work of Komlos [19] and the recent solution by Tikhomirov [21]. One might be inclined to believe that the most likely way that \(D_{n,p}\) can fail to be invertible is if two rows happen to be identical. This would happen if there are two vertices \(v,w\) that are not connected by an edge which, for every other vertex \(u\in V\), are both either connected to \(u\) or not connected to \(u\). For a graph \(G\in G(n,p)\) each vertex is connected to roughly \(\sim np\) vertices and not connected to \(\sim(1-p)n\) vertices. This motivates the following
**Question.** Is it true that
\[\lim_{n\to\infty}\frac{\log\left(\mathbb{P}\left(\det(D_{n,p})=0\right)\right) }{n}=\log\left(p^{p}(1-p)^{1-p}\right)\quad?\]
The right-hand side \(\log\left(p^{p}(1-p)^{1-p}\right)=p\log\left(p\right)+(1-p)\log\left(1-p\right)\) is merely (up to constants) the _entropy_ of a Bernoulli random variable.
### Perron-Frobenius eigenvectors are nearly constant
Let \((X,d)\) be a metric space and let \(x_{1},\ldots,x_{n}\) be \(n\) distinct points in \(X\). The notion of distance matrix naturally extends to this case. That is, we define \(D\in\mathbb{R}^{n\times n}\) by setting \(D_{ij}=d(x_{i},x_{j})\). This notion clearly agrees with the graph distance matrix if \(X\) is a graph equipped with the usual shortest path metric. Let \(\lambda_{D}\) be the Perron-Frobenius eigenvalue of \(D\) and let \(v\) be the corresponding eigenvector with non-negative entries. In the following we will always assume that \(v\) is normalized to have \(L^{2}\) norm \(1\) unless otherwise stated. In [10], it was proved that
\[\frac{\langle v,1\rangle}{\sqrt{n}}\geq\frac{1}{\sqrt{2}};.\]
It is also shown in [10] that the above inequality is sharp in general for the distance matrix in arbitrary metric space. However, it was observed that for graphs in the Mathematica database, the inner product tends to be very close to \(1\), and it was not known if the lower bound of \(1/\sqrt{2}\) is sharp for graphs. We show that this bound is sharp for graph distance matrices as well. The lower bound is achieved asymptotically by the _Comet graph_ that we define below.
**Definition 9**.: We define a _comet graph_, \(C_{m_{1}}^{m_{2}}\), to be the disjoint union of a complete graph on \(m_{1}\) vertices with the path graph on \(m_{2}\) vertices and adding an edge between one end of the path graph and any vertex of the complete graph.
**Theorem 10**.: _Let \(D_{m}\) be the graph distance matrix of the Comet graph \(C_{m^{2}}^{m}\). Let \(v_{m}\) be the top eigenvector (normalized to have unit \(L^{2}\) norm) of the distance matrix \(D_{m}\). Then,_
\[\lim_{m\to\infty}\frac{\langle v_{m},1\rangle}{\sqrt{n}}=\frac{1}{\sqrt{2}}\;,\]
_where \(n=m^{2}+m\) is the number of vertices in \(C_{m^{2}}^{m}\)._
While Theorem 10 shows that the lower bound \(1/\sqrt{2}\) is sharp, it does not reveal the complete truth. It is worth emphasizing that the lower bound is achieved only in the limit as the size of the graph goes to infinity. The following theorem shows that if a graph has diameter \(2\) then, \(\langle v,1\rangle/\sqrt{n}\) is significantly larger.
**Theorem 11**.: _Let \(G\) be a graph with diameter \(2\) and let \(D\) be the distance matrix of \(G\). Let \(v\) be the top-eigenvector of \(D\) normalized to have \(L^{2}\) norm \(1\). Then,_
\[\frac{\langle v,1\rangle}{\sqrt{n}}\geq\frac{4}{3}\cdot\frac{1}{\sqrt{2}}\;.\]
In the light of above theorem, it is reasonable to expect a more general result of the following form that we leave open.
Figure 4. The comet graph \(C_{5}^{3}\)
**Problem.** Let \(G\) be a graph on \(n\) vertices with distance matrix \(D\). Let \(v\) be the top eigenvector of \(D\) with unit \(L^{2}\) norm. If \(G\) has diameter \(d\) then,
\[\frac{\langle v,\mathbb{1}\rangle}{n}\geq\frac{1}{\sqrt{2}}(1+f(d))\;,\]
for some \(f\) such that \(f(d)\to 0\) as \(d\to\infty\).
## 3. Proof of Theorem 1
This section is dedicated to the proof of the main Theorem 1. Since the main ingredient is the structural result about the distance matrix of the graph join (Theorem 4), we begin the section with the proof of that.
Proof of Theorem 4.: Observe that the distance matrix of \(G+H\) is given by
\[D=\begin{pmatrix}\widetilde{D}_{G}&J\\ J&\widetilde{D}_{H}\end{pmatrix}.\]
Recall that the orthogonal complement of the kernel for a symmetric matrix is the image of the matrix because the kernel of a matrix is orthogonal to the row space, which in this case, is the column space. In particular, this applies to \(\widetilde{D}_{G}\) and \(\widetilde{D}_{H}\).
To prove the forwards direction, we will show the contrapositive. We have two cases, namely the case where \(\widetilde{D}_{H}x=\mathbb{1}\) has no solution and the case where there is a solution to \(\widetilde{D}_{H}x=\mathbb{1}\) where \(\langle x,\mathbb{1}\rangle\neq 0\)
First, assume that \(\widetilde{D}_{H}x=\mathbb{1}\) has no solution. Then, we have that \(\ker\widetilde{D}_{G}\not\subset\mathbb{1}\) and \(\ker\widetilde{D}_{H}\not\subset\mathbb{1}\) because \(\mathbb{1}\not\in\operatorname{Im}\widetilde{D}_{G}\) and \(\mathbb{1}\not\in\operatorname{Im}\widetilde{D}_{H}\). So, there exists \(x_{1}\in\ker\widetilde{D}_{G}\) and \(x_{2}\in\ker\widetilde{D}_{H}\) such that \(\langle x_{1},\mathbb{1}\rangle=\langle x_{2},\mathbb{1}\rangle=1\). Observe that the vector \(x=(x_{1},x_{2})^{T}\) satisfies \(Dx=\mathbb{1}\) so we are done with this case.
Now, suppose that there exists \(x\) such that \(\widetilde{D}_{H}x=\mathbb{1}\) and \(\langle x,\mathbb{1}\rangle\neq 0\). Then, let \(x_{2}=x/\langle x,\mathbb{1}\rangle\). Once again, \(\ker\widetilde{D}_{G}\not\subset\mathbb{1}\) so there exists \(x_{1}\in\ker\widetilde{D}_{G}\) such that \(\langle x_{1},\mathbb{1}\rangle=1-1/\langle x,\mathbb{1}\rangle\). Then, the vector \(x=(x_{1},x_{2})^{T}\) satisfies \(Dx=\mathbb{1}\). Thus, we are done with this direction.
Now, for the reverse direction, suppose that there exists \(y\) such that \(\widetilde{D}_{H}y=\mathbb{1}\) and \(\langle y,\mathbb{1}\rangle=0\). Assume for a contradiction that there exists a solution to \(Dx=\mathbb{1}\). Then, we have \(x_{1},x_{2}\) such that \(\widetilde{D}_{G}x_{1}+Jx_{2}=\mathbb{1}\) and \(Jx_{1}+\widetilde{D}_{H}x_{2}=\mathbb{1}\).
First, suppose that \(\langle x_{1},\mathbb{1}\rangle=1\). Then, we have \(\widetilde{D}_{H}x_{2}=0\) so \(x_{2}\in\ker\widetilde{D}_{H}\). Note that \(\mathbb{1}\in\operatorname{Im}\widetilde{D}_{H}\) so \(\ker\widetilde{D}_{H}\perp\mathbb{1}\). Thus, \(\langle x_{2},\mathbb{1}\rangle=0\), implying that \(Jx_{2}=0\). However, this implies that \(\widetilde{D}_{G}x_{1}=\mathbb{1}\), which is a contradiction.
Now, suppose that \(\langle x_{1},\mathbb{1}\rangle\neq 1\). Then, \(\widetilde{D}_{H}x_{2}=c\mathbb{1}\) for some \(c\neq 0\). So, \(x_{2}=y/c+z\) for some \(z\in\ker\widetilde{D}_{H}\). Noting that \(\ker\widetilde{D}_{H}\perp\mathbb{1}\), we have \(\langle x_{2},\mathbb{1}\rangle=\langle y,\mathbb{1}\rangle/c=0\). So, \(Jx_{2}=0\) implying that \(\widetilde{D}_{G}x_{1}=\mathbb{1}\), which is a contradiction.
Now, we will construct a family of graphs \(\{H_{n}\}_{n=3}^{\infty}\) such that each \(H_{n}\) has \(2n\) vertices and there exists \(x\) satisfying \(\widetilde{D}_{H_{n}}x=\mathbb{1}\) with \(\langle x,\mathbb{1}\rangle=1\). First, we will define \(\{H_{n}\}_{i=3}^{\infty}\).
**Definition 12**.: For each \(n\geq 3\), define \(H_{n}=C_{n}^{c}+K_{n}\), where \(+\) is the graph join and \(C_{n}^{c}\) is the complement of the cycle graph on \(n\) vertices.
**Lemma 13**.: _For each \(n\geq 3\), there exists \(x\) satisfying \(\widetilde{D}_{H_{n}}x=\mathbb{1}\) with \(\langle x,\mathbb{1}\rangle=0\)._
Proof.: To start, observe that \(\widetilde{D}_{H_{n}}\) is of the form
\[\begin{pmatrix}B&J_{n}\\ J_{n}&J_{n}-I_{n}\end{pmatrix}\]
where \(B\) is defined by
\[B_{i,j}=\begin{cases}0&i=j\\ 2&i=j\pm 1\mod n\;.\\ 1&\text{otherwise}\end{cases}\]
The vector \(x=(\mathbb{1}_{n},-\mathbb{1}_{n})^{T}\) satisfies \(\widetilde{D}_{H_{n}}x=\mathbb{1}\,\) with \(\langle x,1\rangle=0\) so we are done.
Observe that each \(H_{i}\) has an even number of vertices. We will now show construct a family of graphs \(\{H_{n}^{\prime}\}_{n=3}^{\infty}\) such that each \(H_{n}^{\prime}\) has \(2n+1\) vertices.
**Definition 14**.: For each \(n\geq 3\), define \(H_{n}^{\prime}\) to be the graph formed by attaching one vertex to every vertex of \(H_{n}\) except for one of the vertices of the \(C_{n}^{c}\) component of \(H_{n}\).
**Lemma 15**.: _For each \(n\geq 3\), there exists \(x\) satisfying \(\widetilde{D}_{H_{n}^{\prime}}x=\mathbb{1}\,\) with \(\langle x,\mathbb{1}\rangle=0\)._
Proof.: To start, observe that we can write \(\widetilde{D}_{H_{n}^{\prime}}\) as
\[\begin{pmatrix}\widetilde{D}_{H_{n}}\\ y\end{pmatrix}\]
where \(y=(2,1,\ldots,1,0)\). Then, the vector \(x=(\mathbb{1}_{n},-\mathbb{1}_{n},0)^{T}\) satisfies \(\widetilde{D}_{H_{n}^{\prime}}x=\mathbb{1}\,\) with \(\langle x,1\rangle=0\) so we are done.
Now, for sake of notation, we will recall the definition of the cone of a graph.
**Definition 16**.: Given a graph \(G\), the graph \(\operatorname{cone}(G)\) is defined as the graph join of \(G\) with the trivial graph.
Proof of Theorem 1.: Take \(G=\operatorname{cone}(H_{(n-1)/2})\) if \(n\) is odd, and \(G=\operatorname{cone}(H_{n/2-1}^{{}^{\prime}})\) if \(n\) is even. The proof is immediate from Theorem 4, Lemma 13 and Lemma 15.
We now move to the proof of Theorem 6, that allows for an alternative way of constructing graphs for which \(Dx=\mathbb{1}\,\) does not have a solution. To this aim, let \(G\) and \(H\) be two graphs on \(n\) and \(m\) vertices, respectively. Let \(A\in\mathbb{R}^{n\times n}\) and \(B\in\mathbb{R}^{m\times m}\) be the distance matrices of \(G\) and \(H\) respectively. It is well-known (see for instance [13, Corollary 1.35], [1, Lemma 1]) that the distance matrix of the Cartesian product \(G\times H\) is given by \(J_{m}\otimes A+B\otimes J_{n}\in\mathbb{R}^{nm\times nm}\) where \(\otimes\) is the Kronecker product and \(J_{\ell}\) denotes \(\ell\times\ell\) matrix with all \(1\) entries. Theorem 6 is an immediate consequence of the following Lemma 17.
**Lemma 17**.: _Suppose that \(A\) is a \(n\times n\) matrix and \(B\) is an \(m\times m\) matrix such that the linear systems \(Ay=\mathbb{1}_{n}\) and \(Bz=\mathbb{1}_{m}\) have no solution. Then,_
\[(J_{m}\otimes A+B\otimes J_{n})x=\mathbb{1}_{nm}\]
_has no solution._
Proof.: Assume for the sake of contradiction that there exists \(x\in\mathbb{R}^{nm\times nm}\) with
\[(J_{m}\otimes A+B\otimes J_{n})x=\mathbb{1}_{\,nm}.\]
Then, we have
\[(J_{m}\otimes A)x=\mathbb{1}_{\,nm}-(B\otimes J_{n})x=(c_{1},\ldots,c_{m})^{T}\;,\]
where each \(c_{i}\in\mathbb{R}^{1\times n}\) is a vector with constant entries. Since \(Bz=\mathbb{1}_{\,m}\) has no solutions, there must be some \(1\leq j\leq m\) for which \(c_{j}=\alpha\mathbb{1}_{\,n}\), where \(\alpha\neq 0\). Writing \(x\) as the block vector \((x_{1},...,x_{m})^{T}\) where each \(x_{i}\in\mathbb{R}^{1\times n}\), we note that
\[A(x_{1}+\ldots+x_{m})=c_{i},\quad\forall 1\leq i\leq m\;.\]
In particular the above equation holds for \(i=j\). Thus, we obtain \(Ay=\mathbb{1}_{\,n}\) for \(y=(x_{1}+\cdots+x_{m})/\alpha\) which contradicts our assumption.
As we pointed out in Section 2, while we have established that there are infinitely many graphs \(G\) such that \(Dx=\mathbb{1}\) does not have a solution, finding such graphs can be hard. To illustrate this, we conclude this section with a structural result about family of graphs for which \(Dx=\mathbb{1}\) does have a solution.
**Lemma 18**.: _Let \(G=(V,E)\) be a connected graph. Suppose there are two vertices \(v,w\in V\) such that the following conditions hold._
1. \(v\) _is not connected to_ \(w\)__
2. \(v\sim x\) _for every_ \(x\in V\setminus\{w\}\)__
3. \(w\sim x\) _for every_ \(x\in V\setminus\{v\}\)_._
_If \(D\) is the graph distance matrix of \(G\) then \(Dx=\mathbb{1}\) has a solution. Furthermore, if there are two or more distinct pairs of vertices satisfying \(\mathbb{1}\)-\(\mathbb{3}\) then \(D\) is non-invertible._
Proof.: Observe that we can write the distance of \(G\) such that the first two columns of \(D\) are \((0,2,1,\ldots,1)^{T}\) and \((2,0,1,\ldots,1)^{T}\). Therefore \(x=(1/2,1/2,0,...,0)^{T}\) satisfies \(Dx=\mathbb{1}\). If there are two pair of vertices, say w.l.o.g \(v_{1},v_{2}\) and \(v_{3},v_{4}\) satisfying conditions 1-3 then the first four columns of \(D\) look like
\[\begin{pmatrix}0&2&1&1\\ 2&0&1&1\\ 1&1&0&2\\ 1&1&2&0\\ 1&1&1&1\\ \vdots&\vdots&\vdots&\vdots\\ 1&1&1&1\end{pmatrix}.\]
Labeling the columns \(c_{1},\ldots,c_{4}\), we have \(c_{1}+c_{2}-c_{3}=c_{4}\). \(D\) must be singular.
## 4. Proof of Theorem 8
We start with the following well-known result (see, e.g., [1]) about the diameter of an Erdos-Renyi graph.
**Lemma 19**.: _Let \(p\in(0,1)\). Let \(P_{p,n}\) be the probability that a random Erdos-Renyi graph \(G(n,p)\) has diameter at least \(3\). Then, \(\lim_{n\to\infty}P_{p,n}=0\)._
Let \(I\) be the identity matrix, \(J\) be the all-ones matrix, and \(A\) be the graph's adjacency matrix. Owing to the Lemma (19), we can write, with high probability, the distance matrix as \(D=2J-A-2I\). We will now state the following theorem from [20], which describes the smallest singular value \(\sigma_{n}\) of a matrix \(M_{n}=F_{n}+X_{n}\) where \(F_{n}\) is a fixed matrix and \(X_{n}\) is a random symmetric matrix under certain conditions.
**Condition 20**.: _Assume that \(\xi\) has zero mean, unit variance, and there exist positive constants \(c_{1}<c_{2}\) and \(c_{3}\) such that_
\[\mathbb{P}(c_{1}\leq|\xi-\xi^{\prime}|\leq c_{2})\geq c_{3},\]
_where \(\xi^{\prime}\) is an independent copy of \(\xi\)_
**Theorem 21**.: _Assume that the upper diagonal entries of \(x_{ij}\) are i.i.d copies of a random variable \(\xi\) satisfying 20. Assume also that the entries \(f_{ij}\) of the symmetric matrix \(F_{n}\) satisfy \(|f_{ij}|\leq n^{\gamma}\) for some \(\gamma>0\). Then, for any \(B>0\), there exists \(A>0\) such that_
\[\mathbb{P}(\sigma_{n}(M_{n})\leq n^{-A})\leq n^{-B}.\]
Combining all these results, we can prove the main result of the section.
Proof of Theorem 8.: Owing to Lemma 19, we can assume that with high probability the distance matrix has the form \(D=2J-2A-2I\). Note that the upper diagonal entries of \(A\) are i.i.d copies of a random variable satisfying Condition 20 with \(c_{1}=c_{3}=1\) and \(c_{2}=1\). Furthermore, \(2(J-I)\) is symmetric and its entries are bounded. Therefore, the result follows from Theorem 21.
## 5. Proof of Theorem 10
Let \(D_{m}\) be the graph distance matrix of \(C_{m^{2}}^{m}\). We start by observing that
\[D_{m}=\begin{bmatrix}J_{m^{2}}-I_{m^{2}}&B_{m}\\ (B_{m})^{\top}&A_{m}\end{bmatrix}\;,\]
where \(A_{m}\) as a matrix \(m\times m\) matrix such that \((A_{m})_{ij}=|i-j|\) and \(B_{m}\) is \(m\times m\) matrix defined by
\[B_{m}=\begin{bmatrix}2&3&\cdots&m+1\\ \vdots&\vdots&\vdots&\vdots\\ 2&3&\cdots&m+1\\ 1&2&\cdots&m\end{bmatrix}\]
Our first observation is that the first eigenvector of \(D_{m}\) is constant for the first \(m^{2}-1\) entries (considering the symmetry of the graph, this is not surprising).
**Lemma 22**.: _Let \(\lambda_{m}\) denote the largest eigenvalue of \(D_{m}\) and let \(v\) be the corresponding eigenvector. Then, for all \(i,j\leq m^{2}-1\), we have \(v_{i}=v_{j}\)._
Proof.: Let \(r_{i},r_{j}\) be \(i\)-th and \(j\)-th rows of \(D\) respectively. We first note that \(r_{i}-r_{j}=e_{i}-e_{j}\) for \(i,j\leq m^{2}-1\). Now observe that
\[\lambda_{m}v_{j}-\lambda_{m}v_{i} =\langle r_{j},v\rangle-\langle r_{i},v\rangle\] \[=\langle e_{i}-e_{j},v\rangle=v_{i}-v_{j}\;.\]
The conclusion follows since \(\lambda_{m}\geq 0\)
We start with an estimate for \(\lambda_{m}\) that will later allow us to bound entries of \(v\).
**Lemma 23**.: _Let \(\lambda_{m}\) be the largest eigenvalue of \(D_{m}\) then_
\[\lambda_{m}=(1+o(1))\cdot\frac{m^{5/2}}{\sqrt{3}}\;.\]
Proof.: Write \(D=D_{m}\) and let \(\lambda_{m}\) be as above. Let \(A\) be the \(m^{2}+m\) by \(m^{2}+m\) matrix defined by
\[A_{i,j}=\begin{cases}i-m^{2}&\text{if }i>m^{2},j\leq m^{2}\\ j-m^{2}&\text{if }j>m^{2},i\leq m^{2}\\ 0&\text{otherwise}\;.\end{cases} \tag{1}\]
Let \(B\) be the \(m^{2}+m\) by \(m^{2}+m\) matrix defined by
\[B_{i,j}=\begin{cases}1&\text{if }i,j\leq m^{2}\\ 0&\text{otherwise}\end{cases}\;. \tag{2}\]
Let \(C\) be the \(m^{2}+m\) by \(m^{2}+m\) matrix defined by
\[C_{i,j}=\begin{cases}m+1&\text{if }i,j>m^{2}\\ 0&\text{otherwise}\end{cases}\;. \tag{3}\]
Note that
\[A\leq D\leq A+B+C\]
where the inequalities refer to entrywise inequalities. This means that for all \(x\in\mathbb{R}^{m^{2}+m}\) with nonnegative entries,
\[x^{T}Ax\leq x^{T}Dx\leq x^{T}(A+B+C)x\]
Let \(\lambda_{A},\lambda_{B},\lambda_{C}\) be the top eigenvalue of \(A\), \(B\), and \(C\) respectively and let \(\lambda_{A+B+C}\) be the top eigenvalue of \(A+B+C\). Noting that \(A,B,C\) are all symmetric nonnegative matrices, letting \(S\subset\mathbb{R}^{m^{2}+m}\) be the subset of vectors with nonnegative entries such that \(\|x\|_{2}\leq 1\). Then,
\[\lambda_{A}\leq\lambda_{m}\leq\lambda_{A+B+C}\leq\lambda_{A}+\lambda_{B}+ \lambda_{C}\;.\]
It is easily seen that \(\lambda_{B}=m^{2}\) and \(\lambda_{C}=m(m+1)\). We can also compute \(\lambda_{A}\) explicitly. Let \(v\) be the top eigenvector of \(A\). Since the first \(m^{2}\) rows and columns of \(M\) are all identical, the first \(m^{2}\) entries of \(v\) are the same. Normalize \(v\) so that the first \(m^{2}\) entries are \(1\). Then \(\lambda_{A}v=Dv\) yields
\[\lambda_{A}v_{1}=\lambda_{A}=\sum_{j=1}^{m}A_{1,j}v_{m^{2}+j}=\sum_{j=1}^{m}jv _{m^{2}+j}\]
and for \(1\leq k\leq m\),
\[\lambda_{A}v_{m^{2}+k}=\sum_{j=1}^{m^{2}}kv_{j}=\sum_{j=1}^{m^{2}}k=m^{2}k\;.\]
Plugging \(v_{m^{2}+k}=\frac{m^{2}k}{\lambda_{A}}\) into the first equation, we get
\[\lambda_{A}^{2}=\sum_{j=1}^{m}m^{2}j^{2}=\frac{m^{2}(m)(m+1)(2m+1)}{6}\;.\]
This yields,
\[\sqrt{\frac{m^{3}(m+1)(2m+1)}{6}}\leq\lambda_{m}\leq\sqrt{\frac{m^{3}(m+1)(2m+1)}{6 }}+m^{2}+m(m+1)\;.\]
With this estimate in hand we can now show stronger bounds on \(\|v\|_{\infty}\) than are directly implied by [10] in the general case.
**Lemma 24**.: _Let \(v\) be the top eigenvector of \(D_{m}\) normalized so that \(v_{1}=1\) we have_
\[\|v\|_{\infty}=\mathcal{O}(\sqrt{m})\]
Proof.: It follows from [10] that \(\|v\|_{\infty}=\mathcal{O}(m)\). when we have normalized \(v\) such that \(v_{1}=1\). Since the first \(m^{2}-1\) terms of \(v\) are \(1\) and the entries in \(D\) are at most \((m+1)\) we get
\[\lambda_{m}v_{i} =\sum_{k=1}^{m^{2}-1}(D_{m})_{i,k}v_{k}+\sum_{k=m^{2}}^{m^{2}+m}( D_{m})_{i,k}v_{k}\] \[\leq m^{2}(m+1)+2m(m+1)^{2}=\mathcal{O}(m^{3})\;.\]
Since \(\lambda_{m}\geq m^{5/2}/\sqrt{3}\), it follows that \(v_{i}\leq\mathcal{O}(\sqrt{m})\).
**Lemma 25**.: _Let \(v\) be as above. There exists \(C>0\) such that for \(i\geq m^{2}\), we have_
\[\sqrt{\frac{1}{3m}}-\frac{C}{m}\leq(v_{i}-v_{i-1})\leq\sqrt{\frac{3}{m}}+\frac {C}{m}\;,\]
_for all sufficiently large \(m\)._
Proof.: For \(i\geq m^{2}\) we consider the following difference \(r_{i}-r_{i-1}\). Observe that first \(i-1\) coordinates are \(1\) followed by \(n+m+1-i\) many \(-1\). Therefore,
\[\lambda(v_{i}-v_{i-1}) =(D_{m}v)_{i}-(D_{m}v)_{i-1}=\langle r_{i}-r_{i-1},v\rangle\] \[=\sum_{k=1}^{i-1}v_{i}-\sum_{k=i}^{m^{2}+m}v_{i}=(m^{2}-1)+\sum_{ k=m^{2}}^{i-1}v_{i}-\sum_{k=i}^{m^{2}+m}v_{i}\;.\]
Using the fact that \(v_{i}\leq C\sqrt{m}\) for all \(i\) we obtain
\[m^{2}-1-Cm^{3/2}\leq\lambda(v_{i}-v_{i-1})\leq m^{2}-1+Cm^{3/2}\;.\]
Since \(\lambda_{m}\sim m^{5/2}/\sqrt{3}\), the desired conclusion follows.
Proof of Theorem 10.: To conclude the proof we first note that from above
\[\langle 1,v\rangle\geq m^{2}.\]
On the other hand, We also obtain
\[\|v\|_{2}^{2}\leq 2m^{2}+C(m+1)^{3/2}\;.\]
Combining these results tells us that
\[\liminf_{m\to\infty}\frac{\langle 1,v\rangle}{\|v\|_{2}\cdot\|1\,\|_{2}}\geq \frac{1}{\sqrt{2}}\;.\]
## 6. Proof of Theorem 11
Let \(G\) be any graph with diameter \(2\). Since \(D_{ij}\) is either \(1\) or \(2\) (except for \(D_{ii}=0\)), it is easy to see that
\[\langle 1,v\rangle-v_{i}\leq\lambda v_{i}=\sum_{j=1}^{n}D_{i,j}v_{j}\leq 2( \langle 1,v\rangle-v_{i}).\]
Rearranging, we obtain the uniform two-sided bound
\[\frac{\langle 1,v\rangle}{\lambda+1}\leq v_{i}<2\frac{\langle 1,v\rangle}{\lambda+1}.\]
This yields, in particular, that for all \(1\leq i,j\leq n\)
\[1\leq\frac{v_{i}}{v_{j}}\leq 2\;.\]
This defines a convex region, that we denote by \(D\). In order to prove our result, it suffices to prove that the minimum of \(\|v\|_{1}=\langle 1,v\rangle\) over the set \(D\), subject to the constraint \(\|v\|_{2}=1\), is at least \(4/(3\sqrt{2})\). To this aim, we first notice that the minimizers of this problem are the same, up to a scalar factor, of the maximizers of \(\|v_{2}\|_{2}\) in \(D\) subject to \(\|v\|_{1}=1\) (in fact, in both cases they must be minimizers of the homogeneous function \(\|v\|_{1}/\|v\|_{2}\) on \(D\)). Since the latter is a maximization problem for a strictly convex function on a convex set, the maximizers must be extreme points of \(D\). In particular, going back to the original formulation, we conclude that the smallest that \(\langle 1,v\rangle\) can be will be when all entries of \(v\) are \(c,2c\) for some \(c\) so that \(\|v\|_{2}=1\). Suppose now that we have \(m\) entries equal to \(c\) and \(n-m\) equal to \(2c\), then
\[1=\|v\|_{2}^{2}=\sum_{k=1}^{m}c^{2}+\sum_{k=m+1}^{n}(2c)^{2}=mc^{2}+(n-m)4c^{2}\]
Then solving for \(c\) we find
\[c=\frac{1}{\sqrt{4n-3m}}\]
So now we can optimize over \(m\) to minimize the \(\ell_{1}\) norm
\[\frac{\|v\|_{1}}{\sqrt{n}}=\frac{mc+(n-m)2c}{\sqrt{n}}=\frac{2n-m}{\sqrt{n(4n- 3m)}}\]
Now treating \(n\) as a constant and differentiating wrt to \(m\) we get
\[\frac{d}{dm}\frac{2n-m}{\sqrt{n(4n-3m)}}=\frac{-\sqrt{4n^{2}-3mn}+\frac{3n(2n -m)}{2\sqrt{4n^{2}-3mn}}}{4n^{2}-3mn}=\frac{3mn-2n^{2}}{2(4n^{2}-3mn)^{\frac{ 3}{2}}}\]
If we want to set this equal to \(0\) we only care about the denominator so we solve
\[0 =3mn-2n^{2}\] \[0 =n(3m-2n)\]
Which gives solutions \(n=0\), \(\frac{2n}{3}\) from which we see the latter is the minimum. Now if we substitute this into our formula for the \(\ell_{1}\) norm we get
\[\frac{2n-m}{\sqrt{n(4n-3m)}}=\frac{\frac{4n}{3}}{\sqrt{n(4n-2n)}}=\frac{4}{3} \cdot\frac{1}{\sqrt{2}}\]
Now by 19 we know that if \(G\) is a random graph, then for large \(n\) it will have diameter 2 and this bound will hold.
|
2304.02661 | Deep learning approach for identification of HII regions during
reionization in 21-cm observations -- II. foreground contamination | The upcoming Square Kilometre Array Observatory (SKAO) will produce images of
neutral hydrogen distribution during the epoch of reionization by observing the
corresponding 21-cm signal. However, the 21-cm signal will be subject to
instrumental limitations such as noise and galactic foreground contamination
which pose a challenge for accurate detection. In this study, we present the
SegU-Net v2 framework, an enhanced version of our convolutional neural network,
built to identify neutral and ionized regions in the 21-cm signal contaminated
with foreground emission. We trained our neural network on 21-cm image data
processed by a foreground removal method based on Principal Component Analysis
achieving an average classification accuracy of 71 per cent between redshift
$z=7$ to $11$. We tested SegU-Net v2 against various foreground removal
methods, including Gaussian Process Regression, Polynomial Fitting, and
Foreground-Wedge Removal. Results show comparable performance, highlighting
SegU-Net v2's independence on these pre-processing methods. Statistical
analysis shows that a perfect classification score with $AUC=95\%$ is possible
for $8<z<10$. While the network prediction lacks the ability to correctly
identify ionized regions at higher redshift and differentiate well the few
remaining neutral regions at lower redshift due to low contrast between 21-cm
signal, noise and foreground residual in images. Moreover, as the photon
sources driving reionization are expected to be located inside ionised regions,
we show that SegU-Net v2 can be used to correctly identify and measure the
volume of isolated bubbles with $V_{\rm ion}>(10\, {\rm cMpc})^3$ at $z>9$, for
follow-up studies with infrared/optical telescopes to detect these sources. | Michele Bianco, Sambit. K. Giri, David Prelogović, Tianyue Chen, Florent G. Mertens, Emma Tolley, Andrei Mesinger, Jean-Paul Kneib | 2023-04-05T18:00:01Z | http://arxiv.org/abs/2304.02661v2 | Deep learning approach for identification of Hii regions during reionization in 21-cm observations - II. foreground contamination
###### Abstract
The upcoming Square Kilometre Array Observatory (SKAO) will produce images of neutral hydrogen distribution during the epoch of reionization by observing the corresponding 21-cm signal. However, the 21-cm signal will be subject to instrumental limitations such as noise, foreground contamination, and limited resolution, which pose a challenge for accurate detection. In this study, we present the SegU-Net v2 framework, which is an enhanced version of our U-Net architecture-based convolutional neural network built for segmenting image data into meaningful features. This framework is designed to identify neutral and ionized regions in the 21-cm signal contaminated with foreground emission that is \(\sim\)3 order of magnitude larger. We demonstrate the effectiveness of our method by estimating the true ionization history from mock observations of SKA with an observation time of 1000 h, achieving an average classification accuracy of 71 per cent. As the photon sources driving reionization are expected to be located inside the ionised regions identified by SegU-Net v2, this tool can be used to identify locations for follow-up studies with infrared/optical telescopes to detect these sources. Additionally, we derive summary statistics, such as the size distribution of neutral islands, from evaluating the reliability of our method on the tomographic data expected from the SKA-Low. Our study suggests that SegU-Net v2 can be a stable and reliable tool for analyzing the 3D tomographic data produced by the SKA and recovering important information about the non-Gaussian nature of the reionization process.
keywords: cosmology: dark ages, reionization, first stars, early Universe - techniques: image processing, interferometric
## 1 Introduction
Radiation emitted by the first luminous sources drastically influenced the chemical composition and thermal history of the intergalactic medium (IGM), transitioning the Universe from an initial cold and neutral state to a final hot and ionized state (e.g. Furlanetto et al., 2006; Ferrara and Pandolfi, 2014; Choudhury, 2022). These sources most likely formed at locations where dark matter structures collapsed into gravitational bound structures during redshift \(z\gtrsim 10\)(Abel et al., 2001; Bromm et al., 2009; Pawlik et al., 2011). The newly launched _James Webb Space Telescope (JWST)1_ is already providing preliminary results by detecting possible ionizing source candidates at these high redshifts (Castellano et al., 2022; Naidu et al., 2022; Bakx et al., 2022), which will help us understand the conditions for early galaxy formation (e.g. Boylan-Kolchin, 2022; Hutsi et al., 2023; Dayal and Giri, 2023).
Footnote 1: [http://jwst.nasa.gov](http://jwst.nasa.gov)
Another way to probe the appearance of these first luminous sources is to observe the evolution of neutral hydrogen (Hi) in the IGM. The ground state spin-flip transition of neutral hydrogen produces a signal with a wavelength of 21 cm in the rest frame, known as the _21-cm signal_. The presence of this signal is directly correlated with the number density of neutral hydrogen present in the early Universe, and with the Universe expansion, the 21-cm signal wavelength redshift into the radio frequency. As the first stars and galaxies formed and began emitting ultraviolet radiation, they started to ionize neutral gas in their surrounding. These primordial sources produce enough photons to escape their hosting environment and propagate into the IGM. As the hydrogen in the IGM becomes ionized, the intensity of the 21-cm signal decreases. Therefore, by observing the 21-cm signal from the early Universe with radio telescopes, we can study the reionization process and learn about the properties of the first luminous sources (e.g. Madau et al., 1997; Furlanetto et al., 2006). Several radio experiments, such as the Low-frequency Array2 (LOFAR; e.g. Mertens et al., 2020; Ghara et al., 2020), Murchison Wide-field Array3 (MWA; e.g. Trott et al., 2020; Ghara et al., 2021) and Hydrogen Epoch of Reionization Array4 (HERA; e.g. The HERA Collaboration et al.
2022b,a), have been trying to detect this signal during the epoch of reionization (EoR).
Currently, the low-frequency band component of the Square Kilometre Array5 (SKA-Low; e.g. Mellema et al., 2013), which will observe the sky at a frequency range between 50 and 350 MHz, is under construction. SKA-Low will have a field of view covering \(\sim\)(10 deg)\({}^{2}\) on the sky (Koopmans et al., 2015). This radio interferometer will be sensitive enough to capture the evolution of the IGM during EoR with images of the 21-cm signal from redshift \(z=30\) to 5. This sequence of 21-cm maps observed at different frequencies will be stuck together to constitute a three-dimensional set of data, known as the multi-frequency tomographic dataset (e.g. Mellema et al., 2015; Wyithe et al., 2015; Giri et al., 2018). The 21-cm signal image data produced by the SKA-Low will contain imprints of the ionised regions (or bubbles) caused by the luminous sources (Giri et al., 2018,b) and neutral regions (or islands) tracing the cosmic voids (Giri et al., 2019). By detecting these bubbles, we can learn about the locations of the first luminous sources (Zackrisson et al., 2020). We can also understand the nature and distribution of the photon sources driving the reionization process by studying the evolution of their sizes and morphology (e.g. Giri et al., 2018, 2019; Giri & Mellema, 2021; Kapahia et al., 2019, 2021; Gazagnes et al., 2021; Elbers & van de Weygaert, 2022). However, detecting these ionised bubbles in radio telescope observations is not trivial due to several limitations of the telescope, such as the limited resolution and instrument noise.
Footnote 5: [https://skatelecscope.org](https://skatelecscope.org)
To detect these bubbles, previous authors have developed methods using visibilities data smoothed with appropriated filters to represent the sizes and shapes of the bubbles, then a likelihood for Bayesian approach estimates the parameters of the ionized regions filtered (e.g. Datta et al., 2007; Ghara & Choudhury, 2020). Other authors employ the image data of radio telescopes. This approach can be intensity-based, where the method filters the image based on a threshold value or region-based, by agglomerate clustering correlated pixels into groups with common traits within the image (e.g. Achanta et al., 2012; Mehra & Neeru, 2016; Giri et al., 2018). This task is a well know assignment in Artificial intelligence (AI) and is called segmentation. Therefore, another approach would be to consider a deep learning application. Recent work by Gagnon-Hartman et al. (2021) demonstrated that a combination of foreground avoidance and machine learning techniques enable 21-cm segmentation and bubble detection for experiments that are not necessarily optimized for imaging. Moreover, recently we presented our first work (see Bianco et al., 2021, hereafter Paper I), where we introduced a deep learning approach to identify the distribution of Hi regions in SKA 21-cm tomographic image using a U-shaped convolutional neural network (U-Net) (Ronneberger et al., 2015). We named our framework SegU-Net and we assessed how this network could process 21-cm images during the EoR contaminated by systematic noise simulated for SKA-Low and segment the images into ionized and neutral regions with an average of 87% accuracy for redshift between 7 and 9. Moreover, we assessed that our network outperforms the Super-Pixel method (Giri et al., 2018), considered the state-of-the-art algorithm for EoR segmentation, with, on average, 10 to 20% more accuracy. We also demonstrated that SegU-Net could be used to recover the bubble size distributions with a relative difference within the 5% and other summary statistics with the same level of accuracy. Moreover, we provided our method with a per-pixel uncertainty map that provides a confidence interval for its prediction and the derived statistics. We have tested the response of our framework to different noise levels based on a shorter (250 h) and more extended (1500 h) observing time, corresponding to an under- and overestimation of the noise level, respectively. We demonstrated that SegU-Net tolerates noise up to \(\sqrt{2}\) times larger than the one employed in the training process, obtaining the same level of accuracy. By studying the uncertainty map and the response to the noise level, we realised that machine learning models are sensitive to the dynamic range and the intrinsic resolution of the simulated images.
While our previous work demonstrated excellent performance in detecting Hi regions from EoR images, it should be considered a proof-of-concept as we consider EoR images with only telescope systematic noise and we did not include any foreground contamination. The biggest challenge for the SKA-Low observation, just like other radio telescopes, is to separate the 21-cm signal from the undesired extra-galactic and galactic foreground contamination, which outshine the cosmological signal by several orders of magnitude (Jelic et al., 2008; Bowman et al., 2009). The key goal of this work is to develop tools which remove these foregrounds while recovering the regions of Hi during EoR from the 21-cm signal image data.
In this work, we will develop further our deep learning-based method to determine the ionised bubbles in image data with the presence of realistic galactic and extra-galactic foregrounds expected from the SKA-Low. Therefore, here we present SegU-Net v2, which extends the previous work by including foreground emissions of galactic origin and a complete study of its dependency on the foreground mitigation pre-processing step that partially subtracts the foreground signal, thus reducing the dynamic range in the 21-cm images before starting the network training. In the last three decades, several foreground removal methods with different approaches have been developed. Some of the early attempts take advantage of the spectral smoothness of the galactic and extra-galactic contaminants to fit along the line of sight and remove the foreground in either real or \(uv\) space (e.g.: Morales et al., 2006,b; Wang et al., 2006; Gieser et al., 2008; Liu et al., 2009, 2013). However, more recent approaches suggest a non-parametric subtraction (e.g. Harker et al., 2009; Gu et al., 2013; Chapman et al., 2012, 2013; Bonaldi & Brown, 2015; Mertens et al., 2018) as the frequency smoothness of the foreground spectrum can be corrupted by beam effect and incomplete \(uv\) coverage (Liu et al., 2009). Therefore, we perform a complete study of different available approaches for foreground subtraction in the case of the SKA-Low 21-cm tomographic dataset applied to SegU-Net v2. We analyse the effect of the subtraction process on the predicted binary maps so that we can establish if a particular foreground removal method provides a concrete advantage for our task.
This paper is organised as follows. In SS 2, we present how we generate the simulated data sets used for this work, including details of our foreground model in SS 2.3 and a description of the mock 21-cm observation in SS 2.4. In SS 4, we describe the design and the training of our neural network. In SS 5, we discuss its application to our simulated SKA-Low data sets contaminated by the foreground signal, and we analyse summary statistics such as the mean ionization fraction, power spectra and topological quantities. In SS 5.2 we test our framework on a different foreground removal method. We discuss and summarize our conclusions in SS 6. Throughout this work, we assume a flat \(\Lambda\)CDM cosmology with the following parameters: \(\Omega_{\Lambda}=0.73\), \(\Omega_{m}=0.27\), \(\Omega_{b}=0.046\), \(H_{0}=70\,\mathrm{km\,s^{-1}Mpc^{-1}}\), \(\sigma_{\mathrm{S}}=0.82\), \(n_{\mathrm{S}}=0.96\). These values are based on the WMAP 5 years observation (Komatsu et al., 2009) and consistent with _Planck 2018_ (Planck Collaboration et al., 2020) results.
## 2 21-cm Signal
This section illustrates the process we follow to create 21-cm mock observations of the EoR. Development of the network requires mock 21-cm observations of the EoR for network training, validation and testing, which will be described in SS 4.
### Simulating the Cosmological 21-cm Signal during EoR
The intensity of the redshifted 21-cm signal emerging from a neutral cloud of hydrogen can be observed by a radio interferometric telescope as the difference against the CMB temperature \(T_{\rm CMB}\), i.e. \(\delta T_{b}\equiv T_{b}-T_{\rm CMB}\). For a given sky angular position \(\mathbf{\hat{n}}\) and redshift \(z\), we can define it to be (e.g. Zaroubi, 2012; Mellema et al., 2013)
\[\delta T_{\rm b}(\mathbf{r},z) =T_{0}(z)\left(1-\frac{T_{\rm CMB}(z)}{T_{\rm S}(\mathbf{r},z)}\right) \left[1+\delta_{b}(\mathbf{r},z)\right]x_{\rm HI}(\mathbf{r},z), \tag{1}\] \[T_{0}(z) \approx 27\ {\rm mK}\left(\frac{\Omega_{\rm b}}{0.044}\right) \left(\frac{\rm h}{0.7}\right)\sqrt{\left(\frac{1+z}{10}\right)\left(\frac{0.2 7}{\Omega_{\rm m}}\right)} \tag{2}\]
where \(x_{\rm HI}\) is the neutral hydrogen fraction, \(\delta_{\rm b}\) is the baryonic overdensity, and \(T_{\rm S}\) is the spin temperature. We assume that the IGM is heated well above the CMB temperature (\(T_{\rm S}\gg T_{\rm CMB}\)) at \(z\lesssim 12\), which is consistent with theoretical predictions (e.g. Pritchard & Furlanetto, 2007; Ross et al., 2017, 2019, 2021) 6. In this context, Equation 1 is always positive and can be approximated as \(\delta T_{b}\propto(1+\delta_{b})\,x_{\rm HI}\), while the presence of ionized regions is characterized by a lack of signal, \(\delta T_{b}=0\,{\rm mK}\). The radio interferometer cannot observe the absolute \(\delta T_{b}\). Therefore, the ionised regions cannot be identified by finding pixels with zero signal in the 21-cm image data. To model the large-scale cosmological 21-cm signal expected during reionisation, we employ the Python wrapper of the 21cmFAST semi-numerical code (Mesinger et al., 2011; Murray et al., 2020). The code models the dark matter density evolution and gravitational collapse using the second-order Lagrangian perturbation theory (2LPT). From the generated large-scale density field, a region is considered collapsed when it exceeds a defined mass threshold, which can be related to a minimum virial temperature \(T_{\rm vir}^{\rm min}\). The excursion set formalism is then employed to calculate ionised regions (Furlanetto et al., 2004). The code outputs a coeval cube at different redshifts that are then used for constructing 21-cm lightcones. We refer the readers to e.g. Giri et al. (2018) for more general details on the construction of lightcone from coeval cube simulations. In this work, we simulate the signal in coeval cubes for a total of \(\sim 20\) snapshot for redshift \(z=[7,11]\) with a mesh grid of \(128^{3}\) that is 256 Mpc along each direction.
Footnote 6: Note that the current 21-cm signal measurements have not completely ruled out the possibility of cold reionization (see e.g. Ghara et al., 2020, 2021; The HERA Collaboration et al., 2022). The signal becomes very complicated if \(T_{\rm S}\sim T_{\rm CMB}\) when reionization begins (Ross et al., 2021; Schneider et al., 2023). Therefore we defer a detailed exploration to the future.
### Systematic Noise
We model the SKA-Low antenna receiver noise by a random Gaussian distribution with mean value zero and variance (Ghara et al., 2017; Giri et al., 2018)
\[\sigma_{\rm uv}=\frac{k_{\rm B}\,T_{\rm sys}}{A_{\rm eff}}\sqrt{\frac{2\,t_{ \rm daily}}{\Delta v\,V_{\rm uv}\,t_{\rm obs}\,t_{\rm int}}} \tag{3}\]
Here \(t_{\rm int}\) is the integration time, \(t_{\rm daily}\) is the window of observation per day, \(T_{\rm sys}\) is the system temperature, \(A_{\rm eff}\) is the effective collecting area, \(\Delta v\) is the bandwidth, \(N_{\rm uv}\) is the number of measurements that are detected in each cell of the \(uv\)-coverage grid. We assume an observation length of \(t_{\rm obs}=1000\) h. We list the SKA-Low telescope parameters in Table 1. The \(uv\)-coverage grid is simulated assuming the current plan for antennae distribution of SKA-Low7. In the top-right panel of Figure 1, we show an example slice of the 21-cm signal and a noise realisation at \(z=8.24\). As the map is degraded to a resolution corresponding to a maximum baseline of \(B=2\,{\rm km}\), we can see the large-scale distribution of the neutral and ionised regions.
Footnote 7: The SKA-Low design is given at [https://www.skao.int/sites/default/files/documents/d18-SKA-TEL-SKO-0000422_02_SKA1_LowConfigurationCoordinates-1.pdf](https://www.skao.int/sites/default/files/documents/d18-SKA-TEL-SKO-0000422_02_SKA1_LowConfigurationCoordinates-1.pdf).
### Foreground Contamination
Between 250 and 30 MHz, the dominant emission comes from the Galactic synchrotron radiation. This emission alone is expected to contribute to the majority of the total foreground contamination of the comic 21-cm signal (Di Matteo et al., 2002, 2004; Santos et al., 2005; Datta et al., 2007; Jelic et al., 2008; Kerrigan et al., 2018). Other contributors can include emissions from unresolved extra-galactic point sources, Galactic free-free emissions, supernova remnants and extra-galactic radio clusters, which for simplicity, have been neglected in this study. We based our Galactic synchrotron emission model on the Choudhuri et al. (2014) study. We express the foreground radiation with a Gaussian random field with an angular power spectrum as
\[C_{l}^{\rm syn}(\nu)=A_{150}\,\left(\frac{1000}{l}\right)^{\overline{B}}\left( \frac{\nu}{\nu_{\star}}\right)^{-2\overline{\alpha}_{\rm syn}-2\Delta\overline{ \alpha}_{\rm syn}\log\left(\frac{\nu}{\nu_{\star}}\right)} \tag{4}\]
here the parameter for the Galactic synchrotron emission is the power spectra amplitude \(A_{150}=512\,{\rm mK}^{2}\) at the reference frequency \(\nu_{\star}=150\,{\rm MHz}\), the angular scaling \(\overline{\beta}=2.34\), the spectra index \(\overline{\alpha}_{\rm syn}=2.8\) and the running spectral index \(\Delta\overline{\alpha}_{\rm syn}=0.1\). These quantities are taken from Platania et al. (1998), and Wang et al. (2006). We then generate the foreground temperature fluctuations map following the relation
\[\delta T_{b}^{\rm ffg}(U,\ \nu)=\sqrt{\frac{\Omega_{\rm SKA}\,C_{l}^{\rm syn }(\nu)}{2}}\ [x_{l}(U)+i\cdot y_{l}(U)] \tag{5}\]
where \(\Omega_{\rm SKA}\) is the total SKA-Low solid angle and \(U=l/2\pi\). The two quantities \(x_{I}\) and \(y_{I}\) are two independent random Gaussian variables with mean zero and variance of one, \(\mathcal{N}\sim(0,1)\). By performing two-dimensional inverse fast-Fourier transform of Equation 5, we get the spatial distribution of the foreground contamination \(\delta T_{b}^{\rm ffg}(\mathbf{\hat{n}},\ z)\). With each lightcone simulation, we fix the random variables seed
\begin{table}
\begin{tabular}{l l l} \hline \hline Parameters & Values & \\ \hline System temperature & \(T_{\rm sys}\) & \(60(\frac{\nu}{\nu_{\rm CMB}})^{-2.55}\,{\rm K}\) \\ Effective collecting area & \(A_{\rm eff}\) & \(962\,{\rm m}^{2}\) \\ Declination & \(\theta_{\rm c}\) & \(\cdot 30^{\circ}\) \\ Frequency channel width & \(\Delta\nu\) & \(118-96\,{\rm kHz}\) \\ Observation hour per day & \(t_{\rm daily}\) & \(6\,{\rm hours}\) \\ Signal integration time & \(t_{\rm int}\) & \(10\,{\rm seconds}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The telescope parameters used in this work. For the frequency channel width, we indicate the quantity at \(z=7\) and \(11\).
for the lowest redshift, \(z=7\), and compute Equation 4 for the corresponding frequency of the image.
### Mock 21-cm Observation
From the simulated coeval cubes described in SS2.1, we create 3D Ightcones with differential brightness \(\delta T_{b}^{\rm sim}(\boldsymbol{\hat{n}},z)\equiv\delta T_{b}^{\rm sim}(x,y,z)\) at \(x,y\) coordinates for a total box size of 256%Mpc and spatial resolution of \(\Delta x=2\) cMpc, both in comoving units, corresponding to an angular mesh-size of \(128^{2}\). This scale corresponds to an angular resolution of \(\Delta\theta=0.77\) arcmin at redshift \(z=7\). The redshift coordinate is divided into 552 bins at equal comoving distance \(\Delta x\) from \(z=11\) to 7, corresponding of frequencies from \(\nu_{\rm obs}=118\) MHz to 178 MHz and a frequency resolution of approximately \(\Delta y\simeq 0.11\) MHz.
We select one tomographic simulation from the prediction dataset as our _fiducal_ simulation. In Figure 1, left column, we show a slice of this _fiducial_ lightcone at redshift \(z=8.24\), corresponding to \(\nu_{\rm obs}=152.90\) MHz. At this stage, the simulated lightcone is 50% ionised. The top panel show the neutral fraction \(x_{\rm HI}\), with blue and red regions being the neutral and ionised regions, respectively. At the same time, the green colour indicated regions of transitions with \(x\simeq 0.5\). The differential brightness is calculated with Equation 1 with the approximation discussed in SS2.1. The bottom panel shows the differential brightness after smoothing the field in the angular direction with a Gaussian kernel, \(G(\boldsymbol{\hat{n}},z)\), with Full-Width at Half Maximum (FWHM) of \(\lambda_{0}(1+z)/B\), where \(\lambda_{0}=21\) cm and \(B=2\) km corresponds to the maximum baseline of SKA-Low core station. For reference, in this interferometric smoothing \(B\) corresponds to an angular resolution of \(\sim 2.9\) arcmins at \(z\approx 7\) and \(\sim 4.3\) arcmins at \(z\approx 11\). In the frequency direction, we apply a top-hat bandwidth filter with the same width as the FWHM in the angular direction. We implement the method explained in SS2.2 and the parameters listed in Table 1 to simulate the effect of the systematic noise, \(\delta T_{b}^{\rm noise}(\boldsymbol{\hat{n}},z)\). We create a random field with the same mesh size as the lightcone and add the simulated differential brightness. We then apply the same interferometric smoothing mentioned here above, and the result is shown in Figure 1, top right panel. As a reference for the reader, this was the network input in our previous work (Paper I).
In this paper, we want to extend our previous effort as we want to recover the neutral binary map in the presence of contamination
Figure 1: An example of a slice through the sky-plane used during the network training. _Top Left_: the neutral hydrogen fraction at simulation resolution when the reionisation process is halfway complete. _Bottom Left_: the simulated 21-cm signal after the interferometric smoothing with a maximum baseline of \(B=2\) km and matching frequency resolution. We then subtract the frequency mean signal to mimic the effect of the lack of a zero baseline. _Top Right_: systematic noise added to the 21-cm signal for an observing time of 1000 hours. A solid black line indicates the neutral field after the same interferometric smoothing scale. _Bottom right_: the Galactic synchrotron emission added to the 21-cm signal with the systematic. We can notice how the dynamic range is a few orders of magnitude larger and completely outshines the 21-cm signal. For all the differential brightness images, the units are in mK.
due to the synchrotron Galactic foreground, \(\delta\overline{T}_{\rm obs}^{\rm fog}(\mathbf{\hat{n}},z)\). The result of the model described in SS2.3 is shown in Figure 1, bottom right panel. As we can see, the dynamic range of the observed changes drastically. From our previous work, we noticed that our method is sensitive to the SNR level between the noise and the 21-cm signal. Therefore, we need to introduce an additional pre-processing step in our framework to mitigate foreground contamination and decrease the dynamic range of the contaminated images before providing them for network training. We will discuss this method in more detail in SS3.
We can describe our mock observation pipeline by combining together the components and operations described here above as (e.g. Liu & Shaw, 2020)
\[\delta\overline{T}_{\rm obs}(\mathbf{\hat{n}},z)=\delta\overline{T}_{b}^{\rm sim}( \mathbf{\hat{n}},z)+\delta\overline{T}_{b}^{\rm frg}(\mathbf{\hat{n}},z)+\delta \overline{T}_{b}^{\rm noise}(z). \tag{6}\]
For each realization of the lightcone \(\delta\overline{T}_{\rm obs}(\mathbf{\hat{n}},z)\), illustrated with Figure 1, we calculate the mean along the frequency channels,
\[\delta\overline{T}_{\rm obs}(z)=\frac{1}{N_{x}N_{y}}\sum_{i=1}^{N_{x}}\sum_{j =1}^{N_{y}}\delta\overline{T}_{\rm obs}(x_{i},y_{j},z)\, \tag{7}\]
where \(N_{x}\) and \(N_{y}\) are the dimension in the angular-direction of the \(128^{2}\) mesh. We subtract this quantity from \(\delta\overline{T}_{\rm obs}\) to account for the effect of the null baseline in interferometry telescopes. For this reason, the colour bar in the figure shows a negative value. W convolve the subtracted term with the Gaussian kernel \(G\) mentioned above
\[\delta\overline{T}_{\rm obs}(\mathbf{\hat{n}},z)=\int_{\Omega_{\rm SKA}}\left[ \delta\overline{T}_{\rm obs}(\mathbf{\hat{n}}^{\prime},z)-\delta\overline{T}_{\rm obs }(z)\right]\cdot G(\mathbf{\hat{n}}-\mathbf{\hat{n}}^{\prime},z)\ d\mathbf{\hat{n}}^{ \prime}. \tag{8}\]
This result constitutes a realistic mock observation of the SKA-Low interferometric telescope that includes systematic noise, Galactic foreground contamination and telescope limited resolution effect. We employ this pipeline to create the training, validation and _random testing set_. In SS3, we explain how we pre-process this type of data before inputting it into our neural network.
Finally, we create an additional field that serves as the target of the network training. We apply the interferometric smoothing explained here above to the simulated neutral fraction field \(x_{\rm HI}\) (top left panel Figure 1). We then choose a threshold of \(x_{\rm th}=0.5\) to discern the ionised and neutral regions. The result is a binary lightcone, \(x_{\rm HI}^{\rm H}(\mathbf{\hat{n}},z)\), where neutral and ionized regions are classified by 1 and 0, respectively. For a visual comparison, we over-plot the contour of this binary field as a black line in Figure 1 top right panel.
## 3 Foreground mitigation
As we outlined in SS2.4, the presence of foreground contamination poses a huge problem in detecting the 21-cm signal, as this signal is several orders of magnitude fainter in comparison. In Figure 2, we illustrate the effect of the foreground contamination on the 2D cylindrical power spectrum for a lightcone sub-volume centred at redshift \(z_{c}=8.24\) and frequency width of \(\Delta v\pm 10\) MHz. This quantity of the 21-cm signal (top panel) is compared with the same signal contaminated by the Galactic foreground signal (bottom panel). We observe that the contamination is visible at \(k_{\parallel}\leq 10^{-1}\) Mpc/h with signal intensity of \(\geq 10^{9}\) mK\({}^{2}\). The black dashed line in the figure indicates the foreground wedge. We will discuss this line later in SS3.2. To reduce the dynamic range of the foreground contaminated images to a level that is manageable for the neural network, we include a pre-processing step on the observed data, \(\delta\overline{T}_{\rm obs}(\mathbf{\hat{n}},z)\). Hereafter, we refer to the resulting images of this pre-process as _residual lightcone_ or _images_, \(\delta\overline{T}_{\rm res}(\mathbf{\hat{n}},z)\).
In the context of foreground mitigation, we can consider two types of methods foreground subtraction or avoidance (Chapman & Jelic, 2019). Here, we consider three of the former case, namely PCA, GPR and Polynomial fitting, and one of the latter techniques, Wedge removal. In this section, we briefly describe four different pre-processing methods that we test.
### Principal Component Analysis
Principal Component Analysis (PCA) is a commonly used method to remove foregrounds in 21-cm experiments (e.g. Alonso et al., 2015; Cunningham et al., 2023; Chen et al., 2023). The method exploits the fact that foregrounds have large amplitude and smooth frequency coherence. PCA simultaneously identifies the largest foreground components and an optimal set of basis functions that describe the frequency structure of the foregrounds. As the foregrounds are highly correlated in frequency, the frequency-frequency co-variance matrix of the foregrounds will have a particular eigensystem where most of the information can be sufficiently described by a small set of very large eigenvalues, the other ones being negligibly small. Thus, we can attempt to subtract the foregrounds by eliminating the components corresponding to the eigenvectors of the frequency co-variance
Figure 2: Cylindrical power spectra for a lightcone sub-volume centered at redshift \(z_{c}=8.24\) and frequency depth of \(\pm 10\) MHz. _Top Panel:_ 2D Power spectra from the simulated 21-cm signal only. _Bottom Panel:_ Same quantity but with the galactic foreground contribution. The black dashed line indicates the wedge slope with \(\theta=2.25^{\circ}\) and \(b=8\times 10^{-2}\) h Mpc\({}^{-1}\).
matrix with the largest associated eigenvalues. In practice, we choose to remove 4 components which captured most of the variance of the foreground modes. PCA is a relatively fast and computationally efficient method that does not require any prior assumptions about the foregrounds or the 21-cm signal. However, PCA is not well-suited to handle non-linear relationships between the foregrounds and the 21-cm signal, and it can struggle to remove residual foregrounds that are not well-described by the largest components.
### Wedge Remove
We consider another pre-process that focuses on discarding the Fourier modes that are dominated by foreground contamination. This method assumes that the contaminated modes are contained in specific regions in the \(k_{\perp}-k_{\parallel}\) space, named the _foreground wedge_. These contaminated \(k\)-modes can be defined by (e.g. Liu et al., 2014; Murray and Trott, 2018)
\[k_{\parallel}\leq\left\|{\bf k}_{\perp}\right\|\frac{H(z)}{1+z}\int_{0}^{z} \frac{dz^{\prime}}{H(z^{\prime})}\cdot\sin\theta+b \tag{9}\]
where \(H(z)\) is the Hubble parameter and \({\bf k}_{\perp}\) is the Fourier component perpendicular to the line of sight. \(\theta\) is the angular size of the field of view, which can be interpreted as the horizon limit angle. \(b\) is the bias that accounts for the presence of an intrinsic foreground limit at low \(k_{\parallel}\)-values. Pessimistic and arguably more realistic assumptions consider the horizon limit to be \(\theta=90^{\circ}\) justified by antenna sidelobes effect (Pober et al., 2014; Dillon et al., 2014). In our case, we select \(\theta=2.25^{\circ}\), corresponding to the field of view (FoV), at redshift \(z=7\) and comoving size of \(256\,\mathrm{Mpc}\), of our dataset. We then select \(b=8\times 10^{-2}\,\mathrm{h\,Mpc}^{-1}\) based on the 2D cylindrical power spectrum shown in the right panel of Figure 2. The dashed black line indicates Equation 9 for the \(\theta\) and \(b\) mentioned before.
In this work, we employ a simplified version of the code developed by Prelogovic et al. (2021). Here we give a brief description, referring the reader to the original paper for more details. First, we perform a 2D Fourier transform in the angular direction of a lightcone subvolume, Equation 8, centred at redshift \(z_{c}\) and with a given frequency depth, \(\pm\Delta\nu\). Subsequently, an iterating procedure along the line-of-sight axis calculates Equation 9 and sets to zero the \(k\)-modes that obey the condition. To avoid artificial ringing in the Fourier space, a Blackmann-Harris taper function of the same angular and redshift size is multiplied by the lightcone. However, this taper has the limitation that at low \(k_{\parallel}\), it reduces the Fourier-space side lobes, while the opposite effect occurs at high \(k_{\parallel}\). Finally, we do an inverse Fourier transform to get back the real-space lightcone sub-volume.
An example of data with the foreground contamination removed by this algorithm can be seen in the second column of Figure 3. The top panel shows the residual image, while black contours indicate the ground truth. The bottom panel shows the 2D cylindrical power spectrum for the fiducial lightcone sub-volume centred at \(z_{c}=8.24\) and frequency depth of \(\pm 10\,\mathrm{MHz}\). The dark blue colour indicates the \(k_{\perp}-k_{\parallel}\) modes where the wedge removes method is applied.
### Gaussian Process Regression
The Gaussian regression processes (GPR) method was developed in Mertens et al. (2018) to separate foregrounds from 21-cm signal by modelling the two components as a stochastic process and separating them using a Bayesian approach. The method involves constructing a prior statistical model of the foregrounds and the 21-cm signal and then using the model to estimate the posterior distribution of the 21-cm signal given the observed data. This is done by assuming that both the foregrounds and 21-cm signal are realizations of Gaussian processes, which are fully defined by their covariance. The selection of the prior covariance model in GPR is made under a Bayesian framework by maximizing the marginal likelihood. The Matern class of covariance functions is commonly used as prior covariance for the different components of the data. Following Mertens et al. (2018), a Radial Basis Function (RBF) kernel is used as the prior covariance model for the foreground component, while an Exponential kernel
Figure 3: Comparison between different foreground mitigation methods. From left to right, we have PCA, wedge removal, GPR and polynomial fitting. First row, a visual example at redshift \(z=8.24\) of the residual image after the corresponding method. Second row, the cylindrical power spectrum for a lightcone sub-volume centred at \(z_{c}=8.24\) and frequency depth \(\pm 10\,\mathrm{MHz}\).
is used for the 21-cm signal. This method can effectively remove foreground contamination from the 21-cm signal and has the advantage of being able to incorporate prior knowledge about the signal and foregrounds. However, it requires accurate modelling of the foregrounds and assumptions about the statistical properties of the signal and foregrounds.
### Polynomial fitting
We can also use Polynomial fitting to remove foreground contamination from the 21-cm signal (Wang et al., 2006; Alonso et al., 2015). The method involves modelling the foregrounds as a smooth polynomial function in log-space and fitting this function to the observed data, \(\delta\overline{T}_{\text{obs}}\).
\[\log\left(T(\mathbf{\hat{n}},z)\right)=\sum_{k=1}^{N_{fg}}\alpha_{k}(\mathbf{\hat{n}}) \ \left[\log\left(\frac{\nu_{0}}{1+z}\right)\right]^{k-1} \tag{10}\]
Here, \(\nu_{0}\) is the 21-cm frequency and \(N_{\text{fg}}\) indicates the polynomial degree. In our study, we consider a fourth-degree polynomial. The resulting fit is then subtracted from the data to remove the foreground contamination \(\delta\overline{T}_{\text{res}}=\delta\overline{T}_{\text{obs}}-T(\mathbf{\hat{n}},z)\).
This approach has the advantage of being simple and computationally efficient but may not be as effective at removing foregrounds as other, more sophisticated methods. One limitation of the polynomial fitting is that it assumes the foregrounds can be well-described by a smooth polynomial, which may not always be the case (e.g. Thyagarajan et al., 2015). Additionally, if the polynomial fit is not high enough order, it may leave some foregrounds in the data, while an overly high-order polynomial may remove the signal as well. The polynomial fitting has been used in combination with other foreground removal methods in some studies to improve the overall performance of the foreground removal process.
## 4 U-Net for 21-cm image segmentation
The network architecture of SegU-Net v2 is the same as presented in Paper I. The only implementation consists of a simplistic hyperparameter optimization analysis on a few of the network parameters. The survey was performed for the kernel size, type of pooling operation, size of the low dimensional latent space and number of convolutional levels. The analysis suggested that the hyper-parameters that contribute the most to minimising the loss are 2D average pooling layers instead of max pooling operation and a kernel size of \(7^{2}\) instead of \(3^{2}\).
Here we give a brief description of our network architecture. We refer the reader to our previous work for more details. SegU-Net is a U-shaped deep convolutional neural network composed of a contracting (encoder) and an expanding path (decoder). The former has two convolutional blocks, followed by the 2D averaging pooling operation of size \(2^{2}\) and a dropout layer with a 5 per cent rate, Encoder-Level=2*ConvBlock+AvrgPool+Drop. A convolutional block consists of a 2D convolutional layer with kernel size \(7^{2}\), followed by batch normalization and Rectified Linear Unit (ReLU) activation function, ConvBlock=Conv2D+BN+ReLU. The latter path consists of transposed 2D convolution followed by the concatenation with the corresponding output of the convolutional encoder block, dropout layer and two convolutional blocks, Decoder-Level=TConv2D+CC+Drop+2*ConvBlock. This structure is repeated four times for both the encoder and decoder. At each level, the pooling operation halves the angular dimension of the input and doubles the number of channels. The network takes as input a redshift slice from the residual lightcone, \(\delta\overline{T}_{\text{res}}\), and outputs the corresponding 2D binary image, \(x^{B}_{\text{HI}}\).
We generated a large set of realisations of the SKA multi-frequency tomographic dataset by changing the initial conditions and the following three astrophysical parameters. We sample the high-redshift galaxy efficiency \(\zeta\) and the mean-free path of ionising photons \(R_{\text{mfp}}\) with a normal distribution with mean and variance \(\mathcal{N}(82,\,18)\) and \(\mathcal{N}(17.5\,\text{Mpc},\,4.5\,\text{Mpc})\), respectively. At the same time, the minimum virial temperature for star-forming halos \(T^{\text{min}}_{\text{vir}}\) is sampled in logarithmic space with distribution \(\mathcal{N}(4.7,\,0.2)\). We chose this sampling of parameters because we want the global volume-averaged neutral fraction \(\overline{x}_{\text{HI}}\) of all data to be at least greater than 90% at redshift \(z=11\) and less than 10% at redshift 7. We updated the dataset from Paper I in this work for a total of 10,000 samples for the network training and 1,500 for validation. Once the network is trained, we will test its accuracy and generalisation ability on additional 300 mock observations during the prediction step. We will refer to this dataset as the _random testing set_. The training dataset is employed during the forward- and back-propagation (Rumelhart and Zipser, 1985), while the validation dataset is used to validate the accuracy of network results during training. We want to clarify that we trained SegU-Net v2 on \(\delta\overline{T}_{\text{res}}\) data pre-processed only with the PCA eigen-decomposition on the full redshift range, \(z=7\) to 11, which is explained in SS3.1. The testing dataset is an independent set of simulations on which we will validate the final results of the trained network.
We consider a true positive detection (\(TP\)) to be the number of pixels correctly identified as neutral, while a true negative (\(TN\)) is the opposite case. False positives (\(FP\)) and false negatives (\(FN\)) represent the number of pixels wrongly classified as neutral or ionised, respectively. Therefore, we can define the Matthews correlation coefficient (MCC) for quantifying the accuracy of our network predictions as
\[r_{\phi}=\frac{TP\cdot TN-FP\cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} \tag{11}\]
This metric can have values between \(-1\leqslant r_{\phi}\leqslant 1\), and it quantifies the quality of binary field (two-class) classifications. A negative value indicates anti-correlation, zero represents a completely random classification, and positive values indicate a positive correlation. For a direct comparison with previous studies on segmentation of 21-cm image data (e.g. Gangon-Hartman et al., 2021), we define three additional statistical metrics as follows
\[\text{Accuracy}=\frac{TP+TN}{TP+FP+FN+TN} \tag{12}\]
Here, this metric indicates how well a model is able to predict the target variable correctly.
\[\text{Precision}=\frac{TP}{TP+FP} \tag{13}\]
This second metric refers to the level of consistency or repeatability of a predicted value. While accuracy and precision are important metrics in evaluating the performance of a network, they may not be sufficient in certain scenarios. For instance, in our binary classification problem, there can be scenarios when neutral regions can be much rarer than ionised regions and vice versa. In this case, accuracy can be misleading as the model may achieve high accuracy by simply predicting the majority class for all instances. In such cases, precision and recall are more informative metrics as they take into account the class imbalance.
\[\text{IoU}=\frac{TP}{TP+FP+FN} \tag{14}\]
However, here we include the third additional metric, known as the Intersection over Union (IoU), that quantifies how well the predicted neutral region of interest overlaps with the true one. We will use these metrics later in SS5.2.
The error calculation is implemented with the same method as presented in Paper I. In the prediction step, we employ temporal time augmentation (TTA) operations (Perez and Wang, 2017; Wang et al., 2020) on the network input data to create several copies of the same realisation. SegU-Net v2 gives predictions for each copy that are then finally combined to give the predicted binary field and a per-pixel uncertainty map. In this work, we fix the axis of symmetry and rotation to the frequency direction. Thus reducing the number of manipulations for calculating the per-pixel uncertainty map to a sample of 16 copies. This number corresponds to the maximum of independent operations we can apply to an image.
## 5 Results
In this section, we discuss the result obtained with SegU-Net v2 acting on data pre-processed with the PCA foreground removal method as explained in SS3.1. Here, we evaluate the result on the predicted binary maps and the network performance on the different methods (illustrated in SS3) in SS5.1 and SS5.2 respectively. Finally, in SS5.3, we demonstrate a possible astrophysical application of SegU-Net v2.
### Identifying Hii Regions with SegU-Net v2
In Figure 4, we visually evaluate one realisation of the network predicted neutral (red) and ionized (blue) regions. We refer to this simulated lightcone as the _fiducial_ simulation. In the right column, we show a slice at redshift \(z=8.24\) (\(\sigma_{\rm obs}=152.90\,\rm MHz\)), corresponding when the global volume average neutral fraction is \(\overline{x}_{\rm HI}=0.5\). From top to bottom, we show the residual image after the PCA pre-processing employed as the input of the neural network, the binary map predicted with SegU-Net v2 from the PCA pre-processed data and the derived per-pixel uncertainty, respectively. In the left column, we show the redshift evolution of the same fields along one given direction of the corresponding fields.
First, when we compare the bottom right panel in Figure 1 with the top right panel in Figure 4, we can notice that the pre-processing step drastically reduces the signal from \(\delta T_{b}\sim\pm 10^{5}\,\rm mK\) to just an
Figure 4: Visualisation of the different fields for our fiducial lightcone. _Top Left:_ for a given position on the x-direction, the redshift evolution of the residual lightcone after the PCA pre-processing step. _Top Right:_ residual image at redshift \(z=8.24\) (\(\overline{x}_{\rm HI}=0.5\)). Same image as in Figure 1. _Middle Left:_ redshift evolution of the predicted neutral (red) and ionised (blue) lightcones. _Middle Right:_ predicted map at the corresponding redshift. _Bottom Left:_ the corresponding per-pixel error lightcone, orange colour indicates the intensity of the uncertainty. _Bottom Right:_ the corresponding per-pixel error map. For all panels, we over-plot contours that represent the ground truth.
observed differential brightness of few tens \(\delta T_{b}\sim\pm 40\) mK. Nevertheless, some of the foreground contamination is still visible. For instance, in Figure 4 top left panel, we can clearly see that across a few frequency bands at \(z\approx 10.8\) presents an anomalous feature. Moreover, we can see that foreground residual is still present between \(7\leq z\leq 8.2\). This signal excess is self-evident in the per-pixel uncertainty for the same redshift range. Here, some frequency bands are saturated with large uncertainty \(\sigma_{\rm sd}\sim 0.3\). This is because the foreground component is correlated along the frequency direction and is primarily diffused over large angular scales. The foreground residuals thus observe extended features along the \(z\) direction over multiple adjacent frequency channels. From the redshift evolution of the predicted binary field (left middle panel), we notice that the network can either falsely detect bubbles when most of the light-cone is still highly neutral, \(z\geq 9.5\), or completely miss ionised bubbles that are completely surrounded by neutral hydrogen. In both cases, the mislabelling is limited to bubbles with sizes close to or smaller than the interferometric smoothing scale, \(\Delta x\sim 9\) Mpc, as the network confuses structures with small-scale noise fluctuations. Thus, posing a hard limit on the possibility of measuring and detecting the smallest HII bubble close to the instrument resolution. We discuss this further in SS5.3. This limitation is also visible from the recovered binary field at redshift \(z=8.24\) (middle right panel). Here, the detection of the bubbles at \(180\) Mpc \(\leq\) x \(\leq 210\) Mpc are completely missed. We observe the same outcome for the island of neutral hydrogen at coordinates \((x,y)\approx(75,75)\) Mpc. These erroneous findings are associated with a moderate to high uncertainty \(\sigma_{\rm shr}\geq 0.2\). As we mentioned above, the per-pixel uncertainty shows that at the early stage of reionization, \(z>9\), most of the uncertainty is either situated around small HII volumes, \(V\leq(10^{3}\,{\rm Mpc})^{3}\), or at the border between neutral and ionised regions. On the other hand, at the late stages, \(z<8.2\), high uncertainty is mostly located in the vast, interconnected ionised IGM.
In Figure 5, we show two statistical analyses for the entire _random testing set_. In the left panel, we show the correlation plot between the true global averaged neutral fraction \(\bar{x}_{\rm HI,true}\) against the predicted \(\bar{x}_{\rm HI,pred}\). The dashed green line indicates the 95 per cent data contour, corresponding to a \(2\sigma\) difference from the ground truth. The \(2\sigma\) contour clearly shows a deviation on the left-hand side of the black dashed line (perfect correlation), indicating that the predicted images tend to be considered more neutral than they should be. This trend is more visible at lower redshift \(z<8.5\) (\(\bar{x}_{\rm HI,true}<0.4\)) as more points reside outside of the 95% percentile. This behaviour can be motivated by the presence of residuals from the foreground that the PCA process was not able to remove. In fact, as we mention in SS3.1, we consider the first four components to contain most of the foreground information. These components are most representative at higher frequency as the foreground amplitude increases inversely proportional to redshift, Equation 4. Therefore, for tomographic data with a wide redshift range, the decomposition can under-represent foreground contamination at lower redshift, resulting in more residuals when we reconstruct the image from the remaining components at the corresponding redshift slices. This effect is visible in the uncertainty map in Figure 4.
In Figure 5, right panel, we show the correlation coefficient against the same quantity as before, \(x_{\rm HI,true}\). Here, each point corresponds to an image at a given redshift indicated by the colour bar. On this panel, we add the 68 per cent data contour (solid line), corresponding to a \(1\sigma\) difference from the ground truth. We first noticed that we obtain a global accuracy that is approximately 15% lower, \(\bar{r}_{\phi}=0.71\), compared to our previous work in Paper I. This lower score with basically the same network structure and architecture is justified by the fact that any signal extrapolation in the presence of foreground contamination is extremely arduous when compared to forecasting in the presence of just telescope systematic noise. Moreover, as we stated before, we notice that at lower redshift \(z<8.5\) (\(\bar{x}_{\rm HI,true}<0.4\)), a sizable portion of the redshift slices have a difference larger than \(2\sigma\). This behaviour is also evident from the increase of the uncertainty map in Figure 4 for images at \(z<8.5\).
### Sensitivity to the Choice of Pre-processing Method
We trained SegU-Net v2 on the signal that is pre-processed using the PCA method. Therefore, it is vital to investigate how sensitive the trained model is to the choice of pre-processing method used to mitigate foreground. Here we test SegU-Net v2 on the different
Figure 5: Statistical analysis of the predicted binary maps for the testing dataset. Each point indicates an image at a given redshift indicated in the colour bar. _Left Panel_: correlation plot between the ground truth volume average neutral fraction, \(\bar{x}_{\rm HI,true}\), against the predicted, \(\bar{x}_{\rm HI,pred}\). _Right Panel_: Matthew correlation coefficient \(r_{\phi}\) against global volume-averaged neutral fraction. The dashed blue line indicates the redshift averaged \(r_{\phi}\). Here, solid green lines indicate the 68 per cent (\(1\sigma\)) and dashed green lines the 95 per cent (\(2\sigma\)) data contour.
foreground mitigation processes that we presented in SS3. We cannot use the entire lightcone as the GPR module currently available has been validated only for a bandwidth of 20 MHz. From the entire lightcone, we use three sub-volume centred at redshift \(z_{c}=7.68\), 8.24 and 8.97 with frequency size of 20 MHz, corresponding to 172, 181 and 186 redshifts bins from \(z\in[7.19,\,8.24]\), \([7.68,\,8.88]\) and \([8.31,\,9.72]\), respectively. The volume average neutral fraction of these sub-volumes is \(\overline{\mathrm{x}}_{\mathrm{HI}}\simeq 0.25\), 0.50 and 0.75, corresponding to the late, middle and early stages of reionization, respectively.
We then apply to each of these sub-volumes four different foreground mitigation pre-processing steps, which are PCA, Wedge Remove, GPR and Polynomial fitting. From the residual volumes, we predict the neutral/ionised regions from the trained SegU-Net v2, with PCA, pre-processing step as presented in SS5.1. By applying different foreground mitigation processes, we can quantify the robustness and adaptability of our trained network.
#### 5.2.1 Visual Evaluation
We perform a visual comparison for the middle stage of reionization sub-volume for the four cases in Figure 6. From the left to right column, we have PCA, Wedge Remove, GPR and Polynomial fitting, respectively. The top panels show a visual comparison of an image at the sub-volume central redshift \(z_{c}=8.24\) for the different pre-processes. In the bottom panels, we show the corresponding uncertainty map from the SegU-Net v2. We notice that for the case of the fiducial simulation, the Polynomial fitting and GPR pre-processing obtain similar results with correlation \(r_{\phi}(z_{c})=0.81\) and \(r(z_{c})_{\phi}=0.84\), respectively. The former case appears to overestimate the extent of the neutral regions (see at position \((x,y)\simeq(75,125)\) Mpc) as well as falsely detecting the presence of isolated neutral island in the vast ionised region, for instance, see around \((x,y)\sim(75,100)\) Mpc. The PCA obtains approximately 10% less accuracy, \(r_{\phi}(z_{c})=0.70\), its limitation comes forth when predicting the vast ionised region (see at position \(50\,\mathrm{Mpc}\leq x\leq 125\,\mathrm{Mpc}\) and \(75\,\mathrm{Mpc}\leq y\leq 125\,\mathrm{Mpc}\)) as the network is over-predicting the presence of an interconnected neutral hydrogen region. Wedge Remove method has the lowest performance, with \(r_{\phi}(z_{c})=0.62\). In this example, the pre-process is forecasting an excess of neutral hydrogen outside the ground truth. On the other hand, this method underestimates its presence within the extensive neutral cloud. In Table 2 third column, we show the resulting \(r_{\phi}(z_{c})\) for each pre-process.
Among the method presented, the Wedge Remove method appears to be the least efficient for SegU-Net v2. The uncertainty map in Figure 6 shows that the Wedge Remove method has high inextrictive in the vast interconnected H ii regions, for \(x\in[0,\,125]\) Mpc and \(y\in[0,\,150]\) Mpc, as well as between nearby H iregions, for instance at \((x,\,y)\simeq(120,160)\) Mpc. The presence of a higher foreground residual compared to the other methods (visible in the same region in Figure 3) indicates that lower performance is attributed to a harsh and perhaps undisclosed subtraction that does not aim at portraying the foreground contamination but rather removes its contribution. Overall, the GPR method, followed by PCA decomposition, appears to give an advantage compared to the other pre-processing. At the same time, all the cases fail to detect either ionised or neutral regions of sizes close to the interferometric smoothing scale, \(\Delta x\simeq 9\) Mpc.
#### 5.2.2 Redshift Evolution
In Figure 7, we show the redshift evolution of the Matthew correlation coefficient \(r_{\phi}\) for the four different methods. On each panel, we show the results from the early (\(z_{c}=8.97\), in red), middle (\(z_{c}=8.24\), in green) and late (\(z_{c}=7.68\), in blue) stage of reionization sub-volumes with the corresponding error bar represented by the shadow area.
Figure 6: Comparison of the recovered binary field from different foreground mitigation pre-processes. From left to right, we have PCA, wedge removal, GPR and polynomial fitting. _Top panels_: a visual example of the recovered binary map at redshift \(z=8.24\) after the mentioned pre-processing step. The red/blue colour indicates the predicted neutral/ionised regions, while the green contour indicates the ground truth. _Bottom panels_: the corresponding per-pixel uncertainty map derived by SegU–Net v2. The orange colour indicates the intensity of the uncertainty, defined as a general standard deviation. In the title, we include the resulting \(r_{\phi}\) at this redshift.
The horizontal dashed line denotes the redshift averaged correlation coefficient, \(\overline{r}_{\phi}\). In Table 2 fourth column, we show the resulting \(\overline{r}_{\phi}\) for each sub-volume and sub-volume. Based on this quantity, we notice that the ranking goes by the GPR method with \(\overline{r}_{\phi}=0.71\) at \(z_{C}=7.68\), \(0.67\) at \(z_{C}=8.24\) and \(0.63\) at \(z_{C}=8.97\), followed by the PCA with \(\overline{r}_{\phi}=0.68\), \(0.67\) and \(0.62\), respectively. Polynomial fitting follows with \(\overline{r}_{\phi}=0.65\), \(0.62\) and \(0.60\), while Wedge Remove follows with \(\overline{r}_{\phi}=0.18\), \(0.19\) and \(0.15\), respectively. An important remark, in this comparison, we limit the PCA decomposition to the sub-volumes redshift bins (172, 181 and 186), and it is performing slightly worst when compared to the same results in the previous section on the 552 redshift bins. Therefore, we attribute the decrease in the performance to be related to the reduced number of redshift bins that directly lower the number of orthogonal components with which the data are represented. For the case of PCA in Figure 7, we plot on the same panel the performance of the PCA decomposition on the 552 redshifts (dark blue line). Here, we can notice how the redshift averaged correlation coefficient is substantially higher, \(\overline{r}_{\phi}=0.82\) at \(z_{C}=7.68\), \(0.80\) at \(z_{C}=8.24\) and \(0.76\) at \(z_{C}=8.97\), hence indicating that the PCA pre-process is preferred if we have at our disposal a tomographic dataset with an extended redshift range. The sharp increase at \(z\simeq 8.76\), the sudden increase at \(z\geq 9\) and the constant broadening for \(z\leq 8.1\) of the uncertainty error in Figure 7 indicates that the PCA, GPR and Polynomial fitting are sensible to the evolution and distinctiveness of the same structures in the data.
Moreover, all processes, except for PCA, show a slight decrease in accuracy close to the redshift extremities values of the sub-volume. The Wedge remove is efficiently helping recover the binary maps only for the central part, close to the central redshift, of the selected sub-volume. While the accuracy decreases rapidly toward the edges as the foreground removal becomes inefficient in our simplified version of the wedge remove code we do not include the sliding trough process, see SS3.2. Therefore, a comparison between the Wedge remove and the other pre-processing should be strictly limited to the central part of the sub-volumes.
#### 5.2.3 Recovered Neutral Island Size Distribution
In Figure 8, we compare the neutral island size distribution (ISD) derived from the Hi binary field predicted with the different pre-processing methods presented in SS3. We employ the Mean-Free Path (MFP; Mesinger & Furlanetto 2007) method to derive the probability density distribution (\(RdP/dR\)) of the neutral region sizes or radius \(R\). This size distribution measures the topological evolution of the reionization process (Friedrich et al. 2011; Giri et al. 2018a). See Giri et al. (2019) for a detailed study of ISDS during reionization.
In Figure 8, each panel shows the predicted ISD (solid line) for three sub-volumes centred at redshift \(z_{C}=7.68\) (blue), \(8.24\) (green) and \(8.97\) (red) against the ground truth ISD (dashed line). In the bottom part of each panel, we show the difference with the ground truth. Similarly to before, in the case of PCA, the estimated distribution with PCA decomposition on the full redshift range, from \(7\) to \(11\), is shown with a darker colour. We show the uncertainty error on the predicted ISD with a shadow area of the same colour. From neutral island distribution analysis, the GPR method and the Polynomial fitting appear to have the best fit. Differences are visible only at a large scale, \(R\geq 100\) Mpc, with a factor \(\sim 3\) larger for the early and middle stage of reionisation sub-volumes. For the early stage sub-volume, the only noticeable difference is for the extremely large sizes, \(R\approx 300\) Mpc. The results from the training pre-processing (darker colour) tend to predict an ISD consistently shifted toward a larger scale for the case of \(z_{C}=7.68\) and \(8.24\). Deviations from the ground truth start to be visible for scale \(R\geq 40\) Mpc and \(R\geq 80\) Mpc with differences from up to a factor of \(\sim 2\) and a maximum of \(5\) at \(R\approx 200\) Mpc. On the other hand, for the case of the sub-volume centred at \(z_{C}=8.97\), the predicted ISD shows no virtual difference. These results confirm what we concluded in SS5.1, with the analysis
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \(z_{c}\) & pre-process & \(r_{\phi}(z_{c})\) & Accuracy & Precision & IoU & \(\overline{r}_{\phi}\) & \(\overline{x}_{\rm HI}\) & \(\overline{R}_{C}\) [cMpc] \\ \hline
7.68 & Ground Truth & - & - & - & - & - & **0.24** & **19.89** \\ & all z PCA & 0.78 & 0.94 & 0.81 & 0.67 & 0.82 & \(0.26\pm 0.12\) & \(21.62^{+4.34}_{-3.30}\) \\ & PCA & 0.75 & 0.90 & 0.76 & 0.67 & 0.73 & \(0.26\pm 0.15\) & \(17.96^{+6.66}_{-6.66}\) \\ & Wedge & 0.55 & 0.80 & 0.65 & 0.20 & 0.28 & \(0.07\pm 0.12\) & \(11.96^{+9.66}_{-5.44}\) \\ & GPR & 0.77 & 0.91 & 0.77 & 0.71 & 0.77 & \(0.28\pm 0.14\) & \(19.75^{+6.93}_{-5.03}\) \\ & Polynomial & 0.75 & 0.91 & 0.76 & 0.69 & 0.76 & \(0.27\pm 0.15\) & \(19.17^{+7.24}_{-5.18}\) \\ \hline
8.24 & Ground Truth & - & - & - & - & - & **0.45** & **29.54** \\ & all z PCA & 0.84 & 0.91 & 0.86 & 0.72 & 0.80 & \(0.48\pm 0.07\) & \(31.37^{+3.09}_{-3.39}\) \\ & PCA & 0.70 & 0.86 & 0.79 & 0.72 & 0.69 & \(0.49\pm 0.11\) & \(27.65^{+9.13}_{-6.12}\) \\ & Wedge & 0.62 & 0.64 & 0.65 & 0.22 & 0.22 & \(0.16\pm 0.13\) & \(15.20^{+24.13}_{-6.18}\) \\ & GPR & 0.84 & 0.89 & 0.82 & 0.76 & 0.75 & \(0.48\pm 0.09\) & \(29.14^{+5.26}_{-4.89}\) \\ & Polynomial & 0.81 & 0.88 & 0.81 & 0.75 & 0.74 & \(0.49\pm 0.10\) & \(29.21^{+5.83}_{-5.21}\) \\ \hline
8.97 & Ground Truth & - & - & - & - & - & **0.72** & **49.09** \\ & all z PCA & 0.78 & 0.92 & 0.93 & 0.85 & 0.76 & \(0.74\pm 0.29\) & \(48.57^{+5.93}_{-6.36}\) \\ & PCA & 0.72 & 0.89 & 0.90 & 0.85 & 0.68 & \(0.75\pm 0.33\) & \(46,06^{+6.94}_{-6.24}\) \\ & Wedge & 0.53 & 0.51 & 0.76 & 0.37 & 0.19 & \(0.38\pm 0.11\) & \(28.57^{+1.16}_{-8.54}\) \\ & GPR & 0.75 & 0.90 & 0.92 & 0.87 & 0.72 & \(0.74\pm 0.28\) & \(46.64^{+2.11}_{-5.21}\) \\ & Polynomial & 0.74 & 0.90 & 0.92 & 0.87 & 0.72 & \(0.74\pm 0.29\) & \(47.24^{+2.07}_{-7.21}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Result summary of the predicted binary field for the tested pre-processing step on the three lightcone sub-volume at representative stages of reionization.
from Figure 5 (left panel). The PCA performed on the sub-volume redshift range shows the same factorial difference but with an opposite behaviour. Differences are more prominent for the late stage of reionisation sub-volume and get gradually better at the early stage. In this analysis, the Wedge method fails to depict the Hi distribution for all the sub-volumes. For small neutral regions, \(R\leq 20\) Mpc, the predicted distribution is a factor 2 larger, while for larger sizes the distribution can be severely underestimated, with \(RdP/dR\) two orders of magnitude smaller than the ground truth distribution. This performance is an indication that with the Wedge pre-processing, SegU-Net v2 is struggling to connect large neutral regions due to the missing 21-cm signal lying in the _foreground wedge_ region that has been removed along with the foreground.
From the probability density distribution \(RdP/dR\), we can estimate the mean radius of the neutral islands at a given redshift, defined as
\[\overline{R}_{C}(z)=\int_{R_{\rm min}}^{\infty}R\ \frac{dP}{dR}(z)\ dR \tag{15}\]
In our case, we set the lower limit to the intrinsic resolution of our simulation \(R_{\rm min}=2\) cMpc. In Table 2, rightmost column, we list this quantity derived from the predicted binary field with the different pre-process. The ground truth average radius is \(\overline{R}_{C}=19.89\) cMpc for the sub-volume centered at \(z_{c}=7.68\), \(\overline{R}_{C}=29.54\) cMpc for \(z_{c}=8.24\) and \(\overline{R}_{C}=49.09\) cMpc for \(z_{c}=8.97\). Based on this quantity, we notice that the GPR method and Polynomial fitting produce a better prediction for the late and middle EoR sub-volumes, with a difference to the ground truth below the cMpc, while for the early stage scenario, they tend to underestimate of a few cMpc. In the case of both PCA decomposition, the predicted quantity differs by a few cMpc in excess and deficit, respectively. This trend is also visible from the predicted ISD, as PCA shows a systematic underestimation, while the same decomposition on the entire redshift range shows an overestimation for the same scale, \(R\geq 30\) cMpc. The Wedge method seems to work reasonably well only for the case of the late stage of reionization, considering the provided uncertainty. Although, for this scenario, the predicted ISD does not match. At late stages, the Wedge Removal prediction of \(\overline{R}_{C}\) can not be trusted, as this quantity differs substantially.
### Relation between ionised volume and total ionising photons
Zackrisson et al. (2020) illustrated the possibility of employing SKA-Low tomographic data as a foreshadowing method to identify the region of interest for future and ongoing experiments that aim to observe galaxy formation in the early Universe, such as the JWST, Euclid and Nancy Grace Roman Space Telescope (e.g. Beardsley et al., 2015; Geil et al., 2017). This work demonstrated that there is a simple relation between the volume of isolated HiI bubbles, V\({}_{\rm ion}\), and the grand total of ionising photons, N\({}_{\rm Y,\,tot}\), produced by
Figure 7: Redshift evolution of the r\({}_{\phi}\) correlation coefficient for the different tested pre-processing step. Each panel shows the result on three lightcone sub-volumes centred at \(z_{c}=7.68\) (blue), 8.24 (green) and 8.97 (red) with a \(\pm 10\) MHz frequency depth. These redshifts correspond to the late, middle and early stages of reionization, respectively. Solid lines indicate the r\({}_{\phi}\) coefficient for the predicted binary maps. Shadow areas indicate the error due to the uncertainty map. Horizontal dashed lines indicate the redshift averaged \(\overline{r}_{\phi}\) coefficient. For the case of PCA, we plot as a reference the decomposition executed on the full redshift range (dark blue).
the primordial sources within the same ionised region. Although we are overlooking relevant instrumental effects (e.g. incomplete uv-coverage, absence of gain error, beam effect and more), we assume that our framework, described in SS2.4, produces realistic enough mock observation to demonstrate the challenge of identifying and measuring the sizes of such bubbles and its derived relation.
For this analysis, we require the mass and the position of the sources within the ionised bubbles. Therefore, here we decided to use a simulation run with the C\({}^{2}\)Ray radiative transfer code (Mellema et al., 2006). In Paper I, we demonstrated how SegU-Net works reasonably well on simulations other than those employed for the training and validation. Here, we employ the obtained ionised hydrogen and density coeval cubes to calculate the 21-cm differential brightness with Equation 1 and by following the mock observation procedure explained in subsection 2.4. We consider the third axis to be the frequency direction to create the corresponding network input and target. We use one realisation of the simulated coeval cube at redshift z = 8.89 with box and mesh size of 348 cMpc and 250, respectively. We interpolate the 250 mesh-grid into a grid of 166 per side to a corresponding intrinsic resolution similar to our dataset of \(\Delta\mathrm{x}=2.09\) Mpc. One of the inputs of the C\({}^{2}\)Ray code is the cumulative halo mass smoothed into the mesh grid. In this way, we can associate an ionised bubble to the sources within the same region by converting the total halo distribution mass \(M_{\mathrm{h,tot}}\) to the total ionising photon produced \(\mathrm{N_{\gamma,tot}}=f_{\gamma}\,\Omega_{\mathrm{m}}/\Omega_{\mathrm{b}}\, \mathrm{M_{h,tot}}\). We refer the reader to Iliev et al. (2006, 2012) and Dixon et al. (2016) for further reading on the halo source model.
Though SegU-Net v2 is not trained on simulations produced with C\({}^{2}\)Ray, we still find that the ionised regions are identified with accuracy. This analysis shows that the trained model is quite general and, therefore, capable of finding physical features in real observations. In Figure 9, we show the relation between \(\mathrm{V_{ion}}\) and \(\mathrm{N_{\gamma,tot}}\) derived from the simulation data (blue crosses) and the predicted binary maps (orange points). We notice that SegU-Net v2 is failing to correctly quantify the number of ionising photons for volumes \(\mathrm{V_{ion}}\lesssim(10\,\mathrm{cMpc})^{3}\), vertical black dash line. This limitation corresponds to the 2 km interferometric smoothing scale we apply in our mock observation pipeline. At \(z=8.89\), the Gaussian kernel has an angular scale of \(\Delta\theta\approx 3.57\,\mathrm{arcmin}\), corresponding to a comoving size of 9.9 cMpc. This limitation is also consistent with the results in Figure 5, where the correlation between prediction and ground truth slowly decreases, \(r_{\phi}\leq 80\%\), for higher redshift, \(z\geq 9\).
Figure 8: Island size distribution for the different pre-processing steps. Each panel shows the predicted size distribution \(R\,dP/dR\) (top section) and the difference to the ground truth (bottom section). The colours indicate the lightcone sub-volume at the late (\(z_{c}=7.68\), blue), middle (\(z_{c}=8.24\), green) and early (\(z_{c}=8.97\), red) stage of reionization. The results from the neutral regions in the predicted fields are shown with solid lines and the ground truth with dashed lines. For the case of PCA, we plot as a reference the predicted size distribution with a dot-dashed line.
## 6 Discussion & Conclusions
With this work, we improved our previous effort in Paper 1 and updated our deep learning framework, SegU-Net v2, for the identification of neutral and ionized regions in realistic 21-cm mock observation expected from SKA-Low. One of the advantages of our network is the possibility to provide per-pixel uncertainty maps on its predictions. In SS2.4, we introduced our extended mock observation pipeline by including synchrotron Galactic foreground contamination, presented in SS2.3. Additionally, we performed machine learning hyper-parameter optimisation. For the same network architecture, we find an advantage by changing to average pooling operation and kernels \(7^{2}\) for the convolution layers to maximise the data extrapolation.
In this work, we combine our network with a foreground mitigation method that pre-processes the input data and reduces, in part, the foreground contribution. We trained SegU-Net v2 on \(10,000\) lightcones with 552 redshift slices from \(z=7\) to 11 pre-processed with PCA on 4 components for the full redshift range. We choose this pre-processing method as it is the most commonly used method for foreground contamination and provides fast and rather efficient mitigation. In SS5.1, the analysis on a random sample dataset, composed of 300 lightcone with the same redshift extent and bins, shows that the updated version of our network works well, with an average correlation of 71%, on 21-cm images contaminated and pre-processed by a foreground contamination method. This level of accuracy is almost \(\sim 20\%\) less than our previous results and is to be attributed to the added complexity due to the presence of the Galactic foreground. We show that SegU-Net v2 recovered binary fields that tend to be considered more neutral at \(z\leq 8.5\). We attribute this to the under-subtraction of the PCA pre-processing method employed during the training process. This trend is confirmed by the increase of the uncertainty map for the same redshift extent that saturates entire frequency channels (see the bottom panel in Figure 4).
In SS5.2, we compared the binary maps predicted with SegU-Net v2 on different pre-processing foreground mitigation and one avoidance method. We consider three sub-volume of the fiducial simulation with frequency width \(\Delta v=\pm 10\) MHz centred at redshift \(z_{c}=7.68\), 8.24 and 8.97, representing a late, middle and early stage of reionisation. In this work, we consider PCA decomposition (SS3.1), Wedge removal (SS3.2), Gaussian Process Regression (SS3.3) and Polynomial fitting (SS3.4). We demonstrated that SegU-Net v2 is able to recover Hi regions with varying accuracy for all the pre-processing methods we tested. In our case, the network is able to generalize enough and work with the same level of accuracy as the training case on pre-processing methods that were not employed during its training (see summary statistics in Table 2). Moreover, in SS5.2.3, we study the island size distribution (ISD) of the predicted binary maps. GPR and Polynomial fitting work better in recovering the ISDs, as well as the average distribution size \(R_{C}\) of neutral regions, than the two cases of the PCA pre-processing (applied on the full redshift range and the sub-volume redshift range).
Therefore, we can conclude that SegU-Net v2 is pre-processing method agnostic, providing accurate predictions independent of the pre-processing method, as long as the foreground mitigation provides reasonable residual images of the original 21-cm signal. Another conclusion is that PCA decomposition on lightcone data with a wide redshift range, e.g. frequency depth of the order of 60 MHz or larger, is to be preferred. In the case of smaller available sub-volumes, with frequency depth between 20 MHz and 30 MHz, other methods such as GPR or Polynomial fitting are to be preferred as they provide better prediction when compared to PCA on the same redshift range.
Finally, we provided a concrete use case of SegU-Net v2 in the context of 21-cm SKA-Low tomographic observation. Previous work demonstrated that a linear relation could be derived between the size of the ionised volume and the grand total number of ionising photons produced by the hosted source. In SS5.3, we demonstrated that our network could recover with precision the linear relation for ionised volumes that are resolved. Here, we stipulate the limited resolution of the SKA-Low layout by the interferometric smoothing scale for the maximum baseline of \(B=2\) km, which corresponds to an angular scale of approximately 3.57 arcmin at redshift \(z=8.89\), corresponding to an early stage of reionisation scenario, \(\hat{x}_{\rm HI}=0.75\).
When comparing the pre-processing method, we take into account also the computational time required to compute the foreground mitigation/avoidance method. In our setup, one lightcone sub-volume of frequency depth 20 MHz with 200 redshift bins takes about 7 s CPU time to compute with PCA and 2 s with Polynomial fitting. Wedge remover provides faster pre-processing with 230 ms but inefficient foreground mitigation. On the other hand, GPR provides slow but reliable mitigation with a computing time of \(\sim 1.2\) CPU hours.
Our analysis shows that using image data from SKA-Low, SegU-Net v2 accurately determines the ionization fraction at different stages of reionization. Additionally, we have identified how the ionized regions detected by SegU-Net v2 can be used as markers for locating the galaxies responsible for driving the reionization process. These findings demonstrate the potential of our framework for synergy studies with other telescopes, such as the JWST, Euclid and Nancy Grace Roman Space Telescope.
## Acknowledgements
The authors would like to thank Bharat Kumar Geholt for his useful discussions and comments. MB acknowledges the financial support from the Swiss National Science Foundation (SNSF) under the Sinterga Astrosignals grant (CRSII5_193826). We acknowledge access to Piz Daint at the Swiss National Supercomputing Centre, Switzerland, under the SKA's share with the project ID sk09. This work
Figure 9: Relation between the volume of ionised region versus the grand total of ionising photons within the same region. For a coeval cube at redshift \(z=9\) (\(\overline{x}_{\rm HI}=0.75\)) and box size of \(\rm L_{box}\approx 348\) cMpc. Relation derived from the ground truth is represented with blue cross data, while orange circle points are derived from SegU–Net prediction. The dashed red line corresponds to the linear fit of the ground truth data points. The vertical line indicates the 2 km baseline smoothed resolution.
has been done in partnership with the SKACH consortium through funding by SERI. Nordita is supported in part by NordForsk.
The deep learning implementation was possible thanks to the application programming interface of Tensorflow (Abadi et al., 2015) and Keras (Chollet et al., 2017). The algorithms and image processing tools operated on our data were performed with the help of NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020), scikit-learn (Pedregosa et al., 2011) and scikit-image (van der Walt et al., 2014) packages. All figures were created with mathplotlib (Hunter, 2007).
## Data Availability
The data underlying this article is available upon request and can also be re-generated from scratch using the publicly available 21cmFAST (Mesinger et al., 2011), CUBEP3M (Harnois-Deraps et al., 2013), C2RAY (Mellema et al., 2006) and Tools21cm (Giri et al., 2020) code. The SegU-Net code and its trained network weights are available on the author's GitHub page: [https://github.com/micibia/SegU-Net](https://github.com/micibia/SegU-Net).
|
2303.09793 | Robust Analysis of Almost Sure Convergence of Zeroth-Order Mirror
Descent Algorithm | This letter presents an almost sure convergence of the zeroth-order mirror
descent algorithm. The algorithm admits non-smooth convex functions and a
biased oracle which only provides noisy function value at any desired point. We
approximate the subgradient of the objective function using Nesterov's Gaussian
Approximation (NGA) with certain alternations suggested by some practical
applications. We prove an almost sure convergence of the iterates' function
value to the neighbourhood of optimal function value, which can not be made
arbitrarily small, a manifestation of a biased oracle. This letter ends with a
concentration inequality, which is a finite time analysis that predicts the
likelihood that the function value of the iterates is in the neighbourhood of
the optimal value at any finite iteration. | Anik Kumar Paul, Arun D Mahindrakar, Rachel K Kalaimani | 2023-03-17T06:30:06Z | http://arxiv.org/abs/2303.09793v2 | # Robust Analysis of Almost Sure Convergence of Zeroth-Order Mirror Descent Algorithm
###### Abstract
This letter presents an almost sure convergence of the zeroth-order mirror descent algorithm. The algorithm admits non-smooth convex functions and a biased oracle which only provides noisy function value at any desired point. We approximate the subgradient of the objective function using Nesterov's Gaussian Approximation (NGA) with certain alternations suggested by some practical applications. We prove an almost sure convergence of the iterates' function value to the neighbourhood of optimal function value, which can not be made arbitrarily small, a manifestation of a biased oracle. This letter ends with a concentration inequality, which is a finite time analysis that predicts the likelihood that the function value of the iterates is in the neighbourhood of the optimal value at any finite iteration.
Almost sure convergence, subgradient approximation, mirror descent algorithm
## I Introduction
One of the earliest subfields of optimization is derivative-free optimization [1, 2, 3]. It refers to an optimization problem with an oracle that only provides noisy function value at a desired point. Following numerous attempts by researchers to accurately approximate a function's subgradient from its value (for example see, [4, 5]), it has now gained popularity in the optimization community due to its use in a variety of different domains. For a full introduction to derivative-free optimization and its various applications in diverse domains, see [6] (and the references therein).
In this letter, we focus on the zeroth-order mirror descent algorithm [7], where the approximated subgradient established in [5] replaces the subgradient of the convex objective function in standard mirror descent algorithm [8]. Originally, the mirror descent algorithm generalizes the standard gradient descent algorithm in a more general non-Euclidean space [9]. In recent years, the mirror descent algorithm has grasped significant attention in the large-scale optimization problems, data-driven control and learning, power system, robotics and game theoretic problems [10]. For the stochastic mirror descent algorithm, we refer the reader to [11]. However, precise information regarding the convex objective function's subgradient or stochastic subgradient is accessible in these articles. In this letter, we assume that we can only access the noisy evaluation of the convex objective function at a desired point via a "biased zeroth-order" oracle. The oracle setting is driven by a large number of practical applications in which only the noisy function values are provided at a point and obtaining a subgradient or stochastic subgradient may not be feasible at that point. As a result, we must approximate the function's subgradient from the noisy measurement of the function value. This gives rise to the notion of zeroth-order optimization [12]. Every step in the zeroth-order algorithm is similar to its first-order counterpart (such as gradient descent or mirror descent), except that the function's subgradient must be approximated at every point. There has recently been a surge of interest generated in different variants of zeroth-order optimization, for both convex and non-convex functions [13, 14, 15, 16, 17, 18], where the subgradient is approximated by NGA [5].
We extend the analysis of zeroth-order optimization in this letter, focusing on the zeroth-order mirror descent (ZOMD) algorithm. The problem framework and analysis in this work differ significantly from the recent literature. The main objective of this study is to show the almost sure convergence of the function value of iterates of the ZOMD algorithm to a neighbourhood of optimal value, as compared to the bulk of the literature, which focuses on showing that the expected error in function value converges to the neighbourhood of zero. An almost sure convergence guarantee to a neighbourhood of optimal value is more significant than the convergence in expectation since it describes what happens to the individual trajectory in each iteration. To the best of our knowledge, no prior work on almost sure convergence for zeroth-order optimization has been published. The problem framework in this study differs from most other works in that it includes a biased oracle that delivers only biased measurement of function value (the expectation of noise in the function measurement is non-zero) at any specified point. The motivation to consider "biased oracle" can be found in application of reinforcement learning and financial risk measurement (see [19] and references therein for more details). Furthermore, unlike other publications, we consider that the oracle returns distinct noise values for two different points. Lastly, in addition to showing almost sure convergence, we estimate the likelihood that the function value of the iterates will be in the neighbourhood of optimal value in any finite iteration. This analysis aids in determining the relationship between the convergence of the ZOMD algorithm and the various parameters of the approximated subgradient. The following list summarises the key contribution of this study.
1. We analyse the ZOMD algorithm under the assumption
that a biased oracle returns noisy function value at a predetermined point where the expected error is nonzero. For the biased oracles, we re-evaluate the parameters of the approximated subgradient of the objective function at a specific location, which is calculated using NGA.
2. We prove that, under certain assumptions, the function values of the iterates of ZOMD algorithm almost surely converges to the neighbourhood of optimal function value. This neighbourhood is determined by several parameters, which are explored in this study.
3. Finally, we show that for any confidence level and a given neighbourhood around the optimal function value, the function value of the iterate sequence should be in that neighbourhood after some finite iteration with that confidence. We also present an expression for that finite iteration that is influenced by the neighbourhood, confidence level, and other properties stated in the letter.
## II Notation and Mathematical Preliminaries
Let \(\mathbb{R}\) and \(\mathbb{R}^{n}\) represent the set of real numbers, set of \(n\) dimensional real vectors. Let \(\left\|.\right\|\) denote any norm on \(\mathbb{R}^{n}\). Given a norm \(\left\|.\right\|\) on \(\mathbb{R}^{n}\), the dual norm of \(x\in\mathbb{R}\) is \(\left\|x\right\|_{*}:=\sup\{\langle x,y\rangle:\left\|y\right\|\leq 1,y\in \mathbb{R}^{n}\}\), where \(\langle x,y\rangle\) denotes the standard inner-product on \(\mathbb{R}^{n}\). \(I_{n}\) is \(n\times n\) identity matrix. A random vector \(X\sim\mathcal{N}(0_{n},I_{n})\) denotes a \(n\)-dimensional normal random vector with zero-mean and unit standard-deviation. For two random variables \(X\) and \(Y\), \(\sigma(X,Y)\) is the smallest sigma-algebra generated by random variables \(X\) and \(Y\). Because of equivalence of norm \(\left\|.\right\|_{2}\leq\kappa_{1}\left\|.\right\|_{*}\) and \(\left\|.\right\|_{2}\leq\kappa_{2}\left\|.\right\|\) and \(\kappa=\kappa_{1}\kappa_{2}\).
Let \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a convex function. For \(\delta\geq 0\), the vector \(g_{\delta}\in\mathbb{R}^{n}\) is called a \(\delta\)-subgradient of \(f\) at \(x\) if and only if \(f(y)\geq f(x)+\langle g_{\delta},y-x\rangle-\delta\ \ \ \ \forall\ y\in\mathbb{R}^{n}\)[5]. The set of all \(\delta\)-subgradients at a point \(x\) is called the \(\delta\)-subdifferential of \(f\), denoted by \(\partial_{\delta}f(x)\). If \(\delta=0\), we simply write the notation \(\partial f(x)\). If \(f\) is differentiable at \(x\), then \(\partial f(x)=\{\nabla f(x)\}\), gradient of \(f\) at \(x\). We say \(f\in\mathcal{C}^{0,0}\) if \(\exists\ L_{0}>0\) such that \(\left\|f(x)-f(y)\right\|\leq L_{0}\left\|x-y\right\|\) and \(f\in\mathcal{C}^{1,1}\) if \(f\) is continuously differentiable and \(\exists\ L_{1}>0\) such that \(\left\|\nabla f(x)-\nabla f(y)\right\|\leq L_{1}\left\|x-y\right\|\forall\ x,y\in\mathbb{R}^{n}\).
If \(f\) has directional derivative in all directions, then we can form the Gaussian approximation as follows: \(f_{\mu}(x)=\frac{1}{(2\pi)^{\frac{1}{2}}}\int\limits_{\mathbb{R}^{n}}f(x+\mu u )e^{-\frac{1}{2}\left\|u\right\|^{2}}du\), where \(\mu>0\) is any constant. The function \(f_{\mu}\) is differentiable at each \(x\in\mathbb{R}^{n}\) and \(\nabla f_{\mu}(x)=\frac{(x)^{\frac{1}{2}}}{\pi}\int\limits_{\mathbb{R}^{n}}uf (x+\mu u)e^{-\frac{1}{2}\left\|u\right\|^{2}}du\). It can also be seen that \(\nabla f_{\mu}(x)\in\partial_{\delta}f(x)\), where, \(\delta=\mu L_{0}\sqrt{n}\) if \(f\in\mathcal{C}^{0,0}\) and \(\delta=\frac{\mu^{2}}{2}L_{1}\sqrt{n}\) if \(f\in\mathcal{C}^{1,1}\).
Let \((\Omega,\mathcal{F},\mathbb{F})\) denote a probability space. An event \(A\in\mathcal{F}\) is occurred almost surely (a.s.) if \(\mathbb{P}(A)=1\). If \(X\sim\mathcal{N}(0_{n},I_{n})\), it can be shown that \(\mathbb{E}[\left\|X\right\|_{2}^{p}]\leq n^{\frac{p}{2}}\) if \(p\in[0,2]\) and \(\mathbb{E}[\left\|X\right\|_{2}^{p}]\leq(p+n)^{\frac{p}{2}}\) if \(p>2\). We will use the following two Lemmas in our analysis.
**Lemma 1** ([20]): _Let \(\{X_{t}\}_{t\geq 1}\) be a martingale with respect to a filtration \(\{\mathcal{F}_{t}\}_{t\geq 1}\) such that \(\mathbb{E}[\left\|X_{t}\right\|]<\infty\) and \(\{\beta(t)\}\) is a non-decreasing sequence of positive numbers such that \(\lim\limits_{t\rightarrow\infty}\beta(t)=\infty\) and \(\sum\limits_{t\geq 1}\frac{\mathbb{E}[\left\|X_{t}-X_{t-1}\right\|^{2}| \mathcal{F}_{t-1}]}{\beta(t)^{2}}<\infty\), then \(\lim\limits_{t\rightarrow\infty}\frac{X_{t}}{\beta(t)}=0\) a.s._
**Lemma 2**: _If \(\{X_{t},\mathcal{F}_{t}\}_{t\geq 1}\) is a non-negative submartingales, then, for any \(\epsilon>0\) we have \(\mathbb{P}(\max\limits_{1\leq t\leq T}X(t)\geq\epsilon)\leq\frac{E[X(T)]}{\epsilon}\)._
## III Problem Statement
Consider the following optimization problem
\[\min\limits_{x\in\mathbb{X}}f(x)\] (CP1)
The constraint set \(\mathbb{X}\) is a convex and compact subset of \(\mathbb{R}^{n}\) with diameter \(D\). The function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a convex. Define \(f^{*}=\min\limits_{x\in\mathbb{X}}f(x)\) and \(\mathbb{X}^{*}=\{x^{*}\in\mathbb{X}|f(x^{*})=f^{*}\}\). Observe that \(\mathbb{X}^{*}\) is nonempty due to compactness of the constraint set \(\mathbb{X}\) and continuity of \(f\). We assume in this letter that we have an oracle which generates a noisy value of the function at a given point \(x\in\mathbb{X}\). That is, at each point \(x\in\mathbb{X}\), we have only the information \(\hat{f}(x)=f(x)+e(x,\omega)\), where \(e(x,\omega):\mathbb{R}^{n}\times\Omega\rightarrow\mathbb{R}\) is a random variable for each \(x\in\mathbb{X}\) satisfying
\[\begin{split}&\mathbb{E}[e(x,\omega)]=b(x)\ \ \text{with}\ \left\|b(x)\right\|_{*}\leq B\\ &\text{and}\ \ \mathbb{E}[\left\|e(x,\omega)\right\|^{2}]\leq\text{V}^{2} \end{split} \tag{1}\]
where, \(B\) is a non-negative constant, and \(V\) can be any constant.
**Remark 1**: _In the context of zeroth-order stochastic optimization problem [13, 14, 15], the objective is to solve the optimization problem: \(\min\limits_{x\in\mathbb{X}}f(x)=\mathbb{E}[F(x,\omega)]\) and the oracle only provides \(F(x,\omega)\) at any desired \(x\in\mathbb{R}^{n}\). In such a situation, it is straightforward to verify that \(\mathbb{E}[e(x,\omega)]=0\), implying that \(B=0\). The assumption of positive \(B\) makes the problem more generic than previous recent studies. In a broader sense, if \(B=0\), we call it an unbiased oracle._
However, \(B\) is non-zero in many applications (see [19] and references therein for further details), therefore the problem in this study is more general than in other recent works due to the presence of positive \(B\).
For sake of brevity, we henceforth use \(e(x)\) to denote \(e(x,\omega)\). In the next section, we discuss the zeroth-order mirror descent algorithm.
## IV Zeroth-Order Mirror Descent Algorithm
Mirror descent algorithm is a generalization of standard subgradient descent algorithm where the Euclidean norm is replaced with a more general Bergman divergence as a proximal function. Let \(R\) be the \(\sigma_{R}\)-strongly convex function and differentiable over an open set that contains the set \(\mathbb{X}\). The Bergman divergence \(\mathbb{D}_{R}(x,y):\mathbb{X}\times\mathbb{X}\rightarrow\mathbb{R}\) is \(\mathbb{D}_{R}(x,y):=R(x)-R(y)-\langle\nabla R(y),x-y\rangle\ \ \forall\ x,y\in\mathbb{X}\). It is clear from the definition of strong convexity that
\[\begin{split}\mathbb{D}_{R}(x,y)\geq\frac{\sigma_{R}}{2}\left\|x-y \right\|^{2}.\end{split} \tag{2}\]
\[\begin{split}\mathbb{D}_{R}(z,y)-\mathbb{D}_{R}(z,x)-\mathbb{D}_{R}(x,y)&=\langle\nabla R(x)-\nabla R(y),z-x\rangle\\ &\forall\ x,y,z\in\mathbb{X}.\end{split} \tag{3}\]
We outline the steps of the mirror descent algorithm.
At iteration \(t\), let \(x_{t}\) be the iterates of the ZOMD algorithm. We approximate the subgradient of function \(f(x)\) at \(x=x_{t}\) as follows. We generate a normal random vector \(u_{t}\sim\mathcal{N}(0_{n},I_{n})\). We use the zeroth-order oracles to get the noisy function values (\(\hat{f}\)) at two distinct values, that is,
\(\hat{f}(x_{t}+\mu u_{t})=f(x_{t}+\mu u_{t})+e(x_{t}+\mu u_{t},\omega_{t}^{1})\) and
\(\hat{f}(x_{t})=f(x_{t})+e(x_{t},\omega_{t}^{2})\). Note that \(\omega_{t}^{1}\) and \(\omega_{t}^{2}\) are two independent realizations from the sample space \(\Omega\) according to the probability law \(\mathbb{P}\). Hence, we approximate the subgradient of \(f\) at \(x=x_{t}\), denoted by \(\tilde{g}(t)\) as \(\tilde{g}(t)=\frac{\tilde{f}(x_{t}+\mu u_{t})-\tilde{f}(x_{t})}{\mu}u_{t}\). The next iterate \(x_{t+1}\) is calculated as follows:
\[x_{t+1}=\operatorname*{arg\,min}_{x\in\mathbb{X}}\{\langle\tilde{g}(t),x-x_{ t}\rangle\}+\frac{1}{\alpha(t)}\mathbb{D}_{R}(x,x_{t}))\} \tag{4}\]
where, \(\alpha(t)\) is the step-size of the algorithm. To show almost sure convergence, we consider weighted averaging akin to the recent work [11] in first-order algorithm as \(z_{t}=\frac{\sum\limits_{t=1}^{\infty}\alpha(x)x_{t}}{\sum\limits_{t=1}^{ \infty}\alpha(k)}\).
The Bergman divergence should be chosen in such a way that (4) is computationally easier to execute or a closed form solution to (4) is available [21].
**Assumption 1**: _The step-size \(\alpha(t)\) is a decreasing sequence which satisfies \(\sum\limits_{t=1}^{\infty}\alpha(t)=\infty\) and \(\sum\limits_{t=1}^{\infty}\alpha(t)^{2}<\infty\)._
From Assumption 1, we can conclude that \(\lim\limits_{t\to\infty}\alpha(t)=0\).
**Assumption 2**: _Let the following hold._
1. _The generating random vectors_ \(u_{t}\in\mathbb{R}^{n}(\forall\ t\in\mathbb{N})\) _are mutually independent and normally distributed and for each_ \(t\in\mathbb{N}\)__\(u_{t}\) _is independent of_ \(x_{t}\)_._
2. _The random variables_ \(e(x_{t},.):\Omega\to\mathbb{R}\) _and_ \(e(x_{t}+\mu u_{t},.):\Omega\to\mathbb{R}\)__\((\forall\ t\in\mathbb{N})\) _are mutually independent and identically distributed in the probability space_ \((\Omega,\mathcal{F},\mathbb{P})\)_._
3. _The random variables_ \(e(x_{t}+\mu u_{t})\) _and_ \(e(x_{t})\) _are independent of_ \(x_{t}\) _and_ \(u_{t}\)_._
Using Assumption 2 and (1), we can write \(\mathbb{E}[e(x_{t}+\mu u_{t})|\sigma\{x_{t},u_{t}\}]=b(x_{t}+\mu u_{t})\) and \(\mathbb{E}[e(x_{t})|\sigma\{x_{t}\}]=b(x_{t})\), where, \(\left\|b(x_{t}+\mu u_{t})\right\|_{*}\) and \(\left\|b(x_{t})\right\|_{*}\)\(\leq B\) a.s. Similarly, \(\mathbb{E}[\left\|e(x_{t}+\mu u_{t})\right\|^{2}|\sigma\{x_{t},u_{t}\}]\leq \mathbb{V}^{2}\) and \(\mathbb{E}[\left\|e(x_{t})\right\|^{2}|\sigma\{x_{t}\}]\leq\mathbb{V}^{2}\) a.s. For an unbiased oracle \(B=0\).
**Remark 2**: _Note that, most recent literature on zeroth-order stochastic optimization computes function value at two separate points \(x_{t}\) and \(x_{t}+\mu u_{t}\) under the assumption that the stochastic parameters \(e(x_{t})\) and \(e(x_{t}+\mu u_{t})\) are the same. For many applications, this is rather a stringent assumption. In this letter, we avoid such an assumption, which in turn leads to significant deviation in the properties of approximated subgradient and the pertinent properties will be discussed in the ensuing section._
## V Main Result
In this section we discuss the properties of approximated subgradient, almost sure convergence and the finite time analysis. Before proceeding further, first define \(\mathcal{F}_{t}=\sigma\{x_{l}|1\leq l\leq t\}\)\(\forall\ t\in\mathbb{N}\). Hence we get a filtration such as \(\mathcal{F}_{1}\subseteq\mathcal{F}_{2}\subseteq\cdots\subseteq\mathcal{F}_{t}\). Observe that \(\tilde{g}(t-1)\) is \(\mathcal{F}_{t}\) measurable in view of (4) and also the Bergman divergence \(\mathbb{D}_{R}(x,x_{t})\) (\(\forall\ x\in\mathbb{X}\)) is \(\mathcal{F}_{t}\) measurable. Define another filtration as \(\{\mathcal{G}_{t}\}_{t\geq 1}\) such that \(\mathcal{G}_{t-1}=\mathcal{F}_{t}\), which will be helpful in the subsequent analysis.
### Properties of Approximated Subgradient
The analysis in this subsection borrows some steps from [5]. However, our analysis contains significant deviations, most notably, the result concerning the properties of approximated subgradient, which is derived using the noisy information of the function value.
**Lemma 3**: \[\mathbb{E}[\tilde{g}(t)|\mathcal{F}_{t}]=\nabla f_{\mu}(x_{t})+\mathrm{B}(t) \ \ \text{a.s.}\]
_where, \(\mathrm{B}(t)\) is \(\mathcal{F}_{t}\) measurable and satisfies \(\left\|\mathrm{B}(t)\right\|_{*}\leq\frac{2\kappa_{1}\mathrm{B}}{\mu}\sqrt{n}\) a.s and we have (a.s.)_
\[\mathbb{E}[\left\|\tilde{g}(t)\right\|_{*}^{2}|\mathcal{F}_{t}]\leq\] \[\begin{cases}\kappa_{1}^{2}(2L_{0}^{2}n+8\Big{(}\frac{\mathrm{V}}{ \mu}\Big{)}^{2}n)\ \ \text{if}\ \ f\in\mathcal{C}^{0,0}\\ \kappa_{1}^{2}(\frac{3}{4}L_{1}\mu^{2}\kappa_{2}^{4}(n+6)^{3}+3G^{2}(n+4)^{2}+12 \frac{\mathrm{V}^{2}}{\mu^{2}}n)\ \ \text{if}\ \ f\in\mathcal{C}^{1,1}.\end{cases}\]
Consider the \(\sigma\)-algebra \(\mathcal{H}_{t}\) defined as \(\mathcal{H}_{t}=\sigma(\{x_{t}\}_{k=1}^{k},u_{t})\). Consider the term
\[\mathbb{E}[\tilde{g}(t)|\mathcal{H}_{t}] \tag{5}\] \[= \nabla f_{\mu}(x_{t})+\mathbb{E}\Big{[}\frac{e(x_{t}+\mu u_{t})-e( x_{t})}{\mu}u_{t}|\sigma(x_{t},u_{t})\Big{]}\ \ \text{a.s.}\]
Note that because of Assumption 2, \(\mathbb{E}\Big{[}\frac{f(x_{t}+\mu u_{t})-f(x_{t})}{\mu}u_{t}|\mathcal{H}_{t} \Big{]}\)\(=\mathbb{E}\Big{[}\frac{f(x_{t}+\mu u_{t})-f(x_{t})}{\mu}u_{t}|\sigma(x_{t},u_{t}) \Big{]}\) a.s. and
\[\left\|\mathbb{E}\Big{[}\frac{e(x_{t}+\mu u_{t})-e(x_{t})}{\mu}u_{ t}|\sigma(x_{t},u_{t})\Big{]}\right\|_{*}\] \[\leq \left\|\frac{b(x_{t}+\mu u_{t})-b(x_{t})}{\mu}\right\|_{*}\left\|u_{ t}\right\|_{*}\leq\frac{2B}{\mu}\left\|u_{t}\right\|_{*}\ \ \text{a.s.}\]
Observe that \(\mathcal{F}_{t}\subseteq\mathcal{H}_{t}\) and hence by using the Towering property we get \(\mathbb{E}[\tilde{g}(t)|\mathcal{F}_{t}]=\mathbb{E}[\mathbb{E}[\tilde{g}(t)| \mathcal{H}_{t}]|\mathcal{F}_{t}]=\nabla f_{\mu}(x_{t})+\mathrm{B}(t)\), Where, \(\mathrm{B}(t)=\mathbb{E}\Big{[}\mathbb{E}\Big{[}\frac{e(x_{t}+\mu u_{t})-e(x_{t})}{ \mu}u_{t}|\sigma(x_{t},u_{t})\Big{]}|\mathcal{F}_{t}\Big{]}\) satisfies \(\left\|\mathrm{B}(t)\right\|_{*}\leq\frac{2\mathrm{B}\kappa_{1}}{\mu}\mathbb{E}[ \left\|u_{t}\right\|_{2}|\mathcal{F}_{t}]\leq\frac{2\mathrm{B}\kappa_{1}}{\mu} \sqrt{n}\) a.s. Consider the term \(\left\|\frac{f(x_{t}+\mu u_{t})+e(x_{t}+\mu u_{t})-f(x_{t})-e(x_{t})}{\mu}u_{t} \right\|_{*}^{2}\leq 2\kappa_{1}^{2}\left\|\frac{f(x_{t}+\mu u_{t})-f(x_{t})}{\mu}u_{t}\right\|_{ 2}^{2}+2\kappa_{1}^{2}\left\|\frac{e(x_{t}+\mu u_{t})-e(x_{t})}{\mu}u_
Hence, by applying Towering property in (6) we get,
\[\mathbb{E}[\left\|\tilde{g}(t)\right\|_{*}^{2}|\mathcal{F}_{t}]\leq 2\kappa^{2}L_{0 }^{2}n+8\kappa_{1}^{2}\frac{\mathrm{V}^{2}}{\mu^{2}}n\;\;\text{a.s.}\]
For \(f\in\mathcal{C}^{1,1}\), consider
\[\left\|\frac{f(x_{t}+\mu u_{t})+e(x_{t}+\mu u_{t})-f(x_{t})-e(x_{ t})}{\mu}u_{t}\right\|_{*}^{2} \tag{7}\] \[\leq 3\kappa_{1}^{2}\left\|\frac{f(x_{t}+\mu u_{t})-f(x_{t})-\mu\left< \nabla f(x_{t}),u_{t}\right>}{\mu}u_{t}\right\|_{2}^{2}\] \[+3\kappa_{1}^{2}\left\|\nabla f(x_{t})\right\|_{2}^{2}\left\|u_{t }\right\|_{2}^{4}+3\kappa_{1}^{2}\left\|\frac{e(x_{t}+\mu u_{t})-e(x_{t})}{\mu }u_{t}\right\|_{2}^{2}\]
Note that \(\left\|\frac{f(x_{t}+\mu u_{t})-f(x_{t})-\mu\left<\nabla f(x_{t}),u_{t}\right> }{\mu}u_{t}\right\|_{2}^{2}\leq\frac{L_{0}^{2}\mu^{2}\kappa_{0}^{2}}{4}\left\| u_{t}\right\|_{2}^{6}\) because of the definition of \(\mathcal{C}^{1,1}\). Taking conditional expectation on (7) we get the result.
Using the similar procedure we can extend the analysis for \(f\in\mathcal{C}^{2,2}\) and so on. It is important to note that because of consideration of more generic framework \(\mathbb{E}[\left\|\tilde{g}(t)\right\|_{*}^{2}]=\mathcal{O}(\frac{1}{\mu^{2}})\) for small values of \(\mu\), as opposed to [5] because of consideration of more general framework. This result plays a significant role in the subsequent discussion of this letter.
**Corollary 1**: _For unbiased oracle, \(\mathbb{E}[\tilde{g}_{t}|\mathcal{F}_{t}]=\nabla f_{\mu}(x_{t})\) a.s._
### Almost Sure Convergence of the ZOMD Algorithm
Based on the discussion in Lemma 3, we redefine properties of biased subgradient as follows
\[\tilde{g}(t)=g_{\delta}(t)+\mathrm{B}(t)+\zeta(t) \tag{8}\]
where, \(g_{\delta}(t)\in\partial_{\delta}f(x)\) at \(x=x_{t}\) and \(\mathrm{B}(t)\) is \(\mathcal{F}_{t}\) measurable and \(\left\|\mathbb{B}(t)\right\|_{*}\leq B_{1}\) a.s. Moreover, \(\mathbb{E}[\zeta(t)|\mathcal{F}_{t}]=0\) and \(\mathbb{E}[\left\|\tilde{g}(t)\right\|_{*}^{2}|\mathcal{F}_{t}]\leq\mathrm{K}\) a.s. Note that we can get an expression of \(\delta\), \(B_{1}\) and \(K\) from Lemma 3 depending on the properties of the noise and the smoothness of \(f\).
**Theorem 1**: _Under Assumptions 1 and 2 and \(\forall\)\(\epsilon>0\), for the iterate sequence generated by ZOMD algorithm \(\{x_{t}\}\), there exists a subsequence \(\{x_{t_{k}}\}\) such that \(f(x_{t_{k}})-f^{*}\leq\delta+B_{1}D+\epsilon\) a.s._
_For the iterate sequence \(\{z_{t}\}\), \(\exists\)\(t_{0}\in\mathbb{N}\) such that \(\forall\)\(t\geq t_{0}\) we have \(f(z_{t})-f^{*}\leq\delta+B_{1}D+\epsilon\) a.s._
Before proving the Theorem 1, we need the following three Lemmas which we discuss here.
**Lemma 4**: \(\sum\limits_{t\geq 1}\frac{\alpha(t)^{2}}{2\sigma_{R}}\left\|\tilde{g}(t) \right\|_{*}^{2}<\infty\)_. a.s._
\(\lim\limits_{t\to\infty}\mathbb{E}\Big{[}\sum\limits_{k=1}^{t}\frac{ \alpha(k)^{2}}{2\sigma_{R}}\left\|\tilde{g}(k)\right\|_{*}^{2}\Big{]}\leq \sum\limits_{t\geq 1}\frac{\alpha(t)^{2}}{2\sigma_{R}}\mathrm{K}<\infty\)__
By applying Fatou's Lemma we get
\[\mathbb{E}[\liminf\limits_{t\to\inf}\sum\limits_{k=1}^{t}\frac{\alpha(k)^{2} }{2\sigma_{R}}\left\|\tilde{g}(k)\right\|_{*}^{2}]\leq\liminf\limits_{t\to \infty}\mathbb{E}[\sum\limits_{k=1}^{t}\frac{\alpha(k)^{2}}{2\sigma_{R}}\left\| \tilde{g}(k)\right\|_{*}^{2}]\]
\(<\infty\). Hence we can say \(\sum\limits_{t\geq 1}\frac{\alpha(t)^{2}}{2\sigma_{R}}\left\|\tilde{g}(t)\right\|_{*}^{2}<\infty\) a.s.
**Lemma 5**: \(\exists\)\(C>0\) _such that \(\mathbb{E}[\left\|\zeta(t)\right\|_{*}^{2}|\mathcal{F}_{t}]<C\) a.s._
\(\lim\limits_{t\to\inf}\sum\limits_{k=1}^{t}\frac{\alpha(k)^{2}}{2\sigma_{R}} \left\|\tilde{g}(k)\right\|_{*}^{2}+\left\|\mathrm{B}(t)\right\|_{2}^{2}+\left\| g_{\delta}(t)\right\|_{2}^{2})\)._
Notice that \(\exists\)\(K_{1}>0\) such that \(\left\|g_{\delta}(t)\right\|\leq K_{1}\)\(\forall\)\(t\) because of compactness of \(\mathbb{X}\). Taking expectation on both sides of (9), we get (a.s.) \(\mathbb{E}[\left\|\zeta(t)\right\|_{*}^{2}|\mathcal{F}_{t}]\leq 3\kappa_{1}^{2}( \mathrm{K}+B_{1}^{2}+K_{1})\triangleq C\).
**Lemma 6**: \[\frac{\sum\limits_{t\geq 1}\alpha(t)\left<\zeta(t),x-x_{t}\right>}{\sum\limits_{t \geq 1}\alpha(t)}=0\;\;\text{a.s.}\;\;\forall\;\;x\in\mathbb{X}.\]
Define \(X(t)=\sum\limits_{k=1}^{t}\alpha(k)\left<\zeta(k),x-x_{k}\right>\). In the light of definition of \(\zeta(t)\) and since \(X(t)\) is \(\mathcal{F}_{t}\) measurable we get that \(\mathbb{E}[X(t)|\mathcal{F}_{t}]=X(t-1)\). Hence \(\{X(t),\mathcal{G}_{t}\}\) is a martingale. On the other hand, it can be seen that (a.s.)
\[\sum\limits_{t\geq 1}\frac{\mathbb{E}[\left\|X(t)-X(t-1)\right\|^{2}| \mathcal{F}_{t}]}{(\sum\limits_{k=1}^{t}\alpha(k))^{2}}\leq\] \[\sum\limits_{t\geq 1}\frac{\mathbb{E}[\alpha(t)^{2}\left\|\zeta(t) \right\|_{*}^{2}\left\|x-x_{t}\right\|^{2}|\mathcal{F}_{t}]}{(\sum\limits_{k=1}^{ t}\alpha(k))^{2}}\leq\sum\limits_{t\geq 1}\frac{\alpha(t)^{2}D^{2}C}{(\sum\limits_{k=1}^{t}\alpha(t))^{2}}<\infty.\]
The last line is because of Lemma 5 and the diameter of the compact set \(\mathbb{X}\). Hence by applying Lemma 1, the result follows.
Now we are in a position to prove the main result.
The application of first-order optimality condition to (4) yields
\[\alpha(t)\left<\tilde{g}(t),x-x_{t+1}\right>\geq-\left<\nabla R(x_{t+1})-\nabla R (x_{t}),x-x_{t+1}\right>\] \[\geq\mathbb{D}_{R}(x_{t+1},x_{t})+\mathbb{D}_{R}(x,x_{t+1})- \mathbb{D}_{R}(x,x_{t}). \tag{10}\]
The last inequality in (10) is due to (3). From the LHS of (10), we obtain
\[\alpha(t)\left<\tilde{g}(t),x-x_{t+1}\right>=\alpha(t)\left< \tilde{g}(t),x-x_{t}+x_{t}-x_{t+1}\right>\] \[\leq \alpha(t)\left<\tilde{g}(t),x-x_{t}\right>+\frac{\alpha(t)^{2}}{2 \sigma_{R}}\left\|\tilde{g}(t)\right\|_{*}^{2}+\frac{\sigma_{R}}{2}\left\|x_{t}-x _{t+1}\right\|^{2}.\]
The last inequality follows by applying the Young-Fenchel inequality to the term \(\alpha(t)\left<\tilde{g}(t),x_{t}-x_{t+1}\right>\). Hence from (10), we get that
\[\mathbb{D}_{R}(x,x_{t+1}) \tag{11}\] \[\leq \mathbb{D}_{R}(x,x_{t})+\alpha(t)\left<\tilde{g}(t),x-x_{t}\right>+ \frac{\alpha(t)^{2}}{2\sigma_{R}}\left\|\tilde{g}(t)\right\|_{*}^{2}.\]
Notice that \(\mathbb{D}_{R}(x_{t+1},x_{t})\geq\frac{\sigma_{
\[+\sum\limits_{k=1}^{t}\alpha(k)\Big{(}f^{*}-f(x_{k})+\delta+B_{1}D+ \langle\zeta(k),x^{*}-x_{k}\rangle\Big{)}.\]
Let \(\epsilon>0\) and define the sequence of stopping times \(\{T_{p}\}_{p\geq 1}\) and \(\{T^{p}\}_{p\geq 1}\) as follows:
\[\begin{array}{l}T_{1}=\inf\{f(x_{t})-f^{*}\geq\delta+B_{1}D+ \epsilon\}\\ T^{1}=\inf\{t\geq T_{1}|f(x_{t})-f^{*}<\delta+B_{1}D+\epsilon\}\\ \vdots\\ T^{p}=\inf\{t\geq T_{p}|f(x_{t})-f^{*}<\delta+B_{1}D+\epsilon\}\\ T_{p+1}=\inf\{t\geq T^{p}|f(x_{t})-f^{*}\geq\delta+B_{1}D+\epsilon\ \}.\end{array}\]
If \(\exists\ p\in\mathbb{N}\) such that infimum does not exist, we assume that \(T_{p}=\infty\) or \(T^{p}=\infty\).
Claim-1 - If \(T_{p}<\infty\), then \(T^{p}<\infty\) a.s. \(\forall\ p\in\mathbb{N}\).
Suppose, ad absurdum, \(\exists\ p_{0}\in\mathbb{N}\) such that \(T_{p_{0}}<\infty\) but \(T^{p_{0}}=\infty\) with probability (w.p.) \(\eta\). Let \(T_{p_{0}}=t_{0}\), then it implies that \(\forall\ t\geq t_{0}\), \(f(x_{t})-f^{*}\geq\delta+B_{1}D+\epsilon\) w.p. \(\eta\). From (13), we deduce that \(\forall\ t\geq t_{0}\) (w.p. \(\eta\))
\[\begin{array}{l}\mathbb{D}_{R}(x^{*},x_{t+1})\leq\mathbb{D}_{R}(x^{*},x_{t_ {0}})+\sum\limits_{k=t_{0}}^{t}\alpha(k)\Big{(}-\epsilon\\ +\langle\zeta(k),x^{*}-x_{k}\rangle\Big{)}+\sum\limits_{k=t_{0}}^{t}\frac{ \alpha(k)^{2}}{2\sigma_{R}}\left\|\tilde{g}(k)\right\|_{*}^{2}.\end{array} \tag{14}\]
Let \(t\rightarrow\infty\). Notice that in view of Lemma 6, \(\sum\limits_{k\geq t_{0}}\alpha(k)(-\epsilon+\langle\zeta(k),x^{*}-x_{k} \rangle)=-\infty\) and also in view of Lemma 4\(\sum\limits_{k\geq t_{0}}\frac{\alpha(k)^{2}}{2\sigma_{R}}\left\|\tilde{g}(k) \right\|_{*}^{2}<\infty\) a.s. Hence, from (14) we get \(\limsup\mathbb{D}_{R}(x^{*},x_{t})=-\infty\) w.p. atleast \(\eta\), which implies \(\eta=0\). Thus, \(T^{p_{0}}<\infty\) a.s. This establishes Claim-1. Hence \(\exists\ \{x_{t_{k}}\}\subseteq\{x_{t}\}\) such that \(f(x(t_{k}))-f^{*}\leq\delta+B_{1}D+\epsilon\) a.s.
From the definition of convexity of \(f\) we get that \(\sum\limits_{k=1}^{t}\alpha(k)f(z_{t})\leq\sum\limits_{j=1}^{t}\alpha(j)f(x_{ j})\). Hence, from (13) we get that
\[\begin{array}{l}\mathbb{D}_{R}(x^{*},x_{t+1})\leq\mathbb{D}_{R}(x^{*},x_{1 })+\sum\limits_{k=1}^{t}\frac{\alpha(k)^{2}}{2\sigma_{R}}\left\|\tilde{g}(k) \right\|_{*}^{2}\end{array} \tag{15}\]
In a similar fashion, define the sequence of stopping times \(\{\bar{T}_{p}\}_{p\geq 1}\) and \(\{\bar{T}^{p}\}_{p\geq 1}\) as follows:
\[\begin{array}{l}\bar{T}_{1}=\inf\{f(z_{t})-f^{*}\geq\delta+B_{1}D+\epsilon\} \\ \bar{T}^{1}=\inf\{t\geq\bar{T}_{1}|f(z_{t})-f^{*}<\delta+B_{1}D+\epsilon\}\\ \vdots\\ \bar{T}^{p}=\inf\{t\geq T_{p}|f(z_{t})-f^{*}<\delta+B_{1}D+\epsilon\ \}\\ \bar{T}_{p+1}=\inf\{t\geq T^{p}|f(z_{t})-f^{*}\geq\delta+B_{1}D+\epsilon\ \}.\end{array}\]
If \(\bar{T}_{p}<\infty\) then \(\bar{T}^{p}<\infty\) a.s. The reason is similar to the proof of Claim-1.
Claim- 2: \(\exists\ p_{0}\in\mathbb{N}\) such that \(\bar{T}_{p_{0}}=\infty\) a.s. If this claim is true, it proves the second part of the Theorem.
Otherwise, \(\forall\ t_{1}\in\mathbb{N}\), \(\exists\ t>t_{1}\) such that \((f(z_{t})-f^{*})\geq\delta+B_{1}D+\epsilon\) with some probability \(\eta\). Hence, from (15) we get that (14) holds for that \(t\). Letting \(t\rightarrow\infty\) and using similar arguments we get \(\liminf\limits_{t\rightarrow\infty}\mathbb{D}_{R}(x^{*},x_{t})=-\infty\) w.p. atleast \(\eta\), that means \(\eta=0\). Hence, the Claim-2 holds.
**Corollary 2** (ZOMD with unbiased oracle): _For all \(\epsilon>0\), \(\exists\ t_{0}\in\mathbb{N}\) such that \(\forall\ t\geq t_{0}\)_
\[f(z_{t})-f^{*}\leq\begin{cases}\mu L_{0}\sqrt{n}+\epsilon\ \ \text{if}\ f\in\mathcal{C}^{0,0}\\ \frac{\mu^{2}}{2}L_{1}n+\epsilon\ \ \text{if}\ f\in\mathcal{C}^{1,1}.\end{cases}\text{ a.s.}\]
Corollary 2 shows that by selecting a very small \(\mu\), the function value of iterate sequence converges to a small neighbourhood of the optimal value. Notice that, \(\mathbb{E}[\left\|\tilde{g}(t)\right\|_{*}^{2}]=\mathcal{O}(\frac{1}{\mu^{2}})\) for small value of \(\mu\), hence we cannot make \(\mu\) arbitrarily small. However, an analytic presentation on how small \(\mu\) influences the algorithm's performance will be discussed in the ensuing section.
**Corollary 3** (ZOMD with biased oracle): _For all \(\epsilon>0\)\(\exists\ t_{0}\in\mathbb{N}\) such that \(\forall\ t\geq t_{0}\) the following holds_
\[f(z_{t})-f^{*}\leq\begin{cases}\mu L_{0}\sqrt{n}+\frac{2\epsilon_{1}B}{\mu} \sqrt{n}D+\epsilon\ \ \text{if}\ f\in\mathcal{C}^{0,0}\\ \frac{\mu^{2}}{2}L_{1}n+\frac{2\epsilon_{1}B}{\mu}\sqrt{n}D+\epsilon\ \ \text{if}\ f\in\mathcal{C}^{1,1}.\end{cases}\text{ a.s.}\]
_As Corollary 3 shows, we can not make \(\mu\) very small for biased oracle. Nonetheless, an optimal \(\mu^{*}\) can be calculated using Corollary 3 to show almost sure convergence to an optimal neighbourhood around the optimal value._
### Concentration Bound - Finite Time Analysis
In the next Theorem, we will show that a very small \(\mu\) actually deteriorates the convergence rate of the ZOMD algorithm.
**Theorem 2**: _Consider any \(t_{0}\in\mathbb{N}\) such that \(\sum\limits_{k=1}^{t_{0}}\alpha(k)\geq\frac{3}{\epsilon}D\). Then \(\forall\ t\geq t_{0}\) the following holds._
\[\begin{array}{l}\mathbb{P}(f(z_{t})-f^{*}\geq\delta+B_{1}D+\epsilon)\\ \leq\!\frac{3\text{K}}{\epsilon}\frac{\sum\limits_{k=1}^{t}\alpha(k)^{2}}{ \epsilon}+\frac{9CD}{\epsilon^{2}}\frac{\sum\limits_{k=1}^{t}\alpha(k)^{2}}{( \sum\limits_{k=1}^{t}\alpha(k))^{2}}.\end{cases} \tag{16}\]
Using the first-order optimality condition as in the proof of Theorem 1, we get,
\[\begin{array}{l}f(z_{t})-f^{*}\leq\delta+B_{1}D+\frac{\mathbb{D}_{R}(x^{*},x_{1 })}{\sum\limits_{k=1}^{t}\alpha(k)}\\ +\frac{\sum\limits_{k=1}^{t}\alpha(k)\,\langle\zeta(k),x^{*}-x_{k}\rangle}{ \sum\limits_{k=1}^{t}\alpha(k)}+\frac{\sum\limits_{k=1}^{t}\alpha(k)^{2}\left\| \tilde{g}(k)\right\|_{*}^{2}}{2\sigma_{R}\sum\limits_{k=1}^{t}\alpha(k)}.\end{array} \tag{17}\]
Define \(X(t)=\sum\limits_{k=1}^{t}\alpha(k)\,\langle\zeta(k),x^{*}-x_{k}\rangle\) and \(Y(t)=\sum\limits_{k=1}^{t}\frac{\alpha(k)^{2}}{2\sigma_{R}}\left\|\tilde{g}(k) \right\|_{*}^{2}\). It can be seen from the definition of \(\tilde{g}(t)\) that \(\mathbb{E}[Y(t)|\mathcal{F}_{t}]=Y(t-1)+\frac{\alpha(t)^{2}}{2\sigma_{R}}\left\| \tilde{g}(t)\right\|_{
been shown in the proof of Lemma 6 that \(\{X(t),\mathcal{G}_{t}\}\) is a martingale, which implies \(\left\{\left\|X(t)\right\|^{2},\mathcal{G}_{t}\right\}\) is a sub-martingale \(\forall\ t\geq t_{0}\) and in view of Assumption 1, \(t_{0}<\infty\). Consider any \(t>t_{0}\) and from (17) if \(f(z_{t})-f^{*}\geq B_{1}D+\delta+\epsilon\) then atleast one of the following holds.
\(X(t)\geq\frac{\epsilon}{3}\sum\limits_{k=1}^{t}\alpha(k)\) or, \(Y(t)\geq\frac{\epsilon}{3}\sum\limits_{k=1}^{t}\alpha(k)\). That implies that \(\forall\ t\geq t_{0}\)
\[\begin{split}&\mathbb{P}(f(z_{t})-f^{*}\geq\delta+B_{1}D+ \epsilon)\\ \leq&\mathbb{P}(X(t)\geq\frac{\epsilon}{3}\sum \limits_{k=1}^{t}\alpha(k))+\mathbb{P}(Y(t)\geq\frac{\epsilon}{3}\sum\limits_ {k=1}^{t}\alpha(k)).\end{split} \tag{18}\]
Note that
\(\mathbb{P}(Y(t)\geq\frac{\epsilon}{3}\sum\limits_{k=1}^{t}\alpha(k))\ \leq\ \mathbb{P}(\max\limits_{1\leq j\leq t}Y(j)\geq\frac{ \epsilon}{3}\sum\limits_{k=1}^{t}\alpha(k)))\). Hence, by applying Lemma 2 we arrive at
\[\mathbb{P}(Y(t)\geq\frac{\epsilon}{3}\sum\limits_{k=1}^{t}\alpha(k))\leq\frac{ 3}{\epsilon}\frac{E[Y(t)]}{\sum\limits_{k=1}^{t}\alpha(k)}\leq\frac{3\mathrm{K }}{\epsilon}\frac{\sum\limits_{k=1}^{t}\alpha(k)^{2}}{\sum\limits_{k=1}^{t} \alpha(k)}. \tag{19}\]
and similarly,
\[\begin{split}&\mathbb{P}(X(t)\geq\frac{\epsilon}{3}\sum \limits_{k=1}^{t}\alpha(k))\leq\mathbb{P}(\|X(t)\|^{2}\geq\frac{\epsilon^{2}}{ 9}(\sum\limits_{k=1}^{t}\alpha(k))^{2})\\ &\leq\frac{9}{\epsilon^{2}}\frac{\mathbb{E}\left\|X(t)\right\|^{ 2}}{(\sum\limits_{k=1}^{t}\alpha(k))^{2}}\leq\frac{9CD}{\epsilon^{2}}\frac{ \sum\limits_{k=1}^{t}\alpha(k)^{2}}{(\sum\limits_{k=1}^{t}\alpha(k))^{2}}. \end{split} \tag{20}\]
Hence, by plugging (19) and (20) into (18), we get (16).
**Remark 3**: _Notice that both \(\mathrm{K}\) and \(C\) are \(\mathcal{O}(\frac{1}{\mu^{2}})\) from Lemma 3, this implies that an arbitrary small \(\mu\) makes the convergence of the function value to the neighbourhood of the optimal solution slower. Hence, there is a trade-off between accuracy of the convergence to the optimal value and convergence speed of the algorithm in the choice of \(\mu\). In the next Corollary, we capture this in detail._
**Corollary 4**: _For any \(\epsilon>0\) and a confidence level \(0<p<1\), let \(p_{1}=1-p\). Define \(t_{1}\) such that \(\forall\ t\geq t_{1}\ \sum\limits_{k=1}^{t}\alpha(k)\geq\frac{6\mathrm{K}}{ \mathrm{er}_{1}}\sum\limits_{k=1}^{t}\alpha(k)^{2}\) and \((\sum\limits_{k=1}^{t}\alpha(k))^{2}\geq\frac{18CD}{\epsilon^{2}p_{1}}\sum \limits_{k=1}^{t}\alpha(k)^{2}\). Then \(\forall\ t\geq\max\{t_{0},t_{1}\}\) we obtain_
\[\mathbb{P}(f(z_{t})-f^{*}<\delta+B_{1}D+\epsilon)\geq p.\]
Notice that \(t_{1}<\infty\) due to Assumption 1.
## VI Conclusion
In this letter, we proved almost sure convergence of function value of ZOMD algorithm to the neighbourhood of the optimal value. Further, we derive the concentration inequality which provides bounds on how the function value of the iterates of ZOMD algorithm deviates from the neighbourhood in any finite time. This analysis sheds some new insight to the field of zeroth-order optimization. The future path of research will attempt to demonstrate a higher convergence rate using a variance reduction technique.
|
2305.02548 | Two-particle bound states on a lattice | Two-particle lattice states are important for physics of magnetism,
superconducting oxides, and cold quantum gases. The quantum-mechanical lattice
problem is exactly solvable for finite-range interaction potentials. A two-body
Schroedinder equation can be reduced to a system of linear equations whose
numbers scale with the number of interacting sites. For the simplest cases such
as on-site or nearest-neighbor attractions, many pair properties can be derived
analytically, although final expressions can be quite complicated. In this
work, we systematically investigate bound pairs in one-, two-, and
three-dimensional lattices. We derive pairing conditions, plot phase diagrams,
and compute effective masses, radii, and energies. Along the way, we analyze
nontrivial physical effects such as light pairs and the dependence of binding
thresholds on pair momenta. At the end, we discuss the preformed-pair mechanism
of superconductivity and stability of many-pair systems against phase
separation. The paper is a combination of original work and pedagogical
tutorial. | Pavel E. Kornilovitch | 2023-05-04T05:01:16Z | http://arxiv.org/abs/2305.02548v2 | # Two-particle bound states on a lattice
###### Abstract
Two-particle lattice states are important for physics of magnetism, superconducting oxides, and cold quantum gases. The quantum-mechanical lattice problem is exactly solvable for finite-range interaction potentials. A two-body Schrodinder equation can be reduced to a system of linear equations whose numbers scale with the number of interacting sites. For the simplest cases such as on-site or nearest-neighbor attractions, many pair properties can be derived analytically, although final expressions can be quite complicated. In this work, we systematically investigate bound pairs in one-, two-, and three-dimensional lattices. We derive pairing conditions, plot phase diagrams, and compute effective masses, radii, and energies. Along the way, we analyze nontrivial physical effects such as light pairs and the dependence of binding thresholds on pair momenta. At the end, we discuss the preformed-pair mechanism of superconductivity and stability of many-pair systems against phase separation. The paper is a combination of original work and pedagogical tutorial.
## I Introduction
In 1986, Daniel Mattis published a review on few-body lattice problems.[1] The motivation for that work was deeper understanding of magnetism. Many successful models of magnetism are formulated in terms of localized spins arranged in regular lattices.[2] Exact results obtained for two and three interacting magnons provided valuable physical insights. Mattis also mentioned other areas that could benefit from similar analysis, specifically excitons and superconductivity ('\(\ldots\) do Cooper pairs have bound states of "Cooper molecules"?), Ref. [[1], p. 362]. At about the same time, high-temperature superconductivity (HTSC) was discovered by Bednorz and Muller,[3] which generated enormous interest in unconventional theories of superconductivity. The unusual HTSC properties such as a low carrier density, a short coherence length,[4] and a Bose-gas-like scaling of the magnetic penetration depth with \(T_{c}\)[5; 6] revived the _pre-BCS_ proposal[7; 8; 9; 10] that superconductivity could be a Bose-Einstein condensation (BEC) of charged bosons formed by binding of electrons or holes into real-space pairs. Many properties of superconducting oxides have been shown to be well described by charged Bose-gas phenomenology.[11; 12; 13; 14; 15; 16] More recently, normal-state pairs were observed in iron-based superconductors[17; 18] and in the shot noise in copper oxide junctions.[19] The debates about the _preformed pairs_ mechanism of superconductivity and its relevance to HTSC and pseudogap physics continue today.[20]
Physics of real-space pairs is tightly linked with the pairing mechanism. In superconducting oxides, main candidates are: strong electron-phonon interaction,[21] the Jahn-Teller effect,[3; 22; 3; 23; 24] and spin fluctuations,[25] although other mechanisms and combinations have been proposed.[11; 26; 27; 28; 29] However, the complexity of the problem suggests splitting one big puzzle into smaller ones. By postulating a simple phenomenological attraction of _some_ kind, one can investigate the BEC-BCS crossover,[16; 30; 31] the physics of pseudogap, the role of anisotropy,[32] phase separation,[33; 34; 35] electrodynamics, and other nontrivial topics, all without arguing about the specific nature of pairing interaction.
An added benefit of this approach is that the two-particle lattice problem is exactly solvable for a wide class of non-retarded interaction potentials, which provides a welcome rigor and often an analytical formula. The binding threshold, pair energy, dispersion, effective mass \(m_{p}^{*}\), effective radius \(r_{p}^{*}\), and wave function: all can be determined without approximations. These properties provide important physical insights. For example, mass anisotropy of _pairs_ differs from the bare anisotropy of the member particles, see, e.g., Sec. III.6 and IX.3. This relationship may be helpful in understanding the anisotropy of transport and electromagnetic properties of HTSC. Additionally, pair wave function is directly proportional to a macroscopic superconducting order parameter.[36] In particular, both share the same orbital symmetry. Knowledge of the exact pair wave function helps elucidate the relationship between the order parameter and other elements of the system. For example, it was proposed[37] that correlation-induced diagonal hopping may produce a \(d\)-symmetric ground-state pair.
A striking application of two-particle properties to unconventional superconductivity comes from analyzing the BEC temperature of pairs:
\[T_{\rm BEC}=C\,\frac{\hbar^{2}}{m_{p}^{*}}\,n_{p}^{2/3}\,. \tag{1}\]
Here, \(C\) is a numerical coefficient and \(n_{p}\) is the pair number density assumed to be known from chemical composition. \(m_{p}^{*}\) is the pair mass provided by an exact solution. Consider system's evolution with \(n_{p}\) shown in Fig. 1. (A) At low \(n_{p}\), the average distance between pairs is larger than their size, and Eq. (1) applies. (B) With \(n_{p}\) increasing, the system reaches "close-packing" when the pairs begin to overlap. The corresponding density is approximately given by an inverse pair volume, \(n_{\rm cp}=\Omega_{p}^{-1}=(r_{p}^{*})^{-3}\). The pairs interact strongly but the phase transition is still of BEC type with a transition
temperature approximately given by
\[T_{\rm BEC}(n_{\rm cp})=T_{\rm BEC}^{*}=C\,\frac{\hbar^{2}}{m_{p}^{*}}\frac{1}{r_ {p}^{*2}}\propto\frac{1}{m_{p}^{*}\,r_{p}^{*2}}\,. \tag{2}\]
(C) Upon further increase of density, the pairs overlap much more, with the average distance smaller than the pair size. The constituent fermions begin to form a Fermi sea,[38] and the phase transition gradually shifts to the BCS type. Equation (1) no longer applies and \(T_{c}\) begins to fall. Thus, the "maximal attainable" transition temperature is given by Eq. (2).[39; 40; 32] Remarkably, the latter contains only the properties of a single pair, with both \(m_{p}^{*}\) and \(r_{p}^{*}\) supplied by the exact solution. Then, various effects on \(T_{\rm BEC}^{*}\) may be studied rigorously. This methodology was applied, for example, to understanding the effects of interlayer hopping in Ref. [32]. This topic is the subject of Sec. IX.3.
Early developments of short-range attractive models of superconductivity[41; 42; 43] were summarized by Micnas, Ranninger, and Robaszkiewicz.[11] Those authors considered two groups of models. The first group included _on-site_ attraction, and was essentially derived from the attractive Hubbard model. Those models were useful to follow the BCS-BEC crossover, to understand system's thermodynamics, electrodynamics and other properties. At the same time, those models were too simplistic to describe real superconducting materials. It is hard to come up with a physical mechanism strong enough to overcome Coulomb repulsion between charge carriers and still keep the pairs mobile. The second group of models included _intersite_ attraction and were more realistic. Moving attraction to finite distances allowed keeping a strong repulsive core (Hubbard repulsion) that could model a screened Coulomb repulsion. In addition, intersite models add the possibility of antiferromagnetic ordering, of \(p\)- and \(d\)-pairing, and make a better connection to lattice models of magnetism. Such models with on-site repulsion and intersite attraction will be the main focus of the present work. They will be referred hereafter as "\(UV\) models". These models continue to be actively investigated today.[44; 45; 46; 47; 48; 56]
Even nearest-neighbor attraction is an approximation to inter-electron or inter-hole attractions of real materials. The latter results from mediation by an intermediary subsystem such as phonons, and typically extend beyond nearest neighbors.[57; 58] Such two-particle models are still exactly solvable but the complexity of solution increases sharply with the range. One example is analyzed in Sec. V.4. What is important, however, is that \(UV\) models comprise the two most essential elements: _some_ repulsion \(U\) representing strong correlations and _some_ attraction \(V\) representing a mediating subsystem. As such, the \(UV\) potential is the simplest representative of an entire class of potentials that possess common properties. For example, all such models have a threshold function \(V_{\rm cr}(U)\) that separates bound and unbound states and saturates in the \(U\to\infty\) limit. Effects of other factors, such as anisotropy or pair motion, on the delicate balance between the two forces can be studied within the \(UV\) model, at least qualitatively.
Intersite attraction also appear within another popular theory of unconventional superconductivity based on the \(t\)-\(J\) model.[59; 60; 61; 62; 33; 50; 63; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62] In the dilute limit (many holes and few electrons), the exchange interaction is equivalent to a nearest-neighbor attraction and the system reduces to a gas of bound pairs. As the \(t\)-\(J\) model has mostly been studied on square lattices, this case will be covered in Sec. IV.
In the last two decades, optical lattices and cold atoms emerged as another physical realization of models with
Figure 1: Evolution of BEC of real-space pairs with density. \(n_{\rm cp}=\Omega_{p}^{-1}\) is the close-packing density, \(T_{c}\) is a “critical temperature of phase transition”. \(T_{\rm BEC}^{*}\) is determined by the mass and size of a single pair, see Eq. (2). The symmetry of a macroscopic order parameter is defined by the pair wave function.[36]
short-range attraction.[63; 64; 65; 66] Whereas in the cuprates the \(UV\) model is an approximation to real inter-particle potentials, in optical lattices it can be precisely engineered and studied in pure form. The onsite interaction is controlled via Feshbach resonances[67] and can be made either repulsive[68] or attractive.[69] The intersite interaction can be controlled either by exciting dressed Rydberg atoms to large quantum numbers[70] or via proper alignment of dipolar quantum gases.[71] Precise manipulation of few particles in optical traps[72; 73; 74] and BEC of _molecules_ in an attractive Fermi gas[75] have been demonstrated. Local pairing can now be measured directly using gas microscopy.[76; 77]
Bound states of two spin waves also appear in models of quantum magnetism.[78; 79]
Basic properties of two-body states in \(UV\) models were discussed by Micnas, Ranninger, and Robaszkiewicz.[11] In particular, conditions for pair formation and binding energies were found for 1D, 2D, and 3D quasi-cubic \(UV\) models. Since then, the field has seen several developments. For example, it was realized that threshold of pair formation depends on the pair momentum.[80]\(UV\) models have been solved for triangular,[81] tetragonal,[32] BCC,[82] and FCC[83] lattices. New results obtained for cubic Watson integrals[84; 85; 86; 87; 88; 89; 90; 91] suggest revisiting the cubic models for deeper analysis. Given also that the HTSC puzzle remains largely unsolved and the recent observation of real-space pairs in the normal state,[19] a fresh review of two-body states on a lattice seems to be worthwhile.
The purpose of this work is twofold. First, we collect results obtained for different lattices[92; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 92; 93; 94; 95] and develop them systematically in one place and with unified formalism. Second, we present a large body of new results that were developed over the last 30 years but remained unpublished until now. The paper has a pedagogical side as well, as we included some textbook material to make exposition self-contained. In particular, we collected in Appendixes available information on lattice Green's functions in different geometries. In general, we find the two-particle problem to be an excellent primer on non-relativistic quantum mechanics that teaches wavefunction symmetry, emergence of complex band dispersions, the Galilean invariance and lack thereof, multi-component wave functions, scattering states, and other topics.
The paper is organized as follows. After formulating the general theory (Sec. II), we analyze bound states in the attractive Hubbard model in different dimensions (Sec. III), and then the \(UV\) model in 1D (Sec. IV), 2D (Sec. V, VI, and VII), and 3D (Sec. VIII and IX). Section X briefly summarizes two-particle problems not covered in this work in detail. Section XI is devoted to the important topic of stability of pairs against phase separation. Section XII.1 summarizes the most important common physical properties of lattice bound states. Sections IX.3 and XII.2 discuss relevance to HTSC. The extensive Appendixes contain the necessary mathematics. Multi-orbital models[97; 98; 99] are excluded from this review because they are not yet ready for systematic exposition.
## II General Theory
### Model
We begin with formulating an underlying model. We restrict consideration to Bravais lattices only. Two-body problems in more complex lattices can also be solved[97; 98; 39] but the resulting algebraic expressions are more cumbersome. The inversion symmetry of Bravais lattices allows for easier separation of singlet and triplet pair states, which greatly reduces the complexity of equations. Second, we will consider only finite-range potentials. The two-particle Schrodinger equation reduces to an algebraic matrix equation with a size equal to the number of nonzero elements in the inter-particle potential. An infinite-range potential will lead to an infinitely large matrix and the advantages of an exact solution will be lost. Even for finite ranges, the complexity of the matrix solution grows rapidly with the interaction range. In practice, only solutions with short-range potentials are simple enough to produce analytical results. Most space in this paper will be devoted to models with on-site and nearest-neighbor interactions, although one case of a longer range interaction will be discussed in Sec. V.4. A typical \(UV\) model is illustrated in Fig. 2. Third, we will consider only spin-\(\frac{1}{2}\) fermions. Their coordinate wave function can be either symmetric or antisymmetric; so
Figure 2: Illustration of a \(UV\) model on the square lattice with nearest-neighbor attraction and hopping. The cartoon in the center illustrates the \(s\)-symmetric wave function in which the on-site amplitude is suppressed relative to the inter-sites ones.
the solutions will also cover spin-0 bosons and spinless fermions. Finally, bound states will be the main focus, and scattering states will not be discussed.
A second-quantized model Hamiltonian reads
\[\hat{H}=-\sum_{\mathbf{m},\mathbf{\bar{b}},\sigma}t_{\mathbf{\bar{b}}}\,c^{ \dagger}_{\mathbf{m+\bar{b}},\sigma}c_{\mathbf{m}\sigma}+\frac{U}{2}\sum_{ \mathbf{m}}\hat{n}_{\mathbf{m}}\left(\hat{n}_{\mathbf{m}}-1\right)+\frac{1}{2 }\sum_{\mathbf{m},\mathbf{b}}V_{\mathbf{b}}\,\hat{n}_{\mathbf{m+b}}\,\hat{n}_{ \mathbf{m}}\,. \tag{3}\]
Here \(\hat{n}_{\mathbf{m}}=\sum_{\sigma}\hat{n}_{\mathbf{m}\sigma}=c^{\dagger}_{ \mathbf{m}\uparrow}c_{\mathbf{m}\uparrow}+c^{\dagger}_{\mathbf{m}\downarrow}c _{\mathbf{m}\downarrow}\) is the fermion number operator on site \(\mathbf{m}\). \(\mathbf{\bar{b}}\) and \(\mathbf{b}\) are hopping and interaction neighbor vectors, respectively. The first term in Eq. (3) is kinetic energy of free fermions defined by spin-independent hopping integrals \(t_{\mathbf{\bar{b}}}\). We will invariably consider only negative hopping integrals, which is reflected by explicitly writing a negative sign in front of the sum. Therefore, \(t_{\mathbf{\bar{b}}}>0\) and \(t_{-\mathbf{\bar{b}}}=t_{\mathbf{\bar{b}}}\) for all \(\mathbf{\bar{b}}\). The energy of atomic orbitals is used as a zero energy, and the corresponding term is not written. The second term is the on-site (Hubbard) repulsion with amplitude \(U\). Because of the property \(\hat{n}_{\sigma}^{2}=\hat{n}_{\sigma}\), it is equivalent to the usual form \(U\sum_{\mathbf{m}}\hat{n}_{\mathbf{m}\uparrow}\hat{n}_{\mathbf{m}\downarrow}\). The last term in Eq. (3) represents nearest-neighbor interaction, where \(V_{-\mathbf{b}}=V_{\mathbf{b}}\). The prefactor \(\frac{1}{2}\) is included to compensate double-counting. Equation (3) is written in the \(UV\) form to emphasize the special role of on-site interaction \(U\). For most of the paper, we will set the lattice spacing to one, \(a=1\), only restoring \(a\) in places where it provides additional physical insights.
### Unsymmetrized solution
In first quantization, two-body wave function \(\Psi(\mathbf{m}_{1},\mathbf{m}_{2})\) must satisfy the Schrodinger equation:
\[-\!\sum_{\mathbf{\bar{b}}}t_{\mathbf{\bar{b}}}\left[\Psi(\mathbf{m}_{1}+ \mathbf{\bar{b}},\mathbf{m}_{2})+\Psi(\mathbf{m}_{1},\mathbf{m}_{2}+\mathbf{ \bar{b}})\right]\!+\!U\,\delta_{\mathbf{m}_{1},\mathbf{m}_{2}}\Psi(\mathbf{m} _{1},\mathbf{m}_{2})\!+\!\sum_{\mathbf{b}}V_{\mathbf{b}}\,\delta_{\mathbf{m}_{ 1}-\mathbf{m}_{2},\mathbf{b}}\Psi(\mathbf{m}_{1},\mathbf{m}_{2})=E\,\Psi( \mathbf{m}_{1},\mathbf{m}_{2}), \tag{4}\]
where \(E\) is the total energy. Equation (4) is converted in momentum space using the transformation
\[\Psi(\mathbf{m}_{1},\mathbf{m}_{2}) =\frac{1}{N}\sum_{\mathbf{k}_{1}\mathbf{k}_{2}}\psi_{\mathbf{k}_{ 1}\mathbf{k}_{2}}\,e^{i\mathbf{k}_{1}\mathbf{m}_{1}+i\mathbf{k}_{2}\mathbf{m} _{2}}\,, \tag{5}\] \[\psi_{\mathbf{k}_{1}\mathbf{k}_{2}} =\frac{1}{N}\sum_{\mathbf{m}_{1}\mathbf{m}_{2}}\Psi(\mathbf{m}_{ 1},\mathbf{m}_{2})\,e^{-i\mathbf{k}_{1}\mathbf{m}_{1}-i\mathbf{k}_{2}\mathbf{m} _{2}}\,, \tag{6}\]
where \(N\) is the total number of lattice sites. A transformed equation reads
\[\left(E-\varepsilon_{\mathbf{k}_{1}}-\varepsilon_{\mathbf{k}_{2}}\right)\psi_ {\mathbf{k}_{1}\mathbf{k}_{2}}=U\frac{1}{N}\sum_{\mathbf{q}}\psi_{\mathbf{q}, \mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{q}}+\frac{1}{N}\sum_{\mathbf{b}\mathbf{q }}V_{\mathbf{b}}\,e^{i(\mathbf{q}-\mathbf{k}_{1})\mathbf{b}}\,\psi_{\mathbf{q},\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{q}}\,. \tag{7}\]
Here,
\[\varepsilon_{\mathbf{k}}=-\sum_{\mathbf{\bar{b}}}t_{\mathbf{\bar{b}}}\,e^{i \mathbf{k}\mathbf{\bar{b}}} \tag{8}\]
is the one-particle dispersion of the model. The right-hand-side of Eq. (7) is a linear combination of quantities
\[\Phi_{\mathbf{0}}(\mathbf{k}_{1}+\mathbf{k}_{2}) =\Phi_{\mathbf{0}}(\mathbf{P})\equiv\frac{1}{N}\sum_{\mathbf{q} }\psi_{\mathbf{q},\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{q}}=\frac{1}{N}\sum_ {\mathbf{q}}\psi_{\mathbf{q},\mathbf{P}-\mathbf{q}}\,, \tag{9}\] \[\Phi_{\mathbf{b}}(\mathbf{k}_{1}+\mathbf{k}_{2}) =\Phi_{\mathbf{b}}(\mathbf{P})\equiv\frac{1}{N}\sum_{\mathbf{q} }e^{i\mathbf{q}\mathbf{b}}\,\psi_{\mathbf{q},\mathbf{k}_{1}+\mathbf{k}_{2}- \mathbf{q}}=\frac{1}{N}\sum_{\mathbf{q}}e^{i\mathbf{q}\mathbf{b}}\,\psi_{ \mathbf{q},\mathbf{P}-\mathbf{q}}\,, \tag{10}\]
where
\[\mathbf{P}=\mathbf{k}_{1}+\mathbf{k}_{2}\,, \tag{11}\]
is the total lattice momentum of the two fermions. It is critically important for the existence of an exact solution that \(\Phi_{\mathbf{0}}\) and \(\Phi_{\mathbf{b}}\) are functions of only _one_ argument \(\mathbf{P}\) rather than two separate arguments \(\mathbf{k}_{1}\) and \(\mathbf{k}_{2}\). Utilizing the definitions, Eqs. (9) and (10), the wave function is expressed from Eq. (7) as follows
\[\psi_{\mathbf{k}_{1}\mathbf{k}_{2}}=\frac{U}{E-\varepsilon_{\mathbf{k}_{1}}- \varepsilon_{\mathbf{k}_{2}}}\,\Phi_{\mathbf{0}}(\mathbf{P})+\sum_{\mathbf{b}}V _{\mathbf{b}}\,\frac{e^{-i\mathbf{k}_{1}\mathbf{b}}}{E-\varepsilon_{\mathbf{k}_{1} }-\varepsilon_{\mathbf{k}_{2}}}\,\Phi_{\mathbf{b}}(\mathbf{P})\,. \tag{12}\]
Substitution of Eq. (12) back in Eq. (9) yields a system of linear _algebraic_ equations for \(\Phi\):
\[\Phi_{\bf 0}({\bf P}) = -UM_{\bf 00}\,\Phi_{\bf 0}({\bf P})-\sum_{{\bf b}^{\prime}}V_{{\bf b}^{ \prime}}\,M_{\bf 0b^{\prime}}(E,{\bf P})\,\Phi_{{\bf b}^{\prime}}({\bf P})\:, \tag{13}\] \[\Phi_{\bf b}({\bf P}) = -UM_{\bf 0b}\,\Phi_{\bf 0}({\bf P})-\sum_{{\bf b}^{\prime}}V_{{\bf b} ^{\prime}}\,M_{\bf 0b^{\prime}}(E,{\bf P})\,\Phi_{{\bf b}^{\prime}}({\bf P})\:, \tag{14}\]
where
\[M_{\bf 00}(E,{\bf P}) = \frac{1}{N}\sum_{\bf q}\frac{1}{-E+\varepsilon_{\bf q}+ \varepsilon_{{\bf P}-{\bf q}}}\:,\hskip 28.452756ptM_{\bf 0b^{\prime}}(E,{\bf P })=\frac{1}{N}\sum_{\bf q}\frac{e^{-i{\bf qb}^{\prime}}}{-E+\varepsilon_{\bf q }+\varepsilon_{{\bf P}-{\bf q}}}\:, \tag{15}\] \[M_{\bf 0b}(E,{\bf P}) = \frac{1}{N}\sum_{\bf q}\frac{e^{i{\bf qb}}}{-E+\varepsilon_{\bf q }+\varepsilon_{{\bf P}-{\bf q}}}\:,\hskip 28.452756ptM_{\bf 0b^{\prime}}(E,{\bf P })=\frac{1}{N}\sum_{\bf q}\frac{e^{i{\bf q}({\bf b}-{\bf b}^{\prime})}}{-E+ \varepsilon_{\bf q}+\varepsilon_{{\bf P}-{\bf q}}}\:. \tag{16}\]
Notice how in the process of substitution, \({\bf k}_{1}\) is replaced by \({\bf q}\), and \({\bf k}_{2}\) by \({\bf P}-{\bf q}\). But the argument of \(\Phi\) remains unchanged: \({\bf P}={\bf k}_{1}+{\bf k}_{2}\to{\bf q}+{\bf P}-{\bf q}={\bf P}\). As a result, \(\Phi({\bf P})\) can be moved outside the \({\bf q}\) sums, and the equations reduce from integral to algebraic. Thus the entire method is predicated on conservation of total momentum.
Quantities \(M_{\bf bb^{\prime}}\) in Eqs. (15) and (16) are _two-body_ Green's functions of underlying lattices. In some lattices, \(M_{\bf bb^{\prime}}\) can be reduced to one-body Green's functions by an appropriate transformation. In solving a typical two-body problem, most effort is spent on calculating and analyzing \(M_{\bf bb^{\prime}}\), and the existence of close-form final formulas depends on whether \(M_{\bf bb^{\prime}}\) can be evaluated analytically. In 1D, integration is elementary leading to algebraic expressions. In 2D, \(M_{\bf bb^{\prime}}\) can be usually reduced to the complete elliptic integrals of different kinds. And in 3D, \(M_{\bf bb^{\prime}}\) are generalized Watson integrals for which only a handful of analytical results are known. Much of the Appendixes are devoted to deriving and listing available results for \(M_{\bf bb^{\prime}}\) on different lattices. Note that we have flipped the sign of the energy denominator in Eq. (15) for future convenience. In this work, we will be interested only in bounds states with \(E<0\). The definitions, Eqs. (15) and (16), render most of \(M_{\bf bb^{\prime}}\) positive. In the following, we will often write \(|E|\) instead of \(-E\) to avoid any confusion.
The consistency condition of Eqs. (13) and (14),
\[\det\left|\begin{array}{cc}UM_{\bf 00}(E,{\bf P})+1&V_{{\bf b}^{\prime}}\,M_{ \bf 0b^{\prime}}(E,{\bf P})\\ UM_{\bf 0b}(E,{\bf P})&V_{{\bf b}^{\prime}}\,M_{\bf bb^{\prime}}(E,{\bf P})+ \delta_{\bf bb^{\prime}}\end{array}\right|=0\:, \tag{17}\]
determines system's energy \(E\) as a function of total momentum \({\bf P}\) and, consequently, the pair energy, dispersion, and effective mass. If \(n_{\bf b}\) is the number of vectors \({\bf b}\) with nonzero \(V_{\bf b}\), then in Eq. (17), \(M_{\bf bb}\) is a \((n_{\bf b}\times 1)\) column, \(V_{{\bf b}^{\prime}}\,M_{\bf 0b^{\prime}}\) is a \((1\times n_{\bf b})\) row, and \(V_{{\bf b}^{\prime}}\,M_{\bf bb^{\prime}}\) is a \((n_{\bf b}\times n_{\bf b})\) matrix where each column is multiplied by its respective \(V_{{\bf b}^{\prime}}\). The eigenvector of Eqs. (13) and (14) determines a pair wave function via Eq. (12). Equations (12)-(17) constitute a general solution of the two-body lattice problem.
### Extension to non-Bravais lattices and multi-orbital models
When two fermions interact within a complex lattice with \(S>1\) orbitals per unit cell, an exact solution becomes considerably more complicated. Systematic investigation of such models are beyond the scope of this work. In this section, we briefly outline the procedure and point out how it differs from the basic \(S=1\) case.
Assuming the index \({\bf m}\) continues to number _unit cells_, the two-body wave function \(\Psi_{\alpha\beta}({\bf m}_{1},{\bf m}_{2})\) comprises \(S^{2}\) components arranged in an \((S^{2}\times 1)\) array. The Schrodinger equation, Eqs. (4) and (7), comprises \(S^{2}\) coupled equations. The right-hand-side of Eq. (7) still contains a _finite_ number of integrals \(\Phi({\bf P})\) which, similarly to Eq. (12), allows expressing \(\psi_{{\bf k}_{1}\alpha,{\bf k}_{2}\beta}\) as a linear combination of \(\Phi({\bf P})\). However, the respective energy-dependent coefficients are now components of an inverted \((S^{2}\times S^{2})\) matrix. Each coefficient is a ratio of an \((S^{2}-1)\)-degree polynomial of \(E\) and an \(S^{2}\)-degree polynomial of \(E\). The denominator can be factorized into a product of \(S^{2}\) factors \((E-\varepsilon_{\alpha{\bf k}_{1}}-\varepsilon_{\beta{\bf k}_{2}})\), where \(\varepsilon_{\alpha{\bf k}}\) are the \(S\) bands of the single-particle dispersion. When \(\varepsilon_{\alpha{\bf k}}\) is known analytically, factorization can also be performed analytically, at least in principle. Then each coefficient can be expanded into a sum of \(S^{2}\) simple fractions. Finally, substitution of \(\psi_{{\bf k}_{1}\alpha,{\bf k}_{2}\beta}\) into the definitions of \(\Phi({\bf P})\) produces a finite set of algebraic equations for \(\Phi({\bf P})\) but with matrix elements \(M\) being sums of \(S^{2}\) terms where each term has the general form of Eq. (15) with denominators \((-E+\varepsilon_{\alpha{\bf q}}+\varepsilon_{\beta,{\bf P}-{\bf q}})\) and additional functions of \({\bf q}\) in numerators.
In practice, this program can be completed for the simplest multiorbital models only.[39; 92; 97; 98] Often enough, \(\varepsilon_{\alpha{\bf k}}\) cannot be calculated analytically, and the above procedure pauses after the matrix inversion but before factorization. From that stage, everything must be completed numerically.
### Symmetrized solution. Spin singlets
The size and complexity of the main system, Eq. (17), rapidly grows with the radius of interaction. Take for example the square lattice. For a contact, zero-range interaction, Eq. (17) is a \((1\times 1)\) matrix, for a nearest-neighbor \(UV\) model it is a \((5\times 5)\) matrix (one zero-range potential plus four nearest-neighbor potentials), and for a next-nearest-neighbor \(UV\) model it is already a \((9\times 9)\) matrix. The ability to perform analytical calculations diminishes rapidly, especially for \(\mathbf{P}\)'s away from special symmetry points. In this situation, _permutation_ symmetry offers a way to simplify the solution. The two-fermion wave function must be either symmetric or antisymmetric with respect to argument exchange: \(\Psi(\mathbf{m}_{2},\mathbf{m}_{1})=\pm\Psi(\mathbf{m}_{1},\mathbf{m}_{2})\), corresponding to spin-singlet \((+)\) and spin-triplet \((-)\) states. Including this symmetry from the beginning reduces the final system's size by about half. An additional benefit is the \((+)\) solutions also describe bound pairs of spin-0 bosons and the \((-)\) solutions describe spinless fermions. In the rest of this section we derive the \((+)\) solution. The \((-)\) solution is derived in Sec. II.5.
In order to restrict the wave function symmetry, we use the following transformation instead of Eq. (5):
\[\Psi(\mathbf{m}_{1},\mathbf{m}_{2}) =\frac{1}{2N}\sum_{\mathbf{k}_{1}\mathbf{k}_{2}}\left(e^{i\mathbf{ k}_{1}\mathbf{m}_{1}+i\mathbf{k}_{2}\mathbf{m}_{2}}+e^{i\mathbf{k}_{1}\mathbf{m}_{2 }+i\mathbf{k}_{2}\mathbf{m}_{1}}\right)\phi^{+}_{\mathbf{k}_{1}\mathbf{k}_{2} }\,,\hskip 28.452756pt\Psi(\mathbf{m}_{2}\mathbf{m}_{1})=+\Psi(\mathbf{m}_{1} \mathbf{m}_{2})\,, \tag{18}\] \[\phi^{+}_{\mathbf{k}_{1}\mathbf{k}_{2}} =\frac{1}{2N}\sum_{\mathbf{m}_{1}\mathbf{m}_{2}}\left(e^{-i \mathbf{k}_{1}\mathbf{m}_{1}-i\mathbf{k}_{2}\mathbf{m}_{2}}+e^{-i\mathbf{k}_{ 1}\mathbf{m}_{2}-i\mathbf{k}_{2}\mathbf{m}_{1}}\right)\Psi(\mathbf{m}_{1}, \mathbf{m}_{2})\,,\hskip 28.452756pt\phi^{+}_{\mathbf{k}_{2}\mathbf{k}_{1}}=+ \phi^{+}_{\mathbf{k}_{1}\mathbf{k}_{2}}\,. \tag{19}\]
Next, we multiply the Schrodinger equation, Eq. (4), by the expression in parentheses in Eq. (19) and apply the operation \((2N)^{-1}\sum_{\mathbf{m}_{1}\mathbf{m}_{2}}\). The result is
\[\left(E-\varepsilon_{\mathbf{k}_{1}}-\varepsilon_{\mathbf{k}_{2}}\right) \phi^{+}_{\mathbf{k}_{1}\mathbf{k}_{2}}=U\frac{1}{N}\sum_{\mathbf{q}}\phi^{+} _{\mathbf{q}\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{q}}+\sum_{\mathbf{b}}V_{ \mathbf{b}}\frac{1}{4N}\sum_{\mathbf{q}}\bigl{[}\left(e^{i\mathbf{k}_{1} \mathbf{b}}+e^{i\mathbf{k}_{2}\mathbf{b}}\right)e^{-i\mathbf{q}\mathbf{b}}+ \left(e^{-i\mathbf{k}_{1}\mathbf{b}}+e^{-i\mathbf{k}_{2}\mathbf{b}}\right)e^ {i\mathbf{q}\mathbf{b}}\bigr{]}\,\phi^{+}_{\mathbf{q}\mathbf{,k}_{1}+\mathbf{ k}_{2}-\mathbf{q}}\,. \tag{20}\]
The \(V\) term contains two groups of \(\mathbf{q}\) integrals: one that contains \(\exp\left(i\mathbf{q}\mathbf{b}\right)\) and another \(\exp\left(-i\mathbf{q}\mathbf{b}\right)\). We now observe that two groups transform into each other when \(\mathbf{b}\rightarrow-\mathbf{b}\). To make use of this symmetry, we arrange all vectors \(\mathbf{b}\) into pairs \((\mathbf{b},-\mathbf{b})\), then select only one vector from each pair and collect them in new group \(\mathbf{b}_{+}\). Thus, the full group splits into two subgroups:
\[\{\mathbf{b}\}\rightarrow\{\mathbf{b}_{+}\},-\{\mathbf{b}_{+}\}\,. \tag{21}\]
For example, in the square lattice with nearest-neighbor interaction, \(\{\mathbf{b}\}=\{+\mathbf{x},+\mathbf{y},-\mathbf{x},-\mathbf{y}\}\) and it that can be split in two pairs \(\{(+\mathbf{x},-\mathbf{x}),(+\mathbf{y},-\mathbf{y})\}\). Then one can choose \(\{\mathbf{b}_{+}\}\) out of four equivalent possibilities: \(\{\mathbf{b}_{+}\}=\{+\mathbf{x},+\mathbf{y}\}\), \(\{\mathbf{b}_{+}\}=\{+\mathbf{x},-\mathbf{y}\}\), \(\{\mathbf{b}_{+}\}=\{-\mathbf{x},+\mathbf{y}\}\), and \(\{\mathbf{b}_{+}\}=\{-\mathbf{x},-\mathbf{y}\}\).
The \(\mathbf{b}\) sum in the \(V\) term in Eq. (20) splits in two:
\[V\;\text{term} =\sum_{\mathbf{b}_{+}}V_{\mathbf{b}_{+}}\frac{1}{4N}\sum_{ \mathbf{q}}\left[\left(e^{i\mathbf{k}_{1}\mathbf{b}_{+}}+e^{i\mathbf{k}_{2} \mathbf{b}_{+}}\right)e^{-i\mathbf{q}\mathbf{b}_{+}}+\left(e^{-i\mathbf{k}_{1} \mathbf{b}_{+}}+e^{-i\mathbf{k}_{2}\mathbf{b}_{+}}\right)e^{i\mathbf{q} \mathbf{b}_{+}}\right]\phi^{+}_{\mathbf{q}\mathbf{,k}_{1}+\mathbf{k}_{2}- \mathbf{q}}\] \[\quad+\sum_{\mathbf{b}_{+}}V_{-\mathbf{b}_{+}}\frac{1}{4N}\sum_{ \mathbf{q}}\left[\left(e^{-i\mathbf{k}_{1}\mathbf{b}_{+}}+e^{-i\mathbf{k}_{2} \mathbf{b}_{+}}\right)e^{i\mathbf{q}\mathbf{b}_{+}}+\left(e^{i\mathbf{k}_{1} \mathbf{b}_{+}}+e^{i\mathbf{k}_{2}\mathbf{b}_{+}}\right)e^{-i\mathbf{q} \mathbf{b}_{+}}\right]\phi^{+}_{\mathbf{q}\mathbf{,k}_{1}+\mathbf{k}_{2}- \mathbf{q}}=\] \[=\sum_{\mathbf{b}_{+}}V_{\mathbf{b}_{+}}\frac{1}{2N}\sum_{ \mathbf{q}}\left[\left(e^{i\mathbf{k}_{1}\mathbf{b}_{+}}+e^{i\mathbf{k}_{2} \mathbf{b}_{+}}\right)e^{-i\mathbf{q}\mathbf{b}_{+}}+\left(e^{-i\mathbf{k}_{1} \mathbf{b}_{+}}+e^{-i\mathbf{k}_{2}\mathbf{b}_{+}}\right)e^{i\mathbf{q} \mathbf{b}_{+}}\right]\phi^{+}_{\mathbf{q}\mathbf{,k}_{1}+\mathbf{k}_{2}- \mathbf{q}}\,. \tag{22}\]
The last equality is true because we consider only symmetric potentials, \(V_{\mathbf{b}_{+}}=V_{-\mathbf{b}_{+}}\). In the next crucial step, we apply a variable change \(\mathbf{q}^{\prime}=\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{q}\) to the _first half_ of Eq. (22). That renders the exponential terms equal to the exponential terms of the second half but the wave function transforms to \(\phi^{+}_{\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{q},\mathbf{q}}\). However due to the permutation symmetry it is equal to \(\phi^{+}_{\mathbf{q}\mathbf{,k}_{1}+\mathbf{k}_{2}-\mathbf{q}}\). This proves that the two halves of Eq. (22) are equal. The entire \(V\) term can be written as twice the second half (for example). Returning to the full equation, Eq. (20), it reads
\[\left(E-\varepsilon_{\mathbf{k}_{1}}-\varepsilon_{\mathbf{k}_{2}}\right)\phi^{+} _{\mathbf{k}_{1}\mathbf{k}_{2}}=U\frac{1}{N}\sum_{\mathbf{q}}\phi^{+}_{\mathbf{ q}\mathbf{,k}_{1}+\mathbf{k}_{2}-\mathbf{q}}+\sum_{\mathbf{b}_{+}}V_{\mathbf{b}_{+}} \left(e^{-i\mathbf{k}_{1}\mathbf{b}_{+}}+e^{-i\mathbf{k}_{2}\mathbf{b}_{+}} \right)\frac{1}{N}\sum_{\mathbf{q}}e^{i\mathbf{q}\mathbf{b}_{+}}\phi^{+}_{ \mathbf{q}\mathbf{,k}_{1}+\mathbf{k}_{2}-\mathbf{q}}\,. \tag{23}\]
Equation (23) has only half as many \(V\) terms as its unsymmetrized counterpart, Eq. (7). Accordingly, we introduce
auxiliary functions
\[\Phi^{+}_{\mathbf{0}}(\mathbf{k}_{1}+\mathbf{k}_{2}) =\Phi^{+}_{\mathbf{0}}(\mathbf{P})\equiv\frac{1}{N}\sum_{\mathbf{q} }\phi^{+}_{\mathbf{q},\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{q}}=\frac{1}{N} \sum_{\mathbf{q}}\phi^{+}_{\mathbf{q},\mathbf{P}-\mathbf{q}}\,, \tag{24}\] \[\Phi^{+}_{\mathbf{b}_{+}}(\mathbf{k}_{1}+\mathbf{k}_{2}) =\Phi^{+}_{\mathbf{b}_{+}}(\mathbf{P})\equiv\frac{1}{N}\sum_{ \mathbf{q}}e^{i\mathbf{q}\mathbf{b}_{+}}\,\phi^{+}_{\mathbf{q},\mathbf{k}_{1} +\mathbf{k}_{2}-\mathbf{q}}=\frac{1}{N}\sum_{\mathbf{q}}e^{i\mathbf{q} \mathbf{b}_{+}}\,\phi^{+}_{\mathbf{q},\mathbf{P}-\mathbf{q}}\,. \tag{25}\]
The wave function is expressed from Eq. (23)
\[\phi^{+}_{\mathbf{k}_{1}\mathbf{k}_{2}}=\frac{U}{E-\varepsilon_{\mathbf{k}_{1 }}-\varepsilon_{\mathbf{k}_{2}}}\,\Phi^{+}_{\mathbf{0}}(\mathbf{P})+\sum_{ \mathbf{b}_{+}}V_{\mathbf{b}_{+}}\frac{e^{-i\mathbf{k}_{1}\mathbf{b}_{+}}+e^{ -i\mathbf{k}_{2}\mathbf{b}_{+}}}{E-\varepsilon_{\mathbf{k}_{1}}-\varepsilon _{\mathbf{k}_{2}}}\,\Phi^{+}_{\mathbf{b}_{+}}(\mathbf{P})\,. \tag{26}\]
Substituting \(\phi^{+}\) back in the definitions of \(\Phi^{+}\), one obtains:
\[\Phi^{+}_{\mathbf{0}}(\mathbf{P}) =-UM^{+}_{\mathbf{0}\mathbf{0}}\,\Phi^{+}_{\mathbf{0}}(\mathbf{P}) -\sum_{\mathbf{b}^{\prime}_{+}}V_{\mathbf{b}^{\prime}_{+}}\,M^{+}_{\mathbf{0} \mathbf{b}^{\prime}_{+}}(E,\mathbf{P})\,\Phi^{+}_{\mathbf{b}^{\prime}_{+}}( \mathbf{P})\,, \tag{27}\] \[\Phi^{+}_{\mathbf{b}_{+}}(\mathbf{P}) =-UM^{+}_{\mathbf{b}_{+}\mathbf{0}}\,\Phi^{+}_{\mathbf{0}}( \mathbf{P})-\sum_{\mathbf{b}^{\prime}_{+}}V_{\mathbf{b}^{\prime}_{+}}\,M^{+}_ {\mathbf{b}_{+}\mathbf{b}^{\prime}_{+}}(E,\mathbf{P})\,\Phi^{+}_{\mathbf{b}^{ \prime}_{+}}(\mathbf{P})\,, \tag{28}\]
where
\[M^{+}_{\mathbf{0}\mathbf{0}}(E,\mathbf{P}) =\frac{1}{N}\sum_{\mathbf{q}}\frac{1}{-E+\varepsilon_{\mathbf{q} }+\varepsilon_{\mathbf{P}-\mathbf{q}}}\,, M^{+}_{\mathbf{0}\mathbf{b}^{\prime}_{+}}(E,\mathbf{P}) =\frac{1}{N}\sum_{\mathbf{q}}\frac{e^{-i\mathbf{q}\mathbf{b}^{\prime}_{+}}+e^{ -i(\mathbf{P}-\mathbf{q})\mathbf{b}^{\prime}_{+}}}{-E+\varepsilon_{\mathbf{q} }+\varepsilon_{\mathbf{P}-\mathbf{q}}}\,, \tag{29}\] \[M^{+}_{\mathbf{b}_{+}\mathbf{0}}(E,\mathbf{P}) =\frac{1}{N}\sum_{\mathbf{q}}\frac{e^{i\mathbf{q}\mathbf{b}_{+}}} {-E+\varepsilon_{\mathbf{q}}+\varepsilon_{\mathbf{P}-\mathbf{q}}}\,, M^{+}_{\mathbf{b}_{+}\mathbf{b}^{\prime}_{+}}(E,\mathbf{P}) =\frac{1}{N}\sum_{\mathbf{q}}\frac{e^{i\mathbf{q}(\mathbf{b}_{+}-\mathbf{b}^{ \prime}_{+})}+e^{i\mathbf{q}\mathbf{b}_{+}}e^{-i(\mathbf{P}-\mathbf{q}) \mathbf{b}^{\prime}_{+}}}{-E+\varepsilon_{\mathbf{q}}+\varepsilon_{\mathbf{P}- \mathbf{q}}}\,. \tag{30}\]
The consistency condition of the linear system, Eqs. (27) and (28), defines the pair's energy, and its eigenvector defines the pair's wave function. The system consists of only \(1+\frac{1}{2}n_{\mathbf{b}}\) linear equations versus \(1+n_{\mathbf{b}}\) equations in the unsymmetrized case. This is a significant simplification. For example, in the simple-cubic \(UV\) model with nearest neighbor interaction, symmetrization reduces the consistency condition from a \((7\times 7)\) matrix to a \((4\times 4)\) matrix. One should keep in mind that the symmetrized solution describes only spin-singlet pairs. Spin triplets are discussed next.
### Anti-symmetrized solution. Spin triplets
In this section, we repeat the derivation of Sec. II.4 for anti-symmetric wave functions that describe spin-triplet pairs. The corresponding Fourier transformation reads
\[\Psi(\mathbf{m}_{1},\mathbf{m}_{2}) =\frac{1}{2N}\sum_{\mathbf{k}_{1}\mathbf{k}_{2}}\left(e^{i\mathbf{ k}_{1}\mathbf{m}_{1}+i\mathbf{k}_{2}\mathbf{m}_{2}}-e^{i\mathbf{k}_{1} \mathbf{m}_{2}+i\mathbf{k}_{2}\mathbf{m}_{1}}\right)\phi^{-}_{\mathbf{k}_{1} \mathbf{k}_{2}}\,,\hskip 28.452756pt\Psi(\mathbf{m}_{2}\mathbf{m}_{1})=-\Psi( \mathbf{m}_{1}\mathbf{m}_{2})\,, \tag{31}\] \[\phi^{-}_{\mathbf{k}_{1}\mathbf{k}_{2}} =\frac{1}{2N}\sum_{\mathbf{m}_{1}\mathbf{m}_{2}}\left(e^{-i \mathbf{k}_{1}\mathbf{m}_{1}-i\mathbf{k}_{2}\mathbf{m}_{2}}-e^{-i\mathbf{k}_{1 }\mathbf{m}_{2}-i\mathbf{k}_{2}\mathbf{m}_{1}}\right)\Psi(\mathbf{m}_{1}, \mathbf{m}_{2})\,,\hskip 28.452756pt\phi^{-}_{\mathbf{k}_{2}\mathbf{k}_{1}}=- \phi^{-}_{\mathbf{k}_{1}\mathbf{k}_{2}}\,. \tag{32}\]
We multiply the Schrodinger equation, Eq. (4), by the expression in parentheses in Eq. (32) and apply the operation \((2N)^{-1}\sum_{\mathbf{m}_{1}\mathbf{m}_{2}}\). The result is
\[\left(E-\varepsilon_{\mathbf{k}_{1}}-\varepsilon_{\mathbf{k}_{2}}\right)\phi^{-} _{\mathbf{k}_{1}\mathbf{k}_{2}}=\sum_{\mathbf{b}}V_{\mathbf{b}}\frac{1}{4N} \sum_{\mathbf{q}}\bigl{[}\left(e^{i\mathbf{k}_{1}\mathbf{b}}-e^{i\mathbf{k}_{2} \mathbf{b}}\right)e^{-i\mathbf{q}\mathbf{b}}+\left(e^{-i\mathbf{k}_{1} \mathbf{b}}-e^{-i\mathbf{k}_{2}\mathbf{b}}\right)e^{i\mathbf{q}\mathbf{b}} \bigr{]}\,\phi^{-}_{\mathbf{q},\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{q}}\,. \tag{33}\]
Of note here is the absence of a \(U\) term that cancels out due to antisymmetry. Next, we split the sum over \(\mathbf{b}\) in two partial sum: over \(\mathbf{b}_{+}\) and over \(-\mathbf{b}_{+}\), and then apply a variable change \(\mathbf{q}^{\prime}=\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{q}\) to the \(\exp\left(-i\mathbf{q}\mathbf{b}_{+}\right)\) terms. The result is
\[\left(E-\varepsilon_{\mathbf{k}_{1}}-\varepsilon_{\mathbf{k}_{2}}\right)\phi^{-} _{\mathbf{k}_{1}\mathbf{k}_{2}}=\sum_{\mathbf{b}_{+}}V_{\mathbf{b}_{+}}\,\left(e^ {-i\mathbf{k}_{1}\mathbf{b}_{+}}-e^{-i\mathbf{k}_{2}\mathbf{b}_{+}}\right) \frac{1}{N}\sum_{\mathbf{q}}e^{i\mathbf{q}\mathbf{b}_{+}}\phi^{-}_{\mathbf{q},\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{q}}\,. \tag{34}\]
To convert this to linear equations, we introduce \(\frac{1}{2}n_{\bf b}\) auxiliary functions
\[\Phi^{-}_{\mathbf{b}_{+}}(\mathbf{k}_{1}+\mathbf{k}_{2})=\Phi^{-}_{\mathbf{b}_{+} }(\mathbf{P})\equiv\frac{1}{N}\sum_{\bf q}e^{i\mathbf{q}\mathbf{b}_{+}}\,\phi^{ -}_{\mathbf{q},\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{q}}=\frac{1}{N}\sum_{\bf q }e^{i\mathbf{q}\mathbf{b}_{+}}\,\phi^{-}_{\mathbf{q},\mathbf{P}-\mathbf{q}}\,. \tag{35}\]
Using these definitions, the pair wave function follows from Eq. (34):
\[\phi^{-}_{\mathbf{k}_{1}\mathbf{k}_{2}}=\sum_{\mathbf{b}_{+}}V_{\mathbf{b}_{+} }\,\frac{e^{-i\mathbf{k}_{1}\mathbf{b}_{+}}-e^{-i\mathbf{k}_{2}\mathbf{b}_{+} }}{E-\varepsilon_{\mathbf{k}_{1}}-\varepsilon_{\mathbf{k}_{2}}}\,\Phi^{-}_{ \mathbf{b}_{+}}(\mathbf{P})\,. \tag{36}\]
Substituting \(\phi^{-}_{\mathbf{k}_{1}\mathbf{k}_{2}}\) back in Eq. (35), one obtains the final system
\[\Phi^{-}_{\mathbf{b}_{+}}(\mathbf{P})=-\sum_{\mathbf{b}^{\prime}_{+}}V_{ \mathbf{b}^{\prime}_{+}}\,M^{-}_{\mathbf{b}_{+}\mathbf{b}^{\prime}_{+}}(E, \mathbf{P})\,\Phi^{-}_{\mathbf{b}^{\prime}_{+}}(\mathbf{P})\,, \tag{37}\]
\[M^{-}_{\mathbf{b}_{+}\mathbf{b}^{\prime}_{+}}(E,\mathbf{P})=\frac{1}{N}\sum_ {\bf q}\frac{e^{i\mathbf{q}(\mathbf{b}_{+}-\mathbf{b}^{\prime}_{+})}-e^{i \mathbf{q}\mathbf{b}_{+}}e^{-i(\mathbf{P}-\mathbf{q})\mathbf{b}^{\prime}_{+}}} {-E+\varepsilon_{\mathbf{q}}+\varepsilon_{\mathbf{P}-\mathbf{q}}}\,. \tag{38}\]
The size of the triplet system is one less that of the singlet system because of the lack of the Hubbard term. Therefore, triplet pairs are usually easier to deal with than singlets.
## III Negative-\(U\) Hubbard model
### General expressions
Before getting to more complex \(UV\) models, it is instructive to consider the simpler case of zero-range interaction, that is the negative Hubbard model. Several characteristic features of lattice bound states show up already at this level. Additionally, due to the model's relative simplicity, analytical calculations can be carried out to the fullest extent. The model is defined by the potential
\[U=-|U|\,,\hskip 28.452756ptV_{\bf b}=0\,. \tag{39}\]
One expects only one singlet bound state, so either unsymmetrized solution, Eq. (13), or symmetrized one, Eq. (27), can be applied. In both cases, the system reduces to a single equation for \(\Phi_{\bf 0}\) with the consistency condition
\[|U|\,M_{\bf 00}(E,\mathbf{P})=1\,, \tag{40}\]
which defines pair energy \(E(\mathbf{P})\). The pair wave function is
\[\psi_{\mathbf{k}_{1}\mathbf{k}_{2}}=\frac{1}{E-\varepsilon_{\mathbf{k}_{1}}- \varepsilon_{\mathbf{k}_{2}}}\,, \tag{41}\]
up to a normalization constant. Since total momentum \(\mathbf{P}\) is fixed, \(\psi\) is a function of only one argument:
\[\psi_{\mathbf{P}}(\mathbf{q})=\frac{1}{E-\varepsilon_{\mathbf{q}}-\varepsilon _{\mathbf{P}-\mathbf{q}}}\,. \tag{42}\]
The real-space wave function follows from Eq. (5)
\[\Psi(\mathbf{m}_{1},\mathbf{m}_{2}) = \frac{1}{N}\sum_{\bf q}\frac{e^{i\mathbf{q}\mathbf{m}_{1}+i( \mathbf{P}-\mathbf{q})\mathbf{m}_{2}}}{E-\varepsilon_{\mathbf{q}}-\varepsilon _{\mathbf{P}-\mathbf{q}}} \tag{43}\] \[= e^{i\mathbf{P}\frac{(\mathbf{m}_{1}+\mathbf{m}_{2})}{2}}\frac{1} {N}\sum_{\bf q}\frac{e^{i\mathbf{q}(\mathbf{m}_{1}-\mathbf{m}_{2})}}{E- \varepsilon_{\frac{\mathbf{P}}{2}+\mathbf{q}}-\varepsilon_{\frac{\mathbf{P}}{2 }-\mathbf{q}}}\,.\]
The first factor describes center-of-mass motion while the integral over \(\mathbf{q}\) describes the internal structure of the pair. We define pair effective radius components \(r^{*}_{pj}\) as follows
\[\left(r^{*}_{pj}\right)^{2}=\frac{\sum_{\bf m}m_{j}^{2}\,\Psi^{*}(\mathbf{m},0 )\Psi(\mathbf{m},0)}{\sum_{\bf m}\Psi^{*}(\mathbf{m},0)\Psi(\mathbf{m},0)}\,. \tag{44}\]
Analysis will continue for different lattices separately.
### 1D. One dimensional chain
The 1D attractive Hubbard model provides the simplest example of a lattice bound state. Many pair properties can be derived analytically. The basic integral, \(M_{\bf 00}\) in Eq. (15), is
\[M^{\rm 1D}_{\bf 00} = \int_{-\pi}^{\pi}\frac{dq}{2\pi}\frac{1}{|E|-4t\cos\left(\frac{P }{2}\right)\cos q} \tag{45}\] \[= \frac{1}{\sqrt{E^{2}-16\,t^{2}\cos^{2}\left(\frac{P}{2}\right)}}\,.\]
Substitution in Eq. (40) yields pair energy
\[E(P)=-\sqrt{|U|^{2}+16\,t^{2}\cos^{2}\left(\frac{P}{2}\right)}\,. \tag{46}\]
This is a rare case when pair energy is known as an explicit formula. Based on it, a number of interesting properties can be established. (i) The minimum energy of two _free_ particles with total momentum \(P\) is \(E_{11}=-4t\cos\left(P/2\right)\). Comparing that with Eq. (46), one finds \(E(P)<E_{11}(P)\) for any \(|U|>0\). In other words, \(|U|=0\) is the threshold of pair formation for any \(P\). The same conclusion can be reached by noting that \(M_{\bf 00}\) diverges at \(E\to E_{11}\), see Eq. (45). (ii) Energy \(E(P)\) is periodic with period \((2\pi)\), despite \(P\) being a
sum of two single-particle momenta, each of which varying between \(-\pi\) and \(\pi\). Thus, pairing leads to Brillouin Zone (BZ) folding and the pair behaves as one particle with \(-\pi\leq P\leq\pi\). (iii) Pair energy in BZ corners is \(E(\pm\pi)=-|U|\). (iv) The pair binding energy is quadratic near the threshold:
\[E(|U|\ll t)=-4t\cos\left(\frac{P}{2}\right)-\frac{|U|^{2}}{8t\cos\left(\frac{P }{2}\right)}\,. \tag{47}\]
The first term here is the minimum energy of two free particles with total momentum \(P\). (v) Expansion of Eq. (46) for small \(P\) yields the pair effective mass [in units of the bare one-particle mass \(m_{0}=\hbar^{2}/(2ta^{2})\)]:
\[\frac{m_{p}^{*}}{m_{0}}=\frac{\sqrt{|U|^{2}+16\,t^{2}}}{2t}\,. \tag{48}\]
The pair mass is not constant but increases with the binding energy. This is a common property of bound states [1] related to the lack of Galilean invariance on the lattice. Comparison between Eqs. (48) and (46) reveals a curious relationship between the pair mass and its "rest energy" \(E(0)\). Restoring for a moment the intersite distance \(a\), and using \(m_{0}=\hbar^{2}/(2ta^{2})\), one obtains
\[|E(0)|=m_{p}^{*}\left(\frac{2ta}{\hbar}\right)^{2}. \tag{49}\]
The expression in parentheses is recognized as the maximum group velocity on the lattice. Thus, Eq. (49) has the form of \(E=mc^{2}\) of relativistic physics.
Transitioning to the wave function, the integral in Eq. (43) can be calculated explicitly [99]:
\[\int_{-\pi}^{\pi}\frac{dq}{2\pi}\,\frac{\cos\left[q(m_{1}-m_{2}) \right]}{|E|-4t\cos\frac{P}{2}\,\cos q}\] \[=\frac{1}{\sqrt{|E|^{2}-\alpha^{2}}}\left[\frac{\alpha}{|E|+ \sqrt{|E|^{2}-\alpha^{2}}}\right]^{|m_{1}-m_{2}|}\] \[=\frac{1}{|U|}\left[\frac{|U|+\sqrt{|U|^{2}+\alpha^{2}}}{\alpha} \right]^{-|m_{1}-m_{2}|}. \tag{50}\]
where \(\alpha\equiv 4t\cos(P/2)\) has been set for brevity. Thus, the un-normalized wave function can be written as
\[\Psi(m_{1},m_{2})=e^{i\frac{P}{2}(m_{1}+m_{2})}\cdot e^{-\gamma|m_{1}-m_{2}|}\;, \tag{51}\]
where
\[\sinh\gamma=\frac{|U|}{\alpha}=\frac{|U|}{4t\cos\frac{P}{2}}\;, \tag{52}\]
and
\[E=-4t\cos\frac{P}{2}\cosh\gamma\;. \tag{53}\]
The same expressions can be derived directly from the real-space Schrodinger equation by means of a two-particle Bethe ansatz. Using the explicit form of \(\Psi\), it is straightforward to compute potential energy, kinetic energy, and effective radius of a moving pair:
\[E_{\rm pot}=\langle-|U|\,\delta_{m_{1},m_{2}}\rangle=-\frac{|U|^{2}}{\sqrt{|U |^{2}+16\,t^{2}\cos^{2}\!\frac{P}{2}}}\,, \tag{54}\]
\[E_{\rm kin}=E-E_{\rm pot}=-\frac{16\,t^{2}\cos^{2}\!\frac{P}{2}}{\sqrt{|U|^{2} +16\,t^{2}\cos^{2}\!\frac{P}{2}}}\,, \tag{55}\]
\[r_{p}^{*}=\left\langle(m_{1}-m_{2})^{2}\right\rangle^{1/2}=\frac{4t\cos\!\frac {P}{2}}{\sqrt{2}\,|U|}\;. \tag{56}\]
Interestingly, \(r_{p}^{*}(P\rightarrow\pi)\to 0\), which means the pair shrinks to a point. The same conclusion can also be derived directly from Eq. (43). At \(P=\pi\), the two kinetic terms in the denominator cancel out which renders the internal wave function to be \(\propto\delta_{\mathbf{m_{1}}\mathbf{m_{2}}}\). The pair energy, mass, and radius are plotted in Fig. 3.
### 2D. Square lattice
For the square lattice, the basic integral \(M_{\mathbf{00}}\) in Eq. (15) can be expressed via complete elliptic integral of the first kind \(\mathbf{K}(z)\), see Appendix A.5, Eq. (100), for details. Denoting \(\alpha\equiv 4t\cos\frac{P_{s}}{2}\) and \(\beta\equiv 4t\cos\frac{P_{y}}{2}\), one has
\[M_{\mathbf{00}}^{\rm sq} = \int\limits_{-\pi-\pi}^{\pi}\frac{dq_{x}\,dq_{y}}{(2\pi)^{2}} \frac{1}{|E|-\alpha\cos q_{x}-\beta\cos q_{y}} \tag{57}\] \[= \frac{2}{\pi\sqrt{|E|^{2}-(\alpha-\beta)^{2}}}\,\mathbf{K}\!\! \left[\sqrt{\frac{4\alpha\beta}{|E|^{2}-(\alpha-\beta)^{2}}}\right].\]
This result applies not only to the isotropic square model at arbitrary \(\mathbf{P}\), but also to the rectangular model with \(t_{x}\neq t_{y}\). Inserting Eq. (57) in Eq. (40) defines the pair energy in the most general case. The minimum energy of two free particles is \(E_{11}=2\varepsilon_{\mathbf{P}/2}=-(\alpha+\beta)\), at which the argument of \(\mathbf{K}\) reaches 1 and \(M_{\mathbf{00}}^{\rm sq}\) diverges logarithmically. Similar to 1D, the divergence is interpreted as the existence of a bound state at any nonzero \(|U|\). Thus, \(|U|=0\) is pair formation threshold at _any_\(\mathbf{P}\). Let us determine the pair energy near threshold. Setting \(E=-\alpha-\beta-\Delta\), \(\Delta\ll\alpha,\beta\), Eqs. (57) and (40) produce in the leading order
\[\frac{|U|}{\pi\sqrt{\alpha\beta}}\,\mathbf{K}\!\left(1-\frac{\alpha+\beta}{4 \alpha\beta}\,\Delta\right)=1\;. \tag{58}\]
Using the asymptote \(\mathbf{K}(1-z)\sim\frac{1}{2}\ln\frac{8}{z}\), at \(z\rightarrow+0\), one obtains
\[\Delta=\frac{32\,\alpha\beta}{\alpha+\beta}\exp\left(-\frac{2\pi\sqrt{\alpha \beta}}{|U|}\right). \tag{59}\]
In the ground state, \(\alpha=\beta=4t\), and the binding energy is
\[\Delta_{0}=64\,t\exp\left(-\frac{8\pi t}{|U|}\right). \tag{60}\]
Near BZ corners, pair energy is \(E(\pm\pi,\pm\pi)=-|U|\), like in the 1D case.
The pair effective mass is derived next. It is most convenient to consider BZ diagonal, \(P_{x}=P_{y}=P\), where Eq. (57) simplifies considerably. Writing \(|E|=|E_{0}|-\frac{\hbar^{2}P^{2}}{m_{p}^{2}}\), expanding Eq. (57) for \(P\ll 1\), and applying the formula
\[\frac{d\mathbf{K}(z)}{dz}=\frac{\mathbf{E}(z)}{z(1-z^{2})}-\frac{\mathbf{K}(z )}{z}\,, \tag{61}\]
one obtains from Eq. (40):
\[\frac{m_{0}}{m_{p}^{*}}=\frac{|E_{0}|}{16t}\left\{1-\left[1-\frac{(8t)^{2}}{|E _{0}|^{2}}\right]\frac{\mathbf{K}\left(\frac{8t}{|E_{0}|}\right)}{\mathbf{E} \left(\frac{8t}{|E_{0}|}\right)}\right\}\,, \tag{62}\]
where \(\mathbf{E}(\kappa)\) is complete elliptic integral of the second kind. Equation (62) has correct limits: \(m_{p}^{*}(|U|\to 0)=2m_{0}\) and \(m_{p}^{*}(|U|\rightarrow\infty)=\frac{|U|}{2t}\,m_{0}\). The factor \(\mathbf{K}\left(\frac{8t}{|E_{0}|}\right)\) in Eq. (62) can be written as \(\frac{\pi|E_{0}|}{2|U|}\), which follows from Eq. (40).
The pair effective radius is discussed next. Unlike 1D, there is no explicit formula for the pair wave function for _all_\(\mathbf{m}\) in 2D. However, for each given \(\mathbf{m}\), the wave function can be derived from a few basic integrals using recurrence relations, as explained in Appendix A. On BZ diagonals, \(\Psi(\mathbf{m}_{1},\mathbf{m}_{2})\) is always a linear combination of \(\mathbf{K}\) and \(\mathbf{E}\). At arbitrary \(\mathbf{P}\), the wave function is a linear combination of all three complete elliptic integrals \(\mathbf{K}\), \(\mathbf{E}\), and \(\mathbf{\Pi}\).
To calculate effective radius, we substitute Eq. (43) in Eq. (44) and perform the following transformation
\[r_{pj}^{*2} = \frac{\sum_{\mathbf{m}}\sum_{\mathbf{q}_{1}}\frac{\frac{\partial }{\partial t_{1j}}(e^{-i\mathbf{q}_{1}\mathbf{m}})}{E-\xi_{\mathbf{P},\mathbf{ q}_{1}}}\sum_{\mathbf{q}_{2}}\frac{\frac{\partial}{\partial t_{2j}}(e^{i \mathbf{q}_{2}\mathbf{m}})}{E-\xi_{\mathbf{P},\mathbf{q}_{2}}}}{\sum_{ \mathbf{m}}\sum_{\mathbf{q}_{1}}\frac{e^{-i\mathbf{q}_{1}\mathbf{m}}}{E-\xi_ {\mathbf{P},\mathbf{q}_{1}}}\sum_{\mathbf{q}_{2}}\frac{e^{i\mathbf{q}_{2} \mathbf{q}_{2}}}{E-\xi_{\mathbf{P},\mathbf{q}_{2}}}} \tag{63}\] \[= \frac{\sum_{\mathbf{q}}\left(\frac{\partial\xi_{\mathbf{P}, \mathbf{q}_{1}}}{\partial t_{2j}}\right)^{2}}{\sum_{\mathbf{q}}\left(\frac{E- \xi_{\mathbf{P},\mathbf{q}}}{|E|}\right)^{2}}=\frac{\left(-\frac{1}{6}\right) \frac{\partial^{3}}{\partial|E|^{3}}\sum_{\mathbf{q}}\frac{\left(\frac{ \partial\xi_{\mathbf{P},\mathbf{q}}}{|E|+\xi_{\mathbf{P},\mathbf{q}}}\right) ^{2}}{(-1)\frac{\partial}{\partial|E|}\sum_{\mathbf{q}}\frac{\left(\frac{ \partial\xi_{\mathbf{P},\mathbf{q}}}{|E|+\xi_{\mathbf{P},\mathbf{q}}}\right) }}}{(6-1)\frac{\partial}{\partial|E|}\sum_{\mathbf{q}}\frac{\left(\frac{ \partial\xi_{\mathbf{P},\mathbf{q}}}{|E|+\xi_{\mathbf{P},\mathbf{q}}}\right) }}\,,\]
where \(\xi_{\mathbf{P},\mathbf{q}}\equiv\varepsilon_{\frac{\mathbf{P}}{2}+\mathbf{q} }+\varepsilon_{\frac{\mathbf{P}}{2}-\mathbf{q}}\). Both sums in Eq. (63) are recognized as integrals \(M_{nm}^{\rm sq}\) of the square lattice evaluated in Appendix A. Let us limit consideration to BZ diagonals, \(P_{x}=P_{x}\equiv P\). With the notation of Appendix A.3, one writes
\[r_{px}^{*2}=r_{py}^{*2}=\frac{(4t\cos\frac{P}{2})^{2}}{12}\frac{\frac{\partial ^{3}}{\partial|E|^{3}}\left(M_{00}^{\rm sq}-M_{20}^{\rm sq}\right)}{\frac{ \partial}{\partial|E|}\left(M_{00}^{\rm sq}\right)}\,. \tag{64}\]
Using explicit expressions, Eqs. (119) and (120), one derives, after transformations, a final formula
\[r_{px}^{*2}=r_{py}^{*2}=\frac{1}{12}\left\{\frac{1+\left(\frac{8t\cos\frac{P }{2}}{|E|}\right)^{2}}{1-\left(\frac{8t\cos\frac{P}{2}}{|E|}\right)^{2}}-\frac{ \mathbf{K}\left(\frac{8t\cos\frac{P}{2}}{|E|}\right)}{\mathbf{E}\left(\frac{8t \cos\frac{P}{2}}{|E|}\right)}\right\}\,. \tag{65}\]
The pair shrinks to a point in the strong coupling limit, \(E\rightarrow-\infty\), and at \(P=\pm\pi\) for any \(|U|\). The radius diverges at the threshold, \(E\rightarrow-8t\cos\frac{P}{2}\), as expected on physical reasoning. In the ground state, \(\mathbf{P}=0\), the asymptotes are \(r_{px}^{*}(|U|\rightarrow\infty)\approx\frac{4t}{\sqrt{2}|U|}\) and \(r_{px}^{*}(|U|\to 0)\approx 96^{-\frac{1}{2}}\,\exp\frac{4\pi t}{|U|}\). The pair energy, mass, and radius are plotted in Fig. 3.
### 2D. Triangular lattice
In this section, we consider the two-dimensional triangular lattice with nearest-neighbor isotropic hopping,
Figure 3: Properties of \(\mathbf{P}=0\) bound pairs in the attractive Hubbard model on hyper-cubic lattices. The black circle marks the pair formation threshold in 3D, Eq. (72).
\(t_{\bf b}=t\). Single particle dispersion, Eq. (8), is
\[\varepsilon_{\bf k}=-2t\cos k_{x}-4t\cos\left(k_{x}/2\right)\cos\left(\sqrt{3}k_{ y}/2\right). \tag{66}\]
The double integral \(M^{\rm tr}_{\bf 00}\), Eq. (15), corresponding to this \(\varepsilon_{\bf k}\) is evaluated in Appendix B.1. For the ground state, \(P_{x}=P_{y}=0\), the result is
\[M^{\rm tr}_{\bf 00} = \frac{2}{\pi}\frac{1}{\sqrt{|E_{0}|^{2}-48t^{2}+16t\sqrt{2t|E_{0} |+12t^{2}}}} \tag{67}\] \[\times {\bf K}\!\left(\sqrt{\frac{32t\sqrt{2|E_{0}|t+12t^{2}}}{|E_{0}|^{ 2}-48t^{2}+16t\sqrt{2|E_{0}|t+12t^{2}}}}\right).\]
The lowest energy of two free particles on the triangular lattice is \(-12t\). When \(E\to-12t\) from below, the argument of \({\bf K}\) approaches 1 and \(M^{\rm tr}_{\bf 00}\) diverges logarithmically. Utilizing Eq. (40), one concludes that a bound pair is formed for any attractive \(U\).
Let us derive asymptotic behavior of \(E\) at small couplings. Setting \(|E_{0}|=12t+\Delta\), \(\Delta\ll t\) in Eq. (67), one obtains from Eq. (40)
\[\frac{|U|}{4\pi\sqrt{3}t}\,{\bf K}\!\left(1-\frac{\Delta}{18t}\right)=1\;, \tag{68}\]
from where the binding energy is
\[\Delta=144t\,\exp{\left(-\frac{8\pi\sqrt{3}\,t}{|U|}\right)}. \tag{69}\]
### 3D. Simple cubic lattice
The attractive Hubbard model in three dimensions possesses a new qualitative feature: a nonzero threshold of pair formation. In 3D, kinetic energy _alone_ is strong enough to counteract weak attraction. Mathematically, the triple integral in \(M_{\bf 00}\) converges when \(E\to E_{11}\) for any \({\bf P}\), rendering the critical potential depth \(|U_{\rm cr}|\) finite. Calculation of \(|U_{\rm cr}|\) requires evaluation of the celebrated Watson integrals.[100] Amazingly, on BZ diagonals they are known analytically for a general lattice point,[84; 85; 86; 89; 91] see Appendix C. Setting \(P_{x}=P_{y}=P_{z}=P\) and \(\alpha=4t\cos\left(P/2\right)\), one obtains
\[M^{\rm sc}_{\bf 00}=\iiint\limits_{-\pi\cdot\pi\cdot\pi}^{\pi}\frac{(2\pi)^{-3} dq_{x}\,dq_{y}\,dq_{z}}{|E|-\alpha\left(\cos q_{x}+\cos q_{y}+\cos q_{z}\right)}\;. \tag{70}\]
The minimum energy of two free particles is \(E_{11}=-3\alpha\) at which the classic result of Watson's reads[100]
\[M^{\rm sc}_{\bf 00}(E=-3\alpha) = \frac{4(18+12\sqrt{2}-10\sqrt{3}-7\sqrt{6})}{\alpha\pi^{2}} \tag{71}\] \[\times{\bf K}^{2}\left[(2-\sqrt{3})(\sqrt{3}-\sqrt{2})\right]\] \[= \frac{1}{\alpha}\,0.505462\ldots\;.\]
The binding threshold then follows from Eq. (40):
\[|U_{\rm cr}({\rm diag})|=(7.913552\ldots)\,t\cos\left(P/2\right). \tag{72}\]
The \(P=0\) value of \(7.914\ldots\) obtained by numerical integration was reported in Ref. [[101]]. The threshold is decreasing along the diagonal and becomes zero in the BZ corner. It also implies that for any \(|U|<7.913552\ldots\), there is a momentum \(P_{0}\) such that the particles are not bound for \(P<P_{0}\) but bound at \(P>P_{0}\). Thus, pairs become _more_ stable at large lattice momenta. It is a common property of lattice bound states, which will be encountered many times later in this paper.
The binding threshold can also be computed on the BZ _planes_ that pass through the four corners of BZ, for example on the plane \(P_{x}=P_{y}\). This is possible thanks to the extension of Watson's result to the anisotropic case by Montroll[91; 102]:
\[M^{\rm tg}_{\bf 00}(E=-8t\cos\left(P_{x}/2\right)-4t\cos\left(P_{z }/2\right))= \tag{73}\] \[= \frac{1}{\alpha\pi^{3}}\iiint\limits_{0}^{\pi\pi\,\pi}\frac{dq_{x }\,dq_{y}\,dq_{z}}{2-\cos q_{x}-\cos q_{y}+\xi(1-\cos q_{z})}\] \[= \frac{4}{\alpha\pi^{2}}\sqrt{\frac{2\kappa_{1}\kappa_{2}}{\xi}}\, {\bf K}(\kappa_{1}){\bf K}(\kappa_{2})\;,\]
where
\[\kappa_{1,2}=\frac{1}{2\xi}\,\Big{(}\sqrt{4+2\xi}\pm 2\Big{)}\Big{(}2\sqrt{1+\xi}- \sqrt{4+2\xi}\Big{)}\,, \tag{74}\]
\[\alpha\equiv 4t\cos\left(P_{x}/2\right),\hskip 28.452756pt\xi\equiv\frac{\cos \left(P_{z}/2\right)}{\cos\left(P_{x}/2\right)}\,. \tag{75}\]
Figure 4: Binding threshold surface for the negative Hubbard model in the 3D simple cubic lattice. What is shown is the surface’s cross-section with the plane \(P_{x}=P_{y}\) of the pair BZ for several \(|U|\). Filled circles indicate the threshold momenta on the BZ diagonal, which are given by \(P_{z}=P_{x}=P_{y}=2\arccos\left(|U|/7.913552\right)\).
The threshold value is an inverse of Eq. (73) by virtue of Eq. (40). Notice how pair's center-of-mass movement induces anisotropy.
For fixed \(|U|<7.913552\ldots\) and \(E=E_{11}\), Eq. (40) defines a surface in the BZ that separates bound and unbound states. Figure 4 shows the intersection of that surface with the \(P_{x}=P_{y}\) plane for several values of \(|U|\).
The ground state energy, \(E_{0}=E(\mathbf{P}=0)\), is discussed next. A close-form expression for \(M_{\mathbf{00}}^{\text{sc}}\) in the isotropic simple cubic model was found by Joyce[84; 85]
\[M_{\mathbf{00}}^{\text{sc}}(E_{0}\leq-12t)=\frac{1}{|E_{0}|}\frac{(1-9\zeta^{4 })}{(1-\zeta)^{3}(1+3\zeta)}\left[\frac{2}{\pi}\mathbf{K}(\kappa)\right]^{2}, \tag{76}\]
\[\kappa^{2}(\zeta)=\frac{16\,\zeta^{3}}{(1-\zeta)^{3}(1+3\zeta)}\,, \tag{77}\]
\[\zeta=\zeta(w)=\left[\frac{1-\sqrt{1-\frac{w^{2}}{9}}}{1+\sqrt{1-w^{2}}} \right]^{\frac{1}{2}},\ \ \ \ \ w=\frac{12\,t}{|E_{0}|}\,. \tag{78}\]
Using these formulas, Eq. (40) defines \(E_{0}\) as a function of \(|U|\). It is plotted in Fig. 3(a).
To derive the effective mass, we write \(|E|=|E_{0}|-\frac{3\hbar^{2}P^{2}}{2m_{p}^{*}}\) and expand Eq. (70) for small \(P\) to get
\[\frac{m_{0}}{m_{p}^{*}}=\frac{1}{6}\frac{\int\limits_{-\pi-\pi}^{\pi}\!\!\!\! \int\limits_{-\pi}^{\pi}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
and Zucker.[87; 88; 90] The cumbersome expression is given in Appendix C.2.
Of physical interest is _pair_ mass anisotropy \(m_{p\pi}^{*}/m_{p\pi}^{*}\). Expanding \(|U|M_{\mathbf{00}}^{\rm tg}=1\) for small \(\mathbf{P}\) one obtains
\[\frac{m_{p\pi}^{*}}{m_{p\pi}^{*}}=\frac{t_{z}}{t}\frac{\int\limits_{-\pi\pi \pi}^{\pi}\frac{\pi}{\int\limits_{-\pi\pi}^{\pi}}[\frac{\cos q_{z}\,dq_{x}\,dq_ {y}\,dq_{z}}{[E_{0}]-4t\,(\cos q_{x}+\cos q_{y})-4t_{z}\,\cos q_{z})]^{2}}}{ \int\limits_{-\pi\pi\pi}^{\pi}\frac{\cos q_{x}\,dq_{y}\,dq_{z}}{[E_{0}]-4t\, (\cos q_{x}+\cos q_{y})-4t_{z}\,\cos q_{z})]^{2}}}\,. \tag{82}\]
The mass ratio is shown in Fig. 5(b). For a single particle, \(m_{x}/m_{z}=t_{z}/t\), and the graph would be a straight line. Pairing _enhances_ mass anisotropy which approaches \((t_{z}/t)^{2}\) in the strong coupling limit. For intermediate \(|U|\), the mass anisotropy lies between \((t_{z}/t)\) and \((t_{z}/t)^{2}\).
Another interesting property is the ratio of effective radii. It follows from Eq. (81) that
\[\left(\frac{r_{p\pi}^{*}}{r_{p\pi}^{*}}\right)^{2}=\frac{t_{z}^{2}}{t^{2}} \frac{-\frac{\pi\pi}{\int\limits_{-\pi\pi}^{\pi}}\frac{\sin^{2}q_{z}\,dq_{x}\, dq_{y}\,dq_{z}}{[E_{0}]-4t\,(\cos q_{z}+\cos q_{y})-4t_{z}\,\cos q_{z}]^{4}}}{ \int\limits_{-\pi\pi\pi}^{\pi}\frac{\sin^{2}q_{z}\,dq_{x}\,dq_{y}\,dq_{z}}{[E_ {0}]-4t\,(\cos q_{z}+\cos q_{y})-4t_{z}\,\cos q_{z}]^{4}}}\,. \tag{83}\]
The pair size anisotropy is shown in Fig. 5(c). Near binding threshold, \(r_{p\pi}^{*}/r_{p\pi}^{*}=\sqrt{t_{z}/t}\) whereas in general it is confined between \(\sqrt{t_{z}/t}\) and \((t_{z}/t)\).
### 3D. Body-centered cubic (BCC) lattice
In a BCC lattice with nearest-neighbor hopping, \(\varepsilon_{\mathbf{k}}=-8t\cos\frac{k_{x}}{2}\cos\frac{k_{y}}{2}\cos\frac{k _{z}}{2}\). (In this and the following sections, we set the cube length, \(a=1\).) First, we consider the ground state, \(\mathbf{P}=0\). In calculating \(M_{\mathbf{00}}^{\rm bcc}\), integration over the BCC Brillouin zone can be replaced with one fourth of the integral over a cube with side length \(4\pi\). Changing momentum variables produces
\[M_{\mathbf{00}}^{\rm bcc}=\frac{1}{(2\pi)^{3}}\!\!\int\limits_{-\pi\pi\cdot \pi}^{\pi}\!\!\!\int\limits_{-\pi\cdot\pi}^{\pi}\!\!\!\frac{dq_{x}\,dq_{y}\,dq_ {z}}{|E_{0}|-16t\,\cos q_{x}\cos q_{y}\cos q_{z}}\,. \tag{84}\]
The integral here is recognized as one of the generalized Watson integrals that was first evaluated by Maradudin[103; 104] and later studied by other authors.[105; 106; 107] Application of those results leads to the energy equation
\[\frac{|U|}{|E_{0}|}\left(\frac{2}{\pi}\right)^{2}\mathbf{K}^{2}\left[\sqrt{ \frac{1}{2}-\frac{1}{2}\sqrt{1-\left(\frac{16t}{|E_{0}|}\right)^{2}}}\right]= 1\,. \tag{85}\]
The binding threshold is found by setting \(E_{0}=-16t\), which yields
\[|U_{\rm cr}^{\rm bcc}|=\frac{4\pi^{2}\,t}{\left[K\!\left(\frac{1}{\sqrt{2}} \right)\right]^{2}}=\frac{64\pi^{3}\,t}{\left[\Gamma\!\left(\frac{1}{4}\right) \right]^{4}}=(11.484320\ldots)\,t\,. \tag{86}\]
Expanding Eq. (85) at \(E_{0}\approx-16t\) yields a quadratic dependence of the binding energy near the threshold
\[E_{0}(|U|\approx|U_{\rm cr}^{\rm bcc}|)=-16\,t-\frac{\left[\Gamma\!\left(\frac {1}{4}\right)\right]^{16}}{2^{15}\pi^{10}}\frac{(|U|-|U_{\rm cr}^{\rm bcc}|)^{ 2}}{t}\,. \tag{87}\]
The numerical coefficient at the quadratic term is \(=0.290501\ldots\) Formula (87) is derived in Appendix D. Finally, expanding Eq. (85) at large \(E_{0}\), one obtains
\[E_{0}(|U|\rightarrow\infty)=-|U|-\frac{32\,t^{2}}{|U|}+o\!\left(\frac{t^{2}}{| U|}\right), \tag{88}\]
which is consistent with strong-coupling perturbation theory.
Nonzero pair momenta are discussed next. Cases with one nonzero component, for example \(\mathbf{P}=(P_{x},0,0)\), can be reduced to the ground state case. The energy denominator in \(M_{\mathbf{00}}^{\rm bcc}\) contains
\[\varepsilon_{\mathbf{q}}+\varepsilon_{\mathbf{P}-\mathbf{q}}=-16\,t\cos\frac{P_ {x}}{4}\cos\left(\frac{P_{x}}{4}-q_{x}\right)\cos q_{y}\cos q_{z}\,. \tag{89}\]
After shifting the integration variable \(q_{x}\), \(M_{\mathbf{00}}^{\rm bcc}\) is reduced to the ground state expression, Eq. (84), where \(t\) is replaced with \(t\cos\left(P_{x}/4\right)\). Without re-deriving all the results given above, let us just mention a generalization of the pair binding condition:
\[|U_{\rm cr}^{\rm bcc}(P_{x},0,0)|=(11.484320\ldots)\,t\cos\left(P_{x}/4\right). \tag{90}\]
This expression should be compared with simple the cubic lattice result, Eq. (72).
### 3D. Face-centered cubic (FCC) lattice
Similar to the FCC case, integration over the FCC Brillouin zone can be replaced by integration over the cube with side length \((4\pi)\), see Appendix B. In the ground state, \(\varepsilon_{\mathbf{q}}=\varepsilon_{\mathbf{P}-\mathbf{q}}\), and \(M_{\mathbf{00}}^{\rm fcc}\) reduces to
\[M_{\mathbf{00}}^{\rm fcc}=\frac{1}{8t\pi^{3}}\times\] \[\int\limits_{0\,0\,0}^{\pi\,\pi\,\pi}\!\!\!\int\limits_{0\,0}^{\pi \,\pi\,\pi}\!\!\!\!\frac{dq_{x}\,dq_{y}\,dq_{z}}{\frac{[E_{0}]}{\rm St}-\cos q_{x}\!\cos q_{y}-\cos q_{y}\!\cos q_{z}-\cos q_{z}\!\cos q _{x}}\,. \tag{91}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Lattice & \(z\) & \(|U_{\rm cr}|/t\) & \(|U_{\rm cr}|/(zt)\) \\ \hline \hline Simple cubic & 6 & 7.913552 \(\ldots\) & 1.318925 \(\ldots\) \\ \hline Body-centered cubic & 8 & 11.484320 \(\ldots\) & 1.435540 \(\ldots\) \\ \hline Face-centered cubic & 12 & 17.848362 \(\ldots\) & 1.487363 \(\ldots\) \\ \hline \end{tabular}
\end{table}
Table 1: Pair binding thresholds for the attractive Hubbard model in the three cubic lattices with isotropic nearest-neighbor hopping. \(z\) is the number of nearest neighbors. \(\mathbf{P}=0\).
This triple integral was first evaluated by Iwata [108] and later in a different form by Joyce. [85] Joyce's result reads
\[M_{\mathbf{00}}^{\text{fcc}}(E_{0}<-24t)=\frac{1}{|E_{0}|}\frac{(1+3\zeta^{2})^{ 2}}{(1-\zeta)^{3}(1+3\zeta)}\left[\frac{2}{\pi}\mathbf{K}(\kappa)\right]^{2}, \tag{92}\]
\[\kappa^{2}(\zeta)=\frac{16\,\zeta^{3}}{(1-\zeta)^{3}(1+3\zeta)}\,, \tag{93}\]
\[\zeta=\zeta(w)=\frac{-1+\sqrt{1+\frac{w}{3}}}{1+\sqrt{1-w}}\,,\ \ \ \ \ w=\frac{24\,t}{|E_{0}|}\,. \tag{94}\]
Pair formation takes place at \(E_{0}=-24\,t\) or \(w=1\). The binding threshold is
\[|U_{\text{cr}}^{\text{fcc}}(\mathbf{0})|=\frac{8\pi^{2}\,t}{\sqrt{3}\left[ \mathbf{K}\!\left(\frac{\sqrt{3}-1}{2\sqrt{2}}\right)\right]^{2}}=(17.848362 \ldots)\,t\;. \tag{95}\]
Table 1 summarizes threshold values for the three cubic lattices.
## IV One-dimensional UV model
We now transition to \(UV\) models with on-site repulsion \(U\) and _nearest-neighbor_ attraction \(V_{\mathbf{b}}\equiv-|V|\). A general feature of these models is the existence of multiple bound states, whose number increases with lattice dimensionality. Because of the complexity of general secular equation, Eq. (17), it is advantageous to utilize the (anti)symmetrized formalism developed in Section II.4. We begin with the one-dimensional \(UV\) model. [80]
### Singlet states
The symmetrized set of neighbor vectors consists of two elements: \(\{\mathbf{b}_{+}\}=\{(0),(1)\}\). A symmetrized Schrodinger equation, Eq. (23), reads:
\[(E_{s}-\varepsilon_{k_{1}}-\varepsilon_{k_{2}})\,\phi_{k_{1}k_{2} }^{+}=\] \[U\frac{1}{N}\sum_{q}\phi_{q,\,k_{1}+k_{2}-q}^{+}\] \[-|V|\frac{1}{N}\sum_{q}\phi_{q,\,k_{1}+k_{2}-q}^{+}e^{iq}\left(e ^{-ik_{1}}+e^{-ik_{2}}\right). \tag{96}\]
Here \(\varepsilon_{k}=-2t\cos k\), and subscript in \(E_{s}\) indicates "spin-singlet". Next, introduce two auxiliary functions:
\[\Phi_{0}^{+}(P) = \frac{1}{N}\sum_{q}\phi_{q,\,k_{1}+k_{2}-q}^{+}\,, \tag{97}\] \[\Phi_{1}^{+}(P) = \frac{1}{N}\sum_{q}\phi_{q,\,k_{1}+k_{2}-q}^{+}\,e^{iq}\,, \tag{98}\]
so that
\[\phi_{k_{1}k_{2}}^{+}=\frac{U\Phi_{0}^{+}(P)-|V|\Phi_{1}^{+}(P)\left(e^{-ik_{1 }}+e^{-ik_{2}}\right)}{E_{s}-\varepsilon_{k_{1}}-\varepsilon_{k_{2}}}\,. \tag{99}\]
Substituting Eq. (99) back into the definitions, Eqs. (97) and (98), one obtains:
\[\Phi_{0}^{+}(P) = U\Phi_{0}^{+}(P)\left(\frac{1}{N}\sum_{q}\frac{1}{E_{s}- \varepsilon_{q}-\varepsilon_{P-q}}\right)-|V|\Phi_{1}^{+}(P)\left(\frac{1}{N} \sum_{q}\frac{e^{-iq}+e^{-i(P-q)}}{E_{s}-\varepsilon_{q}-\varepsilon_{P-q}} \right), \tag{100}\] \[\Phi_{1}^{+}(P) = U\Phi_{0}^{+}(P)\left(\frac{1}{N}\sum_{q}\frac{e^{iq}}{E_{s}- \varepsilon_{q}-\varepsilon_{P-q}}\right)-|V|\Phi_{1}^{+}(P)\left(\frac{1}{N} \sum_{q}\frac{e^{iq}(e^{-iq}+e^{-i(P-q)})}{E_{s}-\varepsilon_{q}-\varepsilon_ {P-q}}\right). \tag{101}\]
Next, a change of variables \(q^{\prime}=q-\frac{P}{2}\) under the integrals results in a \((2\times 2)\) matrix equation
\[\Phi_{0}^{+} = -UM_{0}\cdot\Phi_{0}^{+}+2|V|\,e^{-i\frac{P}{2}}M_{1}\cdot\Phi_{1 }^{+}\,, \tag{102}\] \[\Phi_{1}^{+} = -U\,e^{i\frac{P}{2}}M_{1}\cdot\Phi_{0}^{+}+|V|(M_{0}+M_{2})\cdot \Phi_{1}^{+}\,, \tag{103}\]
where
\[M_{n} = \frac{1}{N}\sum_{q}\frac{\cos nq}{|E_{s}|-4t\cos\left(P/2\right) \cos q} \tag{104}\] \[= \frac{1}{\sqrt{|E_{s}|^{2}-\alpha^{2}}}\left[\frac{\sqrt{|E_{s}|^{ 2}-\alpha^{2}}-|E_{s}|}{-\alpha}\right]^{n},\]
and \(\alpha\equiv 4t\cos\left(P/2\right)\). Note that
\[M_{1} = \frac{1}{\alpha}\left(|E_{s}|M_{0}-1\right)\,, \tag{105}\] \[M_{2} = \frac{2|E_{s}|}{\alpha}M_{1}-M_{0}\,. \tag{106}\]
As a result, everything can be expressed via the basic integral \(M_{0}\). The bound state's energy is determined by the consistency condition of Eqs. (102) and (103). Expanding the determinant, one obtains
\[(UM_{0}+1)+\frac{2}{\alpha^{2}}|V|(|E_{s}|+U)(1-|E_{s}|M_{0})=0\;. \tag{107}\]
This form will be useful later in comparing with similar equations in higher dimensions. Substitution of the explicit form of \(M_{0}\) yields the final expression:
\[\Bigg{[}U+\sqrt{|E_{s}|^{2}\!-\!16t^{2}\cos\!\!\frac{P}{2}}\Bigg{]} \Bigg{[}|E_{s}|+\sqrt{|E_{s}|^{2}\!-\!16t^{2}\cos\!\!\frac{P}{2}}\Bigg{]}\] \[\quad-2\,|V|\,(U+|E_{s}|)=0\,. \tag{108}\]
This is a cubic equation for \(E_{s}(P)\), which does not have a simple-form analytical solution. Only at BZ boundary the equation simplifies to \(E_{s}(\pm\pi)=-|V|\). At strong coupling, \(U,|V|,|E_{s}|\gg t\), Eq. (108) yields for the ground state
\[E_{s}(P=0)=-|V|-\frac{4t^{2}}{|V|}-\frac{8t^{2}}{U+|V|}+o(t^{2}/|V|)\,, \tag{109}\]
which is consistent with second-order perturbation theory.
In order to obtain the binding threshold, set \(E_{s}\) equal to the lowest energy of two free carriers, \(-4t\cos{(P/2)}\), in Eq. (108). It results in
\[|V^{s}_{\rm cr}|=\frac{2Ut\cos{(P/2)}}{U+4t\cos{(P/2)}}\,. \tag{110}\]
This formula possesses several interesting properties. First of all, \(|V^{s}_{\rm cr}|\) is nonzero despite the model being one-dimensional. Here, the attraction competes not only with kinetic energy but also with repulsion \(U\), which leads to a nonzero threshold. At weak repulsion, \(U<t\), one has \(|V^{s}_{\rm cr}|\approx U/2\) which can be understood from the Born approximation: there are two attractive sites for one repulsive site, hence half as strong \(V\) is needed to overcome \(U\). In the opposite limit of strong repulsion, \(|V^{s}_{\rm cr}|\) approaches a finite limit. At large \(U\), the on-site wave function amplitude, \(\Psi(m,m)\to 0\), which becomes a boundary condition for the rest of \(\Psi\). Once attraction is strong enough to produce a bound state in the presence of this zero, further increase of \(U\) has no effect. Finally, \(|V^{s}_{\rm cr}|\) is a strong function of pair momentum. Like in 3D Hubbard models, the pair becomes _more_ stable at large \(P\). At BZ boundary, the pair is always stable for any \(U\), however large, and any \(|V|\), however small.
A typical pair dispersion for \(|V|<|V^{s}_{\rm cr}|\) is shown in Fig. 6.
Consider weakly bound pairs with \(P=0\). Expanding the exact dispersion relation, Eq. (108), for \(E_{s}\approx-4t\) and utilizing Eq. (110), one obtains
\[E_{s}(P=0)\approx-4t-\frac{(|V|-|V^{s}_{\rm cr,0}|)^{2}}{2t}\,, \tag{111}\]
where \(|V^{s}_{\rm cr,0}|=2Ut/(U+4t)\). Thus the pair energy varies quadratically near the threshold.
Expanding Eq. (108) for small \(P\), one obtains, after transformations, the singlet's effective mass
\[\frac{m^{*}_{ps}}{m_{0}}\!=\!\frac{(U\!-\!2E_{s}\!-\!2|V|)\sqrt{E^{2}_{s}\!- \!(4t)^{2}}\!+\!2E^{2}_{s}\!-\!UE_{s}\!-\!16t^{2}}{(2t)\left[2\sqrt{E^{2}_{s}- (4t)^{2}}+U-E_{s}\right]}\,. \tag{112}\]
At threshold, \(E_{s}=-4t\), and the last formula yields \(m^{*}_{ps}/m_{0}=2\), as expected. At strong attraction, the mass generally grows linearly with coupling, \(m^{*}_{ps}/m_{0}\propto|V|/t\), with the slope depending on \(U\) and the \(U/V\) ratio. As an example, Fig. 7 shows the pair mass for \(U=20\,t\).
Figure 6: Singlet-pair dispersion \(E_{s}\) in the 1D \(UV\) model computed from Eq. (108). \(U=15\,t\), \(|V|=1.3\,t\). Note that \(|V|\) is below the \(P=0\) binding threshold, \(2Ut/(U+4t)\). The shaded area is the continuum of two-particle scattering states. The circles mark threshold momenta, \(P=\pm 2\arccos{[UV/2/(U-2V)]}\), see Eq. (110). The triplet pair energy \(E_{t}\) from Eq. (130) is very close to \(E_{s}\) and therefore not shown in the figure.
Figure 7: Singlet pair mass in the 1D \(UV\) model computed from Eqs. (108) and (112). The blue trace corresponds to a fixed \(U=20\,t\). The mass grows linearly with \(|V|\). The blue circle marks threshold value, Eq. (110), for \(P=0\). The black trace corresponds to _light pairs_ regime \(U=-|V|\). The mass stays of order \(m_{0}\) for all \(V\). At strong coupling \(|V|\to\infty\), the mass approaches the theoretical limit, \(m^{*}_{ps}=\sqrt{8}\,m_{0}\), see Eq. (120).
### Light bound pairs
In this section, we introduce the topic of _light bound pairs_ that will be discussed later in several places of this paper. We use this term to describe a situation when pairs are strongly bound, \(E\to-\infty\), but at the same time remain mobile with an effective mass of order \(m_{0}\). It occurs when a pair can move through the lattice without changing its energy, i.e., without breaking the most attractive bond. There are two primary reasons why it can happen. First, because of geometry. In some lattices such as triangular and FCC, one member of a pair can hop to another site while still remaining a nearest neighbor to the second member, which keeps the configuration energy unchanged at \(-|V|\). This is followed by a similar hop by the second member. The two particles hop in turns in a "crab-like" fashion, which results in overall movement of the pair through the system. The second origin of light pairs is a flat segment in the attractive part of the inter-particle potential. In this case, the particles can also move in alternating order without changing their energy.
In this section, we use the relative simplicity of the 1D \(UV\) model to illustrate the second mechanism. To this end, we set \(U=-|V|\) in the formulas of Sec. IV.1. Additionally, the strong-coupling limit, \(U=V\to-\infty\), can be treated analytically. Consider Fig. 8(b). It is sufficient to include only two types of spin-singlet configurations:
\[A_{m} =|\uparrow\downarrow\rangle_{m}\:, \tag{113}\] \[B_{m} =\frac{1}{\sqrt{2}}\left(|\uparrow\rangle_{m}|\downarrow\rangle _{m+1}+|\downarrow\rangle_{m}|\uparrow\rangle_{m+1}\right)\:. \tag{114}\]
Hamiltonian action within this basis is
\[\hat{H}A_{m} =-\sqrt{2}tB_{m}-\sqrt{2}tB_{m-1}\:, \tag{115}\] \[\hat{H}B_{m} =-\sqrt{2}tA_{m}-\sqrt{2}tA_{m+1}\:. \tag{116}\]
The Schrodinger equation in momentum space is
\[\tilde{E}A_{P} =-\sqrt{2}t\left(1+e^{-iPa}\right)B_{P}\:, \tag{117}\] \[\tilde{E}B_{P} =-\sqrt{2}t\left(1+e^{iPa}\right)A_{P}\:, \tag{118}\]
where \(\tilde{E}\) is pair energy counted from \(-|V|\) and \(a\) is the lattice constant. Band dispersion is
\[\tilde{E}_{1,2}(P)=\pm 2\sqrt{2}t\cos\frac{Pa}{2}\:, \tag{119}\]
which corresponds to an effective mass
\[m^{*}_{ps}=\sqrt{2}\,\frac{\hbar^{2}}{ta^{2}}=2\sqrt{2}\,m_{0}\:. \tag{120}\]
Thus, the pair mass remains of the order of free-particle mass \(m_{0}\) even in the limit of infinitely strong attraction. Figure 7 shows a numerical solution of Eqs. (108) and (112) for the resonant potential \(U=-|V|\). Indeed, the pair mass never exceeds the strong-coupling limit \(\sqrt{8}\,m_{0}\).
One might think of an attractive interaction with \(U=-|V|\) as exotic, but it can potentially be realized in cold gases where both \(U\) and \(V\) can be independently controlled. In crystalline solids, one can envision more realistic potentials comprising a strong repulsive core and a long-range attractive tail. Such a potential will necessarily have a minimum at a finite separation between particles. If the minimum is wide compared with the interatomic distance, then there will be two separations with equal attractions with high probability. Such a situation is illustrated in Fig. 8(c), where attraction on the _third_ and _fourth_ nearest neighbors are assumed equal, \(V_{3}=V_{4}\). (Interestingly, other parts of the potential do not change the argument given below because if particles are allowed to access configurations outside of the \(-|V|\) basis, the pair mass will only decrease!) A proper ground state basis in this example is
\[A_{m}=|\bullet\rangle_{m}|\bullet\rangle_{m+3}\:;\hskip 14.226378ptB_{m}=| \bullet\rangle_{m}|\bullet\rangle_{m+4}\:. \tag{121}\]
(Since the particles cannot really exchange, there is no need to consider spin degrees of freedom. Singlet and triplet pairs will have the same mass.) The Schrodinger equation reads
\[\tilde{E}A_{P} =-t\left(1+e^{-iPa}\right)B_{P}\:, \tag{122}\] \[\tilde{E}B_{P} =-t\left(1+e^{iPa}\right)A_{P}\:, \tag{123}\]
which yields \(\tilde{E}_{1,2}(P)=\pm 2t\cos\frac{Pa}{2}\) and
\[m^{*}=\frac{2\hbar^{2}}{ta^{2}}=4\,m_{0}\:. \tag{124}\]
Thus, the bound pair is _no heavier_ than just four free particle masses. Note that this conclusion does not depend on the separation distance at which the flat section of the potential occurs.
Figure 8: Illustration of the light bound pair mechanism. (a) Conventional pair movement. The intermediate configuration has a larger energy, which results in a mass growing linearly with \(|V|\). (b) In the resonant case, \(U=-|V|\), there are intermediate configurations with the same energy as the initial and starting configurations. (c) An attractive potential with a flat section at a nonzero separation between two particles. In this case, \(V_{3}=V_{4}\).
### Triplet states
The antisymmetrized set of vectors consists of just one element \(\{\mathbf{b}_{-}\}=\{(1)\}\) and there is one basis function \(\Phi_{1}^{-}(P)\). An antisymmetrized Schrodinger equation, Eq. (34), reads:
\[(E_{t}-\varepsilon_{k_{1}}-\varepsilon_{k_{2}})\,\phi_{k_{1}k_{2}}^ {-}=\] \[-|V|\frac{1}{N}\sum_{q}\phi_{q,\,k_{1}+k_{2}-q}^{-}e^{iq}\left(e^ {-ik_{1}}-e^{-ik_{2}}\right). \tag{125}\]
In terms of the auxiliary function
\[\Phi_{1}^{-}(P)=\frac{1}{N}\sum_{q}\phi_{q,\,k_{1}+k_{2}-q}^{-}\,e^{iq}\;, \tag{126}\]
the wave function is expressed as
\[\phi_{k_{1}k_{2}}^{-}=\frac{-|V|\Phi_{1}^{-}(P)\left(e^{-ik_{1}}-e^{-ik_{2}} \right)}{E-\varepsilon_{k_{1}}-\varepsilon_{k_{2}}}\;. \tag{127}\]
Substituting Eq. (127) back in the definition, Eq. (126), one obtains:
\[\Phi_{1}^{-}(P)=-|V|\Phi_{1}^{-}(P)\left(\frac{1}{N}\sum_{q}\frac{e^{iq}(e^{- iq}-e^{-i(P-q)})}{E-\varepsilon_{q}-\varepsilon_{P-q}}\right)\,. \tag{128}\]
Changing variables \(q^{\prime}=q-\frac{P}{2}\) in the last equation yields
\[\left\{1-|V|\left(M_{0}-M_{2}\right)\right\}\Phi_{1}^{-}=0\;. \tag{129}\]
Note that it is independent of \(U\), as expected for a triplet pair. Direct calculation results in
\[E_{t}(P)=-|V|-\frac{4t^{2}\cos^{2}\frac{P}{2}}{|V|}\,, \tag{130}\]
for the triplet energy, and
\[\frac{m_{pt}^{*}}{m_{0}}=\frac{|V|}{t}\,, \tag{131}\]
for the triplet effective mass, where \(m_{0}=\hbar^{2}/(2ta^{2})\). The triplet pair is stable when
\[|V|>|V_{\rm cr}^{t}|=2t\,\cos\frac{P}{2}\;. \tag{132}\]
Notice that in the \(U\to\infty\) limit, \(|V_{\rm cr}^{s}|=|V_{\rm cr}^{t}|\) and \(E_{s}=E_{t}\). The phase diagram of the 1D \(UV\) model at \(P=0\) is shown in Fig. 9(a).
## V UV model on the square lattice
Two-dimensional lattice models at low carrier density have been popular in the studies of HTSC because most high-\(T_{c}\) superconductors including the copper oxides are highly anisotropic. According to this point of view, superconductivity in cuprates is essentially two-dimensional. The (repulsive) 2D Hubbard model[109] and its derivative, the 2D \(t\)-\(J\) model,[59] were both put forward as capturing the essential physics.[110] Although this simple picture is being increasingly challenged,[111; 112; 113; 114] pure 2D models possess rich physics and remain popular in the fields of HTSC[115] and cold gases.[63] For the purposes of this review, one should mention that the \(t\)-\(J\) model "in the hole-rich regime"[33; 60; 61; 62] bears similarities with the \(UV\) model studied here and many of the results derived later in this section apply equally to both models.
### Singlet states. \(\Gamma\)-point
The symmetrized set of neighbor vectors consists of three elements: \(\{\mathbf{b}_{+}\}=\{(0,0),(1,0),(0,1)\}\). In writing down the \((3\times 3)\) system, Eqs. (27) and (28), it is convenient to shift the inner variables in \(M^{+}\): \(q_{x}^{\prime}=q_{x}-\frac{P_{x}}{2}\) and \(q_{y}^{\prime}=q_{y}-\frac{P_{y}}{2}\). That leads to a new set of functions: \(\tilde{\Phi}_{0}^{+}=\Phi_{00}^{+}\), \(\tilde{\Phi}_{x}^{+}=e^{-i(P_{x}/2)}\Phi_{10}^{+}\), and \(\tilde{\Phi}_{y}^{+}=e^{-i(P_{y}/2)}\Phi_{01}^{+}\). In terms of the new set, the consistency condition reads
\[\left|\begin{array}{ccc}1+UM_{00}&-2|V|M_{10}&-2|V|M_{01}\\ UM_{10}&1-|V|(M_{20}+M_{00})&-2|V|M_{11}\\ UM_{01}&-2|V|M_{11}&1-|V|(M_{02}+M_{00})\end{array}\right|\] \[=0\;, \tag{133}\]
where
\[M_{nm}=\int\limits_{-\pi-\pi}^{\pi}\!\!\!\!\!\int\limits_{-\pi}^{\pi}\frac{dq_ {x}\,dq_{y}}{(2\pi)^{2}}\frac{\cos nq_{x}\cos mq_{y}}{|E|-\alpha\cos q_{x}- \beta\cos q_{y}}\,, \tag{134}\]
\(\alpha=4t\cos\left(P_{x}/2\right)\), and \(\beta=4t\cos\left(P_{y}/2\right)\). \(M_{00}\) was given in Eq. (57). Other matrix elements in Eq. (133) can also be expressed via complete elliptic integrals, see Appendixes A.1 and A.5. The double integrals can also be computed numerically.
At the \(\Gamma\) point, \(P_{x}=P_{y}=0\) and \(\alpha=\beta=4t\). In this case, \(M_{10}=M_{01}\), \(M_{20}=M_{02}\), and Eq. (133) acquires additional symmetry. Introducing a new basis \(\Phi_{0}^{+}\)
Figure 9: Phase diagrams of two particles in the 1D and 2D square \(UV\) models at \(\mathbf{P}=0\).
\(\Phi_{s}^{+}=\frac{1}{2}(\Phi_{\bf x}^{+}+\Phi_{\bf y}^{+})\), and \(\Phi_{d}^{+}=\frac{1}{2}(\Phi_{\bf x}^{+}-\Phi_{\bf y}^{+})\), the equation splits into \(s\)-symmetric and \(d\)-symmetric sectors. The \(s\)-sector involves functions \(\Phi_{\bf 0}^{+}\) and \(\Phi_{s}^{+}\) and its consistency condition reads
\[\left|\begin{array}{cc}1+UM_{00}&-4|V|M_{10}\\ UM_{10}&1-|V|(M_{20}+M_{00}+2M_{11})\end{array}\right|=0\:. \tag{135}\]
The \(d\)-sector equation is obtained by subtracting the last two lines of Eq. (133). It involves only one function \(\Phi_{d}^{+}\) and does not include \(U\):
\[\left\{1-|V|(M_{20}+M_{00}-2M_{11})\right\}\Phi_{d}^{+}=0\:. \tag{136}\]
We begin analysis with the \(s\)-symmetrical ground state described by Eq. (135). First, we note that the combination \(M_{20}+M_{00}+2M_{11}=\frac{2|E|}{\alpha}M_{10}\) can be expressed via \(M_{10}\), and the latter can be expressed via \(M_{00}\) as \(M_{10}=\frac{1}{2\alpha}(|E|M_{00}-1)\). Thus, all the matrix elements in Eq. (135) are expressible via the base integral \(M_{00}\). Expanding the determinant, one obtains
\[(UM_{00}+1)+\frac{1}{\alpha^{2}}|V|(|E_{s}|+U)(1-|E_{s}|M_{00})=0\:, \tag{137}\]
where
\[M_{00}=\frac{2}{\pi|E_{s}|}\,{\bf K}\left(\frac{2\alpha}{|E_{s}|}\right); \hskip 14.226378pt\alpha=4t\:. \tag{138}\]
Equation (137) determines the energy of \(s\)-states in the \(\Gamma\) point. Depending on the values of \(U\) and \(V\), there may be one, two, or no bound states. Equation (137) should be compared with its 1D counterpart, Eq. (107). The former has a factor 1 in the second term while the latter has a factor 2. Otherwise, the two equations have similar structures.
Let us determine the pairing threshold for a positive \(U\). To this end, set \(E_{s}=-8t-0\) in Eq. (137). Then \(M_{00}\) logarithmically diverges. This yields a critical coupling strength:
\[|V_{\rm cr}^{s}|=\frac{2Ut}{U+8t}\:. \tag{139}\]
This line separates the regions of "no pairs" and "s-pairs" in the 2D \(UV\) phase diagram, see Fig. 9(b). Using the asymptotic behavior of the elliptic integral
\[{\bf K}\left(\frac{8t}{8t+\Delta}\right)\simeq\frac{1}{2}\log\frac{64\,t}{ \Delta}\,;\hskip 14.226378pt\Delta\ll t\:, \tag{140}\]
one obtains the binding energy near the threshold:
\[\Delta=(64e^{-\pi})\,t\,\exp\left(-\frac{2\pi t}{|V|-|V_{\rm cr}^{s}|}\right). \tag{141}\]
Note that the exponent is four times less than in the corresponding expression in the attractive Hubbard model, Eq. (60). This is because the \(UV\) model has four attractive sites instead of one. In the \(U\to\infty\) limit, \(|V_{\rm cr}^{s}|\to 2t\), and the general expression simplifies to
\[\Delta(U=\infty)=64\,t\,\exp\left(-\frac{\pi|V|}{|V|-2t}\right). \tag{142}\]
In this form, the binding energy was given in Ref. [62].
Turning now to the \(d\)-symmetric state, Eq. (136), one observes that the combination \(M_{20}+M_{00}-2M_{11}\) converges in the limit \(E\to-8t\). Utilizing explicit expressions in Appendix A.3, one derives the \(d\)-pairing threshold
\[|V_{\rm cr}^{d}|=\frac{2\pi t}{4-\pi}=(7.319584\ldots)\,t\:. \tag{143}\]
It is independent of \(U\), as expected for a \(d\)-symmetric wave function. General expressions for \(M_{00}\), \(M_{11}\), and \(M_{20}\) at arbitrary \(E\) are given in Appendix A.3. Using those, Eq. (136) defines the \(d\)-state energy as a function of \(V\).
### Singlet states. Arbitrary momentum \({\bf P}\)
Separation of the singlet dispersion relation, Eq. (133), into \(s\) and \(d\) sectors is also possible on the BZ diagonal. This is because at \(P_{x}=P_{y}\), the relations \(M_{10}=M_{01}\) and \(M_{20}=M_{02}\) continue to be valid. Transformations described in Sec. V.1 still apply, leading to the final dispersion relations, Eqs. (137) and (136). The only difference is the modified expression for \(\alpha=4t\cos{(P_{x}/2)}\). This simple dependence on \(P\) along the BZ diagonal can be used, for example, to extract the effective mass separately for \(s\)- and \(d\)-symmetrical pairs. Another consequence is a simple modification of the binding thresholds:
\[|V_{\rm cr}^{s}(P_{x}=P_{y})|=\frac{2U\,t\cos{\frac{P_{x}}{2}}}{U+8\,t\cos{ \frac{P_{x}}{2}}}\:. \tag{144}\]
\[|V_{\rm cr}^{d}(P_{x}=P_{y})|=\frac{2\pi\,t}{4-\pi}\,\cos{\frac{P_{x}}{2}}\:. \tag{145}\]
Like in the 1D \(UV\) model, see Eq. (110), these expressions indicate that the pairing thresholds _decrease_ at large lattice momenta. The energy of bound states still grow with \({\bf P}\) but the lowest energy of two free particles _at the same_\({\bf P}\) grows even faster. As a result, a bound pair may form at a finite \({\bf P}\) even if it is unstable at \({\bf P}=0\). This physics is much richer in 2D than in 1D. Below, we investigate it in some detail. [80]
In order to determine the binding threshold at an arbitrary \({\bf P}\), energy \(E\) must be set equal to the minimal energy of two free particles \(E_{11}=-\alpha-\beta\). Upon substitution \(E=E_{11}\), all \(M_{nm}\) in Eq. (133) diverge logarithmically. To regularize the determinant, express each \(M_{nm}\) as a sum of \(M_{00}\) and remaining difference \(L_{nm}\):
\[M_{nm}=M_{00}+(M_{nm}-M_{00})\equiv M_{00}+L_{nm}\:. \tag{146}\]
Note that all \(L_{nm}\) converge to finite values in the \(E=E_{11}\) limit. Explicit analytical expressions are given in Appendix A.7.
Insertion of Eq. (146) into Eq. (133) and expansion of the determinant leads to a lengthy expression that is a third-order polynomial in \(M_{00}\). However, the
and \(M_{00}^{2}\) terms cancel identically, and the determinant assumes the form
\[A\cdot M_{00}+B=0\:, \tag{147}\]
where
\[A = U|V|^{2}\left[(L_{20}-4L_{10})(L_{02}-4L_{01})\right. \tag{148}\] \[\left.-4(L_{10}+L_{01}-L_{11})^{2}\right]\] \[+U|V|\left[(4L_{01}-L_{02})+(4L_{10}-L_{20})\right]+U\] \[+|V|^{2}\left[2(L_{02}+L_{20})-8L_{11}\right]-4|V|\:,\]
and the specific form of \(B\) is unimportant. In the limit \(E\to E_{11}\), both \(A\) and \(B\) remain finite whereas \(M_{00}\) diverges. Thus, the binding condition becomes
\[A=0\:. \tag{149}\]
Equations (148) and (149) determine the binding threshold sought. For a given \(\mathbf{P}\), they define a function \(|V_{\rm cr}|(U)\). Alternatively, for some fixed \(U\) and \(|V|\), the threshold defines a line inside the BZ that separates unbound states at small \(\mathbf{P}\) from bound pairs at large \(\mathbf{P}\). An example of such boundary lines is shown in Fig. 10. Properties of these lines were studied in Ref. [80].
The existence of such lines poses an interesting question about pair formation at a finite but low particle density. Figure 11 shows the same \(s\) paring line as Fig. 10, together with the free-dispersion Fermi line corresponding to a Fermi energy \(E_{F}=-2.7\,t\) and filling factor \(0.113\). Consider states within one of the four outlined sectors, for example the states with \(k_{x}=0\) and \(k_{y}=k_{F}\) (the Fermi momentum). The combined momentum of two such particles would be \(\mathbf{P}=(0,2k_{F})\) which would land _beyond_ the \(s\)-pairing line. Therefore, the particles would tend to form a bound state. Intriguingly, the states near the Fermi line close the BZ diagonals do not show the same tendency because their combined momentum lies within the pairing surface, see Fig. 11. The entire Fermi line splits into eight disconnected segments: four with pairing and four without. The segments without pairing should produce sharp features in photoemission experiments ("Fermi arcs"), while those without pairing will produce emission lines separated from the Fermi energy by a gap. Such a behavior is consistent with photoemission signatures of some cuprates superconductors.[116]
Clearly, the central question is whether this empty-lattice picture retains its qualitative features at small but finite densities. That requires a reliable many-body method that can handle large systems such as Quantum Monte Carlo[117] or low-density \(T\)-matrix approximation.[118; 119] Such a treatment is beyond the scope of the present work and is not analyzed here.
Next, we discuss full pair dispersion \(E(\mathbf{P})\) for all \(\mathbf{P}\). Due to complexity of the main dispersion relation, Eq. (133), very little can be done analytically beyond the separation of \(s\) and \(d\) energies on BZ diagonals. Some simplification occurs at BZ edges. Let us set for definiteness, \(P_{x}=\pm\pi\). Then \(M_{10}=M_{20}=M_{11}=0\) by symmetry. The remaining matrix elements, \(M_{00}\)
Figure 11: Comparison of the pairing and Fermi lines in the 2D square \(UV\) model. The solid red line is the same \(s\) pairing line drawn in Fig. 10. The hatched area are free-particle states enclosed by a Fermi line with energy \(E_{F}=-2.7\,t\). It corresponds to a filling factor of \(0.113\) and total particle density of \(n=0.226\). The dashed line is the _twice_-Fermi momentum line. The four thick-line segments mark free particle states that would tend to pair up if they were moving in an empty lattice.
Figure 10: The pairing lines in the 2D square \(UV\) model for \(U=30\,t\) and \(|V|=1.1\,t\). Bound pairs form _outside_ the respective lines. The \(s\) and \(d\) lines are solutions of Eqs. (148) and (149). The \(p_{x}\) and \(p_{y}\) lines are solutions of Eqs. (152) and (153) at \(E=-\alpha-\beta\). At these parameters, both \(p\) and \(d\) lines terminate at the Brillouin zone boundaries.[80]
\(M_{01}\), and \(M_{02}\), become one-dimensional integrals given by Eq. (104). Upon expansion, the determinant splits in two factors. One has an explicit solution
\[E_{1}(\pm\pi,P_{y})=-\sqrt{|V|^{2}+\beta^{2}}\,, \tag{150}\]
where \(\beta=4t\cos{(P_{y}/2)}\). The second factor can be brought to the following form
\[2|V|(U+|E|)(|E|-\sqrt{|E|^{2}-\beta^{2}}) -\] \[\beta^{2}(U+\sqrt{|E|^{2}-\beta^{2}}) =0\,. \tag{151}\]
This equation does not have a simple-form analytical solution.
An example of numerical solution of Eq. (133) is shown in Fig. 12. A bound pair behaves as a single particle with a fairly complex dispersion. Notice degeneracies along some high-symmetry lines.
### Triplet states
There are two antisymmetrized vectors \(\{\mathbf{b}_{-}\}=\{(1,0),\)\((0,1)\}\) and two functions \(\Phi^{-}\). Upon constructing the system, Eq. (37), the integration variable in \(M_{nm}^{-}\) is shifted as \(\mathbf{q}^{\prime}=\mathbf{q}-\frac{\mathbf{P}}{2}\). Off-diagonal terms vanish by symmetry, and the \((2\times 2)\) system splits into two separate equations:
\[\{1-|V|\,(M_{00}-M_{20})\}\,\Phi_{p_{x}}^{-} =0\,, \tag{152}\] \[\{1-|V|\,(M_{00}-M_{02})\}\,\Phi_{p_{y}}^{-} =0\,. \tag{153}\]
Note that such a decomposition takes place over the entire BZ. Since \(M_{20}\neq M_{02}\), \(p_{x}\) and \(p_{y}\) energies are different, as shown Fig. 12. Along the diagonals, \(M_{20}=M_{02}\), and the dispersion becomes double-degenerate.
Let us derive the binding condition along BZ diagonals. In the limit \(E\to E_{11}\), the difference \(M_{00}-M_{20}\) converges. Making use of Eq. (104), one obtains:
\[|V_{cr}^{p}|=\frac{2\pi}{\pi-2}\,t\cos\frac{P_{x}}{2}=(5.503876\ldots)\,t\cos \frac{P_{x}}{2}\,. \tag{154}\]
The two-particle phase diagram of the square \(UV\) model for \(\mathbf{P}=0\) is shown in Fig. 9(b).
The binding condition at arbitrary \(\mathbf{P}\) can be obtained from Eqs. (152) and (153) if analytical expressions for the \(M\) differences, Eqs. (101) and (102), are utilized. An example of \(p_{x}\) and \(p_{y}\) pairing lines is given in Fig. 10. Analytical properties of the \(p\) pairing lines were studied in Ref. [80].
Consider now triplet energy at arbitrary \(\mathbf{P}\). Again, certain simplification takes place at BZ edges. At \(P_{x}=\pm\pi\), \(M_{20}=0\), and the energy of \(p_{x}\) pair is given by Eq. (150). Thus, the spectrum at the BZ edges is always double-degenerate. It can also be seen in Fig. 12. At the \(\Gamma\)-point, the \(p\) energy can be determined from the following equation
\[\pi-2\mathbf{E}\!\left(\frac{8t}{|E_{p}|}\right)=\frac{2\pi t}{|V|}\frac{8t}{ |E_{p}|}\,, \tag{155}\]
where Eqs. (103) and (104) have been utilized. Equation (155) does not have a simple analytical solution for \(E_{p}\).
### Longer-range attractions
The square \(UV\) model is the rare case when the two-body problem was solved for interactions beyond the nearest neighbors.[37; 93; 120] In Refs. [37] and [120], next-nearest interaction and next-nearest hopping were considered. In Ref. [93], attraction was extended up to the _seventh_ nearest neighbors (with the potential depth being constant within the radius of attraction \(R\)) but the hopping was limited to the first neighbors only. Below, we provide the reasoning behind the model analyzed in Ref. [93].
In order to form a bound pair, attraction must exceed a threshold to overcome a strong repulsive core. For the nearest-neighbor attraction, the threshold is about \(2t\), see Eq. (139). In physical units, this may be quite a large number. Let us assume the effective hopping of _holes_ in the cuprates to be \(t_{h}\sim 0.1\) eV.[121; 122] Then, the attraction must be of order \(V\simeq 0.1-0.2\) eV, which is arguably quite large. At the same time, the cuprates are anisotropic polar solids with low carrier density and poorly screened electron-phonon interactions. This favors formation of bipolarons.[57; 58; 123] It also results in longer-range shallow attractive potentials within the copper-oxygen planes. Spreading attraction to many sites allows lowering the threshold on individual sites. Since it is the "total power" of the potential that matters for binding, one expects the threshold to scale ap
Figure 12: Dispersion of bound pairs in the 2D square \(UV\) model for \(U=30\,t\) and \(V=-14\,t\). The \(s\) and \(d\) energies are solutions of Eq. (133). The \(s\) dispersion along the \(\mathrm{X-M}\) line is given by Eq. (150). \(p_{x}\) and \(p_{y}\) energies are solutions of Eqs. (152) and (153), respectively. Notice \(p_{x}\), \(p_{y}\) degeneracy along the \(\mathrm{M-\Gamma}\) line and \(s\), \(p_{x}\) degeneracy along the \(\mathrm{X-M}\) line.
proximately inversely to the number of sites participating in attraction, i.e., inversely with the potential _area_. (In continuous quantum mechanics the scaling is also \(V_{\rm cr}\propto 1/R^{2}\).) This is what was confirmed by exact calculations.[93] The dependence of \(|V_{cr}|(R)\) is shown in Fig. 13. It approximately follows the \(1/R^{2}\) scaling, as expected for a continuum problem, with fluctuations around the line which reflects the discrete nature of the lattice. One can see that by the 6th or 7th nearest neighbors, the threshold falls by an order of magnitude to about \(0.2\,t\simeq 0.02\) eV, which would be easier to attain in real solids.
One can add to this argument the light pairs mechanism already discussed in Sec. IV.2. In shallow long-range attractive potentials, two or more neighbors will have equal or close attractive strengths with high probability. That will enable resonant movement of pairs without breaking attractive bonds. As a result, the pair effective mass will remain of order \(m_{0}\) even in the strong coupling limit. We illustrate this mechanism for an attractive potential extended to the second nearest neighbors with \(V_{2}=V_{1}\). We introduce the strong-coupling singlet dimer basis[120]
\[D_{i,{\bf m}}=\frac{1}{\sqrt{2}}\left(|\uparrow\rangle_{\bf m}|\downarrow \rangle_{\bf m+b_{i}}+|\downarrow\rangle_{\bf m}|\uparrow\rangle_{\bf m+b_{i} }\right)\;, \tag{156}\]
with \({\bf b}_{1}=(1,0)\), \({\bf b}_{2}=(1,1)\), \({\bf b}_{3}=(0,1)\), and \({\bf b}_{4}=(-1,1)\). Hamiltonian action within this space is given by
\[\hat{H}D_{1,{\bf m}} = -t\left(D_{2,{\bf m}}\!+\!D_{2,{\bf m}-{\bf b}_{3}}\!+\!D_{4,{\bf m }+{\bf b}_{1}}\!+\!D_{4,{\bf m}-{\bf b}_{4}}\right),\] \[\hat{H}D_{2,{\bf m}} = -t\left(D_{1,{\bf m}}\!+\!D_{1,{\bf m}+{\bf b}_{3}}\!+\!D_{3,{\bf m }}\!+\!D_{3,{\bf m}+{\bf b}_{1}}\right),\] \[\hat{H}D_{3,{\bf m}} = -t\left(D_{4,{\bf m}}\!+\!D_{4,{\bf m}+{\bf b}_{1}}\!+\!D_{2,{\bf m }}\!+\!D_{2,{\bf m}-{\bf b}_{1}}\right), \tag{157}\] \[\hat{H}D_{4,{\bf m}} = -t\left(D_{3,{\bf m}}\!+\!D_{3,{\bf m}-{\bf b}_{1}}\!+\!D_{1,{\bf m }-{\bf b}_{1}}\!+\!D_{1,{\bf m}+{\bf b}_{4}}\right).\]
Converting the corresponding Schrodinger equation to momentum space and expanding the determinant, one obtains dimer dispersion:
\[\tilde{E}_{1,2}({\bf P}) = \pm(2t)\sqrt{2+\cos\left(P_{x}a\right)+\cos\left(P_{y}a\right)}\,, \tag{158}\] \[\tilde{E}_{3,4}({\bf P}) = 0\;, \tag{159}\]
where \(\tilde{E}\) is referenced from \(-|V|\). At small \({\bf P}\), \(\tilde{E}_{1}\approx-4t+(ta^{2}P^{2})/4\), from where the pair mass is
\[m_{p}^{*}=\frac{2\hbar^{2}}{ta^{2}}=4\,m_{0}\,, \tag{160}\]
where \(m_{0}=\hbar^{2}/(2ta^{2})\). Thus, in this model the mass enhancement is limited by the same factor of 4 as in the 1D model with long-range attraction, cf. Eq. (124). The dimer analysis can also be extended to nonzero second-neighbor hopping.[120]
Next, we rigorously solve the \(UV\) model with attraction extended to second nearest neighbors. In deviation from Ref. [93], the second neighbor attraction \(-|V_{2}|\) will be different from the first neighbor attraction \(-|V_{1}|\). We consider only singlet pairs and begin with \({\bf P}=0\).
According to the general theory, there are _five_ symmetrized neighbor vectors. They can be chosen, for example, as \(\{{\bf b}_{+}\}=\{(0,0),(1,0),(0,1),(1,-1),(1,1)\}\). Hence, there are five equations, Eqs. (27) and (28), for five functions \(\Phi^{+}\). The equations mix \(s\)- and \(d\)-symmetrical pair states. To untangle them, add and subtract the equations for \(\Phi^{+}_{(1,0)}\) and \(\Phi^{+}_{(0,1)}\), and then do the same for the equations for \(\Phi^{+}_{(1,-1)}\) and \(\Phi^{+}_{(1,1)}\). The two differences are
\[\left\{1-|V_{1}|(M_{20}+M_{00}-2M_{11})\right\}\left(\Phi^{+}_{1 0}-\Phi^{+}_{01}\right)=0\,, \tag{161}\] \[\left\{1-|V_{2}|(M_{22}+M_{00}-2M_{20})\right\}\left(\Phi^{+}_{1 -1}-\Phi^{+}_{11}\right)=0\,. \tag{162}\]
The first equation is equivalent to Eq. (136) and describes a \(d\)-pair previously discussed. It has lobes along \(x\) and \(y\) axes, and therefore may be called a \(d_{x^{2}-y^{2}}\) state. This state does not depend on \(V_{2}\) and has the same pairing threshold as before, Eq. (143). Equation (162) describes another \(d\) state with lobes along the square diagonals. It may be called a \(d_{xy}\) state. Note that the two \(d\) states do not mix. The \(d_{xy}\) state does not involve potential \(V_{1}\) because the wave function has nodes on the first nearest neighbors. The linear combination of \(M\)'s in Eq. (162) converges in the \(E\to E_{11}\) limit. Applying Eqs. (139) and (140), one derives the \(d_{xy}\) pairing threshold
\[|V_{2,{\rm cr}}^{d_{xy}}|=\frac{3\pi}{3\pi-8}\,t=(6.614909\ldots)\,t\;. \tag{163}\]
It is _smaller_ than the \(d_{x^{2}-y^{2}}\) threshold, Eq. (143), by \(\approx 10\%\). Because of a node at \({\bf m}=(0,0)\), the pair wave function is larger at the second nearest neighbors than at the first nearest neighbors. As a result, potential \(V_{2}\) has more "power" and produces a bound state at a slightly lower value that \(V_{1}\).
Returning to the general system, Eqs. (27) and (28), the two equation sums combine with the equation for \(\Phi^{+}_{(0,0)}\) and form the \(s\)-sector. The full system reads
Figure 13: Binding threshold of the \(UV\) model on the square lattice versus the distance of \(n\)th nearest neighbors from the origin. \(U=\infty\). Solid symbols are exact values obtained from solving two-body Schrödinger equations. (The values are 2.0, 0.90212, 0.57139, 0.31280, 0.25075, 0.20967, and 0.15584.) The solid line is a phenomenological dependence \(|V_{\rm cr}|=2/R^{2}\) and is guide to the eye. Adapted from Ref. [93].
For \(V_{2}=0\), it reduces to Eq. (139). For \(V_{1}=0\), Eq. (165) yields
\[|V_{2,\mathrm{cr}}|=\frac{3\pi Ut}{(16-3\pi)U+12t}\,. \tag{166}\]
For \(V_{1}=V_{2}\) and \(U\to\infty\), Eq. (165) reduces to
\[(32-9\pi)|V|^{2}+(6\pi-64)|V|+12\pi=0\;, \tag{167}\]
that was reported in Ref. [93]. The smallest root of this equation is \(V_{\mathrm{cr}}(2)=0.902120\), which is plotted in Fig. 13. \(V_{\mathrm{cr}}(2)\) should be compared with the \(U\to\infty\) limit of Eq. (139), which is \(V_{\mathrm{cr}}(1)=2.0\).
Decomposition of the full dispersion relation into \(s\) and \(d\) sectors also occurs on BZ diagonals \(\mathbf{P}=(P,P)\). The matrix elements \(M_{nm}\) are still given by their isotropic expressions listed in Appendix A.3 but with \(\alpha=4t\cos\left(P/2\right)\). This allows for numerical calculation of effective masses using the formula:
\[\frac{m_{p}^{*}}{m_{0}}=4t\left[\frac{\partial^{2}E(P)}{\partial P^{2}}\right] ^{-1}. \tag{168}\]
Note that Eq. (164) contains more than one \(s\) state,[37] but we consider only the lowest one. Figure 14 shows the effective mass for three different cases: (i) Attraction on the first neighbors only; (ii) Attraction on the second neighbors only; (iii) Attraction of equal strength on both the first and the second neighbors, \(V_{1}=V_{2}\). One can see that in the first two cases the mass grows linearly with \(V\) while in the third case the mass is bounded by the light pair limit given by Eq. (160). We also note that for \(|V|<10\,t\), the mass enhancement is very modest, \(m_{p}^{*}/m_{0}<4\), in all three cases.
## VI UV Model on the rectangular lattice
In the rectangular \(UV\) model, both hopping and interaction along \(x\) and \(y\) axes are different, as illustrated in Fig. 15. Such a model can potentially be realized in cold gases. The model is rich in physical content and smoothly interpolates between 1D and 2D square models investigated earlier. In this section, we mostly focus on deriving binding conditions. In difference from preceding sections, we consider the possibility of _repulsive_ nearest-neighbor interaction \(V\). Therefore, the equations are written in terms of \(V\) that can be of any sign rather than \(-|V|\).
### Singlet states. \(\Gamma\)-point
The dispersion relation is given by Eq. (133) in which \(-|V|\) is replaced with \(V_{x}\) and \(V_{y}\) in the second and third columns, respectively:
\[\left|\begin{array}{ccc}1+UM_{00}&2V_{x}M_{10}&2V_{y}M_{01}\\ UM_{10}&1+V_{x}(M_{20}+M_{00})&2V_{y}M_{11}\\ UM_{01}&2V_{x}M_{11}&1+V_{y}(M_{02}+M_{00})\end{array}\right|\] \[=0\;. \tag{169}\]
Matrix elements \(M_{nm}\) are still defined by Eq. (134) but with \(\alpha=4t_{x}\cos\left(P_{x}/2\right)\) and \(\beta=4t_{y}\cos\left(P_{y}/2\right)\). In the following, we only consider the ground state. Hence, \(\alpha=4t_{x}\) and \(\beta=4t_{y}\). At the threshold, \(E\to-\alpha-\beta\), and all \(M_{nm}\) logarithmically diverge. To obtain a finite
Figure 14: Pair effective mass in the 2D square \(UV_{1}V_{2}\) model with attraction on the first _and_ second nearest neighbors. The masses are obtained by numerically solving Eq. (164) for \(E\) and applying Eq. (168). \(U=30\,t\). The dashed line marks the light-pair limit, Eq. (160).
result, we utilize the subtractive procedure defined by Eq. (146). Substitution in Eq. (169) and expansion of the determinant results in a third-order polynomial in \(M_{00}\). The \(M_{00}^{3}\) and \(M_{00}^{2}\) terms vanish identically, and the determinant assumes the form of Eq. (147). From here, the binding condition is \(A=0\), or, in full form
\[UV_{x}V_{y}\left[(L_{20}-4L_{10})(L_{02}-4L_{01})\right.\] \[\qquad\qquad\left.-4(L_{10}+L_{01}-L_{11})^{2}\right]\] \[+UV_{x}\left[L_{20}-4L_{10}\right]+UV_{y}\left[L_{02}-4L_{01}\right]\] \[+V_{x}V_{y}\left[2(L_{02}+L_{20})-8L_{11}\right]\] \[+U+2V_{x}+2V_{y}=0\;. \tag{170}\]
Equation (170) is the general binding condition in the rectangular \(UV\) model. It reduces to Eq. (148) if \(V_{x}=V_{y}=-|V|\).
Let us analyze particular cases of Eq. (170). First, we investigate isotropic hopping, \(t_{y}=t_{x}=t\). Utilizing Eqs. (169)-(173), the binding condition becomes
\[\frac{4-\pi}{4\pi t^{2}}\,UV_{x}V_{y}+\frac{1}{\pi t}\,U(V_{x}+V_ {y})+\frac{2(4-\pi)}{\pi t}\,V_{x}V_{y}\] \[\qquad+U+2(V_{x}+V_{y})=0\;. \tag{171}\]
For isotropic attraction, \(V_{x}=V_{y}=-|V|\), Eq. (171) reduces to the product of \(s\)- and \(d\)- thresholds, given by Eqs. (139) and (143), respectively. To get a sense of the effect of strong _nearest neighbor_ repulsion, we set \(V_{x}=U\) and then \(U\to\infty\). Obviously, to form a bound state, \(V_{y}\) must be attractive. Equation (171) yields the threshold:
\[|V_{y,{\rm cr}}|=\frac{4t}{4-\pi}=(4.569792\ldots)\,t\;. \tag{172}\]
Next, we consider the case of anisotropic attraction, \(V_{x}=0\), \(V_{y}<0\), and arbitrary hopping anisotropy. Equation (171) reduces to
\[|V_{y,{\rm cr}}|=\frac{U}{2+U(L_{02}-4L_{01})}\;. \tag{173}\]
According to Appendix A.7,
\[L_{02}-4L_{01}=\frac{1}{\pi t_{y}}\left\{\frac{t_{y}-t_{x}}{t_{y}}\arcsin \sqrt{\frac{t_{y}}{t_{x}+t_{y}}}+\sqrt{\frac{t_{x}}{t_{y}}}\right\}. \tag{174}\]
Several particular cases are of interest.
(i) \(\underline{t_{x}=0}.\) In this case, the system splits into individual \(y\) chains, each of which hosts a 1D \(UV\) model. \(L_{02}-4L_{01}=(2t_{y})^{-1}\) and \(|V_{y,{\rm cr}}|=2Ut_{y}/(U+4t_{y})\), which coincides with the earlier result, Eq. (110).
(ii) \(\underline{t_{x}=t_{y}=t}.\) This is the case of isotropic hopping. \(L_{02}-4L_{01}=(\pi t)^{-1}\) and
\[|V_{y,{\rm cr}}|=\frac{\pi Ut}{U+2\pi\,t}\;. \tag{175}\]
This expression should be compared with its isotropic attraction counterpart, Eq. (139). The attraction strength must be larger because there are only two attractive sites rather than four. In the \(U=\infty\) limit, the threshold value is \(\pi t\) rather than \(2t\).
(iii) \(\underline{t_{y}=0}.\) In this case, the system splits into individual \(x\) chains. The two particles reside on adjacent chains and feel an attraction \(V_{y}\) when their \(x\) coordinates coincide. The model is isomorphic to the 1D attractive Hubbard model. In the \(t_{y}\to 0\) limit, \(L_{02}-4L_{01}\) diverges as \(\simeq 4(3\pi)^{-1}(t_{x}t_{y})^{-1/2}\). Hence, \(|V_{y,{\rm cr}}|\to 0\), as expected for the 1D attractive Hubbard model.
### Triplet states. \(\Gamma\)-point
Dispersions of \(p_{x}\) and \(p_{y}\) pairs are given by Eqs. (152) and (153) with \(V\) replaced by \(V_{x}\) and \(V_{y}\), respectively. The binding thresholds at \({\bf P}=(0,0)\) are
\[|V_{x,{\rm cr}}| = \frac{\pi t_{x}}{\frac{t_{x}+t_{y}}{t_{x}}\arcsin\sqrt{\frac{t_{x} }{t_{x}+t_{y}}}-\sqrt{\frac{t_{x}}{t_{x}}}}\,, \tag{176}\] \[|V_{y,{\rm cr}}| = \frac{\pi t_{y}}{\frac{t_{x}+t_{y}}{t_{y}}\arcsin\sqrt{\frac{t_{y} }{t_{x}+t_{y}}}-\sqrt{\frac{t_{x}}{t_{y}}}}\,. \tag{177}\]
Consider the \(p_{y}\) pair. In the \(t_{x}=0\) limit, \(|V_{y,{\rm cr}}|=2t_{y}\), which is the correct threshold for the 1D \(UV\) model, see Eq. (132). In the isotropic hopping case, \(t_{x}=t_{y}=t\), \(|V_{y,{\rm cr}}|=2\pi t/(\pi-2)\), wich coincides with Eq. (154). Finally, in the \(t_{y}\to 0\) limit, \(|V_{y,{\rm cr}}|=\frac{3\pi}{2}\sqrt{t_{x}t_{y}}\to 0\). The \(p_{y}\) pair forms with a zero threshold.
## VII UV model on the triangular lattice
The triangular \(UV\) model possesses another qualitative feature: light pairs are formed due to lattice topology rather than degeneracy of the attractive potential. Consider Fig. 16. In a tightly bound pair, the members can
Figure 15: The rectangular \(UV\) model.
move through the lattice while remaining nearest neighbors to each other and without breaking the main attractive bond. As a result, the pair remains light even when attraction is limited to first nearest neighbors. Analysis of dimer motion [124; 125; 126] predicts that the mass of the lowest singlet pair is \(m_{p}^{*}<6\,m_{0}\).
This situation is more general than it might seem. There exist other lattices that can support light pairs with first neighbor attraction. One example is the staggered ladder,[57; 124] where the two ladder chains are shifted relative to each other by half of lattice constant. Similarly, two square lattices stuck one on top of each other and shifted by half diagonal of elementary plaquette will support "crab motion" as long as the two constituent particles reside on different layers, see Appendix E.1. In 3D, the face-centered cubic lattice supports light pairs.[83] Furthermore, if the range of primary hopping is extended, then more lattices are added to the list. For example, the square lattice with next nearest neighbor hopping (across the elementary plaquette) produces crab motion and light pairs as a result.[57; 123; 58]
The light pair effect has implications for the formation of phonon bipolarons. It is by now well understood[127] that long-range electron-ion interactions exponentially reduce the effective mass of _polarons_, and consequently of polaron pairs, bipolarons. On triangular-like lattices, the bipolarons acquire additional lightness due to crab motion and become "superlight"[123; 124; 94; 128] Based on that and on the BEC formula, Eq. (1), it was suggested that triangular-like lattices could provide even higher-\(T_{c}\) superconductivity than square-like lattices.[128]
The \(UV\) model on triangular lattice was solved by Bak[81] using a method similar to our unsymmetrized solution, and then in Ref. [94]. Here we analyze the model with emphasis on the pair mass, dispersion, and binding conditions.
### General dispersion relations
In the singlet sector, there are four symmetrized vectors: \(\{\mathbf{b}_{+}\}=\{(0,0),(1,0),(\frac{1}{2},\frac{\sqrt{3}}{2}),(-\frac{1}{ 2},\frac{\sqrt{3}}{2})\}\equiv\{\mathbf{0},\mathbf{1},\mathbf{2},\mathbf{3}\}\). Changing the momentum variable, \(\mathbf{q}^{\prime}=\mathbf{q}-\frac{\mathbf{p}}{2}\), and redefining amplitudes as \(\tilde{\Phi}_{\mathbf{b}}^{+}=e^{-i(\mathbf{P}/2)\mathbf{b}}\Phi_{\mathbf{b}} ^{+}\), the master equations, Eqs. (27) and (28), take the form
\[\left[\begin{array}{cccc}1+UM_{\mathbf{00}}^{+}&-|V|M_{\mathbf{01}}^{+}&-|V |M_{\mathbf{02}}^{+}&-|V|M_{\mathbf{03}}^{+}\\ UM_{\mathbf{10}}^{+}&1-|V|M_{\mathbf{11}}^{+}&-|V|M_{\mathbf{12}}^{+}&-|V|M_{ \mathbf{13}}^{+}\\ UM_{\mathbf{20}}^{+}&-|V|M_{\mathbf{21}}^{+}&1-|V|M_{\mathbf{22}}^{+}&-|V|M_{ \mathbf{23}}^{+}\\ UM_{\mathbf{30}}^{+}&-|V|M_{\mathbf{31}}^{+}&-|V|M_{\mathbf{32}}^{+}&1-|V|M_{ \mathbf{33}}^{+}\end{array}\right]\left[\begin{array}{c}\tilde{\Phi}_{ \mathbf{b}}^{+}\\ \tilde{\Phi}_{\mathbf{b}}^{+}\\ \tilde{\Phi}_{\mathbf{b}}^{+}\\ \tilde{\Phi}_{\mathbf{b}}^{+}\end{array}\right]=0\,. \tag{178}\]
Here
\[M_{\mathbf{00}}^{+} = \frac{1}{N}\sum_{\mathbf{q}}\frac{1}{W}\;, \tag{179}\] \[M_{\mathbf{0}\mathbf{b}_{+}^{\prime}}^{+} = \frac{1}{N}\sum_{\mathbf{q}}\frac{2\cos{(\mathbf{q}\mathbf{b}_{+}^ {\prime})}}{W}\,,\] (180) \[M_{\mathbf{b}_{+}\mathbf{0}}^{+} = \frac{1}{N}\sum_{\mathbf{q}}\frac{e^{i\mathbf{q}\mathbf{b}_{+}}}{W}\,,\] (181) \[M_{\mathbf{b}_{+}\mathbf{b}_{+}^{\prime}}^{+} = \frac{1}{N}\sum_{\mathbf{q}}\frac{2\cos{(\mathbf{q}\mathbf{b}_{+}^ {\prime})}\,e^{i\mathbf{q}\mathbf{b}_{+}}}{W}\,, \tag{182}\]
\[W = |E|-\alpha\cos{q_{x}}-\beta\cos{\frac{q_{x}}{2}}\cos{\frac{\sqrt{3} q_{y}}{2}} \tag{183}\] \[-\gamma\sin{\frac{q_{x}}{2}}\sin{\frac{\sqrt{3}q_{y}}{2}}\;,\]
and \(\alpha\), \(\beta\), and \(\gamma\) are defined in Eqs. (B3)-(B5). Integration over BZ can be replaced with integration over the rectangle \(0\leq q_{x}\leq 2\pi\), \(0\leq q_{y}\leq(4\pi)/\sqrt{3}\). The expressions for \(M^{+}\) are given in Appendix B.2. The following relations hold for any \(\mathbf{P}\):
\[M_{\mathbf{01}}^{+}=2M_{\mathbf{10}}^{+}\,,\quad M_{\mathbf{20}}^ {+}=2M_{\mathbf{20}}^{+}\,,\quad M_{\mathbf{30}}^{+}=2M_{\mathbf{30}}^{+}\,, \tag{184}\] \[M_{\mathbf{12}}^{+}=M_{\mathbf{21}}^{+}\,,\quad M_{\mathbf{13}} ^{+}=M_{\mathbf{31}}^{+}\,,\quad M_{\mathbf{23}}^{+}=M_{\mathbf{32}}^{+}\,. \tag{185}\]
Additional relations hold along BZ symmetry lines.
In the triplet sector, there are three anti-symmetrized vectors: \(\{\mathbf{b}_{-}\}=\{\mathbf{1},\mathbf{2},\mathbf{3}\}\). Performing the same variable change, \(\mathbf{q}^{\prime}=\mathbf{q}-\frac{\mathbf{P}}{2}\), and amplitude redefinition
Figure 16: In the triangular \(UV\) model, the pair moves in first order in hopping even when the attraction is limited to first nearest neighbors.
\(e^{-i(\mathbf{P}/2)\mathbf{b}_{-}}\Phi_{\mathbf{b}_{-}}^{-}\), the master equations yield
\[\left[\begin{array}{ccc}1-|V|M_{\mathbf{11}}^{-}&-|V|M_{\mathbf{12}}^{-}&-|V|M_ {\mathbf{13}}^{-}\\ -|V|M_{\mathbf{21}}^{-}&1-|V|M_{\mathbf{22}}^{-}&-|V|M_{\mathbf{23}}^{-}\\ -|V|M_{\mathbf{31}}^{-}&-|V|M_{\mathbf{32}}^{-}&1-|V|M_{\mathbf{33}}^{-}\end{array} \right]\left[\begin{array}{c}\tilde{\Phi}_{1}^{-}\\ \tilde{\Phi}_{2}^{-}\\ \tilde{\Phi}_{3}^{-}\end{array}\right]=0\,, \tag{186}\]
where
\[M_{\mathbf{b}_{-}\mathbf{b}_{-}^{\prime}}^{-}=\frac{1}{N}\sum_{\mathbf{q}} \frac{(-2i)\sin\left(\mathbf{q}\mathbf{b}_{-}^{\prime}\right)e^{i\mathbf{q} \mathbf{b}_{-}}}{W}\,. \tag{187}\]
The expressions for \(M^{-}\) are also given in Appendix B.2, Eqs. (118)-(119). One has:
\[M_{\mathbf{12}}^{-}=M_{\mathbf{21}}^{-}\,,\ \ \ \ \ M_{\mathbf{13}}^{-}=M_{\mathbf{31}}^{-}\,,\ \ \ \ \ M_{\mathbf{23}}^{-}=M_{\mathbf{32}}^{-}\,. \tag{188}\]
Thus, the triplet dispersion is described by a symmetric matrix for all \(\mathbf{P}\).
A typical pair dispersion is shown in Fig. 17.
Of note is the fact that attraction \(V=-16\,t\) is barely above the pairing threshold for the highest, \(f\), state given by Eq. (202). Thus, all six pair states are well defined in the entire BZ. At weaker attractions, the bound states begin to disappear into the two-particle continuum one-by-one.
The effective mass of the lowest, \(s\)-symmetric pair is shown in Fig. 18. One can observe that the mass indeed approaches the strong-coupling limit,[124; 125; 126]\(m_{p}^{*}/m_{0}=6\), for all \(U\). However, the approach is slow. For most realistic attractions, \(|V|<20\,t\), the pair mass is no heavier than just 4 free-particle masses.
### Pairing thresholds at the \(\Gamma\)-point
Determination of pairing thresholds from the general dispersion relations, Eqs. (178) and (186), at arbitrary \(\mathbf{P}\)
can be done only numerically. At the \(\Gamma\)-point, the systems acquire additional symmetries and the thresholds can be derived analytically. Analysis is greatly aided by the group theory. All the necessary information is given in Appendix B.4.
For the singlet states, we combine \(\Phi_{0}\), which is unchanged, with symmetrized combinations of basis functions given by Eqs. (111) and (112) into a new basis:
\[\left[\begin{array}{c}\Phi_{0}\\ \Phi_{s}\\ \Phi_{d_{1}}\\ \Phi_{d_{2}}\end{array}\right]=\left[\begin{array}{cccc}1&0&0&0\\ 0&1&1&1\\ 0&2&-1&-1\\ 0&0&1&-1\end{array}\right]\left[\begin{array}{c}\Phi_{0}\\ \Phi_{1}^{+}\\ \Phi_{2}^{+}\\ \Phi_{3}^{+}\end{array}\right]\equiv\hat{A}_{S}\left[\begin{array}{c}\Phi_ {0}\\ \Phi_{1}^{+}\\ \Phi_{2}^{+}\\ \Phi_{3}^{+}\end{array}\right]\,. \tag{189}\]
In terms of the new basis, dispersion equation, Eq. (178), transforms into a block-diagonal form:
\[\hat{A}_{S}\cdot[\ldots]\cdot\hat{A}_{S}^{-1}=\left[\begin{array}{cccc}1+ UM_{\mathbf{00}}^{+}&-2|V|M_{\mathbf{10}}^{+}&0&0\\ 3UM_{\mathbf{10}}^{+}&1-|V|(M_{\mathbf{11}}^{+}+2M_{\mathbf{12}}^{+})&0&0\\ 0&0&1-|V|(M_{\mathbf{11}}^{+}-M_{\mathbf{12}}^{+})&0\\ 0&0&0&1-|V|(M_{\mathbf{11}}^{+}-M_{\mathbf{12}}^{+})\end{array}\right]\left[ \begin{array}{c}\Phi_{0}\\ \Phi_{s}\\ \Phi_{d_{1}}\\ \Phi_{d_{2}}\end{array}\right]=0\,. \tag{190}\]
The top-left \(2\times 2\) block describes an \(s\)-symmetric ground state. To find the binding condition, set \(E\to-12\,t\), at
Figure 17: Dispersion of bound pairs in the triangular \(UV\) model for \(U=30\,t\) and \(V=-16\,t\). \(E=-12\,t\) is the lowest energy of two free particles at the \(\Gamma\)-point.
Figure 18: Effective mass of the lowest, \(s\)-symmetric bound pair in the triangular \(UV\) model. The strong coupling limit is \(m_{p}^{*}/m_{0}=6\).
which all \(M^{+}\) diverge logarithmically. To obtain a finite result, introduce the differences:
\[L^{+}_{\bf 10}=M^{+}_{\bf 10}-M^{+}_{\bf 00}\;, \tag{191}\] \[L^{+}_{\bf 11}=M^{+}_{\bf 11}-2M^{+}_{\bf 00}\,,\] (192) \[L^{+}_{\bf 12}=M^{+}_{\bf 12}-2M^{+}_{\bf 00}\,. \tag{193}\]
Next, express \(M^{+}\) via \(L^{+}\) and \(M^{+}_{\bf 00}\) and expand the \(2\times 2\) determinant. The \((M^{+}_{\bf 00})^{2}\) term vanishes identically while the coefficient at \(M^{+}_{\bf 00}\) must be zero. This leads to a binding condition
\[|V^{s}_{\rm cr}|=\frac{U}{(L^{+}_{\bf 11}+2L^{+}_{\bf 12}-12L^{+}_{\bf 10})\,U+6}\,. \tag{194}\]
Analytical expressions for \(L^{+}\) are given in Appendix B.3. The final result reads
\[|V^{s}_{\rm cr}|=\frac{2Ut}{U+12\,t}\,. \tag{195}\]
The other two \(1\times 1\) blocks in Eq. (190) describe a \(d\)-symmetric dublet. Direct calculation yields the pairing threshold
\[|V^{d}_{\rm cr}|=\frac{4\pi\,t}{3(2\sqrt{3}-\pi)}=(12.998135\ldots)\,t\,. \tag{196}\]
We now turn to the triplet dispersion, Eq. (186). Combining Eqs. (190) and (191) into one transformation, one obtains a new basis
\[\left[\begin{array}{c}\Phi_{p_{1}}\\ \Phi_{p_{2}}\\ \Phi_{f}\end{array}\right]=\left[\begin{array}{ccc}0&-1&-1\\ 2&1&-1\\ 1&-1&1\end{array}\right]\left[\begin{array}{c}\Phi^{-}_{1}\\ \Phi^{-}_{2}\\ \Phi^{-}_{3}\end{array}\right]\equiv\hat{A}_{T}\left[\begin{array}{c}\Phi^ {-}_{1}\\ \Phi^{-}_{2}\\ \Phi^{-}_{3}\end{array}\right]\,. \tag{197}\]
A transformed dispersion equation reads
\[\hat{A}_{T}\cdot[\ldots]\cdot\hat{A}^{-1}_{T}=\left[\begin{array}{ccc}1-|V| (M^{-}_{\bf 11}+M^{-}_{\bf 12})&0&0\\ 0&1-|V|(M^{-}_{\bf 11}+M^{-}_{\bf 12})&0\\ 0&1&-|V|(M^{-}_{\bf 11}-2M^{-}_{\bf 12})\end{array}\right]\left[\begin{array}{c} \Phi_{p_{1}}\\ \Phi_{p_{2}}\\ \Phi_{f}\end{array}\right]=0\,. \tag{198}\]
The first two blocks describe a \(p\)-symmetric doublet, whereas the lower-right block describes one \(f\)-symmetric state. Note that both \(M^{-}_{\bf 11}\) and \(M^{-}_{\bf 12}\)_converge_ at threshold, and no subtraction procedure is necessary. Starting with Eqs. (193) and (194), setting \(\alpha=4t\), \(\beta=8t\), \(\gamma=0\), \(E=-12t\), elementary integration yields
\[M^{-}_{\bf 11}(\Gamma,E=-12\,t) =\frac{2\pi-3\sqrt{3}}{3\pi t}\,, \tag{199}\] \[M^{-}_{\bf 12}(\Gamma,E=-12\,t) =\frac{2\sqrt{3}-\pi}{4\pi t}\,. \tag{200}\]
Consequently, the pairing thresholds are
\[|V^{p}_{\rm cr}|=\frac{12\pi t}{5\pi-6\sqrt{3}}=(7.092087\ldots)\,t\,, \tag{201}\] \[|V^{f}_{\rm cr}|=\frac{6\pi t}{7\pi-12\sqrt{3}}=(15.622833\ldots) \,t\,. \tag{202}\]
## VIII \(Uv\) model on the simple cubic lattice
One qualitatively new feature of 3D models is a larger role of kinetic energy. A finite attraction is needed to bind a pair even at \(U=0\). The \(UV\) model on the simple cubic lattice was solved in Refs. [11] and [95]. In this section, those results are rederived and extended using the (anti)-symmetrized method developed here.
### Singlet states
The symmetrized set of vectors consists of four elements: \(\{{\bf b}_{+}\}=\{(0,0,0),(1,0,0),(0,1,0),(0,0,1)\}\). Similar to the square lattice, Eqs. (27) and (28) are transformed by changing integration variables, \(q^{\prime}_{j}=q_{j}-\frac{P_{j}}{2}\), and functions, \(\tilde{\Phi}^{+}_{\bf x}=e^{-i(P_{x}/2)}\Phi^{+}_{100}\), and so on. The resulting \(4\times 4\) linear system reads
\[\left[\begin{array}{cccc}1+UM_{000}&-2|V|M_{100}&-2|V|M_{010}&-2|V|M_{001} \\ UM_{100}&1-|V|(M_{000}+M_{200})&-2|V|M_{110}&-2|V|M_{101}\\ UM_{010}&-2|V|M_{110}&1-|V|(M_{000}+M_{020})&-2|V|M_{011}\\ UM_{001}&-2|V|M_{101}&-2|V|M_{011}&1-|V|(M_{000}+M_{002})\end{array}\right] \left[\begin{array}{c}\tilde{\Phi}_{0}\\ \tilde{\Phi}^{+}_{\bf x}\\ \tilde{\Phi}^{+}_{\bf y}\\ \tilde{\Phi}^{+}_{\bf y}\\ \tilde{\Phi}^{+}_{\bf y}\\ \end{array}\right]=0\,, \tag{203}\]
\[M_{nmk}=\int\limits_{-\pi\cdots-\pi}^{\pi}\!\!\!\int\limits_{-\pi\cdots}^{\pi}\frac{ dq_{x}\,dq_{y}\,dq_{z}}{(2\pi)^{3}}\frac{\cos nq_{x}\cos mq_{y}\cos kq_{z}}{|E|- \alpha\cos q_{x}-\beta\cos q_{y}-\gamma\cos q_{z}}\,, \tag{204}\]
where \(\alpha=4t\cos\left(P_{x}/2\right)\), \(\beta=4t\cos\left(P_{y}/2\right)\), and \(\gamma=4t\cos\left(P_{z}/2\right)\). Pair dispersion is obtained by equating the \(4\times 4\) determinant to zero. The quantities \(M_{nmk}\) are generalized Watson integrals. For arbitrary \(\mathbf{P}\), they can be computed numerically. On BZ diagonals, including the \(\Gamma\) point, \(\alpha=\beta=\gamma\). All \(M_{nmk}\) in Eq. (203) can be expressed via the complete elliptic integrals in closed form. The expressions are given in Appendix C.1.
At the \(\Gamma\)-point, \(M_{100}=M_{010}=M_{001}\), \(M_{200}=M_{020}=M_{002}\), \(M_{110}=M_{101}=M_{011}\), and the matrix in Eq. (203) acquires additional symmetries. A point group analysis yields a new basis
\[\left[\begin{array}{c}\Phi_{0}\\ \Phi_{s}\\ \Phi_{d_{1}}\\ \Phi_{d_{2}}\end{array}\right]=\left[\begin{array}{ccc}1&0&0&0\\ 0&1&1&1\\ 0&2&-1&-1\\ 0&0&1&-1\end{array}\right]\left[\begin{array}{c}\Phi_{0}\\ \Phi_{\pi}^{+}\\ \Phi_{\pi}^{+}\\ \Phi_{\pi}^{+}\end{array}\right]\,, \tag{205}\]
in terms of which the dispersion equation becomes block-diagonal:
\[\left[\begin{array}{cccc}1+UM_{000}&-2VM_{100}&0&0\\ 3UM_{100}&1-V(M_{000}\!+\!M_{200}\!+\!4M_{110})&0&0\\ 0&0&1-V(M_{000}\!+\!M_{200}\!-\!2M_{110})&0\\ 0&0&0&1-V(M_{000}\!+\!M_{200}\!-\!2M_{110})\end{array}\right]\!\!\left[ \begin{array}{c}\Phi_{0}\\ \Phi_{s}\\ \Phi_{d_{1}}\\ \Phi_{d_{2}}\end{array}\right]=0\,. \tag{206}\]
The upper-left corner describes an \(s\)-symmetric ground state. Remarkably, all the matrix elements can be expressed via the basic integral \(M_{000}\):
\[M_{100} = \frac{1}{3\alpha}\left(|E|M_{000}-1\right), \tag{207}\] \[M_{000}+M_{200}+4M_{110} = \frac{2|E|}{3\alpha^{2}}\left(|E|M_{000}-1\right). \tag{208}\]
Expanding the \(2\times 2\) determinant, the \(s\)-pair dispersion equation becomes
\[\left(UM_{000}+1\right)+\frac{2}{3\alpha^{2}}|V|\left(|E_{s}|+U\right)\left(1- |E_{s}|M_{000}\right)=0\,, \tag{209}\]
where \(M_{000}\) is given in Eq. (76) or (C3). It is instructive to compare Eq. (209) with its 1D and 2D counterparts, Eqs. (107) and (137), respectively, which suggests generalizations for \(UV\) models on the primitive hyper-cubic lattices in any dimension. This topic is not pursued further here.
From Eq. (209), pair energy can be found numerically for any given \(U\), \(V\), and \(\alpha\). Let us determine the binding threshold along BZ diagonals. To that end, set \(|E|=3\alpha\). The numerical value of \(M_{000}\) at this energy was given in Eq. (71). It can be written as
\[M_{000}(3\alpha)=\frac{4t}{\alpha U_{0}}=\frac{1}{U_{0}\cos\left(P_{x}/2\right) }\,, \tag{210}\]
where \(U_{0}\equiv(7.913552\ldots)t\) is the pairing threshold in the attractive Hubbard model at \(\mathbf{P}=0\), see Eq. (72). Using \(\alpha=4t\cos\left(P_{x}/2\right)\), one obtains from Eq. (209)
\[|V_{\rm cr}^{s}({\rm diag})|=\frac{24t^{2}\cos\left(P_{x}/2\right)}{(12t-U_{ 0})}\frac{\left[U+U_{0}\cos\left(P_{x}/2\right)\right]}{\left[U+12t\cos\left( P_{x}/2\right)\right]}\,. \tag{211}\]
Next, consider the \(d\)-symmetric doublet described by the low-right corner of Eq. (206). To determine the binding threshold, use Eqs. (111) and (C13) to find
\[M_{000}(3\alpha)+M_{200}(3\alpha)-2M_{110}(3\alpha)=\] \[7M_{000}(3\alpha)+\frac{6}{\pi^{2}\alpha^{2}M_{000}(3\alpha)}- \frac{4}{\alpha}\,, \tag{212}\]
from where
\[|V_{\rm cr}^{d}({\rm diag})| = \frac{16\pi^{2}U_{0}\,t^{2}\cos\left(P_{x}/2\right)}{56\pi^{2}t^{ 2}+3U_{0}^{2}-8\pi^{2}U_{0}t} \tag{213}\] \[= (10.796952\ldots)\,t\,\cos\left(P_{x}/2\right).\]
The full phase diagram is shown in Fig. 19.
### Triplet states
The anti-symmetrized set of vectors consists of three elements: \(\{\mathbf{b}_{-}\}=\{(1,0,0),(0,1,0),(0,0,1)\}\). The dispersion equation splits into three independent equations describing three \(p\)-symmetric pairs:
\[\left[1-|V|(M_{000}-M_{200})\right]\Phi_{\mathbf{x}}^{-}=0\,, \tag{214}\] \[\left[1-|V|(M_{000}-M_{020})\right]\Phi_{\mathbf{y}}^{-}=0\,,\] (215) \[\left[1-|V|(M_{000}-M_{002})\right]\Phi_{\mathbf{x}}^{-}=0\,. \tag{216}\]
Note that the decomposition into three independent equations occurs at any pair momentum \(\mathbf{P}\). On BZ diagonals, the spectrum is triple degenerate because \(M_{200}=M_{202}=M_{200}\).
To obtain pair energies, Eqs. (214)-(216) ought to be solved numerically. On BZ diagonals, analytical expressions for \(M_{000}\) and \(M_{200}\) are given in Appendix C.1.
To find the binding threshold, set \(|E|=3\alpha\) and apply Eqs. (111) and (210). The result is
\[|V_{\rm cr}^{\rm p}({\rm diag})| = \frac{24\pi^{2}U_{0}\,t^{2}\cos\left(P_{x}/2\right)}{8\pi^{2}U_{0}t -56\pi^{2}t^{2}-3U_{0}^{2}} \tag{217}\] \[= (9.530994\ldots)\,t\,\cos\left(P_{x}/2\right).\]
This critical value is plotted in Fig. 19.
## IX Tetragonal \(Uv\) model
In the _tetragonal_\(UV\) model, attractive potential \(V_{z}\) and hopping integral \(t_{z}\) along \(z\) axis differ from their \(xy\) counterparts, see Fig. 20. The two extra parameters bring considerable richness and complexity. The tetragonal model smoothly interpolates between the quasi-2D limit \(t_{z}\ll t\), \(V_{z}\ll V\) (Sec. V) and the quasi-1D limit \(t_{z}\gg t\), \(V_{z}\gg V\) (Sec. IV) via the isotropic 3D case (Sec. VIII). The quasi-2D sector is most relevant to the physics of high-temperature superconductors, as discussed in the Introduction. From that standpoint, the special case of \(0\leq t_{z}\leq t\) and \(V_{z}=0\) was analyzed in Ref. [32]. It was argued that both \(t_{z}\) and \(V\) have nonmonotonic effects on preformed-pair superconductivity. A small \(V\) cannot form pairs whereas a large \(V\) produces pairs that are too heavy. In both cases, superconductivity is suppressed. (This may help understand why the highest \(T_{c}\) occurs at intermediate electron-phonon coupling in HTSC.[96] Likewise, a large \(t_{z}\) destroys the pairs because large kinetic energy overcomes a moderate \(V\). In the opposite limit of very small \(t_{z}\), pairs lose 3D coherence, \(z\)-axis mass becomes very large, and the condensation temperature drops. Thus, it was argued, preformed pair superconductivity is optimal at intermediate values of \(V/t\) and \(t_{z}/t\).
In this section, the general case of nonzero \(V_{z}\) is considered. Due to the model's complexity, very few results can be derived analytically. Bulk of the results presented below are obtained by solving pair dispersion equations numerically.
### Singlet states. Pairing thresholds at the \(\Gamma\)-point
Derivation of pair dispersion proceeds along the same lines as the simple cubic case of Sec VIII. The result[32] is again Eq. (203) in which the last column contains \(V_{z}\) instead of \(V\). A second difference concerns the expression for integrals \(M_{nmk}\), Eq. (204): the parameter \(\gamma\) is now given by \(\gamma=4t_{z}\cos\left(P_{z}/2\right)\). We note in passing that replacing \(V\) with another parameter \(V_{y}\) in the _third_ column of Eq. (203) and setting \(\beta=4t_{y}\cos\left(P_{y}/2\right)\) in Eq. (204) results in a dispersion relation for the _orthorhombic_\(UV\) model. The latter is not studied in this paper.
We begin with analysis of the binding conditions at the \(\Gamma\) point where \(M_{100}=M_{010}\), \(M_{200}=M_{020}\), and \(M_{101}=M_{011}\). Instead of doing a full point symmetry analysis, it is easier to proceed by observing that taking a sum and a difference of the second and third equations in Eq. (203) splits off one \(d\)-symmetric state. The difference can be written in terms of \(\Phi_{\bf x}^{+}-\Phi_{\bf y}^{+}\):
\[\left[1-|V_{xy}|\left(M_{000}+M_{200}-2M_{110}\right)\right]\left(\Phi_{\bf x} ^{+}-\Phi_{\bf y}^{+}\right)=0\,. \tag{218}\]
This equation describes a \(d_{x^{2}-y^{2}}\)-symmetric solution with a threshold that smoothly interpolates from the isotropic cubic case, Eq. (213), to the pure 2D limit, Eq. (145), see Fig. 22. The other three equations are
Figure 19: Phase diagram of the \(UV\) model on the simple cubic lattice for \({\bf P}=0\).
Figure 20: Tetragonal \(UV\) model.
\[\left(\begin{array}{ccc}1+UM_{000}&-2|V_{xy}|M_{100}&-2|V_{z}|M_{001}\\ 2UM_{100}&1-|V_{xy}|(M_{000}+M_{200}+2M_{110})&-4|V_{z}|M_{101}\\ UM_{001}&-2|V_{xy}|M_{101}&1-|V_{z}|(M_{000}+M_{002})\end{array}\right)\left( \begin{array}{c}\Phi_{0}\\ \Phi_{\bf x}^{+}+\Phi_{\bf y}^{+}\\ \Phi_{\bf z}^{+}\end{array}\right)=0\:. \tag{219}\]
This system describes a mixture of one \(s\)-symmetric and one \(d\)-symmetric states. Upon setting \(E=-8t-4t_{z}\), the consistency condition of Eq. (219) links four model parameters: \(U\), \(V_{xy}\), \(V_{z}\) and \(t_{z}\). One of them can be expressed via the other three. A new feature of this model relative the cases considered before is the ability to tune the degree of 3D anisotropy. Therefore, we will be mostly interested in \(t_{z}\)-dependence of binding conditions. It is convenient to expand the determinant in Eq. (219) in powers of \(U\) and \(V\). This is done in Appendix C.4. From here, one potential can be expressed via the other two. Out of all possibilities, we consider three special cases: (A) \(V_{z}=V_{xy}\), (B) \(V_{z}=0\), and (C) \(V_{xy}=0\), all at a fixed \(U\).
(A) \(V_{xy}=V_{z}\equiv V\). In this case, Eq. (111) becomes a quadratic equation for \(|V_{\rm cr}|\). Two real roots correspond to the formation of two bound states: a low-energy one with \(s\) symmetry and a high-energy one with \((d_{xz}+d_{yz})\) symmetry. Both thresholds are shown in Fig. 21 as functions of \(t_{z}\). At \(t_{z}/t=1\), this model is equivalent to the simple cubic \(UV\) model studied in Sec. VIII.1.
(B) \(V_{z}=0\). In-plane attraction only. Set \(V_{z}=0\) and expand the remaining \(2\times 2\) determinant in Eq. (219) to express \(V_{xy,{\rm cr}}\) vs. \(U\):
\[|V_{xy,{\rm cr}}^{s}|=\frac{UM_{000}+1}{U\left[M_{000}(M_{000}+M_{200}+2M_{110} )-4M_{100}^{2}\right]+(M_{000}+M_{200}+2M_{110})}\:. \tag{220}\]
In the 2D limit, \(t_{z}\to 0\), all \(M_{nmk}\) logarithmically diverge, but utilizing a subtractive procedure \(M_{nmk}=M_{000}+L_{nmk}\), one can show that Eq. (220) reduces to Eq. (139).
(C) \(V_{xy}=0\). Out-of-plane attraction only. Set \(V_{xy}=0\) and expand the remaining \(2\times 2\) determinant in Eq. (219) to express \(V_{z,{\rm cr}}\) vs \(U\):
\[|V_{z,{\rm cr}}^{s}|=\frac{UM_{000}+1}{U\left[M_{000}(M_{000}+M_{002})-2M_{001} ^{2}\right]+(M_{000}+M_{002})}\:. \tag{221}\]
Both Eqs. (220) and (221) mark the appearance of an \(s\)-symmetric pair. In case B, the pair is "disk-like" extending in the \(xy\)-plane. In case C, the pair is "cigar-like" extending along \(z\)-axis. Both functions are plotted in Fig. 21. It is instructive to compare cases B and C at different degrees of anisotropy. At \(t_{z}/t=1\), hopping is fully isotropic and all \(V\)'s contribute equally to binding. There are four attractive bonds in case B but only two in case C. Therefore, one expects the case C threshold to be larger, which can be observed in Fig. 21. However, in the opposite 2D limit, \(t_{z}\to 0\), the situation is reversed. Case B with in-plane attraction reduces to the square \(UV\) model studied in Sec. V. Accordingly, the threshold line terminates at a \(U\)-dependent finite value given by Eq. (139). (The limit is marked by a red circle in Fig. 21.) In contrast, in case C two particles reside on adjacent \(z\) planes and attract each other when their \(xy\) coordinate coincide. Thus, the model reduces to the 2D _attractive_ Hubbard model where the threshold is zero, see Sec. III.3. The C threshold line extends all the way to \(|V|=0\), which is marked by a blue square in Fig. 21. The zero threshold can also be deduced from Eq. (221). In the \(t_{z}\to 0\) limit, both \(M_{001}\) and \(M_{002}\) tend to zero rather than logarithmically diverge. The threshold reduces to the attractive Hubbard expression \(|V_{\rm cr}|=1/M_{000}\). Since \(M_{000}\) diverges, the threshold is zero. The entire line \(V_{z}(t_{z})\) in Fig. 21 almost exactly matches the one in the tetragonal attractive Hubbard model, see Sec. III.6 and Fig. 5(a). The effects of Hubbard repulsion \(U\) are barely felt.
By continuity, the B and C threshold lines must cross, which can be seen in the inset of Fig. 21. The crossing happens at high anisotropy, \(t_{z}\simeq 0.002\,t\). This is a useful anisotropy scale that separates two different regimes. At \(t_{z}<0.002\,t\), pair formation is driven by logarithmic divergencies of \(M\)'s. This is where pairs can be considered purely two-dimensional. At \(t_{z}>0.002\,t\), the divergencies are no longer dominant, and pairs become three-dimensional yet strongly anisotropic. This point will be discussed further in Sec. XII.
### Triplet states. Pairing thresholds at the \(\Gamma\)-point
There are three spin-triplet pairs, all with \(p\)-type orbital symmetries. Their dispersion relations are given by Eqs. (214)-(216). The only difference is that \(V_{xy}\) should be used in the \(\Phi_{\bf x}^{-}\) and \(\Phi_{\bf y}^{-}\) equations but \(V_{z}\) in the \(\Phi_{\bf z}^{-}\) equation. Like in the simple cubic case, decomposition into three independent dispersion relations takes place over the entire BZ of pair momenta.
In this section, we only investigate binding conditions at the \(\Gamma\) point. To that end, we set \(\alpha=\beta=4t\), \(\gamma=4t_{z}\) and \(E=-8t-4t_{z}\). Analysis of the \(p_{x}\) and \(p_{y}\) thresholds,
which are equal, is straightforward. They smoothly interpolate between the pure 2D limit, Eq. (154), and the isotropic 3D limit, Eq. (217). A numerical solution of Eq. (214) for all \(t_{z}/t\) is plotted in Fig. 22.
The situation with the \(p_{z}\) state is more interesting. Although Eq. (216) contains a difference of two \(M\)'s suggesting convergence at \(t_{z}\to 0\), \(M_{002}\) actually tends to zero while \(M_{000}\) still diverges, resulting in a zero threshold. A numerical solution of Eq. (216) plotted in Fig. 22 confirms the conclusion (black line). Due to a relative simplicity of Eq. (216), it is possible to derive an asymptotic behavior of \(|V_{z,{\rm cr}}^{p_{z}}|\) in the \(t_{z}\to 0\) limit. This is done in Appendix C.5. The result is
\[V_{z,{\rm cr}}^{p_{z}}(t_{z}\to 0)\approx\frac{8\pi t}{\ln\frac{32t}{\sqrt{e}\,t_ {z}}}\,. \tag{222}\]
This formula is plotted in Fig. 22 as the dot-dashed green line and is in excellent agreement with the exact result for \(t_{z}/t<0.1\)
Combining this result with the discussion of \(s\)-pair formation in Sec. IX.1, one concludes that _two_ bound states are formed with _zero_ thresholds in case (C), one signal-shaped \(s\) and one \(p_{z}\). This is an unusual situation in \(UV\) models since normally pairs of different symmetries are formed at different thresholds. The tetragonal \(UV\) model with out-of-plane attraction is richer than the attractive Hubbard model.
### Bose-Einstein condensation of real-space pairs
The tetragonal \(UV\) model is a convenient system to discuss Bose-Einstein condensation (BEC) of real-space pairs. BEC lies at the core of the preformed-pairs mechanism of high-temperature superconductivity.[11; 13; 16; 30; 32; 36] One should also mention BEC of molecules engineered in cold gases.[129; 75; 130] However, BEC is a many-body effect and as such is out-of-scope of this work. For that reason, the following discussion is only qualitative.
It was argued in Sec. IX.1 that if \(t_{z}>0.002\,t\), then pairs are already _three-dimensional_ yet highly anisotropic. The above condition is satisfied in most crystalline solids. Hence, the pairs' collective behavior should be described in terms of _anisotropic_ 3D BEC[13; 15] rather than a pure 2D Berezinskii-Kosterlitz-Thouless transition. A recent neutron scattering study[131] supports this viewpoint.
A convenient starting point is the anisotropic version of the continuous-space BEC formula:
\[k_{B}T_{\rm BEC}=3.31\,\frac{\hbar^{2}\tilde{n}_{p}^{\frac{2}{3}}}{(\tilde{m} _{px}^{*2}\tilde{m}_{p^{*}}^{*})^{\frac{1}{3}}}\,. \tag{223}\]
Here, \(\tilde{n}_{p}\) and \(\tilde{m}_{p}^{*}\) are the density and effective mass of bosons (fermion pairs) in _physical_ units. Transitioning to relative units one obtains
\[\mathcal{T}_{\rm BEC}\equiv\frac{1}{t}\,k_{B}T_{\rm BEC}=6.62\,\frac{n^{\frac {2}{3}}}{(m_{px}^{*2}m_{p^{*}}^{*})^{\frac{1}{3}}}\,. \tag{224}\]
Figure 21: Binding thresholds of singlet pairs in the tetragonal \(UV\) model at the \(\Gamma\)-point and \(U=30\,t\). Pairs are formed to the right of their respective lines. (A) The black lines describe formation of an \(s\)- and a \(d\)-symmetric pair when \(V_{xy}=V_{z}\). At \(t_{z}/t=1\), their values are given by Eqs. (211) and (213), respectively. (B) In-plane attraction only. The red dashed line is Eq. (220). In the \(t_{z}\to 0\) limit, the model reduces to the square \(UV\) model. The \(t_{z}=0\) value is given by Eq. (139) and is marked by a circle. (C) Out-of-plane attraction only. The blue line is Eq. (221). In the \(t_{z}\to 0\) limit, the model reduces to the 2D _attractive_ Hubbard model with zero binding threshold. The blue square marks the binding threshold for out-of-plane attraction (zero). Inset: comparison of cases B and C at extreme anisotropy.
Figure 22: Binding thresholds for \(d_{x^{2}-y^{2}}\) and \(p\)-symmetrical states at the \(\Gamma\)-point of the tetragonal \(UV\) model as a function of lattice anisotropy. The \(d_{x^{2}-y^{2}}\) line interpolates between the 2D limit, Eq. (143), and the isotropic 3D limit, Eq. (213). Similarly, the \(p_{x},p_{y}\) line interpolates between Eq. (154) and Eq. (217). The dot-dashed green line is Eq. (222).
Here, \(n\) is the number of pairs per unit cell, and the pair masses are expressed in units of \(m_{0}=\hbar^{2}/(2ta^{2})\). We also assume \(m_{px}^{*}=m_{py}^{*}\).
As was argued in the Introduction, a maximum critical temperature is reached at "close-packing" when the pair density is approximately equal to the inverse pair volume, \(n_{\rm cp}=\Omega_{p}^{-1}\). The close-packing temperature is:
\[{\cal T}_{\rm BEC}^{*}={\cal T}_{\rm BEC}(n_{\rm cp})=\frac{6.62}{(\Omega_{p}^ {2}\,m_{px}^{*2}m_{yz}^{*})^{\frac{1}{3}}}\,. \tag{225}\]
This formula contains only single pair properties supplied by the exact solution.
The continuum approximation can be superseded by a more rigorous lattice treatment. The pair density is computed as a full Bose integral and then equated to an inverse pair volume. Instead of Eq. (225), one has
\[\int_{\rm BZ}\frac{{\rm d}^{3}{\bf P}}{(2\pi)^{3}}\frac{1}{\exp\left\{\frac{E ({\bf P})-E_{0}}{T_{\rm BEC}}\right\}-1}=\frac{1}{\Omega_{p}}\,. \tag{226}\]
\(E_{\bf P}\) is the full pair dispersion also provided by the exact solution. This equation[32] does not resort to the effective mass approximation.
Here, we apply the simplified formula, Eq. (225), to the tetragonal \(UV\) model with out-of-plane attraction. Only \(s\)-symmetric pairs are included. The effective masses are calculated numerically from the full singlet dispersion relation [take Eq. (203) and set \(V=0\) in the two middle columns]:
\[\left[\begin{array}{cc}1+UM_{000}&-2|V_{z}|M_{001}\\ UM_{001}&1-|V_{z}|(M_{000}+M_{002})\end{array}\right]\left[\begin{array}{c} \Phi_{0}\\ \Phi_{\bf z}^{+}\end{array}\right]=0\,. \tag{227}\]
For the pair volume, we adopt the following formula[32]:
\[\Omega_{p} = r_{px}^{*}\,r_{py}^{*}\,r_{pz}^{*} \tag{228}\] \[= \sqrt{[1+\langle(\triangle x)^{2}\rangle]\,[1+\langle(\triangle y )^{2}\rangle]\,[1+\langle(\triangle z)^{2}\rangle]}\,,\]
which also defines the pair sizes \(r_{px}^{*}\), \(r_{py}^{*}\), and \(r_{px}^{*}\). This form of \(\Omega_{p}\) respects the exclusion principle and ensures the pair volume is not less than 1 even in the strong coupling limit. The mean-squared distances are defined via the full coordinate wave function, Eq. (18). For example
\[\langle(\triangle x)^{2}\rangle=\frac{\sum_{{\bf m}_{1},{\bf m}_{2}}(m_{1x}-m_ {2x})^{2}\left|\Psi({\bf m}_{1},{\bf m}_{2})\right|^{2}}{\sum_{{\bf m}_{1},{ \bf m}_{2}}\left|\Psi({\bf m}_{1},{\bf m}_{2})\right|^{2}}\,. \tag{229}\]
In taking the momentum sum in Eq. (18), \({\bf k}_{1}=-{\bf k}_{2}\) should be used, which corresponds to \({\bf P}=0\). In principle, all the sums can be computed by brute-force numerical calculation utilizing FFT. However, for the tetragonal dispersion, \(\langle(\triangle{\bf r}_{i})^{2}\rangle\) can be analytically reduced to ratios of two one-dimensional integrals (see Ref [32], Appendix C), which makes calculation of \(\Omega_{p}\) much more efficient.
Figure 23 shows the effective pair mass and radius as functions of \(t_{z}\) for \(U=30\,t\) and \(V_{z}=5\,t\). For this attraction, the pair exists in the interval \(0<t_{z}/t<0.205\), see Fig. 21, case (C). One expects the pair to be strongly bound at \(t_{z}\to 0\) and loosely bound at \(t_{z}\to 0.205\,t\). This can be seen in the effective radius plots: both \(r_{px}^{*}\) and \(r_{px}^{*}\) tend to 1 in the 2D limit but diverge near the threshold. The in-plane pair mass, \(m_{px}^{*}\), is weakly affected by out-plane anisotropy and stays almost constant as a function of \(t_{z}\). In contrast, the out-of-plane mass strongly depends on \(t_{z}\). Near the binding threshold, it approaches the \(z\)-mass of two free particles, \(2t/t_{z}\). In the opposite limit, \(t_{z}\to 0\), it diverges as \(\propto V_{z}/t_{z}^{2}\).
Now consider the close-packed BEC temperature given by Eq. (225). It is bounded by two divergencies: a divergent \(m_{px}^{*}\) at \(t_{z}\to 0\) and a divergent \(\Omega_{p}\) near the threshold. Thus, we expect \({\cal T}_{\rm BEC}^{*}\) to have a maximum at an intermediate \(t_{z}\). This is shown in Fig. 24. _The present model predicts the existence of an optimal degree of anisotropy_. If \(t_{z}\) is too large, the kinetic energy is large, pairs are loosely bound, their volume is large, the packing density is small, and \({\cal T}_{\rm BEC}^{*}\) is small as a result. If \(t_{z}\) is too small, the the out-of-plane mass is very large and \({\cal T}_{\rm BEC}^{*}\) is small again. A similar conclusion was reached in Ref. [32] for in-plane attraction.
Another interesting feature of Fig. 24 is that the _peak value_ of \({\cal T}_{\rm BEC}^{*}\) increases with the attraction strength. A stronger \(V_{z}\) creates more compact pairs which boosts the close-packed density. However, large \(V_{z}\)'s cause phase separation as discussed in Sec. XI.
## X Miscellaneous two particle problems
In this section, we briefly describe other two-particle problems that have been left out of the present review.
Early results on bound pairs in 1D, 2D square, and 3D simple cubic \(UV\) models were derived by Micnas and reported in Sec. III.C of Ref. [11].
Two-particle bound states in the square \(t\)-\(J\) model and \(t\)-\(J\) models, which map to the \(UV\) model, were studied
Figure 23: Masses (left panel) and radii (right panel) of an \(s\)-symmetric bound pair in the tetragonal \(UV\) model with out-of-plane attraction. \(U=30\,t\), \(V_{z}=5.0\,t\). The masses are measured in units of the free in-plane mass, \(m_{0}=\hbar^{2}/(2ta^{2})\).
by Lin, [60] Petukhov, Galan, and Verges, [61] and Kagan and Rice. [62] All those works are consistent with the material of Sec. V.
Bak and Micnas [37; 132] considered a square \(UV\) model with second nearest-neighbor _hopping_. They found that the latter could change the symmetry of the ground state. The \(d\)-symmetric pair had a lower energy than the \(s\)-symmetric one when the first-neighbor and second-neighbor hoppings were of opposite signs. Since effective second-neighbor hopping may be induced by strong correlations on an antiferromagnetic background near half-filling, this result may have implications for understanding symmetries of the superconducting order parameter in cuprates.
In Ref. [[133]], a square \(UV\) model with _anisotropic_ next-nearest-neighbor attraction was studied in the context of cold atom quantum simulator of extended Hubbard-Holstein models.
A 3D \(UV\) model on the BCC lattice was considered in Ref. [[82]]. Pair mass, radius, and binding conditions were computed. Based on those results, a critical temperature of an atomic condensate in a BCC optical lattice was estimated to be around 10 nK.
In Ref. [[83]], a \(UV\) model on the FCC lattice was analyzed in the context of superconducting fullerides. The pairs were found to be of small radius, strongly-bound, but relatively light due to the light-pair effect. It was estimated that such pairs could Bose-condense at high temperatures even if the lattice constant is large, as in the fullerides.
The following works studied _multi-orbital_ models with on-site or intersite attractions.
A "mixed" repulsive-attractive one-dimensional Hubbard model was studied in Ref. [[97]]. The unit cell consisted of two non-equivalent sites. The first site hosted a Hubbard repulsion \(U_{r}\) whereas the second site hosted a Hubbard attraction \(-|U_{a}|\). The singlet pairs were found to form when
\[|U_{a}|>\frac{2U_{r}t\sqrt{\cos P}}{U_{r}+2t\sqrt{\cos P}}\,. \tag{230}\]
Note an unusual, square-root, dependence on the pair momentum \(P\). The latter is confined to the interval \(-\frac{\pi}{2}\leq P\leq\frac{\pi}{2}\) in this case.
In Ref. [[39]], a \(UV\) model on a checker-board square lattice was studied. The symmetry between two sublattices was assumed to be broken by antiferromagnetic correlations of the underlying many-body system. The correlations created a staggered magnetic field \(\Delta\) that elevated energy of a spin-up hole on the first sublattice and lowered it on the second sublattice. For a spin-down hole, sublattice energies were reversed. It was found that a nonzero \(\Delta\) favored pairing because of two effects. First, the resulting band splitting reduced kinetic energies of the holes. Second, a nonzero \(\Delta\) suppressed double occupancy and reduced the influence of \(U\). Both factors effectively increased the attraction \(V\) relative to \(t\) and \(U\), which leads to easier pairing. A more detailed analysis of this two-particle lattice problem can be found in Ref. [[92]].
Finally, we mention the mixed Hubbard model on a multi-orbital CuO\({}_{2}\) plane. [98] The model included a repulsion on copper sites, \(U_{\rm C}\), and an attraction on oxygen sites, \(-|U_{\rm O}|\). The atomic energies of Cu and O sites were assumed equal, \(\varepsilon_{\rm C}=\varepsilon_{\rm O}\), for both spin orientations. The pairing condition was found to be \(|U_{\rm O}|>8\sqrt{2}U_{\rm C}t/(3U_{\rm C}+4\sqrt{2}t)\), where \(t\) is the absolute value of the hoping integral between the copper's \(d_{x^{2}-y^{2}}\) orbital and oxygen's \(p_{x,y}\) orbitals. One should add that multiple orbitals introduced significant technical complications in exact two-particle solutions, as explained in Sec. II.3. A similar analysis for the case of _oxygen_-_atraction [134] resulted in a threshold condition, \(|V_{OO}|>U_{C}t/(0.595U_{C}+2t)\). The latter model in its _many-body_ version was recently studied by constrained-path Monte Carlo in Ref. [[51]].
## XI More than two particles
The properties of one bound pair, which this review summarizes, make physical sense only if the pairs keep their identity in a macroscopic system with finite particle density. For most of the results to remain valid, the pairs should not aggregate in trions, quads, and larger clusters. In other words, the effective interaction between pairs should be either repulsive or weakly attractive when such a weak attraction is unable to bind pairs into a larger complex (in 3D). The situation is clear in attractive Hubbard models. Two fermions forming an on-site real-space pair have two opposite spin projections and prevent other fermions from occupying the same site at the same time. Thus, clusters of three or more fermions
Figure 24: The close-packed BEC temperature, Eq. (225), vs. lattice anisotropy for several attractive strenghts \(V_{z}\). Notice how the peak \(\mathcal{T}_{\rm BEC}\) increases with \(|V|\).
are prohibited by the exclusion principle. The attractive Fermi-Hubbard model is stable against phase separation,[135] and all the results derived in Sec. III remain valid. Bosons, on the other hand, can pile up on the same site without limits, sending the total energy to negative infinity. The attractive Bose-Hubbard model is unstable beyond the pair-forming threshold, such as the ones listed in Table 1.
The situation is more subtle in \(UV\) models, \(t\)-\(J\) models, and other models where the attractive interaction is of finite range. The exclusion principle does not directly prohibit pairs from sticking to each other side-by-side and forming a macroscopic cluster. Thus, any such system will phase-separate in the large \(V\) limit. However, the exclusion principle still plays an important role at intermediate \(V\). In the context of large bipolarons, Emin argued[136] that when forming a quadpolaron, additional carriers must occupy excited states of the self-trapping well. That leads to an additional short-range repulsion between pairs (large bipolarons). A liquid of real-space pairs must be stable against clustering in a finite region just above the pairing threshold. Thus, the question becomes quantitative: when exactly does the system phase-separate, and how is the phase separation threshold \(V_{\infty}\) related to the pair-forming threshold \(V_{2}\) studied in this paper? Since the discovery of high-temperature superconductivity, these issues have been discussed mostly in the context of phase-separation in the \(t\)-\(J\) model[33; 34] and in bipolaronic superconductors.[57; 58] Those investigations were based on comparing the energy of a collection of pairs with the energy of an infinite cluster. Because both energies were determined either by applying approximations or on small lattice segments, the phase boundaries were approximate.
Rigorous determination of clustering thresholds and phase boundaries requires solving a few-body or a full many-body problem on an infinite lattice with sufficient accuracy, and is therefore difficult. Berciu reported[137] a zero region of pair stability in a system of 3 to 5 _spinless_ fermions with nearest-neighbor attraction in an infinite 1D lattice. However, with second-nearest-neighbor _repulsion_ turned on, the pairs stabilized in a wide interval of parameters. In this model, pairs need to be stabilized dynamically because Emin's argument does not work for spinless fermions. Also in 1D, Chakraborty, Tezuka and Min investigated four "extended-Holstein bipolarons" on chains up to 24 sites long.[138] At high phonon frequencies, this model[127] maps to a \(UV\) model. The authors reported a narrow but finite interval of bipolaron stability in qualitative agreement with the 1D \(UV\) phase diagram shown on Fig. 25(a) and to be discussed below.
In 2D, Emery, Kivelson, and Lin reported[33] pair stability in the interval \(2.0\,t<|V|<3.53\,t\) for a \(4\times 4\) lattice and \(U=\infty\). A similar interval, \(2.0\,t<|V|<3.8\,t\) on a \(4\times 4\) lattice, was reported by Dagotto and Riera for the \(t\)-\(J\) model.[34]
We are unaware of any numerical investigation of clustering in 3D lattices except our own work[139] to be discussed next.
We now summarize our own results on pair liquid stability in \(UV\) models.[139; 140; 141; 135; 142; 143; 144; 145; 146; 147; 148; 149; 150; 140; 141] First off, _bosonic_\(UV\) models are unstable: as soon as pairs form, trions[140] and quads[35] form as well, triggering phase separation. The same conclusion applies to _spinless fermions_: those pairs are unstable, too. Phase diagrams of four _spinful fermions_ in 1D and 2D are shown in Fig. 25(a) and Fig. 25(b), respectively.[35] In 1D, the pair stability region is narrow; it shrinks to zero in the \(U\to\infty\) limit. In 2D, the pair stability region is \(\approx 2t\) wide for all \(U\). In 3D, exact solutions are computationally more demanding. Only a three-fermion problem has been solved in a 3D \(UV\) model so far.[139] Figure 26 shows the phase diagram of three fermions in the tetragonal \(UV\) model for \(U=10\,t\). Pairs are stable against formation of trions between the blue and red lines. One interesting feature of Fig. 26 is the sharp increase of \(V_{2}\) for small but nonzero \(t_{z}\). Going back to Fig. 25(b), one should be careful when regarding the pure 2D case as an approximation to a highly anisotropic 3D case. Even a small \(t_{z}\) elevates the \(V_{2}\) line and significantly reduces the domain of pair stability, which is illustrated by the \(t_{z}=0.01\,t\) line. Although the domain of pair stability shrinks significantly upon turning on interlayer hopping, _the former remains finite for all \(U\)_.
A high-level conclusion from this analysis is the existence of finite domains of pair stability in various forms of the \(UV\) model in all lattice dimensionalities. _It is within such domains is where the results derived in the bulk of this paper are valid_. The fundamental reason for pair stability is the Fermi statistics and associated nodes in the many-fermion wave functions.[35; 136] Since these rea
Figure 25: Phase diagrams[35] of four fermions in the 1D and 2D \(UV\) models at zero total momentum. Singlet pairs are stable against clustering between the black and blue lines. The solid black lines are Eqs. (110) and (139), respectively. Quad thresholds \(V_{4}\) were obtained by extrapolating to infinite lattices. In 2D, the \(U\to\infty\) limit of \(V_{4}\) is \(\approx 3.35\,t\), which is consistent with the \(4\times 4\) values reported in Refs. [33] and [34]. The dashed line in panel (b) is the pair threshold in the 3D _tetragonal_\(UV\) model with \(V_{z}=0\) and \(t_{z}=0.01\,t\), see Eq. (220).
sons are general and applicable to any system, we expect them to hold for other types of potentials including, for example, large-radius attractions considered in Sec. V.4.
## XII Discussion
### Common properties of lattice bound pairs
In this paper, \(UV\) models (extended attractive Hubbard models) of increasing complexity have been analyzed in detail. Taken together, the results point to several common properties of lattice bound states. They are summarized and discussed in this section.
#### xi.1.1 Mathematically exact solutions
The most important property of two-particle lattice problems is their exact solvability. This is a big advantage as all the results can be easily verified and cross-checked, both analytically and numerically. When it comes to physical applications, typical arguments are centered around justifications of a starting Hamiltonian or relevancy to a particular real system. But once the model is agreed upon, the conclusions that follow are not usually questioned.
At the same time, the complexity of a solution increases rapidly with interaction range and lattice dimensionality. As shown in Sec. II.2, the size of a matrix that defines pair dispersion, Eq. (17), is equal to the number of interacting lattice sites. Separation into singlet and triplet states described in Secs. II.4 and II.5 cuts the matrix size by about half, which is helpful. Still, the ability to derive analytical results is largely limited to nearest-neighbor interactions. The longer-range square model considered in Sec. V.4 and Ref. [93] is a rare exception. Additionally, the matrix elements \(M\) that populate Eq. (17) are lattice Green's functions whose complexity increases with lattice dimensionality. In general, \(M\)'s can be analytically evaluated in 1D, see Eqs. (50) and (B18)-(B22), and in 2D, see Appendix A. In 3D, analytical expressions are only known for the isotropic simple cubic lattice, Appendix C.1. In other cases, numerical integration is necessary. The ability to perform two integrations analytically, as shown in Appendix C.3, reduces \(M\)'s to one-dimensional integrals which makes subsequent numerics efficient.
The exact dispersion equation defines pair energy over the entire BZ of pair momenta. It provides singlet-triplet splitting, energy gaps between various sub-bands of the same parity, and a total bandwidth. The dispersion possesses degeneracies at high-symmetry points, which can be cross-checked with classifications by the group theory. By setting the pair energy equal to the lowest energy of two free particles _with the same total momentum_ a binding threshold is determined. At this special energy, matrix elements \(M\) simplify considerably. In many cases, and especially at the \(\Gamma\) point and along BZ diagonals, the thresholds can be calculated analytically. Many explicit examples have been derived in this paper.
The exact solution also provides pair effective mass and radius. The mass is rigorously defined via a second derivative of the total energy near the \(\Gamma\)-point and can be extracted from the dispersion relation. In some cases, one can utilize the fact that the dispersion relation simplifies along BZ diagonal and derive analytical expressions. See, for example, Es. (62). Calculation of effective radius is more involved as it requires knowledge or the pair wave function and its subsequent integration. Still, analytic expressions have been obtained in simple cases, see Eqs. (56) and (65). In more complicated models, analytical manipulations enable reductions of multidimensional integrals to one-dimensional integrals, as was done, for example, in Ref. [32]. The effective radius yields the pair volume and the _close-packed density of pairs_ which is an important parameter in the theory of Bose-Einstein condensation of pairs. Together with pair mass and full dispersion, it provides estimates of the close-packed BEC temperatures and enables conclusions about optimal model parameters.
#### xi.1.2 Pair binding energy depends on its momentum
This is purely a lattice effect that is absent in continuous space. [1] Since the lattice provides a reference frame, movement through it is not Galilean-invariant. Binding energies and masses of bound complexes become depen
Figure 26: Phase diagram of three fermions in the 3D tetragonal \(UV\) model for \(U=10\,t\). [139] The pairs are stable against formation of trions between the \(V_{2}\) and \(V_{3}\) lines. The trion threshold \(V_{3}\) was computed on a \(12\times 12\times 12\) lattice. The dashed line marks the coordinates of _peak_ close-packed BEC temperature, \(\mathcal{T}_{\text{BEC}}^{*}\), determined by locating the maxima of plots like those shown in Fig. 24.
dent on their momenta \(\mathbf{P}\). The pair energy and the minimum energy of two free particles both rise with \(\mathbf{P}\) but the latter rises _faster_. It means that the binding energy _increases_ with \(\mathbf{P}\). In systems with a finite threshold, it leads to situations when there are no bound pairs at \(\mathbf{P}=0\) but one or more bound pair at finite \(\mathbf{P}\). A similar effect has long been known in the theory of quantum spin waves.[78] Increasing stability of pairs can also be seen in threshold formulas with explicit dependence on \(\mathbf{P}\), see for example Eqs. (110), (144), and (211), where thresholds decrease to zero in BZ corners. That implies there is _always_ at least one bound pair in BZ corners for _any_\(U\) as long as \(|V|>0\).
An important consequence is the existence of a _binding surface_ in momentum space which separates unbound and bound pairs. As shown in Sec. V.2, the binding surface does not in general match the Fermi surface of _free_ electrons. The mismatch leads to segmentation of the Fermi surface into areas that are more or less prone to pairing and potential appearance of Fermi arcs. We note that such a pair would have formed with a nonzero total momentum. This is opposite to conventional Cooper pairs that are formed with zero total momentum. The described effect may also have links to the FFLO phase. A key question is how the binding surface is modified by a finite fermion density. This is a many-body problem that goes beyond the scope of this work.
#### v.2.3 Light pairs
Effective mass is an important characteristic of a bound pair. Broadly, it defines pair mobility and response to external perturbations. In the context of superfluidity, the mass determines a BEC temperature in a system of many pairs, as argued in the Introduction. It is important to know what makes pairs either light or heavy.
Near threshold, the mass is always close to two free-particle masses. In the opposite, strong coupling limit, the mass generally scales as \(m_{p}^{*}/m_{0}\propto|V|/t\), i.e., increases linearly with the binding energy. This is because the pair moves in the second order in hopping \(t\) and breaks an attractive bond in the intermediate state, see Fig. 8(a). However, there are two circumstances when a pair can move in the _first_ order in \(t\) and the above scaling is not followed. Instead, pair mass remains of order \(m_{0}\) at all energies including the strong coupling limit. We refer to such pairs as _light_.
The first situation arises for purely geometric reasons. One pair member can hop to a neighbor site without breaking an attractive bond. In other words, both the starting site and the ending site are nearest neighbors to the second pair member. Then the second member moves in the same way and the process repeats. The constituent particles hop in turns and the entire pair moves through the lattice in a "crab-like" fashion without ever breaking a bond. A good example is provided by the triangular lattice investigated in Sec. VII.1 (see also Refs. [126] and [128]). Pair movement is illustrated in Fig. 16 and the mass is plotted in Fig. 18. Observe that even at extreme attractive strengths, \(|V|>60\,t\), the pair mass remains below \(5m_{0}\). For more realistic attractions, \(|V|<10\,t\), the mass is below \(3m_{0}\). That is, mass increase due to binding is less than \(50\%\). It was suggested in Ref. [128], in the context of bipolaron superconductivity, that for that very reason triangular and hexagonal lattices may host high-temperature superconductivity. Other lattices that support crab motion and light pairs include staggered ladders[124; 57; 125] (\(m_{p}^{*}<4m_{0}\)), staggered square planes (Appendix E.1, \(m_{p}^{*}<4m_{0}\)), and face-centered cubic lattice[83] (\(m_{p}^{*}<6m_{0}\)). One should add to this list the _bodyt-centered_ tetragonal lattice that supports light pairs in \(xy\) plane but not in \(z\) direction. This lattice is of interest because of potential connection to cuprate superconductivity. Light pairs in the model are yet to be studied. Inclusion of second-neighbor hopping induces light pairs in other lattices.[124; 58; 123]
The second situation leading to light pairs arises when two or more lattice sites have equal attractive potentials. One such model was considered in Sec. IV.2. If the on-site interaction is attractive rather than repulsive and \(U=V\), then a first-order resonant motion is possible as illustrated in Fig. 8(b). The analysis is readily generalized to other lattices that do not support light pairs for geometric reasons, see Appendixes E.2 and E.3. The pair effective mass is \(m_{p}^{*}<2\sqrt{z}\,m_{0}\), where \(z\) is the number of nearest neighbors in a lattice. Such a potential is unlikely to arise in a solid crystal but can be engineered in cold gases where \(U\) and \(V\) are controlled independently.[68; 69; 70; 71] Another class of potentials involves equal-strength attractions at finite separation between particles. We considered a 1D example in Sec. IV.2, see Fig. 8(c), and a 2D example in Sec. V.4. In both cases, \(m_{p}^{*}<4\,m_{0}\) was found. One might think that such conditions require fine-tuning and is therefore unlikely in real systems. However, as was argued in Sec. V.4, realistic potentials are combinations of decaying repulsions and long-range attractions. They necessarily have broad and shallow attractive minima. In those situations, the likelihood of two sites near the minimum to have equal attractive strengths is quite high. More instances of this mechanism are investigated in Ref. [120].
An overall conclusion of this analysis is that in many circumstances, binding does not lead to significant increase of pair mass. Pair mass is not an impediment for a high mobility or a high BEC temperature.
#### v.2.4 Anisotropic 3D pairs
The role played by lattice anisotropy is important in the context of cuprate superconductivity. All cuprate superconductors have a layered structure which has led to a popular point of view that the superconductivity is essentially two-dimensional. A number of pure 2D
theoretical models have been proposed and extensively studied. In the preformed pair mechanism, however, superconductivity is still considered three-dimensional albeit highly anisotropic. Investigations conducted in the present work support this picture. We analyzed two 3D anisotropic models: tetragonal attractive Hubbard model in Sec. III.6 and tetragonal \(UV\) model in Sec. IX. We found in that in both cases the pair formation line splits in two regions, see Figs. 5(a) and 21. At extremely high anisotropy, \(t_{z}<0.002\,t\), pair formation is indeed dominated by logarithmic divergencies of 2D integrals. By that measure, the pairs may indeed be regarded as pure two-dimensional. However, when \(t_{z}>0.002\,t\) the divergencies are no longer dominant and the threshold is on the order of its value in an isotropic 3D lattice. For that reason, real-space pairs in this regime are already three-dimensional. Thus, two-body solutions may help to clarify the nature of superconductivity depending on the level of anisotropy observed in a system.
#### iv.1.5 Stability of fermion pairs against phase separation
Phase separation is a ubiquitous feature of all models with finite-range attraction such as \(UV\) or _t-J_ models. Physical reasoning suggests that any system with strong enough attraction will form clusters and eventually phase-separate. That raises the question about stability of real-space pairs in many-body systems. A positive answer is provided by the Fermi statistics.[139; 140; 136; 137; 138; 139] The ground state of a fermion pair is usually a spin singlet with a nodeless wave function. When two pairs attempt to coalesce into a quad, the full wave function must develop nodes. It is equivalent to a short-range _repulsion_ between pairs. This additional repulsion keeps the pairs separate as long as attraction is not too strong. (Since this reasoning does not apply to bosons, Bose-\(UV\) models are _not_ stable against phase separation.[35])
The derived conclusion is very general and applicable to a wide class of inter-particle potentials including three-dimensional and long-range ones. It has been confirmed by exact numerical calculations for _t-J_ models,[33; 34]\(UV\) models,[139; 140; 141; 142; 143; 144; 136; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 160; 161; 162; 163; 164; 165; 166; 167; 168; 169; 170; 171; 172; 173; 174; 175; 176; 177; 178; 179; 180; 181; 182; 183; 184; 185; 186; 187; 188; 189; 190; 191; 192; 193; 194; 195; 196; 197; 198; 199; 200; 201; 202; 203; 204; 205; 206; 207; 208; 209; 210; 211; 212; 213; 214; 215; 216; 217; 218; 219; 220; 221; 222; 223; 224; 225; 226; 227; 228; 229; 230; 231; 232; 233; 234; 235; 236; 237; 238; 239; 240; 241; 242; 243; 244; 245; 246; 247; 248; 249; 250; 251; 252; 253; 254; 255; 256; 257; 258; 259; 260; 261; 262; 263; 264; 265; 266; 267; 268; 269; 270; 271; 272; 273; 274; 275; 276; 277; 278; 279; 280; 281; 282; 283; 284; 285; 286; 287; 288; 289; 290; 291; 292; 293; 294; 295; 296; 297; 298; 299; 300; 301; 302; 303; 304; 305; 306; 307; 308; 309; 310; 311; 312; 313; 314; 315; 316; 317; 318; 319; 320; 321; 322; 323; 324; 325; 326; 327; 328; 329; 333; 334; 335; 336; 337; 338; 339; 340; 341; 342; 343; 344; 345; 346; 347; 348; 349; 350; 351; 352; 353; 354; 355; 356; 357; 358; 359; 360; 361; 362; 363; 364; 365; 366; 367; 368; 369; 370; 371; 372; 373; 374; 375; 376; 377; 377; 378; 379; 380; 381; 382; 383; 384; 385; 386; 387; 388; 388; 389; 390; 391; 392; 393; 394; 395; 396; 397; 398; 399; 400; 401; 402; 403; 404; 405; 406; 407; 408; 409; 410; 411; 412; 413; 414; 415; 416; 417; 418; 419; 420; 421; 422; 423; 424; 425; 426; 427; 428; 429; 430; 444; 451; 452; 453; 454; 455; 456; 457; 458; 459; 460; 471; 472; 473; 474; 475; 476; 477; 478; 479; 480; 481; 482; 483; 484; 485; 486; 487; 488; 489; 490; 491; 492; 493; 494; 495; 496; 497; 498; 499; 500; 501; 502; 503; 504; 505; 506; 507; 508; 509; 510; 511; 512; 513; 514; 515; 516; 517; 518; 519; 520; 521; 522; 533; 536; 540; 558; 559; 561; 572; 537; 538; 573; 58; 59; 581; 59; 590; 591; 592; 593; 594; 595; 596; 597; 598; 599; 510; 512; 514; 515; 516; 517; 519; 539; 513; 517; 538; 599; 520; 539; 540; 55; 562; 573; 58; 599; 571; 599; 581; 593; 595; 598; 599; 511; 522; 530; 541; 542; 556; 574; 583; 599; 521; 532; 543; 55; 596; 575; 599; 582; 597; 599; 598; 599; 600; 601; 61; 622; 633; 64; 65; 66; 67; 68; 69; 691; 602; 64; 66; 68; 692; 693; 603; 67; 604; 694; 605; 606; 61; 637; 61; 640; 65; 67; 68; 69; 621; 65; 69; 607; 61; 608; 622; 64; 609; 638; 66; 67; 68; 69; 609; 610; 641; 65; 69; 622; 66; 67; 68; 69; 600; 68; 69; 60; 69; 627; 60; 639; 64; 610; 66; 65; 67; 68; 69; 63; 68; 69; 607; 642; 608; 69; 609; 65; 609; 66; 67; 697; 68; 610; 698; 611; 60; 67; 699; 60; 68; 612; 609; 613; 62; 641; 63; 65; 66; 67; 68; 69; 600; 69; 68; 69; 601; 69; 60; 69; 60; 614; 615; 62; 607; 63; 64; 67; 68; 69; 608; 62; 609; 63; 68; 6139; 69; 600; 615; 640; 67; 69; 60; 68; 617; 618; 62; 69; 619; 620; 63; 65; 67; 68; 69; 60; 69; 60; 610; 67; 61; 68; 62; 69; 607; 63; 69; 610; 68; 63; 69; 608; 614; 609; 615; 60; 69; 616; 617; 62; 63; 67; 64; 62; 68; 67; 69; 60; 68; 67; 69; 601; 68; 69; 609; 60; 617; 618; 63; 69; 619; 607; 61; 60; 62
unable to bind fast electrons moving with hopping \(t_{0}\) is now able to bind slow holes moving with a smaller effective hopping \(t_{h}\). _The role of strong in-plane correlation is to further slow down the carriers to enable pair formation by a weak attraction._ In a recent analysis of thermodynamic measurements of several cuprate families, Harrison and Chan argued[122] that the in-plane hopping near half-filling is about \(t_{h}=0.11\) eV while it should be \(0.36\) eV according to electronic structure calculations. Such a band narrowing is due to electron correlations. Since the band structure value should be accurate in an empty lattice, one concludes that the effective \(t\) increases from \(0.11\) to \(0.36\) eV with doping. At the same time, quantum chemistry calculations[152; 153] suggest an attractive potential of about \(0.12\) eV. Thus, \(V/t\approx 1\) near half-filling. This may be sufficient for binding in some highly anisotropic \(UV\) models.
Consider evolution of the system with hole doping starting at half-filling illustrated in Fig. 27. By continuity, the effective carrier hopping must smoothly increase from \(t_{h}\) at \(x=0\) to \(t_{0}\) at \(x=1\). Since exact shape of this curve is unknown, we show it schematically as a straight line. We then assume that the attraction strength _in physical units_ stays approximately constant with doping. Then, \(|V|\) will be systematically _decreasing_ when expressed in units of \(t\). If \(|V|\) was above a pairing threshold at \(x=0\), it will end up below the threshold at a finite \(x\). At this doping, the pairs disappear. At some intermediate doping, the pairs balloon in volume to reach close-packing. At this _optimal doping_, the critical temperature is maximal. _The preformed-pair mechanism naturally explains the existence of optimal doping and eventual disappearance of superconductivity with increasing hole density._ A recent analysis of the magnetic susceptibility and Knight shift in cuprates[150] suggests that the pseudogap decreases linearly with doping and coexists with superconductivity until both disappear in the overdoped regime. This conclusion supports our qualitative scenario. Returning to the specific numbers reported in Ref. [[122]], let us assume the threshold to be \(|V_{\rm cr}|=0.6\,t\). When \(t\) increases to \(\approx 0.20\) eV with doping, the pairs would evaporate. Thus, superconductivity should disappear by \(x=0.36\).
### Future outlook
Several additional two-body problems seem to be of interest. One of them is the body-centered tetragonal (BCT) \(UV\) model with out-of-plane attraction. The BCT lattice supports in-plane light pairs (but not out-of-plane ones), which should boost \(T_{\rm BEC}^{*}\). The negative effects of \(U\) on pairing can be minimized by a small \(t_{z}\). Compared with the simple tetragonal lattice, BCT has four times as many attractive bonds, which should reduce the threshold values by about a factor of four. Finally, BCT is close to the crystal structure of some cuprate superconductors.
It would also be interesting to analyze two-body problems with effective single-particle dispersions arising from strong correlations. For example, holes in nearly half-filled _t-J_ models acquire the tendency to hop diagonally to second nearest neighbors.[110; 121] According to Bak and Micnas,[37] such a dispersion may result in a \(d\)-symmetric ground state. Thus, such models may shed light on the symmetry of superconducting order parameter in cuprates.
Also of interest are multi-orbital models describing CuO\({}_{2}\) planes and even full CuO\({}_{6}\) octahedra. Only one such investigation has been published so far.[98] Extension of this work to 3D and accommodation of the available quantum-mechanical calculations of \(t\), \(U\), and \(V\),[151; 152; 153; 154] would make the theory less phenomenological.
Finally, we mention the need to investigate a four-fermion problem in the tetragonal \(UV\) model to complete the picture of phase separation in 3D.
###### Acknowledgements.
The author wishes to thank James Hague for long-term collaboration and numerous discussions on the subject of this paper.
Figure 27: Evolution of a \(UV\) model with hole doping. The carrier hopping (right axis, blue line) increases from \(t_{h}\) near half-filling (\(x=0\)) to \(t_{0}\) in the empty lattice (\(x=1\)). Accordingly, the attraction strength _expressed in units of_ \(t\) (left axis, red line) decreases from above threshold to below threshold.
## Appendix A Green's functions of the 2D square and rectangular lattices
### Definitions
In this section, we will be concerned with analytical evaluation of the two dimensional integrals
\[M_{nm}^{\rm rt}(E;\alpha,\beta)=\frac{1}{N}\sum_{\bf q}\frac{\cos nq_{x}\cos mq_ {y}}{|E|-\alpha\cos q_{x}-\beta\cos q_{y}}=\int\limits_{-\pi-\pi}^{\pi}\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Calculation of the next basic integral, \(M_{20}^{\rm sq}\), is more involved. Since the following derivation will also be needed in the anisotropic case, \(\alpha\neq\beta\), we present here its general version. After integration over \(q_{y}\), which is elementary,
\[M_{20}^{\rm sq}(E;\alpha,\beta)=\int_{0}^{\pi}\frac{{\rm d}q_{x}}{\pi}\frac{2 \cos^{2}\!q_{x}-1}{\sqrt{(|E|+\beta-\alpha\cos q_{x})(|E|-\beta-\alpha\cos q_{ x})}}\,. \tag{111}\]
The second term here is \(M_{00}^{\rm sq}\). It will be moved, temporarily, to the left side of the equation. After the substitution \(\cos q_{x}=z\),
\[M_{20}^{\rm sq}(E;\alpha,\beta)+M_{00}^{\rm sq}(E;\alpha,\beta)=\frac{2}{\pi \alpha}\int_{-1}^{1}\frac{z^{2}\,{\rm d}z}{\sqrt{(1-z^{2})(z_{1}-z)(z_{2}-z)}}\,, \tag{112}\]
where \(z_{1}=(|E|+\beta)/\alpha\) and \(z_{2}=(|E|-\beta)/\alpha\). Next we use substitution \(z=z_{1}+\frac{1}{u}\) to convert the quartic polynomial under the square root into a cubic polynomial. The result is
\[M_{20}^{\rm sq}(E;\alpha,\beta)+M_{00}^{\rm sq}(E;\alpha,\beta)=\frac{2}{\pi \alpha}\sqrt{abc}\,\left\{\left(\frac{|E|+\beta}{\alpha}\right)^{2}\!J_{0}-2 \frac{|E|+\beta}{\alpha}\,J_{-1}+J_{-2}\right\}, \tag{113}\]
where
\[a\equiv\frac{\alpha}{2\beta}\;,\hskip 28.452756ptb\equiv\frac{\alpha}{|E|+ \beta-\alpha}\;,\hskip 28.452756ptc\equiv\frac{\alpha}{|E|+\beta+\alpha}\;, \hskip 28.452756pta>b>c\,, \tag{114}\]
\[J_{m}=\int_{c}^{b}\frac{u^{m}\,{\rm d}u}{\sqrt{(a-u)(b-u)(u-c)}}\,. \tag{115}\]
According to the general theory of elliptic integrals,[156]\(J_{-2}\) can expressed as a linear combination of three fundamental integrals \(J_{-1}\), \(J_{0}\) and \(J_{1}\). To this end, call the polynomial under the square root \(Q(u)\), calculate the derivative \(\frac{d}{du}[\frac{1}{u}\sqrt{Q(u)}]\) and integrate the result between \(u=c\) and \(u=b\). This leads to
\[J_{-2}=\frac{1}{2abc}\,\left\{(ab+ac+bc)J_{-1}-J_{1}\right\}. \tag{116}\]
Substitution of Eq. (116) in Eq. (113) yields
\[M_{20}^{\rm sq}(E;\alpha,\beta)+M_{00}^{\rm sq}(E;\alpha,\beta)=\frac{2}{\pi \alpha}\sqrt{abc}\,\left\{\left(\frac{|E|+\beta}{\alpha}\right)^{2}\!\!J_{0}- \frac{|E|}{\alpha}\,J_{-1}-\frac{1}{2abc}\,J_{1}\right\}. \tag{117}\]
As the next step, we use the substitution
\[u=c+(b-c)\sin^{2}\!\phi\,, \tag{118}\]
to transform \(J_{m}\)'s into Legendre normal forms, Eq. (100). The results are
\[J_{0}(\alpha,\beta) = \frac{2}{\sqrt{a-c}}\,{\bf K}\!\left(\sqrt{\frac{b-c}{a-c}}\right), \tag{119}\] \[J_{1}(\alpha,\beta) = \frac{2}{\sqrt{a-c}}\left\{a\,{\bf K}\!\left(\sqrt{\frac{b-c}{a- c}}\right)-(a-c)\,{\bf E}\!\left(\sqrt{\frac{b-c}{a-c}}\right)\right\},\] (120) \[J_{-1}(\alpha,\beta) = \frac{2}{c\sqrt{a-c}}\,{\bf\Pi}\!\left(-\frac{b-c}{c},\sqrt{\frac {b-c}{a-c}}\right). \tag{121}\]
Note that the first argument in \({\bf\Pi}\) is negative. At this point, we return to the isotropic case, \(\alpha=\beta\). Using \(a\), \(b\), \(c\) from Eq. (114), one obtains
\[J_{0}(\alpha,\alpha) = 2\sqrt{2}\sqrt{\frac{|E|+2\alpha}{|E|}}\,{\bf K}\!\left(\frac{2 \alpha}{|E|}\right), \tag{122}\] \[J_{1}(\alpha,\alpha) = \sqrt{2}\sqrt{\frac{|E|+2\alpha}{|E|}}\,\left\{{\bf K}\!\left( \frac{2\alpha}{|E|}\right)-\frac{|E|}{|E|+2\alpha}\,{\bf E}\!\left(\frac{2 \alpha}{|E|}\right)\right\},\] (123) \[J_{-1}(\alpha,\alpha) = \frac{2\sqrt{2}\,(|E|+2\alpha)^{3/2}}{\alpha\sqrt{|E|}}\,{\bf\Pi }\!\left(-\frac{2\alpha}{|E|},\frac{2\alpha}{|E|}\right). \tag{124}\]
Notice that the expression for \(J_{-1}(\alpha,\alpha)\) involves \(\mathbf{\Pi}\) with their two arguments being equal by absolute value. In this special case, \(\mathbf{\Pi}\) can be reduced to \(\mathbf{K}\). According to Ref. [157], #17.7.22,
\[2\mathbf{\Pi}(-\kappa,\kappa)-\mathbf{K}(\kappa)=\frac{\pi}{2}\frac{1}{1+ \kappa}\,, \tag{119}\]
which is valid for any \(0\leq\kappa<1\). We can prove Eq. (119) by the following procedure. Go back to \(M_{10}^{\rm sq}(E;\alpha,\beta)\) and transform it in the same manner as \(M_{20}^{\rm sq}\) starting with Eq. (104). Instead of Eq. (103), we have
\[M_{10}^{\rm sq}(E;\alpha,\beta)=\frac{1}{\pi\alpha}\sqrt{abc}\,\left\{\frac{| E|+\beta}{\alpha}\,J_{0}-J_{-1}\right\}. \tag{120}\]
The isotropic version reads
\[M_{10}^{\rm sq}(E;\alpha,\alpha)=\frac{2}{\pi\alpha|E|}\,\left\{(|E|+\alpha) \,\mathbf{K}\!\left(\frac{2\alpha}{|E|}\right)-(|E|+2\alpha)\,\mathbf{\Pi}\! \left(-\frac{2\alpha}{|E|},\frac{2\alpha}{|E|}\right)\right\}. \tag{121}\]
Thus, \(M_{10}^{\rm sq}\) also contains a special case of \(\mathbf{\Pi}(-\kappa,\kappa)\). However, we previously derived an expression for \(M_{10}^{\rm sq}\) that includes \(\mathbf{K}(\kappa)\) only. By comparing Eq. (102) and Eq. (121), one obtains
\[\mathbf{\Pi}\!\left(-\frac{2\alpha}{|E|},\frac{2\alpha}{|E|}\right)=\frac{1}{ 2}\,\mathbf{K}\!\left(\frac{2\alpha}{|E|}\right)+\frac{\pi}{4}\frac{|E|}{|E|+ 2\alpha}\,, \tag{122}\]
which is Eq. (119) for \(\kappa=\frac{2\alpha}{|E|}\). Finally, by substituting Eq. (122) in Eq. (119), then expressions for \(J_{m}(\alpha,\alpha)\), \(a\), \(b\) and \(c\) in Eq. (103), and utilizing the previously derived formula for \(M_{00}^{\rm sq}(E,\alpha,\alpha)\), one arrives at the deceptively simple result:
\[M_{20}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha)=\frac{2}{\pi|E|}\,\mathbf{K}\! \left(\frac{2\alpha}{|E|}\right)+\frac{|E|}{\alpha^{2}}\left\{\frac{2}{\pi}\, \mathbf{E}\!\left(\frac{2\alpha}{|E|}\right)-1\right\}. \tag{123}\]
### Recurrence relations for all \(M_{nm}\) in the square case, \(\alpha=\beta\)
The sum rule, Eq. (102), and basic integrals \(M_{00}^{\rm sq}\), \(M_{10}^{\rm sq}\) and \(M_{20}^{\rm sq}\), derived in the previous section enable calculation of several more \(M_{mn}^{\rm sq}\). Specifically, by setting \((n,m)=(1,0)\), one can obtain \(M_{11}^{\rm sq}\). Then, by setting \((n,m)=(1,1)\), one can obtain \(M_{21}^{\rm sq}=M_{12}^{\rm sq}\). Finally, by setting \((n,m)=(2,0)\), one can derive \(M_{30}^{\rm sq}\). However, after that progress stops. Further extension of this method is impossible. Fortunately, there is a second, remarkable identity due to Morita,[158] that involves only \(M_{n0}\)'s and for our case reads
\[2n\!\left(2|E|^{2}-\alpha^{2}\right)\!M_{n0}^{\rm sq}-2\alpha|E|(2n\!+\!1)M_{n +1,0}^{\rm sq}\!-2\alpha|E|(2n\!-\!1)M_{n-1,0}^{\rm sq}\!+\!\alpha^{2}(n\!+\! 1)M_{n+2,0}^{\rm sq}\!+\!\alpha^{2}(n\!-\!1)M_{n-2,0}^{\rm sq}=0\,. \tag{124}\]
This expression is valid for \(n\neq 0\). Using expressions for \(M_{00}^{\rm sq}\), \(M_{10}^{\rm sq}\), \(M_{20}^{\rm sq}\) and \(M_{30}^{\rm sq}\) derived above, one can validate Eq. (124) for \(n=1\). Then, by setting \(n=2,3,\ldots\), one can calculate \(M_{40}^{\rm sq}\), \(M_{50}^{\rm sq}\), and so on, for any \(n\). After that, application of the sum rule, Eq. (102), enables calculation of _any_\(M_{nm}^{\rm sq}\), at least in principle.
Below, we list all \(M_{nm}^{\rm sq}\) for \(n+m\leq 4\).
\[M_{00}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha) =\frac{2}{\pi|E|}\,{\bf K}\!\left(\frac{2\alpha}{|E|}\right), \tag{111}\] \[M_{10}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha) =M_{01}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha)=\frac{1}{\pi\alpha} \,{\bf K}\!\left(\frac{2\alpha}{|E|}\right)-\frac{1}{2\alpha}\,,\] (112) \[M_{11}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha) =\frac{|E|}{\pi\alpha^{2}}\left\{\left(1-\frac{2\alpha^{2}}{|E|^{ 2}}\right){\bf K}\!\left(\frac{2\alpha}{|E|}\right)-{\bf E}\!\left(\frac{2 \alpha}{|E|}\right)\right\},\] (113) \[M_{20}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha) =M_{02}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha)=\frac{2}{\pi|E|}\,{ \bf K}\!\left(\frac{2\alpha}{|E|}\right)+\frac{|E|}{\alpha^{2}}\left\{\frac{2 }{\pi}\,{\bf E}\!\left(\frac{2\alpha}{|E|}\right)-1\right\},\] (114) \[M_{21}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha) =M_{12}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha)=\left\{\frac{|E|^{2} }{\pi\alpha^{3}}-\frac{3}{\pi\alpha}\right\}{\bf K}\!\left(\frac{2\alpha}{|E|} \right)-\frac{|E|^{2}}{\pi\alpha^{3}}\,{\bf E}\!\left(\frac{2\alpha}{|E|} \right)+\frac{1}{2\alpha}\,,\] (115) \[M_{30}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha) =M_{03}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha)=\left\{\frac{9}{\pi \alpha}-\frac{2|E|^{2}}{\pi\alpha^{3}}\right\}{\bf K}\!\left(\frac{2\alpha}{|E |}\right)+\frac{6|E|^{2}}{\pi\alpha^{3}}\,{\bf E}\!\left(\frac{2\alpha}{|E|} \right)-\frac{2|E|^{2}}{\alpha^{3}}-\frac{1}{2\alpha}\,,\] (116) \[M_{40}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha) =M_{04}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha)=\left\{\frac{2}{\pi|E| }+\frac{80|E|}{3\pi\alpha^{2}}-\frac{20|E|^{3}}{3\pi\alpha^{4}}\right\}{\bf K }\!\left(\frac{2\alpha}{|E|}\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ \left\{\frac{8|E|}{3\pi\alpha^{2}}+\frac{44|E|^{3}}{3\pi\alpha^{4}}\right\}{\bf E }\!\left(\frac{2\alpha}{|E|}\right)-\frac{4|E|^{3}}{\alpha^{2}}-\frac{4|E|^{3} }{\alpha^{4}}\,,\] (117) \[M_{31}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha) =M_{13}^{\rm sq}(E\leq-2\alpha;\alpha,\alpha)=\left\{-\frac{2}{\pi| E|}-\frac{13|E|}{3\pi\alpha^{2}}+\frac{4|E|^{3}}{3\pi\alpha^{4}}\right\}{\bf K }\!\left(\frac{2\alpha}{|E|}\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ \left\{\frac{4|E|}{3\pi\alpha^{2}}-\frac{2|E|^{3}}{3\pi\alpha^{4}}\right\}{\bf E }\!\left(\frac{2\alpha}{|E|}\right). \tag{118}\]
Several comments are in order. (i) \(M_{nm}^{\rm sq}(E;\alpha,\alpha)\) describe not only bound \(UV\) pairs in the \(\Gamma\) point but on the Brillouin zone diagonal as well. This follows from the definitions of \(\alpha\) and \(\beta\). (ii) All integrals have the analytical structure of \(\alpha M_{nm}^{\rm sq}=Q_{1}(\kappa^{-1}){\bf K}(\kappa)+Q_{2}(\kappa^{-1}){ \bf E}(\kappa)+Q_{3}(\kappa^{-1})\), where \(Q_{1,2,3}\) are polynomials and \(\kappa=\frac{2\alpha}{|E|}\). The smallest power in all \(Q_{i}\) is \(\geq-1\) while the highest power grows with \(n\) and \(m\). The overall complexity of the analytical expressions increases with \(n\) and \(m\). All \(M_{nm}^{\rm sq}\) include a nonzero term \(\propto{\bf K}(\kappa)\) that logarithmically diverges when \(E\to-2\alpha-0\), as expected from the definition, Eq. (109). However, coefficients by \({\bf K}(\kappa)\) approach a universal limit \(\frac{1}{\pi\alpha}\) that is the same for _all_\(n\) and \(m\). As a result, all differences \(M_{nm}^{\rm sq}-M_{n^{\prime}m^{\prime}}^{\rm sq}\) are finite at \(E=-2\alpha\). They are discussed in the next section.
### \(M_{nm}^{\rm sq}-M_{00}^{\rm sq}\) at threshold. Square case, \(\alpha=\beta\)
As discussed in the main body of this paper, to determine a pairing threshold, energy \(E\) must be sent to the lowest energy of two free particles from below. For the isotropic square \(UV\) model, it means \(E\to-2\alpha-0\). As shown in section A.4, all \(M_{nm}^{\rm sq}\) logarithmically diverge in that limit and direct determination of the threshold is impossible. The procedure can be regularized by subtracting from all \(M_{nm}^{\rm sq}\) a common term that has the same logarithmic divergence. The subtrahend can be any one of \(M_{nm}^{\rm sq}\) or any other function that has the same logarithmic divergence. The most obvious choice is \(M_{00}^{\rm sq}\). Accordingly, we introduce the following new functions
\[L_{nm}^{\rm sq}(\alpha,\alpha)\equiv M_{nm}^{\rm sq}(E=-2\alpha;\alpha,\alpha) -M_{00}^{\rm sq}(E=-2\alpha;\alpha,\alpha)\,, \tag{119}\]
that are finite for all \(n\), \(m\) and \(\alpha\). It can be deduced from the definitions, Eq. (109) and Eq. (119), that all \(L_{nm}^{\rm sq}<0\).
There are two principle ways of calculating \(L_{nm}^{\rm sq}\). For those \(\{nm\}\) for which full expression \(M_{nm}^{\rm sq}\) is known, \(L_{nm}^{\rm sq}\) is given by its non-singular part, utilizing \({\bf E}(1)=1\). But in general, \(L_{nm}^{\rm sq}\) can be calculated directly from the definition,
Eq. (165), because in this special case the second integration is also elementary. Below, we list \(L^{\rm sq}_{nm}\) for \(|n|+|m|\leq 5\).
\[L^{\rm sq}_{10}(\alpha,\alpha) = L^{\rm sq}_{01}(\alpha,\alpha)=-\frac{1}{2\alpha}=-\frac{1}{ \alpha}\,0.500000000\ldots\,, \tag{170}\] \[L^{\rm sq}_{20}(\alpha,\alpha) = L^{\rm sq}_{02}(\alpha,\alpha)=-\frac{2\,(\pi-2)}{\pi\alpha}=- \frac{1}{\alpha}\,0.7267604552\ldots\,,\] (171) \[L^{\rm sq}_{11}(\alpha,\alpha) = -\frac{2}{\pi\alpha}=-\frac{1}{\alpha}\,0.6366197723\ldots\,,\] (172) \[L^{\rm sq}_{30}(\alpha,\alpha) = L^{\rm sq}_{03}(\alpha,\alpha)=-\frac{17\pi-48}{2\pi\alpha}=- \frac{1}{\alpha}\,0.8605627315\ldots\,,\] (173) \[L^{\rm sq}_{21}(\alpha,\alpha) = L^{\rm sq}_{12}(\alpha,\alpha)=-\frac{8-\pi}{2\pi\alpha}=- \frac{1}{\alpha}\,0.7732395447\ldots\,,\] (174) \[L^{\rm sq}_{40}(\alpha,\alpha) = L^{\rm sq}_{04}(\alpha,\alpha)=-\frac{120\pi-368}{3\pi\alpha}=- \frac{1}{\alpha}\,0.9539872947\ldots\,,\] (175) \[L^{\rm sq}_{31}(\alpha,\alpha) = L^{\rm sq}_{31}(\alpha,\alpha)=-\frac{46-12\pi}{3\pi\alpha}=- \frac{1}{\alpha}\,0.8807515881\ldots\,,\] (176) \[L^{\rm sq}_{22}(\alpha,\alpha) = -\frac{8}{3\pi\alpha}=-\frac{1}{\alpha}\,0.8488263631\ldots\,,\] (177) \[L^{\rm sq}_{50}(\alpha,\alpha) = L^{\rm sq}_{05}(\alpha,\alpha)=-\frac{1203\pi-3760}{6\pi\alpha}= -\frac{1}{\alpha}\,1.0258046581\ldots\,,\] (178) \[L^{\rm sq}_{41}(\alpha,\alpha) = L^{\rm sq}_{14}(\alpha,\alpha)=-\frac{160-49\pi}{2\pi\alpha}=- \frac{1}{\alpha}\,0.9647908947\ldots\,,\] (179) \[L^{\rm sq}_{32}(\alpha,\alpha) = L^{\rm sq}_{23}(\alpha,\alpha)=-\frac{8+3\pi}{6\pi\alpha}=- \frac{1}{\alpha}\,0.9244131815\ldots\,. \tag{180}\]
### Basic integrals in the rectangular case, \(\alpha\neq\beta\)
Evaluation of basic integrals in the general case, \(\alpha\neq\beta\), parallels that of the isotropic case considered in Sec. A.2. First, integration over \(q_{y}\), and two substitutions, Eq. (162) and Eq. (163), directly yield
\[M^{\rm rt}_{00}(E;\alpha,\beta)=\frac{2}{\pi\sqrt{|E|^{2}-(\alpha-\beta)^{2}}} \,{\bf K}(\kappa_{0})\,, \tag{181}\]
where
\[\kappa_{0}\equiv\sqrt{\frac{4\,\alpha\beta}{|E|^{2}-(\alpha-\beta)^{2}}}\,. \tag{182}\]
In order to derive \(M^{\rm rt}_{10}\) and \(M^{\rm rt}_{20}\), we apply the transformation sequence, Eqs. (170)-(171). The only additional information needed is a relation between \({\bf\Pi}(-n,\kappa)\) and \({\bf\Pi}(+n,\kappa)\) (Ref. [157], # 17.7.17) which for our case reads
\[{\bf\Pi}\left(-\frac{2\alpha}{|E|+\beta-\alpha},\kappa\right)=\frac{2\beta}{| E|+\alpha+\beta}\,{\bf K}(\kappa)+\frac{|E|-\alpha-\beta}{|E|+\alpha+\beta}\,{ \bf\Pi}\left(\frac{2\alpha}{|E|+\alpha-\beta},\kappa\right). \tag{183}\]
With that, the results are
\[M^{\rm rt}_{10}(E\leq-(\alpha+\beta);\alpha,\beta) = \frac{2}{\pi\alpha\sqrt{|E|^{2}-(\alpha-\beta)^{2}}}\left\{\left( |E|-\beta\right){\bf K}(\kappa_{0})-\left(|E|-\alpha-\beta\right){\bf\Pi} \left(n_{10},\kappa_{0}\right)\right\}, \tag{184}\] \[M^{\rm rt}_{20}(E\leq-(\alpha+\beta);\alpha,\beta) = \frac{2}{\pi\alpha^{2}\sqrt{|E|^{2}-(\alpha-\beta)^{2}}}\left\{ \left(|E|-\beta\right)^{2}{\bf K}(\kappa_{0})+\left[|E|^{2}-(\alpha-\beta)^{2} \right]{\bf E}(\kappa_{0})\right.\] (185) \[\left.-2|E|\left(|E|-\alpha-\beta\right){\bf\Pi}\left(n_{10}, \kappa_{0}\right)\right\},\] \[n_{10} = \frac{2\alpha}{|E|+\alpha-\beta}\,. \tag{186}\]
\(M_{01}^{\rm rt}\) and \(M_{02}^{\rm rt}\) are obtained by permuting \(\alpha\leftrightarrow\beta\):
\[M_{01}^{\rm rt}(E\leq-(\alpha+\beta);\alpha,\beta) = \frac{2}{\pi\beta\sqrt{|E|^{2}-(\alpha-\beta)^{2}}}\left\{\left(|E |-\alpha\right){\bf K}(\kappa_{0})-\left(|E|-\alpha-\beta\right){\bf\Pi}\left( n_{01},\kappa_{0}\right)\right\}, \tag{101}\] \[M_{02}^{\rm rt}(E\leq-(\alpha+\beta);\alpha,\beta) = \frac{2}{\pi\beta^{2}\sqrt{|E|^{2}-(\alpha-\beta)^{2}}}\left\{ \left(|E|-\alpha\right)^{2}{\bf K}(\kappa_{0})+\left[|E|^{2}-(\alpha-\beta)^{2 }\right]{\bf E}(\kappa_{0})\right.\] (102) \[\left.-2|E|\left(|E|-\alpha-\beta\right){\bf\Pi}\left(n_{01}, \kappa_{0}\right)\right\},\] \[n_{01} = \frac{2\beta}{|E|+\beta-\alpha}\,. \tag{103}\]
Note that \(n_{10}n_{01}=\kappa_{0}^{2}\). Next, we compose a sum rule that is an anisotropic version of Eq. (100):
\[\alpha\left\{M_{n+1,m}^{\rm rt}(\alpha,\beta)+M_{n-1,m}^{\rm rt}(\alpha,\beta )\right\}+\beta\left\{M_{n,m-1}^{\rm rt}(\alpha,\beta)+M_{n,m+1}^{\rm rt}( \alpha,\beta)\right\}=-2\,\delta_{n0}\delta_{m0}+2|E|\,M_{nm}^{\rm rt}(\alpha, \beta)\,. \tag{104}\]
It can be verified by direct substitution of the definitions, Eq. (100). The \(n=m=0\) version of the sum rule reads
\[|E|\,M_{00}^{\rm rt}(\alpha,\beta)-\alpha M_{10}^{\rm rt}(\alpha,\beta)-\beta M _{01}^{\rm rt}(\alpha,\beta)=1\,, \tag{105}\]
which is satisfied by the expressions given above. By setting \((n,m)=(1,0)\) or \((0,1)\) in Eq. (104), one can derive the first diagonal integral
\[M_{11}^{\rm rt}(E\leq-(\alpha+\beta);\alpha,\beta)=\frac{1}{\pi\alpha\beta \sqrt{|E|^{2}-(\alpha-\beta)^{2}}}\left\{\left(|E|^{2}-\alpha^{2}-\beta^{2} \right){\bf K}(\kappa_{0})-\left[|E|^{2}-(\alpha-\beta)^{2}\right]{\bf E}( \kappa_{0})\right\}. \tag{106}\]
Equations (102), (103), (104), (105), (106) and (106) are sufficient to analyze the square \(UV\) model at arbitrary pair momenta away from the threshold. The same equations are instrumental in developing an efficient numerical method of computing similar integrals in 3D \(UV\) models, see Sec. C.3.
By using the identity \(\frac{1}{c}=\int_{0}^{\infty}\exp\left(-cz\right)dz\), Eq. (100) can be transformed into an integral of a product of two Bessel functions:
\[M_{nm}^{\rm rt}(E;\alpha,\beta)=\int_{0}^{\infty}e^{-|E|z}\,I_{n}(\alpha z)I_ {m}(\beta z)\,{\rm d}z\,. \tag{107}\]
Using this representation, expressions for \(M_{00}^{\rm rt}(E;\alpha,\beta)\) and \(M_{11}^{\rm rt}(E;\alpha,\beta)\) can also be found in integrals handbooks (see, e.g., Ref. [159], # 2.15.20.1), albeit without derivation.
### Recurrence relations for the rectangular case, \(\alpha\neq\beta\)
To proceed further, we need anisotropic analogs of the Morita recurrence relation.[158] Repeating the derivation twice, first time for \(M_{n0}^{\rm rt}\) and second time for \(M_{0m}^{\rm rt}\), one obtains
\[2n(2|E|^{2}+\alpha^{2}-2\beta^{2})M_{n0}^{\rm rt} -2\alpha|E|(2n+1)M_{n+1,0}^{\rm rt}-2\alpha|E|(2n-1)M_{n-1,0}^{ \rm rt}\] \[+\alpha^{2}(n+1)M_{n+2,0}^{\rm rt}+\alpha^{2}(n-1)M_{n-2,0}^{ \rm rt}=0\,,\qquad n\neq 0\,, \tag{108}\] \[2m(2|E|^{2}+\beta^{2}-2\alpha^{2})M_{0m}^{\rm rt}-2\beta|E|(2m+1 )M_{0,m+1}^{\rm rt}-2\beta|E|(2m-1)M_{0,m-1}^{\rm rt}\] \[+\beta^{2}(m+1)M_{0,m+2}^{\rm rt}+\beta^{2}(m-1)M_{0,m-2}^{\rm rt }=0\,,\qquad m\neq 0\,. \tag{109}\]
Using Eqs. (108)-(109), one can derive all \(M_{n0}^{\rm rt}\) and \(M_{0m}^{\rm rt}\), at least in principle. For example,
\[M_{30}^{\rm rt}(E\leq-(\alpha+\beta);\alpha,\beta) = \frac{2}{\pi\alpha^{3}\sqrt{|E|^{2}-(\alpha-\beta)^{2}}}\left\{ \left[(|E|-\beta)(|E|^{2}-3|E|\beta-\alpha^{2}+2\beta^{2})+|E|\alpha^{2} \right]{\bf K}(\kappa_{0})\right. \tag{110}\] \[\left.+\,3\,|E|\left[|E|^{2}-(\alpha-\beta)^{2}\right]{\bf E}( \kappa_{0})-(4|E|^{2}-\alpha^{2}+2\beta^{2})(|E|-\alpha-\beta){\bf\Pi}(n_{10}, \kappa_{0})\right\}.\]
\(M_{03}^{\rm rt}\) is obtained from Eq. (104) by permuting \(\alpha\leftrightarrow\beta\) and \(n_{10}\leftrightarrow n_{01}\).
Then, utilizing the sum rule, Eq. (104), one can derive all other \(M_{nm}^{\rm rt}\). For example,
\[M_{21}^{\rm rt}(E\leq-(\alpha+\beta);\alpha,\beta) = \frac{|E|}{\beta}\,M_{20}^{\rm rt}-\frac{\alpha}{2\beta}\left(M_{ 10}^{\rm rt}+M_{30}^{\rm rt}\right), \tag{111}\] \[M_{12}^{\rm rt}(E\leq-(\alpha+\beta);\alpha,\beta) = \frac{|E|}{\alpha}\,M_{02}^{\rm rt}-\frac{\beta}{2\alpha}\left(M_{ 01}^{\rm rt}+M_{03}^{\rm rt}\right), \tag{112}\]
and so on. These analytical expressions are useful in investigations of square, simple cubic, and tetragonal \(UV\) models with attraction beyond nearest neighbors.
### Table of \(M_{nm}-M_{00}\) at threshold in the rectangular case, \(\alpha\neq\beta\)
All \(M_{nm}^{\rm rt}\) derived in Sec. A.5 and Sec. A.6 diverge at pair binding threshold when \(E=-\alpha-\beta\). To obtain meaningful results, a subtractive procedure must be used. We define
\[L_{nm}^{\rm rt}(\alpha,\beta)\equiv M_{nm}^{\rm rt}(E=-\alpha- \beta;\alpha,\beta)-M_{00}^{\rm rt}(E=-\alpha-\beta;\alpha,\beta)\;. \tag{101}\]
\(L_{nm}^{\rm rt}(\alpha,\beta)\) can be derived either by taking the \(E\to-\alpha-\beta\) limit in the general expressions for \(M_{nm}^{\rm rt}(\alpha,\beta)\), or by direct evaluation of the double integrals in definitions, Eq. (101). In the latter case, the second integration is elementary but can be laborious. Below, we list several \(L_{nm}^{\rm rt}(\alpha,\beta)\) for the lowest \((n,m)\).
\[L_{10}^{\rm rt}(\alpha,\beta) = -\frac{2}{\pi\alpha}\arcsin\sqrt{\frac{\alpha}{\alpha+\beta}}\,, \tag{102}\] \[L_{01}^{\rm rt}(\alpha,\beta) = -\frac{2}{\pi\beta}\arcsin\sqrt{\frac{\beta}{\alpha+\beta}}\,,\] (103) \[L_{20}^{\rm rt}(\alpha,\beta) = -\frac{4}{\pi\alpha}\left[\frac{\alpha+\beta}{\alpha}\arcsin\sqrt {\frac{\alpha}{\alpha+\beta}}-\sqrt{\frac{\beta}{\alpha}}\right],\] (104) \[L_{02}^{\rm rt}(\alpha,\beta) = -\frac{4}{\pi\beta}\left[\frac{\alpha+\beta}{\beta}\arcsin\sqrt {\frac{\beta}{\alpha+\beta}}-\sqrt{\frac{\alpha}{\beta}}\right],\] (105) \[L_{11}^{\rm rt}(\alpha,\beta) = -\frac{2}{\pi\sqrt{\alpha\beta}}\,,\] (106) \[L_{21}^{\rm rt}(\alpha,\beta) = -\frac{2}{\pi\alpha}\left[\frac{\alpha+\beta}{\sqrt{\alpha\beta} }-\frac{\beta}{\alpha}\arcsin\sqrt{\frac{\alpha}{\alpha+\beta}}\right],\] (107) \[L_{12}^{\rm rt}(\alpha,\beta) = -\frac{2}{\pi\beta}\left[\frac{\alpha+\beta}{\sqrt{\alpha\beta} }-\frac{\alpha}{\beta}\arcsin\sqrt{\frac{\beta}{\alpha+\beta}}\right],\] (108) \[L_{22}^{\rm rt}(\alpha,\beta) = -\frac{8}{3\pi\sqrt{\alpha\beta}}\,. \tag{109}\]
These expressions are needed in deriving the pairing threshold in the square \(UV\) model at nonzero pair momenta.[80]
## Appendix B \(Uv\) model on the 2D triangular lattice
### Evaluation of basic integral \(M_{00}^{\rm tr}\) for the triangular lattice
The single-particle dispersion in the nearest-neighbor approximation is
\[\varepsilon_{\bf k}=-2t\cos k_{x}-4t\cos\left(\frac{k_{x}}{2} \right)\cos\left(\frac{\sqrt{3}k_{y}}{2}\right)\!. \tag{110}\]
Substituting this into Eq. (15), shifting the integration variables \({\bf q}={\bf q}^{\prime}+\frac{{\bf P}}{2}\), and rescaling \(q_{y}^{\prime}=\frac{2}{\sqrt{3}}\,q_{y}^{\prime\prime}\), the double integral \(M_{00}^{\rm tr}\) is transformed into an integral over the \(-\pi\leq q_{x},q_{y}\leq\pi\) square domain:
\[M_{00}^{\rm tr}(E,{\bf P}) = \int\limits_{-\pi^{-}\pi}^{\pi}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
where
\[\alpha = 4t\cos\frac{P_{x}}{2}\,, \tag{111}\] \[\beta = 8t\cos\frac{P_{x}}{4}\cos\frac{\sqrt{3}P_{y}}{4}\,,\] (112) \[\gamma = 8t\sin\frac{P_{x}}{4}\sin\frac{\sqrt{3}P_{y}}{4}\,.\] (113) \[\tilde{\alpha} = \alpha^{2}\,,\] (114) \[\tilde{\beta} = -2\alpha|E|-\frac{\beta^{2}}{2}+\frac{\gamma^{2}}{2}\,,\] (115) \[\tilde{\gamma} = |E|^{2}-\frac{\beta^{2}}{2}-\frac{\gamma^{2}}{2}\,. \tag{116}\]
Integration over \(q_{y}\) in Eq. (110) was done by utilizing the residue theorem. The next step depends on whether the expression under the square root can be factorized into a product of two real factors. If \(\tilde{\beta}^{2}>4\tilde{\alpha}\tilde{\gamma}\), it is possible, and
\[\sqrt{\tilde{\alpha}\cos^{2}q_{x}+\tilde{\beta}\cos q_{x}+\tilde{ \gamma}} = \alpha\sqrt{(z_{1}-\cos\phi)(z_{2}-\cos\phi)}\,, \tag{117}\] \[z_{1,2} = \frac{1}{2\tilde{\alpha}}\left(-\tilde{\beta}\pm\sqrt{\tilde{ \beta}^{2}-4\tilde{\alpha}\tilde{\gamma}}\right). \tag{118}\]
Then the problem is reduced to Eq. (112), and the same transformations, Eqs. (110) and (113), result in
\[M_{00}^{\rm tr}=\frac{2}{\pi\alpha}\frac{1}{\sqrt{(z_{1}-1)(z_{2}+1)}}\,{\bf K }\!\left(\sqrt{\frac{2(z_{1}-z_{2})}{(z_{1}-1)(z_{2}+1)}}\right). \tag{119}\]
In the ground state, \(P_{x}=P_{y}=0\), \(\alpha=4t\), \(\beta=8t\), \(\gamma=0\), and
\[z_{1,2}=\frac{1}{\alpha}\left\{(|E|+\alpha)\pm\sqrt{2\alpha|E|+3\alpha^{2}} \right\}. \tag{120}\]
Substitution in Eq. (119) yields
\[M_{00}^{\rm tr}=\frac{2}{\pi}\frac{1}{\sqrt{|E_{0}|^{2}-48t^{2}+16t\sqrt{2t|E_ {0}|+12t^{2}}}}\,{\bf K}\!\left(\sqrt{\frac{32t\sqrt{2|E_{0}|t+12t^{2}}}{|E_{ 0}|^{2}-48t^{2}+16t\sqrt{2|E_{0}|t+12t^{2}}}}\right). \tag{121}\]
If \(\tilde{\beta}^{2}<4\tilde{\alpha}\tilde{\gamma}\), factorization is not possible. Reduction to Legendre standard form is achieved by consecutive application of Eq. (110) and
\[u = \frac{1}{2}+\sqrt{\frac{1}{4}+\frac{p}{2}+q}\cdot\tan^{2}\frac{ \phi}{2}\,, \tag{122}\] \[p = \frac{\tilde{\beta}-2\tilde{\alpha}}{\tilde{\alpha}-\tilde{\beta }+\tilde{\gamma}}\,,\] (123) \[q = \frac{\tilde{\alpha}}{\tilde{\alpha}-\tilde{\beta}+\tilde{\gamma }}\,. \tag{124}\]
The final result is
\[M_{00}^{\rm tr}=\frac{2}{\pi}\frac{1}{\left[(\tilde{\alpha}+\tilde{\gamma})^{2 }-\tilde{\beta}^{2}\right]^{1/4}}\,{\bf K}\!\left(\sqrt{\frac{1}{2}\left[1- \frac{\tilde{\gamma}-\tilde{\alpha}}{\sqrt{(\tilde{\alpha}+\tilde{\gamma})^{2 }-\tilde{\beta}^{2}}}\right]}\right). \tag{125}\]
### Numerical evaluation of \(M_{\rm nm}^{\pm}\) for the triangular lattice
In this section, the following integrals are utilized:
\[\int_{-\pi}^{\pi}\frac{{\rm d}x}{2\pi}\frac{1}{a-b\cos x-c\sin x} =\frac{1}{\sqrt{a^{2}-b^{2}-c^{2}}}\,, \tag{111}\] \[\int_{-\pi}^{\pi}\frac{{\rm d}x}{2\pi}\frac{\cos x}{a-b\cos x-c \sin x} =\frac{b}{b^{2}+c^{2}}\left(\frac{a}{\sqrt{a^{2}-b^{2}-c^{2}}}-1 \right),\] (112) \[\int_{-\pi}^{\pi}\frac{{\rm d}x}{2\pi}\frac{\sin x}{a-b\cos x-c \sin x} =\frac{c}{b^{2}+c^{2}}\left(\frac{a}{\sqrt{a^{2}-b^{2}-c^{2}}}-1 \right),\] (113) \[\int_{-\pi}^{\pi}\frac{{\rm d}x}{2\pi}\frac{\cos(2x)}{a-b\cos x- c\sin x} =\frac{b^{2}-c^{2}}{\sqrt{a^{2}-b^{2}-c^{2}}}\left(\frac{a-\sqrt{a ^{2}-b^{2}-c^{2}}}{b^{2}+c^{2}}\right)^{2},\] (114) \[\int_{-\pi}^{\pi}\frac{{\rm d}x}{2\pi}\frac{\sin(2x)}{a-b\cos x- c\sin x} =\frac{2bc}{\sqrt{a^{2}-b^{2}-c^{2}}}\left(\frac{a-\sqrt{a^{2}-b^{ 2}-c^{2}}}{b^{2}+c^{2}}\right)^{2},\] (115) \[\int_{0}^{\pi}\frac{{\rm d}x}{\pi}\frac{1}{\sqrt{a\cos^{2}x+b \cos x+c}} =\frac{2}{\pi}\frac{1}{[(a+c)^{2}-b^{2}]^{1/4}}\,{\bf K}\!\left[ \sqrt{\frac{1}{2}\left(1-\frac{c-a}{\sqrt{(a+c)^{2}-b^{2}}}\right)}\right], \tag{116}\]
where \(a,b,c>0\), and \(a^{2}>b^{2}+c^{2}\) is assumed. We also define
\[S\equiv\left(|E|-\alpha\cos q_{x}\right)^{2}-\beta^{2}\cos^{2}\frac{q_{x}}{2} -\gamma^{2}\sin^{2}\frac{q_{x}}{2}\,, \tag{117}\]
to simplify notation. The parameters \(\alpha\), \(\beta\), and \(\gamma\) are defined in Eqs. (104)-(105).
The singlet matrix elements are defined in Eqs. (179)-(183). The nearest-neighbor vectors are defined as: \({\bf 0}=(0,0)\); \({\bf b}_{+1}\equiv{\bf 1}=(1,0)\); \({\bf b}_{+2}\equiv{\bf 2}=\left(\frac{1}{2},\frac{\sqrt{3}}{2}\right)\); \({\bf b}_{+3}\equiv{\bf 3}=\left(-\frac{1}{2},\frac{\sqrt{3}}{2}\right)\). Integration over \(q_{y}\) yields
\[M_{{\bf 00}}^{+} =\int\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
\[M_{\bf 11}^{+} = \int\limits_{-\pi-\pi}^{\pi}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
\((1,0)\); \({\bf b}_{-2}\equiv{\bf 2}=\left(\frac{1}{2},\frac{\sqrt{3}}{2}\right)\); \({\bf b}_{-3}\equiv{\bf 3}=\left(-\frac{1}{2},\frac{\sqrt{3}}{2}\right)\). Integration over \(q_{y}\) yields
\[M_{{\bf 11}}^{-}=(-2\,i)\int\limits_{-\pi-\pi}^{\pi}\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
### Table of \(M_{\bf nm}^{\rm tr}-M_{\bf 00}^{\rm tr}\) at threshold at the \(\Gamma\)-point
We set \(P_{x}=P_{y}=0\) and \(E=-12\,t\). Then \(\alpha=4t\), \(\beta=8t\) and \(\gamma=0\). \(L_{\bf nm}^{+}\) are defined in Eqs. (191)-(193). Utilizing Eqs. (194)-(193), elementary integration yields:
\[L_{\bf 10}^{+} = \int_{0}^{\pi}\frac{{\rm d}q_{x}}{\pi}\frac{\cos q_{x}-1}{\sqrt{( 12t-4t\cos q_{x})^{2}-64t^{2}\cos^{2}\frac{q_{x}}{2}}}=-\frac{1}{12\,t}\,, \tag{195}\] \[L_{\bf 11}^{+} = \int_{0}^{\pi}\frac{{\rm d}q_{x}}{\pi}\frac{\cos^{2}q_{x}-1}{\sqrt {(12t-4t\cos q_{x})^{2}-64t^{2}\cos^{2}\frac{q_{x}}{2}}}=\frac{3\sqrt{3}-2\pi} {3\pi t}\,,\] (196) \[L_{\bf 12}^{+} = \int_{0}^{\pi}\frac{{\rm d}q_{x}}{\pi}\frac{\cos q_{x}(3-\cos q_{ x})-2}{\sqrt{(12t-4t\cos q_{x})^{2}-64t^{2}\cos^{2}\frac{q_{x}}{2}}}=\frac{\pi-6 \sqrt{3}}{12\pi t}\,. \tag{197}\]
### Application of group theory
The dispersion relations, Eqs. (178) and (186), are quite complex and in general can be solved only numerically. However, at high symmetry points of the Brillouin zone there is enough simplification that some results can be derived analytically. The analysis is greatly aided by the theory of point groups. This section is devoted to the exposition of this subject. The triangular lattice is used as an example; \(UV\) models on other lattices can be analyzed similarly.
At the \(\Gamma\)-point (\({\bf P}=0\)), the following relations among singlet matrix elements hold:
\[M_{\bf 10}^{+}(\Gamma) = M_{\bf 20}^{+}(\Gamma)=M_{\bf 30}^{+}(\Gamma)\;, \tag{198}\] \[M_{\bf 01}^{+}(\Gamma) = M_{\bf 02}^{+}(\Gamma)=M_{\bf 03}^{+}(\Gamma)=2\,M_{\bf 10}^{+}( \Gamma)\;,\] (199) \[M_{\bf 11}^{+}(\Gamma) = M_{\bf 22}^{+}(\Gamma)=M_{\bf 33}^{+}(\Gamma)\;,\] (200) \[M_{\bf 12}^{+}(\Gamma) = M_{\bf 21}^{+}(\Gamma)=M_{\bf 13}^{+}(\Gamma)=\] (201) \[M_{\bf 31}^{+}(\Gamma) = M_{\bf 23}^{+}(\Gamma)=M_{\bf 32}^{+}(\Gamma)\;. \tag{202}\]
Thus, there remains only four independent integrals \(M_{\bf 11}^{-}(\Gamma)\) and \(M_{\bf 12}^{-}(\Gamma)\). As a result, the general Eqs. (178) and (186) acquire additional symmetries. It is clear that a bound pair at \({\bf P}=0\) should possess a hexagonal symmetry, which implies the solutions can be classified according to the irreducible representations (irreps) of the point group \(C_{6v}\). We now execute a textbook algorithm [160] of constructing linear combinations of functions \(\Phi\) that form bases of the irreps of \(C_{6v}\).
The starting point is the three basis functions \(\Phi_{1-3}^{+}\) in the singlet case (\(\Phi_{0}\) will be added later) and \(\Phi_{1-3}^{-}\) in the triplet case. By acting on both bases by the symmetry operations of \(C_{6v}\), see Fig. 28, two _reducible_ representations \(D^{+}\) and \(D^{-}\) are constructed. For example,
\[D^{+}(C_{6}^{1})=\left(\begin{array}{ccc}0&0&1\\ 1&0&0\\ 0&1&0\end{array}\right),\;\;\;D^{-}(\sigma_{2}^{\prime})=\left(\begin{array}[ ]{ccc}-1&0&0\\ 0&0&1\\ 0&1&0\end{array}\right), \tag{203}\]
and so on. The characters of \(D^{\pm}\) are given in Table 2. Applying the orthogonality theorem, one obtains the decompositions
\[D^{+} = A_{1}\oplus E_{2}\;, \tag{204}\] \[D^{-} = B_{2}\oplus E_{1}\;. \tag{205}\]
Next, we construct symmetrized linear combinations for the one-dimensional irreps \(A_{1}\) and \(B_{2}\) according to the
Figure 28: Symmetry operations of a bound pair in the triangular \(UV\) model (point group \(C_{6v}\)). Rotations are defined in the anti-clockwise direction. \(\sigma\) denote mirror reflections with respect to indicated lines. The dot-dashed oval illustrates the symmetric basis function \(\Phi_{1}^{+}\).
formula [160] (the overall normalization factors are omitted)
\[\phi^{A_{1}} = \sum_{g\in C_{6v}}\chi^{A_{1}}(g)\,\hat{g}\Phi_{1}^{+}=\Phi_{1}^{+}+ \Phi_{2}^{+}+\Phi_{3}^{+}\, \tag{114}\] \[\phi^{B_{2}} = \sum_{g\in C_{6v}}\chi^{B_{2}}(g)\,\hat{g}\Phi_{1}^{-}=\Phi_{1}^{-} -\Phi_{2}^{-}+\Phi_{3}^{-}\, \tag{115}\]
where \(g\) are all the symmetry operations of \(C_{6v}\) listed in the top row of Table 2. For the two-dimensional irreps \(E_{1}\) and \(E_{2}\), the characters \(\chi\) need to be replaced with full \(2\times 2\) matrices \(D^{E}(g)\) representing group elements \(g\)
\[\phi_{ik}^{E_{2,1}}=\sum_{g\in C_{6v}}D_{ik}^{E_{2,1}}(g)\,\hat{g}\Phi_{1}^{\pm }. \tag{116}\]
The matrices \(D\) for both \(E_{2}\) and \(E_{1}\) are given below. For a fixed index \(k\), application of Eq. (116) yields two linear combinations (one for \(i=1\) and another for \(i=2\)) that together form a basis sought. Thus, each irrep gets two bases (one for \(k=1\) and one for \(k=2\)), one of which may be trivial. In the case of \(E_{2}\), the \(k=2\) basis turns out to be trivial (identical zero) whereas \(k=1\) produces a nontrivial basis
\[\phi^{E_{2}}=\left\{\begin{array}{c}2\Phi_{1}^{+}-\Phi_{2}^{+}-\Phi_{3}^{+} \\ \Phi_{2}^{+}-\Phi_{3}^{+}\end{array}\right.. \tag{117}\]
Equations (114) and (117) are combined to form a new basis, Eq. (189). A similar treatment of irrep \(E_{1}\) yields a trivial basis for \(k=1\) and a nontrivial one for \(k=2\):
\[\phi^{E_{1}}=\left\{\begin{array}{c}-\Phi_{2}^{-}-\Phi_{3}^{-}\\ 2\Phi_{1}^{-}+\Phi_{2}^{-}-\Phi_{3}^{-}\end{array}\right.. \tag{118}\]
This completes the analysis of symmetry at the \(\Gamma\) point. Equations (115) and (118) are combined to form a new basis, Eq. (197).
The matrices of \(E_{2}\) are:
\[D^{E_{2}}(E)=\left[\begin{array}{cc}1&0\\ 0&1\end{array}\right]\,\hskip 14.226378ptD^{E_{2}}(C_{6}^{1})=\left[ \begin{array}{cc}-\frac{1}{2}&-\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}&-\frac{1}{2}\end{array}\right]\,,\hskip 14.226378ptD^{E_{2}}(C_{3}^ {1})=\left[\begin{array}{cc}-\frac{1}{2}&\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}&-\frac{1}{2}\end{array}\right]\,, \tag{119}\]
\[D^{E_{2}}(C_{2})=\left[\begin{array}{cc}1&0\\ 0&1\end{array}\right]\,,\hskip 14.226378ptD^{E_{2}}(C_{3}^{2})=\left[ \begin{array}{cc}-\frac{1}{2}&-\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}&-\frac{1}{2}\end{array}\right]\,,\hskip 14.226378ptD^{E_{2}}(C_{6}^ {5})=\left[\begin{array}{cc}-\frac{1}{2}&\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}&-\frac{1}{2}\end{array}\right]\,, \tag{120}\]
\[D^{E_{2}}(\sigma_{1})=\left[\begin{array}{cc}1&0\\ 0&-1\end{array}\right]\,,\hskip 14.226378ptD^{E_{2}}(\sigma_{1}^{\prime})= \left[\begin{array}{cc}-\frac{1}{2}&\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}&\frac{1}{2}\end{array}\right]\,,\hskip 14.226378ptD^{E_{2}}( \sigma_{2})=\left[\begin{array}{cc}-\frac{1}{2}&-\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}&\frac{1}{2}\end{array}\right]\,, \tag{121}\]
\[D^{E_{2}}(\sigma_{2}^{\prime})=\left[\begin{array}{cc}1&0\\ 0&-1\end{array}\right]\,,\hskip 14.226378ptD^{E_{2}}(\sigma_{3})=\left[ \begin{array}{cc}-\frac{1}{2}&\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}&\frac{1}{2}\end{array}\right]\,,\hskip 14.226378ptD^{E_{2}}( \sigma_{3}^{\prime})=\left[\begin{array}{cc}-\frac{1}{2}&-\frac{\sqrt{3}}{2} \\ -\frac{\sqrt{3}}{2}&\frac{1}{2}\end{array}\right]\,. \tag{122}\]
The matrices of \(E_{1}\) are:
\[D^{E_{1}}(E)=\left[\begin{array}{cc}1&0\\ 0&1\end{array}\right]\,,\hskip 14.226378ptD^{E_{1}}(C_{6}^{1})=\left[ \begin{array}{cc}\frac{1}{2}&-\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}&\frac{1}{2}\end{array}\right]\,,\hskip 14.226378ptD^{E_{1}}(C_{3}^ {1})=\left[\begin{array}{cc}-\frac{1}{2}&-\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}&-\frac{1}{2}\end{array}\right]\,, \tag{123}\]
\[D^{E_{1}}(C_{2})=\left[\begin{array}{cc}-1&0\\ 0&-1\end{array}\right]\,,\hskip 14.226378ptD^{E_{1}}(C_{3}^{2})=\left[ \begin{array}{cc}-\frac{1}{2}&\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}&-\frac{1}{2}\end{array}\right]\,,\hskip 14.226378ptD^{E_{1}}(C_{6}^{ 5})=\left[\begin{array}{cc}\frac{1}{2}&\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}&\frac{1}{2}\end{array}\right]\,, \tag{142}\]
\[D^{E_{1}}(\sigma_{1})=\left[\begin{array}{cc}-1&0\\ 0&1\end{array}\right]\,,\hskip 14.226378ptD^{E_{2}}(\sigma_{1}^{\prime})= \left[\begin{array}{cc}-\frac{1}{2}&-\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}&\frac{1}{2}\end{array}\right]\,,\hskip 14.226378ptD^{E_{1}}( \sigma_{2})=\left[\begin{array}{cc}\frac{1}{2}&-\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}&-\frac{1}{2}\end{array}\right]\,, \tag{143}\]
\[D^{E_{1}}(\sigma_{2}^{\prime})=\left[\begin{array}{cc}1&0\\ 0&-1\end{array}\right]\,,\hskip 14.226378ptD^{E_{1}}(\sigma_{3})=\left[ \begin{array}{cc}\frac{1}{2}&\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2}&-\frac{1}{2}\end{array}\right]\,,\hskip 14.226378ptD^{E_{1}}( \sigma_{3}^{\prime})=\left[\begin{array}{cc}-\frac{1}{2}&\frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2}&-\frac{1}{2}\end{array}\right]\,. \tag{144}\]
## Appendix C \(Uv\) model on the simple cubic and tetragonal lattices
### Green's functions, Eq. (204), in the simple cubic case, \(\alpha=\beta=\gamma\)
For \(\alpha=\beta=\gamma\), the Watson integrals
\[M_{nmk}^{\rm sc}=M_{nmk}^{\rm sc}(E;\alpha,\alpha,\alpha)=\int\limits_{-\pi- \pi-\pi}^{\pi}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Joyce [89] also derived an explicit expression for \(M_{300}^{\rm sc}\) and provided an algorithm how to systematically derive \(M_{nmk}^{\rm sc}\) from \(M_{000}^{\rm sc}\), \(M_{200}^{\rm sc}\) and \(M_{300}^{\rm sc}\) using several recurrence relations. Thus, _any_\(M_{nmk}^{\rm sc}\) can be analytically expressed with products of the complete elliptic integrals \({\bf K}\) and \({\bf E}\), at least in principle.
At threshold, \(|E|=3\alpha\), _all_\(M_{nmk}^{\rm sc}\) can be expressed via \(M_{000}^{\rm sc}\) using the remarkable identities derived by Glasser and Boersma [86] and Joyce [89]
\[M_{200}^{\rm sc}(3\alpha)=M_{020}^{\rm sc}(3\alpha)=M_{002}^{\rm sc}(3\alpha)= -\frac{2}{\alpha}+\frac{10}{3}\,M_{000}^{\rm sc}(3\alpha)+\frac{2}{\pi^{2} \alpha^{2}M_{000}^{\rm sc}(3\alpha)}\:, \tag{111}\]
\[M_{300}^{\rm sc}(3\alpha)=M_{030}^{\rm sc}(3\alpha)=M_{003}^{\rm sc}(3\alpha)= -\frac{13}{\alpha}+\frac{35}{2}\,M_{000}^{\rm sc}(3\alpha)+\frac{21}{\pi^{2} \alpha^{2}M_{000}^{\rm sc}(3\alpha)}\:. \tag{112}\]
All other \(M_{nmk}^{\rm sc}(3\alpha)\) can be obtained from the recurrence relations [86; 89]. For example,
\[M_{110}^{\rm sc}(3\alpha)=M_{101}^{\rm sc}(3\alpha)=M_{011}^{\rm sc}(3\alpha)= \frac{1}{4}\left\{5M_{000}^{\rm sc}(3\alpha)-M_{200}^{\rm sc}(3\alpha)-\frac{2 }{\alpha}\right\}. \tag{113}\]
### Green's functions, Eq. (204), in the tetragonal case, \(\alpha=\beta\neq\gamma\)
The tetragonal Green's functions are defined as
\[M_{nmk}^{\rm tg}=\int\limits_{-\pi-\pi-\pi}^{\pi}\!\!\!\int\limits_{-\pi}^{\pi }\frac{{\rm d}q_{x}\,{\rm d}q_{y}\,{\rm d}q_{z}}{(2\pi)^{3}}\frac{\cos nq_{x} \cos mq_{y}\cos kq_{z}}{|E|-\alpha(\cos q_{x}+\cos q_{y})-\gamma\cos q_{z}}\:. \tag{114}\]
As far as we are aware, only the basic integral \(M_{000}^{\rm tg}\) has been evaluated analytically for arbitrary \(|E|\), \(\alpha\), and \(\gamma\) by Delves, Joyce and Zucker [87; 88; 90]
\[M_{000}^{\rm tg}=\frac{1}{|E|}\frac{2}{\left[\sqrt{1-\frac{\alpha^{2}}{|E|^{ 2}}\left(2-\frac{\gamma}{\alpha}\right)^{2}}+\sqrt{1-\frac{\alpha^{2}}{|E|^{ 2}}\left(2+\frac{\gamma}{\alpha}\right)^{2}}\right]\sqrt{1-\kappa_{+}^{2}} \sqrt{1-\kappa_{-}^{2}}}\left[\frac{2}{\pi}{\bf K}\left(\sqrt{\frac{-\kappa_{ +}^{2}}{1-\kappa_{+}^{2}}}\right)\right]\left[\frac{2}{\pi}{\bf K}\left(\sqrt{ \frac{-\kappa_{-}^{2}}{1-\kappa_{-}^{2}}}\right)\right] \tag{115}\]
where
\[\kappa_{\pm}^{2}=\frac{1}{2}-\frac{1}{2}\frac{\left[\sqrt{1+\frac {\alpha}{|E|}\left(2-\frac{\gamma}{\alpha}\right)}\sqrt{1-\frac{\alpha}{|E|} \left(2+\frac{\gamma}{\alpha}\right)}+\sqrt{1-\frac{\alpha}{|E|}\left(2-\frac {\gamma}{\alpha}\right)}\sqrt{1+\frac{\alpha}{|E|}\left(2+\frac{\gamma}{ \alpha}\right)}\right]}{\left[\sqrt{1-\frac{\alpha^{2}}{|E|^{2}}\left(2-\frac {\gamma}{\alpha}\right)^{2}}+\sqrt{1-\frac{\alpha^{2}}{|E|^{2}}\left(2+\frac {\gamma}{\alpha}\right)^{2}}\right]^{3}}\times \tag{116}\] \[\times\left\{\pm\frac{16\alpha^{2}}{|E|^{2}}+\sqrt{1-\frac{\gamma ^{2}}{|E|^{2}}}\left[\sqrt{1+\frac{\alpha}{|E|}\left(2-\frac{\gamma}{\alpha} \right)}\sqrt{1+\frac{\alpha}{|E|}\left(2+\frac{\gamma}{\alpha}\right)}+\sqrt{ 1-\frac{\alpha}{|E|}\left(2-\frac{\gamma}{\alpha}\right)}\sqrt{1-\frac{\alpha }{|E|}\left(2+\frac{\gamma}{\alpha}\right)}\right]^{2}\right\}.\]
Compared with the original papers, we have applied the identity
\[{\bf K}(i\kappa)=\frac{1}{\sqrt{1-\kappa^{2}}}\,{\bf K}\!\left(\sqrt{\frac{- \kappa^{2}}{1-\kappa^{2}}}\right), \tag{117}\]
since \(\kappa_{\pm}^{2}<0\). Other integrals \(M_{nmk}^{\rm tr}\) can be calculated numerically using the recipes of Appendix C.3.
### Green's functions, Eq. (204), in the orthorhombic case, \(\alpha\neq\beta\neq\gamma\)
The general integrals,
\[M_{nmk}=\int\limits_{-\pi-\pi-\pi}^{\pi}\!\!\!\int\limits_{-\pi}^{\pi}\frac{{ \rm d}q_{x}\,{\rm d}q_{y}\,{\rm d}q_{z}}{(2\pi)^{3}}\frac{\cos nq_{x}\cos mq_{ y}\cos kq_{z}}{|E|-\alpha\cos q_{x}-\beta\cos q_{y}-\gamma\cos q_{z}}\:, \tag{118}\]
apply to the simple cubic \(UV\) model at arbitrary \({\bf P}\), tetragonal \(UV\) model (\(t_{x}=t_{y}\neq t_{z}\)) at arbitrary \({\bf P}\),
and _orthorhombic_\(UV\) model (\(t_{x}\neq t_{y}\neq t_{z}\), not studied in this paper) also at arbitrary \(\mathbf{P}\). They cannot be evaluated analytically. However, two out of three integrations can be carried out analytically. Thus, only one integration remains to be done numerically which leads to an efficient numerical scheme. The first integration in Eq. (118) is elementary while the second produces complete elliptic integrals. First, we introduce three elliptic moduli:
\[\kappa_{x} \equiv\sqrt{\frac{4\beta\gamma}{(|E|-\alpha\cos q_{x})^{2}-(\beta- \gamma)^{2}}}\,, \tag{119}\] \[\kappa_{y} \equiv\sqrt{\frac{4\alpha\gamma}{(|E|-\beta\cos q_{y})^{2}-( \alpha-\gamma)^{2}}}\,,\] (120) \[\kappa_{z} \equiv\sqrt{\frac{4\alpha\beta}{(|E|-\gamma\cos q_{z})^{2}-( \alpha-\beta)^{2}}}\,. \tag{121}\]
Next, we apply Eqs. (100) and (101) to perform the double integration. For \(M_{000}\), three equivalent representations are possible, depending on which two variables are integrated over:
\[M_{000} =\int_{0}^{\pi}\frac{\mathrm{d}q_{x}}{\pi}\frac{2\,\mathbf{K}( \kappa_{x})}{\pi\sqrt{(|E|-\alpha\cos q_{x})^{2}-(\beta-\gamma)^{2}}} \tag{122}\] \[=\int_{0}^{\pi}\frac{\mathrm{d}q_{y}}{\pi}\frac{2\,\mathbf{K}( \kappa_{y})}{\pi\sqrt{(|E|-\beta\cos q_{y})^{2}-(\alpha-\gamma)^{2}}}\] (123) \[=\int_{0}^{\pi}\frac{\mathrm{d}q_{z}}{\pi}\frac{2\,\mathbf{K}( \kappa_{z})}{\pi\sqrt{(|E|-\gamma\cos q_{z})^{2}-(\alpha-\beta)^{2}}}\,. \tag{124}\]
This equivalence can be used to validate the numerical method. Other integrals are
\[M_{100} =\int_{0}^{\pi}\frac{\mathrm{d}q_{x}}{\pi}\frac{2\cos q_{x}\, \mathbf{K}(\kappa_{x})}{\pi\sqrt{(|E|-\alpha\cos q_{x})^{2}-(\beta-\gamma)^{2} }}\,, \tag{125}\] \[M_{010} =\int_{0}^{\pi}\frac{\mathrm{d}q_{y}}{\pi}\frac{2\cos q_{y}\, \mathbf{K}(\kappa_{y})}{\pi\sqrt{(|E|-\beta\cos q_{y})^{2}-(\alpha-\gamma)^{2} }}\,,\] (126) \[M_{001} =\int_{0}^{\pi}\frac{\mathrm{d}q_{z}}{\pi}\frac{2\cos q_{z}\, \mathbf{K}(\kappa_{z})}{\pi\sqrt{(|E|-\gamma\cos q_{z})^{2}-(\alpha-\beta)^{2} }}\,. \tag{127}\]
\[M_{200} =\int_{0}^{\pi}\frac{\mathrm{d}q_{x}}{\pi}\frac{2\cos(2q_{x})\, \mathbf{K}(\kappa_{x})}{\pi\sqrt{(|E|-\alpha\cos q_{x})^{2}-(\beta-\gamma)^{2} }}\,, \tag{128}\] \[M_{020} =\int_{0}^{\pi}\frac{\mathrm{d}q_{y}}{\pi}\frac{2\cos(2q_{y})\, \mathbf{K}(\kappa_{y})}{\pi\sqrt{(|E|-\beta\cos q_{y})^{2}-(\alpha-\gamma)^{2} }}\,,\] (129) \[M_{002} =\int_{0}^{\pi}\frac{\mathrm{d}q_{z}}{\pi}\frac{2\cos(2q_{z})\, \mathbf{K}(\kappa_{z})}{\pi\sqrt{(|E|-\gamma\cos q_{z})^{2}-(\alpha-\beta)^{2} }}\,. \tag{130}\]
\[M_{110} =\int_{0}^{\pi}\frac{\mathrm{d}q_{z}}{\pi}\frac{(2-\kappa_{x}^{2} )\mathbf{K}(\kappa_{z})-2\,\mathbf{E}(\kappa_{z})}{\pi\kappa_{z}\sqrt{\alpha \beta}}\,, \tag{131}\] \[M_{101} =\int_{0}^{\pi}\frac{\mathrm{d}q_{y}}{\pi}\frac{(2-\kappa_{y}^{2} )\mathbf{K}(\kappa_{y})-2\,\mathbf{E}(\kappa_{y})}{\pi\kappa_{y}\sqrt{\alpha \gamma}}\,,\] (132) \[M_{011} =\int_{0}^{\pi}\frac{\mathrm{d}q_{x}}{\pi}\frac{(2-\kappa_{y}^{2} )\mathbf{K}(\kappa_{x})-2\,\mathbf{E}(\kappa_{y})}{\pi\kappa_{y}\sqrt{\beta \gamma}}\,. \tag{133}\]
### Determinant of Eq. (219)
We expand the determinant of Eq. (219) in powers of \(U\), \(V_{xy}\), and \(V_{z}\):
\[D =U\cdot|V_{xy}|\cdot|V_{z}|\cdot\begin{array}{c}M_{000}\ \
### Derivation of Eq. (222)
For convenience, we introduce an anisotropy parameter, \(\sigma\equiv t_{z}/t\). Starting with Eq. (216), we wish to evaluate
\[M_{000}-M_{002}=\int\limits_{-\pi-\pi-\pi}^{\pi}\!\!\!\int\limits_{-\pi-\pi}^{ \pi}\!\frac{\mathrm{d}q_{x}\,\mathrm{d}q_{y}\,\mathrm{d}q_{z}}{(2\pi)^{3}}\frac {1-\cos 2q_{z}}{8t+4t_{z}-4t\cos q_{x}-4t\cos q_{y}-4t_{z}\cos q_{z}}\,, \tag{103}\]
in the limit \(t_{z}\to 0\). Application of Eqs. (100) and (102) yields
\[M_{000}-M_{002}=\frac{1}{2\pi t}\int_{0}^{\pi}\frac{\mathrm{d}q_{z}}{\pi}\frac {1-\cos 2q_{z}}{2-\sigma(1-\cos q_{z})}\,\mathbf{K}\left[\frac{2}{2+\sigma(1- \cos q_{z})}\right]. \tag{104}\]
At \(\sigma\to 0\), \(\mathbf{K}\) logarithmically diverges. In the main nonvanishing order, one can neglect the \(\sigma\) term in the denominator outside of \(\mathbf{K}[\ldots]\). Using the asymptote, \(\mathbf{K}(z\to 1)=\frac{1}{2}\ln\frac{16}{1-z^{2}}\), and changing variables, Eq. (104) is brought to the form
\[M_{000}-M_{002}\approx\frac{1}{8\pi t}\left\{\ln\frac{16}{\sigma}-\frac{2}{ \pi}\int_{0}^{2}\mathrm{d}u\,\sqrt{u(2-u)}\,\ln\left(2-u\right)\right\}\equiv \frac{1}{8\pi t}\left\{\ln\frac{16}{\sigma}-J\right\}. \tag{105}\]
The remaining integral is expressible via \(\Gamma\)-function (Ref. [99], #2.6.10.25)
\[J=\frac{2}{\pi}\,4\frac{\Gamma(\frac{3}{2})\Gamma(\frac{3}{2})}{\Gamma(3)} \left[\ln 2+\frac{\Gamma^{\prime}(\frac{3}{2})}{\Gamma(\frac{3}{2})}-\frac{ \Gamma^{\prime}(3)}{\Gamma(3)}\right]=\frac{1}{2}-\ln 2\,. \tag{106}\]
Finally,
\[M_{000}-M_{002}\approx\frac{1}{8\pi t}\ln\frac{32}{\sigma\sqrt{e}}=\frac{1}{8 \pi t}\ln\frac{32\,t}{\sqrt{e}\,t_{z}}\,, \tag{107}\]
from where Eq. (222) follows by inversion.
## Appendix D \(Uv\) model on the BCC lattice
### Derivation of Eq. (87)
Introducing potential increment: \(|U|=|U_{\mathrm{cr}}^{\mathrm{bcc}}|+u\), where \(u\ll|U_{\mathrm{cr}}^{\mathrm{bcc}}|\), and binding energy \(E_{0}=-16t(1+\Delta)\), where \(\Delta\ll 1\), and expanding Eq. (85) for small \(u\) and \(\Delta\), one obtains
\[\sqrt{\Delta}=\frac{u}{|U_{\mathrm{cr}}^{\mathrm{bcc}}|}\frac{\mathbf{K}\! \left(\frac{1}{\sqrt{2}}\right)}{\mathbf{K}\!\left(\frac{1}{\sqrt{2}}\right)}\,, \tag{108}\]
where the prime denotes derivative with respect to \(\mathbf{K}\)'s argument. Applying Eq. (61) of the main text, the last expression is rewritten as
\[\sqrt{\Delta}=\frac{u}{\sqrt{2}|U_{\mathrm{cr}}^{\mathrm{bcc}}|}\cdot\frac{ \mathbf{K}\!\left(\frac{1}{\sqrt{2}}\right)}{2\mathbf{E}\!\left(\frac{1}{ \sqrt{2}}\right)-\mathbf{K}\!\left(\frac{1}{\sqrt{2}}\right)}\,. \tag{109}\]
It is known that both \(\mathbf{K}(2^{-1/2})\) and \(\mathbf{E}(2^{-1/2})\) can be expressed via gamma function, \(\Gamma(1/4)\). Whittaker and Watson [161] derive the following identities:
\[\mathbf{K}\!\left(\frac{1}{\sqrt{2}}\right)=\frac{1}{4\sqrt{\pi}}\left[\Gamma \left(\frac{1}{4}\right)\right]^{2}, \tag{110}\]
\[2\mathbf{E}\!\left(\frac{1}{\sqrt{2}}\right)-\mathbf{K}\!\left(\frac{1}{\sqrt{ 2}}\right)=4\pi^{\frac{3}{2}}\left[\Gamma\left(\frac{1}{4}\right)\right]^{-2}. \tag{111}\]
Substituting them into Eq. (46) and using \(|U_{\rm cr}^{\rm bcc}|\) from Eq. (86), Eq. (47) becomes
\[\sqrt{\Delta}=\frac{u}{t}\cdot\frac{1}{2^{9}\sqrt{2}\pi^{5}}\left[\Gamma\left( \frac{1}{4}\right)\right]^{8}. \tag{48}\]
Finally, applying definitions of \(\Delta\) and \(u\), total pair energy \(E_{0}\) can be expressed as Eq. (87) with the \(u^{2}\) coefficient given by
\[\frac{\left[\Gamma\!\left(\frac{1}{4}\right)\right]^{16}}{2^{15}\pi^{10}}= \frac{2}{\pi^{6}}\left[{\bf K}\!\left(\frac{1}{\sqrt{2}}\right)\right]^{8}=0.29 050160675....\,. \tag{49}\]
## Appendix E Light pairs in the strong coupling limit
### Staggered square planes
Consider a lattice consisting of two square lattices shifted out-of-plane and staggered in the \((xy)\) plane relative to each other, see Fig. 29(a). In the limit of very strong _out-of-plane_ attraction, one particle will reside on the first plane and another particle on the second plane. The magnitudes of \(U\) and inter-plane hopping \(t_{z}\) are irrelevant. At the same time, the bound pair can still move in the first order in in-plane hopping \(t\). In the strong-coupling limit, there are four non-equivalent dimer configurations shown in the figure. Because the particles cannot exchange, there is no need to consider symmetrized basis states. Acting on the dimer states by the Hamiltonian yields:
\[\hat{H}A_{\bf m} =-t\left(B_{\bf m}+B_{\bf m+x}\right)-t\left(D_{\bf m}+D_{\bf m+ y}\right),\] \[\hat{H}B_{\bf m} =-t\left(A_{\bf m}+A_{\bf m-x}\right)-t\left(C_{\bf m}+C_{\bf m+ y}\right),\] \[\hat{H}C_{\bf m} =-t\left(B_{\bf m}+B_{\bf m-y}\right)-t\left(D_{\bf m}+D_{\bf m- x}\right),\] \[\hat{H}D_{\bf m} =-t\left(A_{\bf m}+A_{\bf m-y}\right)-t\left(C_{\bf m}+C_{\bf m+ x}\right). \tag{50}\]
Next, compose a Schrodinger equation and transform it to momentum space. That results in a consistency condition
\[\left|\begin{array}{ccc}\tilde{E}&t\left(1+e^{iP_{z}a}\right)&0&\left(1+e^{ iP_{y}a}\right)\\ t\left(1+e^{-iP_{z}a}\right)&\tilde{E}&\left(1+e^{iP_{y}a}\right)&0\\ 0&t\left(1+e^{-iP_{y}a}\right)&\tilde{E}&t\left(1+e^{-iP_{z}a}\right)\\ t\left(1+e^{-iP_{y}a}\right)&0&t\left(1+e^{iP_{z}a}\right)&\tilde{E}\end{array} \right|=0\;, \tag{51}\]
w
Figure 29: (a) The two-plane staggered square lattice. A square lattice of open circles are shifted out-of-plane (in the \(z\) direction) relative to a square lattice of filled circles. Attraction \(V\) is _between_ the planes. In the \(U\to\infty\), \(|V|\to\infty\) limit, the two particles reside on different planes. The ovals mark four basic dimer configurations. (b) The square lattice with resonant attraction, \(U<0\), \(U=V\). The circle and ovals mark three basic dimer states that are relevant in the \(|U|,|V|\to\infty\) limit.
where \(\tilde{E}\) is pair energy counted from \(-|V|\). Expansion of the determinant yields four dispersion bands
\[\tilde{E}_{1-4}=\pm(2t)\left|\cos\frac{P_{x}a}{2}\pm\cos\frac{P_{y}a}{2}\right|\,. \tag{111}\]
Near \(P_{x}=P_{y}=0\), the ground state energy is
\[\tilde{E}_{1}=-(2t)\left(\cos\frac{P_{x}a}{2}+\cos\frac{P_{y}a}{2}\right)\,. \tag{112}\]
Expanding at small momentum, one finds the effective mass
\[m_{px}^{*}=m_{py}^{*}=\frac{2\hbar^{2}}{t\,a^{2}}=4\,m_{0}\,, \tag{113}\]
where \(m_{0}=\hbar^{2}/(2ta^{2})\) is the free particle mass.
### Square \(Uv\) model with resonant attraction, \(U=v\)
The simple square \(UV\) model with only nearest-neighbor attraction does not support light pairs. However, light pairs become possible for resonant attraction \(U<0\), \(U=V\). In this case, the energy of on-site dimer and nearest-neighbor dimer are equal, and the pair can move in the first order in \(t\). Figure 29(b) shows the three relevant basis dimers. Since the particles can exchange, it is essential to consider symmetrized states:
\[A_{\bf m} = |\uparrow\downarrow\rangle_{\bf m}\,, \tag{114}\] \[B_{\bf m} = \frac{1}{\sqrt{2}}\left(|\uparrow\rangle_{\bf m}|\downarrow\rangle _{\bf m+x}+|\downarrow\rangle_{\bf m}|\uparrow\rangle_{\bf m+x}\right),\] (115) \[C_{\bf m} = \frac{1}{\sqrt{2}}\left(|\uparrow\rangle_{\bf m}|\downarrow\rangle _{\bf m+y}+|\downarrow\rangle_{\bf m}|\uparrow\rangle_{\bf m+y}\right). \tag{116}\]
The Hamiltonian action is
\[\hat{H}A_{\bf m} = -\sqrt{2}\,t\left(B_{\bf m}+B_{\bf m-x}\right)-\sqrt{2}\,t\left(C _{\bf m}+C_{\bf m-y}\right), \tag{117}\] \[\hat{H}B_{\bf m} = -\sqrt{2}\,t\left(A_{\bf m}+A_{\bf m+x}\right),\] (118) \[\hat{H}C_{\bf m} = -\sqrt{2}\,t\left(A_{\bf m}+A_{\bf m+y}\right). \tag{119}\]
Notice the special role of dimer \(A\): it is coupled to both \(B\) and \(C\) while the latter two are not coupled directly but only through \(A\). That will allow for generalization to other lattices to be discussed in Appendix E.3. The dispersion relation reads
\[\left|\begin{array}{cc}\tilde{E}&\sqrt{2}\,t\left(1+e^{-iP_{x}a}\right)& \sqrt{2}\,t\left(1+e^{-iP_{y}a}\right)\\ \sqrt{2}\,t\left(1+e^{iP_{y}a}\right)&\tilde{E}&0\\ \sqrt{2}\,t\left(1+e^{iP_{y}a}\right)&0&\tilde{E}\end{array}\right|=0\,, \tag{120}\]
which yields three bands: \(\tilde{E}_{2}=0\), and
\[\tilde{E}_{1,3}=\pm(2\sqrt{2}\,t)\sqrt{\cos^{2}\frac{P_{x}a}{2}+\cos^{2}\frac {P_{y}a}{2}}\,. \tag{121}\]
Expanding the lowest branch at small \({\bf P}\) yields the mass
\[m_{px}^{*}=m_{py}^{*}=\frac{2\hbar^{2}}{t\,a^{2}}=4\,m_{0}\,, \tag{122}\]
where \(m_{0}=\hbar^{2}/(2ta^{2})\) is the free particle mass.
### Generalization to other lattices with resonant attraction, \(U=v\)
The preceding calculation is readily generalized to other lattices with resonant contact and nearest-neighbor attraction. Consider a set of nearest-neighbor vectors \({\bf b}_{+}\) such that the dimers \(D\) built on sites \({\bf m}\) and \({\bf m}+{\bf b}_{+}\) are coupled only to \(A_{\bf m}\) but not to each other. Then, instead of Eqs. (111)-(112) we have:
\[\hat{H}A_{\bf m} = -\sqrt{2}\,t\sum_{{\bf b}_{+}}\left(B_{\bf m}+B_{\bf m-b_{+}} \right), \tag{113}\] \[\hat{H}B_{\bf m} = -\sqrt{2}\,t\left(A_{\bf m}+A_{\bf m+b_{+}}\right). \tag{114}\]
This simple form enables analytical expressions for several other lattices. For example, in the 3D simple cubic lattice, \({\bf b}_{+}={\bf x}\), \({\bf y}\), and \({\bf z}\). The lowest energy band is
\[\tilde{E}_{1}=-(2\sqrt{2}\,t)\sqrt{\cos^{2}\frac{P_{x}a}{2}+\cos^{2}\frac{P_{ y}a}{2}+\cos^{2}\frac{P_{z}a}{2}}\,. \tag{115}\]
The effective mass is
\[m^{*}_{px}=m^{*}_{py}=m^{*}_{pz}=\sqrt{6}\,\frac{\hbar^{2}}{t\,a^{2}}=2\sqrt{ 6}\,m_{0}\;. \tag{116}\]
Comparison between Eqs. (119), (112), and (115) suggests an obvious generalization to hyper-cubic lattices in higher dimensions \(D>3\).
In the body-centered cubic lattice, there are four vectors: \({\bf b}_{+}=\frac{1}{2}({\bf x}\pm{\bf y}\pm{\bf z})\). Dimer dispersion has five energy bands with \(\tilde{E}_{2,3,4}=0\) and
\[\tilde{E}_{1,5}=\pm(4\,t)\sqrt{1+\cos\frac{P_{x}a}{2}\cos\frac{P_{y}a}{2}\cos \frac{P_{z}a}{2}}\;. \tag{117}\]
The effective mass of the lowest band is
\[m^{*}_{px}=m^{*}_{py}=m^{*}_{px}=2\sqrt{2}\,\frac{\hbar^{2}}{t\,a^{2}}=4\sqrt{ 2}\,m_{0}\;. \tag{118}\]
It is noteworthy that Eqs. (120), (113), (116), and (118) can be unified as
\[m^{*}_{p}=2\sqrt{z}\,m_{0}\;, \tag{119}\]
where \(z\) is the number of nearest neighbors in respective lattices.
|
2306.03306 | Tracking Evolving labels using Cone based Oracles | The evolving data framework was first proposed by Anagnostopoulos et al.,
where an evolver makes small changes to a structure behind the scenes. Instead
of taking a single input and producing a single output, an algorithm
judiciously probes the current state of the structure and attempts to
continuously maintain a sketch of the structure that is as close as possible to
its actual state. There have been a number of problems that have been studied
in the evolving framework including our own work on labeled trees. We were
motivated by the problem of maintaining a labeling in the plane, where updating
the labels require physically moving them. Applications involve tracking
evolving disease hot-spots via mobile testing units , and tracking unmanned
aerial vehicles. To be specific, we consider the problem of tracking labeled
nodes in the plane, where an evolver continuously swaps labels of any two
nearby nodes in the background unknown to us. We are tasked with maintaining a
hypothesis, an approximate sketch of the locations of these labels, which we
can only update by physically moving them over a sparse graph. We assume the
existence of an Oracle, which when suitably probed, guides us in fixing our
hypothesis. | Aditya Acharya, David Mount | 2023-06-05T23:27:36Z | http://arxiv.org/abs/2306.03306v1 | # Tracking Evolving Labels using Cone based Oracles
###### Abstract
This is an abstract of a presentation given at CG:YRF 2023. It has been made public for the benefit of the community and should be considered a preprint rather than a formally reviewed paper. Thus, this work is expected to appear in a conference with formal proceedings and/or in a journal.
and \(H\) as the sum of all such distances: \(\sum_{l\in L}\mathcal{D}_{l}=\mathcal{D}(M,H)\). It's easy to see that, this can get as large as \(O(n^{2})\) without a competing algorithm.
We assume we have access to an oracle, that we call a _Cone Oracle_: \(\mathcal{O}\), which when queried on a label \(l\), and its hypothesized location \(H(l)\), returns the cone \(C_{k}\) around \(H(l)\) which contains the true location of \(l\).
We assume an additional thing here: for a label \(l\), after an initial startup cost, subsequent queries on the oracle with the same label, takes \(o(1)\) time. This is a valid assumption on massive point sets, when only a small region around the current point, in our case \(H(l)\), is _cached_ in memory. The expensive step is moving to a completely different, possibly uncached area of the data set. In other words switching from \(l_{1}\) to \(l_{2}\), on the oracle takes constant amount of time, what we call the _memory overhead_.
Our aim is to design an algorithm that physically moves the hypothesized labels over a sparse graph with a constant speed, and maintains a labeling of distance \(O(n)\) at all times.
## 4 Algorithm
First we precompute the theta graph for our point set \(P\). Then we run the randomized algorithm 1. (We assume the evolver does not have access to our random bit generator). In short, our algorithm selects a label at random every iteration, and tracks it down using the routing scheme on theta graphs (Fig 1(b)). We probe the oracle to find the cone around a point which contains the actual location of the label.
Figure 1: Theta Graph: Construction, and Routing. 1(a) shows the construction of theta graphs. For a point, consider the set of cones around it. For every cone add an edge between that point, and the point which is nearest to it when projected along the bisector of the cone. 1(b) shows how to route using theta graphs. If \(C_{j}\) is the cone at \(p_{j}\) containing the destination point (Red pt), then follow the edge contained inside it. Repeat the same at \(p_{j+1}\) until destination is reached
```
0:\(\{H(l_{1}),H(l_{2}),\cdots,H(l_{n})\}\)\(\triangleright\) Initial hypothesized location of the labels
1:while true do\(\triangleright\) Continuously run the algorithm
2:for\(l\) chosen randomly uniformly from \(\{l_{1},l_{2},\cdots,l_{n}\}\)do
3:while\(\mathcal{O}(l,H(l))\neq NULL\)do\(\triangleright\) Keep tracking \(l\) until found
4: Let \(C_{k}\leftarrow\mathcal{O}(l,H(l))\)\(\triangleright\) Cone returned by the oracle
5: Let \(e_{l,k}=(H(l),v)\) be the edge incident on \(H(l)\) inside \(C_{k}\)
6: Move \(l\) along \(e_{l,k}\) with speed \(ct_{\theta}\)\(\triangleright\)\(c\) is a constant
7: Update \(H(l)\gets v\)
8:endwhile
9:endfor
10:endwhile
```
**Algorithm 1** Randomized algorithm using cone oracle
## 5 Analysis
Under a cached memory model, with overhead \(M\), lines 3, 4, 5, and 7 in Algorithm 1 take \(o(1)\) time. Let \(\mathcal{D}_{i}\) be the overall distance, and \(\mathcal{D}_{l,i}\) be the distance for label \(l\) at the start of the \(i^{th}\) iteration. Since we are moving at the speed of \(ct_{\theta}\) over a \(t_{\theta}\) spanner, we spend \(\mathcal{D}_{l,i}/c\) time in traversing Euclidean distance \(\mathcal{D}_{l,i}\). In that time the evolver could have moved the label by at most another \(\mathcal{D}_{l,i}/c\) distance. So, in the worst case, we spend at most \(\mathcal{D}_{l,i}/c+\mathcal{D}_{l,i}/c^{2}+\cdots=\mathcal{D}_{l,i}/(c-1)\) time tracking \(l\). In that time, the evolver can increase \(\mathcal{D}_{i}\) by at most \(2\mathcal{D}_{l,i}/(c-1)\). Since we chose \(l\) at random, we have \(\operatorname{E}[\mathcal{D}_{l,i}]=\mathcal{D}_{i}/n\). Therefore,
\[\operatorname{E}\left[\mathcal{D}_{i+1}\mid\mathcal{D}_{i}\right] \leq\ \mathcal{D}_{i}-\frac{\mathcal{D}_{i}}{n}+\frac{2}{c-1}\cdot \frac{\mathcal{D}_{i}}{n}+M\] \[\implies\operatorname{E}\left[\mathcal{D}_{i+1}\right] \leq\ \left(1-\frac{c-3}{c-1}\cdot\frac{1}{n}\right) \operatorname{E}\left[\mathcal{D}_{i}\right]+M\] \[\leq\left(1-\frac{z}{n}\right)^{i}\operatorname{E}\left[ \mathcal{D}_{1}\right]+M\sum_{j=1}^{i}\left(1-\frac{z}{n}\right)^{i} \left[z=\frac{c-3}{c-1}\right]\] \[\leq\left(1-\frac{z}{n}\right)^{i}O(n^{2})+\frac{nM}{z} [z>0]\]
For \(i=\ln n/(\ln n-\ln(n-z))\sim n\ln n/z\), we have \(\mathcal{D}_{i+1}\in O(n)\), provided \(c>3\). Therefore,
**Theorem 1**.: _There exists a randomized algorithm using a cone oracle, which moves labels at any speed greater than \(3\,t_{\theta}\), and in expectation maintains a hypothesized labeling with distance: \(O(n)\), in the presence of an adversarial evolver._
|
2310.11782 | The Lazer-McKenna conjecture for an anisotropic planar exponential
nonlinearity with a singular source | Given a bounded smooth domain $\Omega$ in $\mathbb{R}^2$, we study the
following anisotropic elliptic problem $$ \begin{cases} -\nabla\big(a(x)\nabla
\upsilon\big)=
a(x)\big[e^{\upsilon}-s\phi_1-4\pi\alpha\delta_q-h(x)\big]\,\,\,\,
\,\textrm{in}\,\,\,\,\,\Omega,\\[2mm] \upsilon=0 \qquad\qquad\qquad\qquad\qquad
\qquad\qquad\qquad\qquad\quad \textrm{on}\,\ \,\partial\Omega, \end{cases} $$
where $a(x)$ is a positive smooth function, $s>0$ is a large parameter, $h\in
C^{0,\gamma}(\overline{\Omega})$, $q\in\Omega$,
$\alpha\in(-1,+\infty)\setminus\mathbb{N}$, $\delta_q$ denotes the Dirac
measure with pole at point $q$ and $\phi_1$ is a positive first eigenfunction
of the problem $-\nabla\big(a(x)\nabla \phi\big)=\lambda a(x)\phi$ under
Dirichlet boundary condition in $\Omega$. We show that if $q$ is both a local
maximum point of $\phi_1$ and an isolated local maximum point of $a(x)\phi_1$,
this problem has a family of solutions $\upsilon_s$ with arbitrary $m$ bubbles
accumulating to $q$ and the quantity
$\int_{\Omega}a(x)e^{\upsilon_s}\rightarrow8\pi(m+1+\alpha)a(q)\phi_1(q)$ as
$s\rightarrow+\infty$, which give a positive answer to the Lazer-McKenna
conjecture for this case. | Yibin Zhang | 2023-10-18T08:21:24Z | http://arxiv.org/abs/2310.11782v2 | The Lazer-McKenna conjecture for an anisotropic planar exponential nonlinearity with a singular source
###### Abstract.
Given a bounded smooth domain \(\Omega\) in \(\mathbb{R}^{2}\), we study the following anisotropic elliptic problem
\[\begin{cases}-\nabla\big{(}a(x)\nabla v\big{)}=a(x)\big{[}e^{v}-s\phi_{1}-4\pi \alpha\delta_{q}-h(x)\big{]}&\text{in}\ \ \Omega,\\ v=0&\text{on}\ \ \partial\Omega,\end{cases}\]
where \(a(x)\) is a positive smooth function, \(s>0\) is a large parameter, \(h\in C^{0,\gamma}(\overline{\Omega})\), \(q\in\Omega\), \(\alpha\in(-1,+\infty)\setminus\mathbb{N}\), \(\delta_{q}\) denotes the Dirac measure with pole at point \(q\) and \(\phi_{1}\) is a positive first eigenfunction of the problem \(-\nabla\big{(}a(x)\nabla\phi\big{)}=\lambda a(x)\phi\) under Dirichlet boundary condition in \(\Omega\). We show that if \(q\) is both a local maximum point of \(\phi_{1}\) and an isolated local maximum point of \(a(x)\phi_{1}\), this problem has a family of solutions \(v_{s}\) with arbitrary \(m\) bubbles accumulating to \(q\) and the quantity \(\int_{\Omega}a(x)e^{v_{s}}\to 8\pi(m+1+\alpha)a(q)\phi_{1}(q)\) as \(s\to+\infty\), which give a positive answer to the Lazer-McKenna conjecture for this case.
2020 _Mathematics Subject Classification_. Primary 35B25, 35J25; Secondary 35B40.
_Keywords_: Lazer-McKenna conjecture; Anisotropic coefficient; Exponential nonlinearity; Singular source; Lyapunov-Schmidt procedure.
## 1. Introduction
We consider the anisotropic elliptic problem
\[\begin{cases}-\nabla\big{(}a(x)\nabla v\big{)}=a(x)\big{[}e^{v}-s\phi_{1}-4 \pi\alpha\delta_{q}-h(x)\big{]}&\text{in}\ \ \ \Omega,\\ v=0&\text{on}\ \ \partial\Omega,\end{cases} \tag{1.1}\]
where \(\Omega\) is a bounded smooth domain in \(\mathbb{R}^{2}\), \(s>0\) is a large parameter, \(q\in\Omega\), \(\alpha\in(-1,+\infty)\setminus\mathbb{N}\), \(\delta_{q}\) denotes the Dirac measure with pole at point \(q\), \(h\in C^{0,\gamma}(\overline{\Omega})\) is given, \(a(x)\) is a smooth function over \(\overline{\Omega}\) satisfying
\[a_{1}\leq a(x)\leq a_{2} \tag{1.2}\]
for some constants \(0<a_{1}<a_{2}<+\infty\), \(\phi_{1}\) is a positive eigenfunction of the anisotropic Laplacian operator
\[-\Delta_{a}:=-\frac{1}{a(x)}\nabla\big{(}a(x)\nabla\cdot\big{)}=-\Delta-\nabla \log a(x)\nabla \tag{1.3}\]
with Dirichlet boundary condition corresponding to the first eigenvalue \(\lambda_{1}\). Clearly, if we set \(\rho(x)=(-\Delta_{a})^{-1}h\) in \(H^{1}_{0}(\Omega)\) and let \(G(x,y)\) be the Green's function satisfying
\[\begin{cases}-\Delta_{a}G(x,y)=8\pi\delta_{y}(x),&x\in\Omega,\\ G(x,y)=0,&x\in\partial\Omega,\end{cases} \tag{1.4}\]
and \(H(x,y)\) be its regular part given by
\[H(x,y)=G(x,y)-4\log\frac{1}{|x-y|}, \tag{1.5}\]
then equation (1.1) is equivalent to solving for \(u=\upsilon+\frac{s}{\lambda_{1}}\phi_{1}+\frac{\alpha}{2}G(\,\cdot,\,q)+\rho\), the problem
\[\left\{\begin{aligned} &-\Delta_{a}u=|x-q|^{2\alpha}k(x)e^{-t\phi_{1}} e^{u}&\text{in}\ \ \Omega,\\ & u=0&\text{on}\ \,\partial\Omega,\end{aligned}\right. \tag{1.6}\]
where \(k(x)=e^{-\rho(x)-\frac{\alpha}{2}H(x,q)}\) and \(t=s/\lambda_{1}\). The point \(q\) with Dirac measure in equation (1.1) is called vortex point or singular source. The term \(|\cdot|^{2\alpha}\) in problem (1.6) is called the Hardy weight if \(-1<\alpha<0\), whereas the Henon weight if \(\alpha>0\). In this paper, we are interested in the existence of solutions of problem (1.6) (or (1.1)) which exhibit the _multiple concentration behavior_ around singular source \(q\) when the parameter \(t\) tends to infinity.
If \(a(x)=1\) and \(\alpha=0\), equation (1.1) is the classic Ambrosetti-Prodi type problem [1] with exponential nonlinearity
\[\left\{\begin{aligned} &-\Delta\upsilon=g(\upsilon)-s\phi_{1}-h(x)& \text{in}\ \ \Omega,\\ &\upsilon=0&\text{on}\ \ \partial\Omega,\end{aligned}\right. \tag{1.7}\]
as \(s\to+\infty\), where \(\Omega\) is a bounded smooth domain in \(\mathbb{R}^{N}\) (\(N\geq 2\)), \(h\in C^{0,\gamma}(\overline{\Omega})\) is given, \(\phi_{1}\) is a positive eigenfunction of \(-\Delta\) with Dirichlet boundary condition corresponding to the first eigenvalue \(\lambda_{1}\), and \(g:\mathbb{R}\to\mathbb{R}\) is a continuous function such that \(-\infty\leq\nu=\lim_{t\to-\infty}\frac{g(t)}{t}<\lim_{t\to+\infty}\frac{g(t)} {t}=\overline{\nu}\leq+\infty\). Here \(\nu=-\infty\) and \(\overline{\nu}=+\infty\) are allowed. The condition that \((\nu,\overline{\nu})\) contains some eigenvalues of \(-\Delta\) subject to Dirichlet boundary condition has great influence on the existence and multiplicity of solutions for problem (1.7), which has been widely considered since the early 1970s so that many interesting results have been obtained (see [2, 3, 4, 5, 6] and references therein). Moreover, in the early 1980s Lazer and McKenna [4] conjectured that the number of solutions to problem (1.7) is unbounded as \(s\to+\infty\) when \(\nu<\lambda_{1}<\overline{\nu}=+\infty\) and \(g(t)\) has an appropriate growth at infinity.
In fact, the Lazer-McKenna conjecture holds true for problem (1.7) with many different types of nonlinearities, which has been founded in [7, 8, 9] for the exponential nonlinear case \(g(t)=e^{t}-4\pi\alpha\delta_{q}\), \(-1<\alpha\not\in\mathbb{N}^{*}\) and \(N\geq 2\), in [10, 11, 12] for the asymptotically linear case \(g(t)=\overline{\nu}t_{+}-\nu t_{-}\) with \(\overline{\nu}\) large enough, in [13, 14] for the subcritical case \(g(t)=t_{+}^{p}+\lambda t\), \(\lambda<\lambda_{1}\), \(1<p<\frac{N+2}{N-2}\) if \(N\geq 3\), \(1<p<+\infty\) if \(N=2\), in [15, 16, 17, 18] for the critical case \(g(t)=t_{+}^{\frac{N+2}{N-2}}+\lambda t\), \(0<\lambda<\lambda_{1}\) and \(N\geq 6\), in [19] for the superlinear nonhomogeneous case \(g(t)=t_{+}^{p}+t_{-}^{q}\), \(1<q<p<\frac{N+2}{N-2}\) and \(N\geq 4\), in [20, 21, 22] for the superlinear homogeneous case \(g(t)=|t|^{p}\), \(1<p<\frac{N+2}{N-2}\) if \(N\geq 3\), \(1<p<+\infty\) if \(N=2\), and in [23, 24, 25, 26, 27] for some other cases and even some generalized versions, where \(t_{+}=\max\{t,0\}\) and \(t_{-}=\max\{-t,0\}\). In particular, when \(g(t)=e^{t}\) and \(N=2\), del Pino and Munoz in [7] proved the Lazer-McKenna conjecture by constructing solutions of problem (1.7) with the accumulation of arbitrarily many bubbles around any isolated local maximum point of \(\phi_{1}\). Recently, in [8], Dong, Hu and Zhang extended this result in [7] to the case \(g(t)=e^{t}-4\pi\alpha\delta_{q}\), \(-1<\alpha\not\in\mathbb{N}^{*}\) and \(N=2\), and showed that if singular source \(q\) is an isolated local maximum point of \(\phi_{1}\), problem (1.7) always admits a family of solutions with arbitrarily many bubbles accumulating to \(q\).
It is necessary to point out that the anisotropic equation (1.1)\(|_{\alpha=0}\) is a special case of problem (1.7) with \(g(t)=e^{t}\) in higher dimension \(N\geq 3\). Indeed, let a standard \(N\)-dimensional torus be \(\mathbb{T}=\{(x^{\prime},x_{N})\in\mathbb{R}^{N}:\ (\|x^{\prime}\|-1)^{2}+x_{N}^{2} \leq r_{0}^{2}\}\) with \(x^{\prime}=(x_{1},\ldots,x_{N-1})\), \(\|x^{\prime}\|=\sqrt{x_{1}^{2}+\ldots+x_{N-1}^{2}}\) and \(0<r_{0}<1\). If we look for some special solutions of problem (1.7) with \(g(t)=e^{t}\) in the axially symmetric torus \(\mathbb{T}\), i.e. solutions \(\upsilon\) of the form \(\upsilon(x^{\prime},x_{N})=\upsilon(r,x_{N})\) with \(r=\|x^{\prime}\|\), a simple calculation shows that problem (1.7) with \(g(t)=e^{t}\) in higher dimension \(N\geq 3\) is reduced to
\[\left\{\begin{aligned} &-\nabla\big{(}r^{N-2}\nabla\upsilon\big{)}=r^{N-2} \big{[}e^{v}-s\phi_{1}-h(x)\big{]}&\text{in}\ \ \ \Omega_{\mathbb{T}},\\ &\upsilon=0&\text{on}\ \ \partial\Omega,\end{aligned}\right.\]
where \(\Omega_{\mathbb{T}}=\{(r,x_{N})\in\mathbb{R}^{2}:\ (r-1)^{2}+x_{N}^{2}<r_{0}^{2}\}\). This is just the equation (1.1)\(|_{\alpha=0}\) with anisotropic coefficient \(a(r,x_{N})=r^{N-2}\). In this direction, Yang and Zhang in [9] proved that the Lazer-McKenna conjecture is also true for problem (1.7) with \(g(t)=e^{t}\) in the higher-dimensional domain with some rotational symmetries, by
constructing solutions of the anisotropic equation (1.1)\(|_{\alpha=0}\) which exhibit multiple concentration behavior around local maximum points of \(a(x)\phi_{1}\) in the domain as \(s\to+\infty\).
In the present paper, our goal is to give a positive answer to the Lazer-McKenna conjecture for the singular case of the anisotropic equation (1.1) involving \(\alpha\in(-1,+\infty)\setminus\mathbb{N}\), by trying to prove the existence of multiple clustered blowup solutions of problem (1.6) with Hardy-Henon weight \(|\cdot-q|^{2\alpha}\) in a constructive way. As a result, we find that if singular source \(q\) is both a local maximum point of \(\phi_{1}\) and an isolated local maximum point of \(a(x)\phi_{1}\), problem (1.6) (or (1.1)) always admits a family of solutions with arbitrarily many bubbles accumulating to \(q\). In particular, we recover and extend the results in [7, 8, 9]. This can be stated as follows.
**Theorem 1.1**.: _Let \(\alpha\in(-1,+\infty)\setminus\mathbb{N}\) and assume that \(q\) is both a local maximum point of \(\phi_{1}\) and an isolated local maximum point of \(a(x)\phi_{1}\). Then for any integer \(m\geq 1\), there exists \(t_{m}>0\) such that for any \(t>t_{m}\), problem (1.6) has a family of solutions \(u_{t}\) satisfying_
\[u_{t}(x)=\left[\log\frac{1}{(\varepsilon_{0,t}^{2}\mu_{0,t}^{2}+|x-q|^{2(1+ \alpha)})^{2}}+(1+\alpha)H(x,q)\right]+\sum_{i=1}^{m}\left[\log\frac{1}{( \varepsilon_{i,t}^{2}\mu_{i,t}^{2}+|x-\xi_{i,t}|^{2})^{2}}+H(x,\xi_{i,t}) \,\right]+o(1),\]
_where \(o(1)\to 0\), as \(t\to+\infty\), uniformly on each compact subset of \(\overline{\Omega}\setminus\{q,\,\xi_{1,t},\ldots,\xi_{m,t}\}\), the parameters \(\varepsilon_{0,t}\), \(\varepsilon_{i,t}\), \(\mu_{0,t}\) and \(\mu_{i,t}\) satisfy_
\[\varepsilon_{0,t}=e^{-\frac{1}{2}t\phi_{1}(q)},\qquad\varepsilon_{i,t}=e^{- \frac{1}{2}t\phi_{1}(\xi_{i,t})},\qquad\frac{1}{C}\leq\mu_{0,t}\leq Ct^{\frac {m(m+1+\alpha)^{2}a_{0}}{a_{1}}},\qquad\frac{1}{C}\leq\mu_{i,t}\leq Ct^{\frac {(2m+\alpha)(m+1+\alpha)^{2}a_{0}}{2a_{1}}},\]
_for some \(C>0\), and \((\xi_{1,t},\ldots,\xi_{m,t})\in\Omega^{m}\) satisfies_
\[\xi_{i,t}\to q\quad\text{ for all }\,i,\quad\text{ and }\quad|\xi_{i,t}-\xi_{j,t}|>t^{-\frac{(m+1+\alpha)^{2}a_{0}}{2a_{1}}} \quad\forall\,\,\,i\neq j.\]
The corresponding result for problem (1.1) can be stated as follows.
**Theorem 1.2**.: _Let \(\alpha\in(-1,+\infty)\setminus\mathbb{N}\) and assume that \(q\) is both a local maximum point of \(\phi_{1}\) and an isolated local maximum point of \(a(x)\phi_{1}\). Then for any integer \(m\geq 1\) and any \(s\) large enough, there exists a family of solutions \(v_{s}\) of problem (1.1) with \(m\) distinct bubbles accumulating to \(q\). Moreover,_
\[\lim_{s\to+\infty}\int_{\Omega}a(x)e^{v_{s}}=8\pi(m+1+\alpha)a(q)\phi_{1}(q).\]
According to Theorems 1.1 and 1.2, it follows that if singular source \(q\in\Omega\) is both a local maximum point of \(\phi_{1}\) and an isolated local maximum point of \(a(x)\phi_{1}\), then for any integer \(m\geq 1\) there exists a family of solutions of problem (1.6) which exhibits the phenomenon of \(m+1\)-bubbling at \(q\), namely \(|x-q|^{2\alpha}k(x)e^{-t\phi_{1}}e^{u_{t}}\to 8\pi(m+1+\alpha)\delta_{q}\) and \(u_{t}=(m+1+\alpha)G(x,q)+o(1)\). While for the case \(m=0\), by arguing simply along the initial sketch of the proof of Theorem 1.1, we easily find that problem (1.6) always admits a family of solutions with only one bubble located exactly at singular source \(q\) whether \(q\) is a local maximum point of the functions \(\phi_{1}\) and \(a(x)\phi_{1}\) in the domain or not.
The proof of our results relies on a very well known Lyapunov-Schmidt reduction procedure. The same strategy has been applied in [8] to build solutions for the two-dimensional elliptic problem with Hardy-Henon weight
\[\left\{\begin{aligned} &-\Delta u=|x-q|^{2\alpha}k(x)e^{-t\phi_{1}}e^{u }&\text{ in }\,\,\,\Omega,\\ & u=0&\text{ on }\,\partial\Omega,\end{aligned}\right. \tag{1.8}\]
as \(t\to+\infty\), where \(\Omega\) is a bounded smooth domain in \(\mathbb{R}^{2}\), \(\alpha\in(-1,+\infty)\setminus\mathbb{N}\), \(q\in\Omega\), \(k(x)\) is a given positive smooth function and \(\phi_{1}\) is a positive eigenfunction of \(-\Delta\) with Dirichlet boundary condition corresponding to the first eigenvalue \(\lambda_{1}\). Just like that in equation (1.6), the presence of Hardy-Henon weight has significant influence not only on the existence of the solution of problem (1.8) with a unique bubble at each singular source \(q\in\Omega\), but also on the existence of the solution of problem (1.8) with arbitrarily many bubbles accumulating to some singular source \(q\in\Omega\) if \(q\) is an isolated local maximum point of \(\phi_{1}\). However, due to the occurrence of Hardy-Henon
weight, it is necessary to point out that although the anisotropic planar equation (1.6) is seemingly similar to problem (1.8), equation (1.6) can not be viewed as a special case of (1.8) in higher dimension even if the domain has some axial symmetries. This seems to imply that unlike those for solutions of problem (1.8) in [8] whose multiple clustering bubbles are purely determined by an isolated local maximum point \(q\) of \(\phi_{1}\), the location of multiple clustering bubbles in solutions of equation (1.6) may be not only characterized as isolated local maximum points \(q\) of \(\phi_{1}\), but also those of \(a(x)\phi_{1}\), which needs us to investigate deeply the effect of the interaction between anisotropic coefficient \(a(x)\) and first positive eigenfunction \(\phi_{1}\) on the existence of solutions with arbitrarily many bubbles simultaneously accumulating to singular source \(q\). This is the delicate description during we carry out the whole reduction procedure to construct solutions of equation (1.6) with multiple clustering bubbles around singular source.
## 2. Approximating solutions
For notational convenience we always fix singular source \(q\in\Omega\) as an isolated local maximum point of \(a(x)\phi_{1}\) and also a local maximum point of \(\phi_{1}\), and further assume
\[2\inf_{x\in\overline{B}_{d}(q)}\phi_{1}(x)>\sup_{x\in\overline{B}_{d}(q)}\phi _{1}(x)=\phi_{1}(q)=1, \tag{2.1}\]
where \(d>0\) is a small but fixed number, independent of \(t\). For points \(\xi=(\xi_{1},\ldots,\xi_{m})\) with \(\xi_{i}\in\overline{B}_{d}(q)\), we define
\[\mathcal{O}_{t}(q):=\bigg{\{} \xi=(\xi_{1},\ldots,\xi_{m})\in\big{(}\overline{B}_{d}(q)\big{)}^ {m}\,\bigg{|}\,a(q)\phi_{1}(q)-a(\xi_{i})\phi_{1}(\xi_{i})\leq\frac{1}{\sqrt{t }},\ \ |\xi_{i}-q|\geq\frac{1}{t^{\beta}},\ \ \ |\xi_{i}-\xi_{j}|\geq\frac{1}{t^{\beta}},\] \[i,j=1,\ldots,m,\ \ i\neq j\Big{\}}, \tag{2.2}\]
where \(\beta\) is given by
\[\beta=\frac{(m+1+\alpha)^{2}a_{2}}{2a_{1}}. \tag{2.3}\]
We thus fix \(\xi\in\mathcal{O}_{t}(q)\). For numbers \(\mu_{0}>0\) and \(\mu_{i}>0\), \(i=1,\ldots,m\), yet to be chosen, we define
\[u_{0}(x)=\log\frac{8\mu_{0}^{2}(1+\alpha)^{2}}{k(q)(\varepsilon_{0}^{2}\mu_{0 }^{2}+|x-q|^{2(1+\alpha)})^{2}},\qquad\qquad u_{i}(x)=\log\frac{8\mu_{i}^{2}} {k(\xi_{i})|\xi_{i}-q|^{2\alpha}(\varepsilon_{i}^{2}\mu_{i}^{2}+|x-\xi_{i}|^{ 2})^{2}}, \tag{2.4}\]
which, respectively, solve
\[-\Delta u_{0}=\varepsilon_{0}^{2}k(q)|x-q|^{2\alpha}e^{u_{0}}\qquad\text{in} \quad\mathbb{R}^{2},\qquad\qquad\int_{\mathbb{R}^{2}}\varepsilon_{0}^{2}k(q)| x-q|^{2\alpha}e^{u_{0}}=8\pi(1+\alpha), \tag{2.5}\]
and
\[-\Delta u_{i}=\varepsilon_{i}^{2}k(\xi_{i})|\xi_{i}-q|^{2\alpha}e^{u_{i}} \quad\text{in}\quad\mathbb{R}^{2},\qquad\qquad\int_{\mathbb{R}^{2}} \varepsilon_{i}^{2}k(\xi_{i})|\xi_{i}-q|^{2\alpha}e^{u_{i}}=8\pi, \tag{2.6}\]
where
\[\varepsilon_{0}=\varepsilon_{0}(t)\equiv e^{-\frac{1}{2}t},\qquad\qquad \varepsilon_{i}=\varepsilon_{i}(t)\equiv e^{-\frac{1}{2}t\phi_{1}(\xi_{i})}. \tag{2.7}\]
Furthermore, we set
\[\rho_{0}:=\varepsilon_{0}^{\frac{1}{1+\alpha}}=\exp\left\{-\frac{1}{2(1+ \alpha)}t\right\},\qquad v_{0}:=\mu_{0}^{\frac{1}{1+\alpha}},\qquad\gamma_{i}: =\frac{1}{\varepsilon_{0}}\varepsilon_{i}\mu_{i}=\mu_{i}\exp\left\{-\frac{1}{ 2}t\big{[}\phi_{1}(\xi_{i})-1\big{]}\right\}. \tag{2.8}\]
We defined the approximate solution of problem (1.6) by
\[U(x):=\sum_{i=0}^{m}\,U_{i}(x)=\sum_{i=0}^{m}\,\big{[}u_{i}(x)+H_{i}(x)\big{]}, \tag{2.9}\]
where \(H_{i}(x)\) is a correction term defined as the solution of
\[\begin{cases}\Delta_{a}H_{i}+\nabla\log a(x)\nabla u_{i}=0\ \ \text{in}\ \ \Omega,\\ H_{i}=-u_{i}\ \
Then \(u(x)\) solves problem (1.6) if and only if \(\omega(y)\equiv u(\varepsilon_{0}y)-2t\) satisfies
\[\left\{\begin{aligned} &-\Delta_{a(\varepsilon_{0}y)}\omega=| \varepsilon_{0}y-q|^{2\alpha}\kappa(y,t)e^{\omega}&\text{in}\ \ \Omega_{t},\\ &\omega=-2t&\text{on}\ \,\,\partial\Omega_{t},\end{aligned}\right. \tag{2.13}\]
where
\[\kappa(y,t)\equiv k(\varepsilon_{0}y)\exp\left\{-t\big{[}\phi_{1}(\varepsilon _{0}y)-1\big{]}\right\}. \tag{2.14}\]
Let us define the initial approximate solution of (2.13) as
\[V(y)=U(\varepsilon_{0}y)-2t \tag{2.15}\]
with \(U\) given by (2.9). Hence if we try to look for a solution of equation (2.13) in the form \(\omega=V+\phi\) with \(\phi\) a lower order correction, then (2.13) can be stated as to find \(\phi\) a solution of
\[\left\{\begin{aligned} &\mathcal{L}(\phi):=-\Delta_{a(\varepsilon_{0}y)} \phi-W\phi=E+N(\phi)\ \ \text{in}\ \ \Omega_{t},\\ &\phi=0&\text{on}\ \,\,\partial\Omega_{t},\end{aligned}\right. \tag{2.16}\]
where
\[W(y)=|\varepsilon_{0}y-q|^{2\alpha}\kappa(y,t)e^{V}, \tag{2.17}\]
the "error term" is
\[E(y)=\Delta_{a(\varepsilon_{0}y)}V+|\varepsilon_{0}y-q|^{2\alpha}\kappa(y,t) e^{V}. \tag{2.18}\]
and the "nonlinear term" is given by
\[N(\phi)=|\varepsilon_{0}y-q|^{2\alpha}\kappa(y,t)e^{V}\big{(}e^{\phi}-1-\phi \big{)}, \tag{2.19}\]
In order to understand how well \(V(y)\) solves equation (2.13) so that the "error term" \(E(y)\) is sufficiently small near \(q\) and each \(\xi_{i}\) with \(i=1,\ldots,m\), a delicate ingredient is to make the following precise choices of the concentration parameters \(\mu_{0}\) and \(\mu_{i}\):
\[\log\frac{8\mu_{0}^{2}(1+\alpha)^{2}}{k(q)}=(1+\alpha)H(q,q)+\sum_{j=1}^{m}G(q,\xi_{j}), \tag{2.20}\]
\[\log\frac{8\mu_{i}^{2}}{k(\xi_{i})|\xi_{i}-q|^{2\alpha}}=H(\xi_{i},\xi_{i})+( 1+\alpha)G(\xi_{i},q)+\sum_{j=1,\,j\neq i}^{m}G(\xi_{i},\xi_{j}),\ \ \ \ i=1,\ldots,m. \tag{2.21}\]
Here, \(\mu_{0}\) and \(\mu_{i}\) are _a priori_ functions of \(\xi\) in \(\mathcal{O}_{t}(q)\) and hence \(\mu_{0}=\mu_{0}(\xi)\) and \(\mu_{i}=\mu_{i}(\xi)\) for all \(i=1,\ldots,m\). Thanks to the definition of \(\mathcal{O}_{t}(q)\) in (2.2), there exists a constant \(C>0\) independent of \(t\) such that
\[\frac{1}{C}\leq\mu_{0}\leq Ct^{2m\beta}\ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \big{|}\partial_{\xi_{kl}}\log\mu_{0}\big{|}\leq Ct^{\beta},\ \ \ \ \forall\ k=1,\ldots,m,\ l=1,2, \tag{2.22}\]
and
\[\frac{1}{C}\leq\mu_{i}\leq Ct^{(2m+\alpha)\beta}\ \ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \big{|}\partial_{\xi_{kl}}\log\mu_{i}\big{|}\leq Ct^{\beta},\ \ \ \ \forall\ i,k=1,\ldots,m,\ l=1,2. \tag{2.23}\]
Finally, we claim that the following behavior for \(E(y)\) holds: for any \(0<\sigma<\min\{1/2,\,1-1/(2\beta),\,2(1+\alpha)\}\) it yields that if \(|y-q^{\prime}|\leq 1/(\varepsilon_{0}t^{2\beta})\),
\[E(y)=\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\frac{8(1+\alpha)^{2 }\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}}{\big{(}1+ \big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\alpha)}\big{)}^{ 2}}O\left(\varepsilon_{0}^{\sigma}|y-q^{\prime}|^{\sigma}+\rho_{0}^{\sigma}v_{0 }^{\sigma}+\sum_{j=1}^{m}\varepsilon_{j}^{\sigma}\mu_{j}^{\sigma}\right)+\sum_ {j=1}^{m}O\left(\varepsilon_{0}^{2}\varepsilon_{j}^{2}\mu_{j}^{2}t^{4\beta} \right), \tag{2.24}\]
and if \(|y-\xi_{i}^{\prime}|\leq 1/(\varepsilon_{0}t^{2\beta})\) with some \(i\in\{1,\ldots,m\}\),
\[E(y)=\frac{1}{\gamma_{i}^{2}}\frac{8}{\big{(}1+\big{|}\frac{y-\xi_{i}^{\prime} }{\gamma_{i}}\big{|}^{2}\big{)}^{2}}O\left(\varepsilon_{0}^{\sigma}|y-\xi_{i}^ {\prime}|^{\sigma}+\rho_{0}^{\sigma}v_{0}^{\sigma}+\sum_{j=1}^{m}\varepsilon_{j }^{\sigma}\mu_{j}^{\sigma}\right)+O\left(\varepsilon_{0}^{4}\mu_{0}^{2}t^{(4+2 \alpha)\beta}+\sum_{j=1,\,j\neq i}^{m}\varepsilon_{0}^{2}\varepsilon_{j}^{2}\mu_ {j}^{2}t^{4\beta}\right), \tag{2.25}\]
while if \(|y-q^{\prime}|>1/(\varepsilon_{0}t^{2\beta})\) and \(|y-\xi_{i}^{\prime}|>1/(\varepsilon_{0}t^{2\beta})\) for all \(i=1,\ldots,m\),
\[E(y)=O\left(\frac{\varepsilon_{0}^{2}e^{-t\phi_{1}(\varepsilon_{0}y)}}{| \varepsilon_{0}y-q|^{4+2\alpha}}\prod_{i=1}^{m}\frac{1}{|\varepsilon_{0}y- \xi_{i}|^{4}}\right)+O\left(\varepsilon_{0}^{4}\mu_{0}^{2}t^{(8+4\alpha)\beta }\right)+\sum_{i=1}^{m}O\left(\varepsilon_{0}^{2}\varepsilon_{i}^{2}\mu_{i}^{2 }t^{8\beta}\right). \tag{2.26}\]
In fact, recalling that \(E(y)=\Delta_{a(\varepsilon_{0}y)}V+W\) with \(V\) and \(W\) given by (2.15) and (2.17), respectively, we first have
\[-\Delta_{a(\varepsilon_{0}y)}V(y) =\varepsilon_{0}^{4}k(q)|x-q|^{2\alpha}e^{u_{0}}+\varepsilon_{0} ^{2}\sum_{i=1}^{m}\varepsilon_{i}^{2}k(\xi_{i})|\xi_{i}-q|^{2\alpha}e^{u_{i}}\] \[=\frac{8\varepsilon_{0}^{4}\mu_{0}^{2}(1+\alpha)^{2}|x-q|^{2 \alpha}}{(\varepsilon_{0}^{2}\mu_{0}^{2}+|x-q|^{2(1+\alpha)})^{2}}+\sum_{i=1} ^{m}\frac{8\varepsilon_{0}^{2}\varepsilon_{i}^{2}\mu_{i}^{2}}{(\varepsilon_{ i}^{2}\mu_{i}^{2}+|x-\xi_{i}|^{2})^{2}}\] \[=\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\,\frac{8 (1+\alpha)^{2}\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha }}{\left(1+\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\alpha )}\right)^{2}}+\sum_{i=1}^{m}\frac{1}{\gamma_{i}^{2}}\frac{8}{\big{(}1+\big{|} \frac{y-\xi_{i}^{\prime}}{\gamma_{i}}\big{|}^{2}\big{)}^{2}}. \tag{2.27}\]
For the expression \(W\) near singular source \(q\), we can compute
\[W(y) =\varepsilon_{0}^{4}|\varepsilon_{0}y-q|^{2\alpha}\kappa(y,t) \exp\left\{\sum_{j=0}^{m}\,\big{[}u_{j}(\varepsilon_{0}y)+H_{j}(\varepsilon_{ 0}y)\big{]}\right\}\] \[=\varepsilon_{0}^{4}k(\varepsilon_{0}y)|\varepsilon_{0}y-q|^{2 \alpha}\exp\left\{-t\big{[}\phi_{1}(\varepsilon_{0}y)-1\big{]}+\log\frac{8 \mu_{0}^{2}(1+\alpha)^{2}}{k(q)(\varepsilon_{0}^{2}\mu_{0}^{2}+|x-q|^{2(1+ \alpha)})^{2}}\right.\] \[\quad\left.+H_{0}(\varepsilon_{0}y)+\sum_{j=1}^{m}\,\big{[}u_{j}( \varepsilon_{0}y)+H_{j}(\varepsilon_{0}y)\big{]}\right\}\] \[=\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\frac{8(1+ \alpha)^{2}\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}}{ \left(1+\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\alpha)} \right)^{2}}\times\frac{k(\varepsilon_{0}y)}{k(q)}\times\exp\Big{\{}-t\big{[} \phi_{1}(\varepsilon_{0}y)-\phi_{1}(q)\big{]}\Big{\}}\] \[\quad\times\exp\left\{H_{0}(\varepsilon_{0}y)+\sum_{j=1}^{m}\, \big{[}u_{j}(\varepsilon_{0}y)+H_{j}(\varepsilon_{0}y)\big{]}\right\}.\]
Using (2.4), (2.11), (2.12) and the fact that \(H(\cdot,x)\) is \(C^{\sigma}(\Omega)\) for any \(x\in\Omega\) and \(0<\sigma<\min\{1/2,\,1-1/(2\beta),\,2(1+\alpha)\}\), we obtain that for \(|y-q^{\prime}|\leq 1/(\varepsilon_{0}t^{2\beta})\),
\[H_{0}(\varepsilon_{0}y)+\sum_{j=1}^{m}\,\big{[}u_{j}(\varepsilon _{0}y)+H_{j}(\varepsilon_{0}y)\big{]}\] \[\quad=(1+\alpha)H(\varepsilon_{0}y,q)-\log\frac{8\mu_{0}^{2}(1+ \alpha)^{2}}{k(q)}+O\left(\rho_{0}^{\sigma}v_{0}^{\sigma}\right)\] \[\quad\quad+\sum_{j=1}^{m}\,\left[\log\frac{1}{|q-\xi_{j}|^{4}}+H( \varepsilon_{0}y,\xi_{j})+O\left(\frac{|\varepsilon_{0}y-q|^{2}+2\langle \varepsilon_{0}y-q,q-\xi_{j}\rangle+\varepsilon_{j}^{2}\mu_{j}^{2}}{|q-\xi_ {j}|^{2}}\right)+O\left(\varepsilon_{j}^{\sigma}\mu_{j}^{\sigma}\right)\right]\] \[\quad=(1+\alpha)H(q,q)-\log\frac{8\mu_{0}^{2}(1+\alpha)^{2}}{k(q )}+\sum_{j=1}^{m}G(q,\xi_{j})+O\left(\varepsilon_{0}^{\sigma}|y-q^{\prime}|^{ \sigma}\right)+O\left(\rho_{0}^{\sigma}v_{0}^{\sigma}\right)+\sum_{j=1}^{m}O \left(\varepsilon_{j}^{\sigma}\mu_{j}^{\sigma}\right)\] \[\quad=O\left(\varepsilon_{0}^{\sigma}|y-q^{\prime}|^{\sigma} \right)+O\left(\rho_{0}^{\sigma}v_{0}^{\sigma}\right)+\sum_{j=1}^{m}O\left( \varepsilon_{j}^{\sigma}\mu_{j}^{\sigma}\right),\]
where the last equality is due to the choice of \(\mu_{0}\) in (2.20). Therefore if \(|y-q^{\prime}|\leq 1/(\varepsilon_{0}t^{2\beta})\),
\[W(y)=\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\,\frac{8(1+\alpha)^ {2}\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}}{\big{(}1+ \big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\alpha)}\big{)}^{ 2}}\left[1+O\left(\varepsilon_{0}^{\sigma}|y-q^{\prime}|^{\sigma}\right)+O \left(\rho_{0}^{\sigma}v_{0}^{\sigma}\right)+\sum_{j=1}^{m}O\left(\varepsilon_ {j}^{\sigma}\mu_{j}^{\sigma}\right)\right]. \tag{2.28}\]
Similarly, if \(|y-\xi_{i}^{\prime}|\leq 1/(\varepsilon_{0}t^{2\beta})\) for some \(i\in\{1,\ldots,m\}\),
\[W(y)=\frac{1}{\gamma_{i}^{2}}\frac{8}{\big{(}1+\big{|}\frac{y-\xi_{i}^{ \prime}}{\gamma_{i}}\big{|}^{2}\big{)}^{2}}\left[1+O\left(\varepsilon_{0}^{ \sigma}|y-\xi_{i}^{\prime}|^{\sigma}\right)+O\left(\rho_{0}^{\sigma}v_{0}^{ \sigma}\right)+\sum_{j=1}^{m}O\left(\varepsilon_{j}^{\sigma}\mu_{j}^{\sigma} \right)\right], \tag{2.29}\]
while if \(|y-q^{\prime}|>1/(\varepsilon_{0}t^{2\beta})\) and \(|y-\xi_{i}^{\prime}|>1/(\varepsilon_{0}t^{2\beta})\) for all \(i=1,\ldots,m\),
\[W(y)=O\left(\frac{\varepsilon_{0}^{2}e^{-t\phi_{1}(\varepsilon_{0}y)}}{| \varepsilon_{0}y-q|^{4+2\alpha}}\prod_{i=1}^{m}\frac{1}{|\varepsilon_{0}y-\xi_ {i}|^{4}}\right). \tag{2.30}\]
These combined with (2.27) imply that the expansions of \(E(y)\) in (2.24)-(2.26) hold.
## 3. The linearized problem
In this section we solve the following linear problem: given \(h\in L^{\infty}(\Omega_{t})\) and points \(\xi=(\xi_{1},\ldots,\xi_{m})\in\mathcal{O}_{t}(q)\), we find a function \(\phi\) such that for certain scalars \(c_{ij}\), \(i=1,\ldots,m\), \(j=1,2\), one has
\[\begin{cases}\mathcal{L}(\phi)=-\Delta_{a(\varepsilon_{0}y)}\phi-W\phi=h+ \frac{1}{a(\varepsilon_{0}y)}\sum_{i=1}^{m}\sum_{j=1}^{2}c_{ij}\chi_{i}Z_{ij}& \text{in}\ \ \Omega_{t},\\ \phi=0&\text{on}\ \,\partial\Omega_{t},\\ \int_{\Omega_{t}}\chi_{i}Z_{ij}\phi=0&\forall\ i=1,\ldots,m,\ j=1,2,\end{cases} \tag{3.1}\]
where \(W=|\varepsilon_{0}y-q|^{2\alpha}\kappa(y,t)e^{V}\) satisfies (2.28)-(2.30), and \(Z_{ij}\), \(\chi_{i}\) are defined as follows: let \(R_{0}\) be a large but fixed positive number and \(\chi(r)\) be a radial smooth non-increasing cut-off function satisfying \(0\leq\chi(r)\leq 1\), \(\chi(r)=1\) for \(r\leq R_{0}\) and \(\chi(r)=0\) for \(r\geq R_{0}+1\). Set
\[\mathcal{Z}_{q}(z)=\frac{|z|^{2(1+\alpha)}-1}{|z|^{2(1+\alpha)}+1},\qquad \qquad\mathcal{Z}_{0}(z)=\frac{|z|^{2}-1}{|z|^{2}+1},\qquad\qquad\mathcal{Z}_ {j}(z)=\frac{4z_{j}}{|z|^{2}+1},\ \ j=1,\,2. \tag{3.2}\]
Then we define
\[\chi_{q}(y)=\chi\left(\frac{|\varepsilon_{0}y-q|}{\rho_{0}v_{0}}\right)\qquad \quad\text{and}\qquad\quad Z_{q}(y)=\frac{\varepsilon_{0}}{\rho_{0}v_{0}} \mathcal{Z}_{q}\left(\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\right), \tag{3.3}\]
and for any \(i=1,\ldots,m\) and \(j=0,1,2\),
\[\chi_{i}(y)=\chi\left(\frac{|y-\xi_{i}^{\prime}|}{\gamma_{i}}\right)\qquad \quad\text{and}\qquad\quad Z_{ij}(y)=\frac{1}{\gamma_{i}}\mathcal{Z}_{j}\left( \frac{y-\xi_{i}^{\prime}}{\gamma_{i}}\right). \tag{3.4}\]
Equation (3.1) will be solved for \(h\in L^{\infty}(\Omega_{t})\), but we need to estimate the size of the solution by introducing the following norm:
\[\|h\|_{*}:=\left\|\left[\varepsilon_{0}^{2}+\left(\frac{\varepsilon_{0}}{\rho_{ 0}v_{0}}\right)^{2}\,\frac{\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|} ^{2\alpha}}{\big{(}1+\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{ 4+2\hat{\alpha}+2\alpha}}\,+\sum_{i=1}^{m}\frac{1}{\gamma_{i}^{2}}\frac{1}{ \big{(}1+\big{|}\frac{y-\xi_{i}^{\prime}}{\gamma_{i}}\big{|}^{4+2\hat{\alpha}} \big{)}^{-1}}\,h(y)\right\|_{L^{\infty}(\Omega_{t})}, \tag{3.5}\]
where \(\hat{\alpha}+1\) is a small but fixed positive number, independent of \(t\), such that \(-1<\hat{\alpha}<\min\big{\{}\alpha,-2/3\big{\}}\).
**Proposition 3.1**.: _Let \(m\) be a positive integer. Then there exist constants \(t_{m}>1\) and \(C>0\) such that for any \(t>t_{m}\), any points \(\xi=(\xi_{1},\dots,\xi_{m})\in\mathcal{O}_{t}(q)\) and any \(h\in L^{\infty}(\Omega_{t})\), there is a unique solution \(\phi=\mathcal{T}(h)\) and scalars \(c_{ij}\), \(i=1,\dots,m\), \(j=1,2\) to problem (3.1), which satisfies_
\[\|\mathcal{T}(h)\|_{L^{\infty}(\Omega_{t})}\leq Ct\|h\|_{*}\qquad\qquad\text{ and}\qquad\qquad|c_{ij}|\leq C\gamma_{i}^{-1}\|h\|_{*}. \tag{3.6}\]
Proof.: The proof will be divided into four steps which we state and prove next.
**Step 1:** Building a suitable barrier defined in \(\widetilde{\Omega}_{t}:=\Omega_{t}\setminus\big{[}\bigcup_{i=1}^{m}B_{R_{1} \gamma_{i}}(\xi_{i}^{\prime})\cup B_{R_{1}\rho_{0}v_{0}/\varepsilon_{0}}(q^{ \prime})\cup B_{2d/\varepsilon_{0}}^{c}(q^{\prime})\big{]}\) for some \(R_{1}\) large but \(d\) small, independent of \(t\).
**Lemma 3.1**.: _There exist positive constants \(R_{1}\) and \(C\), independent of \(t\), such that for any points \(\xi=(\xi_{1},\dots,\xi_{m})\in\mathcal{O}_{t}(q)\) and any \(t\) large enough, there exists \(\psi:\widetilde{\Omega}_{t}\to\mathbb{R}\) smooth and positive verifying_
\[\mathcal{L}(\psi)=-\Delta_{a(\varepsilon_{0}y)}\psi-W\psi\geq\left(\frac{ \varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\frac{1}{\big{|}\frac{\varepsilon_{ 0}y-q}{\rho_{0}v_{0}}\big{|}^{4+2\widetilde{\alpha}}}+\sum_{i=1}^{m}\frac{1}{ \gamma_{i}^{2}}\frac{1}{\big{|}\frac{y-\xi_{i}^{\prime}}{\gamma_{i}}\big{|}^ {4+2\widetilde{\alpha}}}+\varepsilon_{0}^{2}\quad\quad\text{in}\ \ \widetilde{\Omega}_{t}.\]
_Moreover, \(\psi\) is uniformly bounded, i.e._
\[1<\psi\leq C\quad\text{ in }\ \overline{\widetilde{\Omega}}_{t}.\]
Proof.: Let us take
\[\psi=C_{1}\left(\Psi_{0}(\varepsilon_{0}y)-\frac{1}{\big{|}\frac{\varepsilon_ {0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\widetilde{\alpha})}}\right)+C_{1}\sum_{i= 1}^{m}\left(\Psi_{0}(\varepsilon_{0}y)-\frac{1}{\big{|}\frac{y-\xi_{i}^{ \prime}}{\gamma_{i}}\big{|}^{2(1+\widetilde{\alpha})}}\right),\]
where \(\Psi_{0}\) satisfies \(-\Delta_{a}\Psi_{0}=1\) in \(\Omega\), \(\Psi_{0}=2\) on \(\partial\Omega\). Since \(\Psi_{0}\geq 2\) in \(\Omega\), it is directly checked that, choosing the positive constant \(C_{1}\) larger if necessary, \(\psi\) meets all the conditions of the lemma for numbers \(R_{1}\) and \(t\) large enough.
**Step 2:** An auxiliary linear equation. Given \(h\in L^{\infty}(\Omega_{t})\) and \(\xi=(\xi_{1},\dots,\xi_{m})\in\mathcal{O}_{t}(q)\), we first study the linear equation
\[\begin{cases}\mathcal{L}(\phi)=-\Delta_{a(\varepsilon_{0}y)}\phi-W\phi=h&\text {in}\ \ \Omega_{t},\\ \phi=0&\text{on}\ \,\partial\Omega_{t}.\end{cases} \tag{3.7}\]
For solutions of (3.7) involving more orthogonality conditions than those in (3.1), we have the following a priori estimate.
**Lemma 3.2**.: _There exist \(R_{0}>0\) and \(t_{m}>1\) such that for any \(t>t_{m}\) and any solution \(\phi\) of (3.7) with the orthogonality conditions_
\[\int_{\Omega_{t}}\chi_{q}Z_{q}\phi=0\qquad\text{ and }\qquad\int_{\Omega_{t}} \chi_{i}Z_{ij}\phi=0,\ \ \ \ i=1,\dots,m,\ \ j=0,1,2, \tag{3.8}\]
_we have_
\[\|\phi\|_{L^{\infty}(\Omega_{t})}\leq C\|h\|_{*}, \tag{3.9}\]
_where \(C>0\) is independent of \(t\)._
Proof.: Take \(R_{0}=2R_{1}\) with \(R_{1}\) the constant in the previous step. Since \(\xi=(\xi_{1},\dots,\xi_{m})\in\mathcal{O}_{t}(q)\), \(\rho_{0}v_{0}=o(1/t^{2\beta})\) and \(\varepsilon_{0}\gamma_{i}=o(1/t^{2\beta})\) for \(t\) large enough, we have \(B_{R_{1}\rho_{0}v_{0}/\varepsilon_{0}}(q^{\prime})\) and \(B_{R_{1}\gamma_{i}}(\xi_{i}^{\prime})\), \(i=1,\dots,m\), disjointed and included in \(\Omega_{t}\). Recalling the barrier \(\psi\) in the previous lemma, we first claim that the operator \(\mathcal{L}\) satisfies the maximum principle in \(\widetilde{\Omega}_{t}\), namely if \(\phi\) is a supersolution of \(\mathcal{L}(\phi)=-\Delta_{a(\varepsilon_{0}y)}\phi-W\phi\geq 0\) in \(\widetilde{\Omega}_{t}\), \(\phi\geq 0\) on \(\partial\widetilde{\Omega}_{t}\), then \(\phi\geq 0\)
in \(\widetilde{\Omega}_{t}\). In fact, suppose by contradiction that the operator \(\mathcal{L}\) does not satisfy the maximum principle in \(\widetilde{\Omega}_{t}\). Since \(\psi>0\) in \(\widetilde{\Omega}_{t}\), the function \(\phi/\psi\) has a negative minimum point \(y_{0}\) in \(\widetilde{\Omega}_{t}\). A simple computation deduces
\[-\Delta_{a(\varepsilon_{0}y)}\left(\frac{\phi}{\psi}\right)=\frac{1}{\psi^{2} }\big{[}\psi\mathcal{L}(\phi)-\phi\mathcal{L}(\psi)\big{]}+\frac{2}{\psi}\nabla \psi\nabla\left(\frac{\phi}{\psi}\right).\]
This, together with the fact that \(\mathcal{L}(\psi)>0\) in \(\widetilde{\Omega}_{t}\), gives \(-\Delta_{a(\varepsilon_{0}y)}\big{(}\phi/\psi\big{)}(y_{0})>0\). On the other hand,
\[-\Delta\left(\frac{\phi}{\psi}\right)=-\Delta_{a(\varepsilon_{0}y)}\left( \frac{\phi}{\psi}\right)+\varepsilon_{0}\nabla\log a(\varepsilon_{0}y)\nabla \left(\frac{\phi}{\psi}\right).\]
Then \(-\Delta\big{(}\phi/\psi\big{)}(y_{0})>0\), which contradicts to the fact that \(y_{0}\) is a minimum point of \(\phi/\psi\) in \(\widetilde{\Omega}_{t}\).
Let \(h\) be bounded and \(\phi\) be a solution to (3.7) satisfying (3.8). We define the "inner norm" of \(\phi\) as
\[\|\phi\|_{**}=\sup_{y\in\bigcup_{i=1}^{m}B_{R_{1}\gamma_{i}}(\xi_{i}^{\prime}) \cup B_{R_{1}\rho_{0}v_{0}/\varepsilon_{0}}(q^{\prime})\cup\big{(}\Omega_{t} \backslash B_{2d/\varepsilon_{0}}(q^{\prime})\big{)}} \tag{3.10}\]
and claim that there is a constant \(C>0\) independent of \(t\) such that for any points \(\xi=(\xi_{1},\ldots,\xi_{m})\in\mathcal{O}_{t}(q)\),
\[\|\phi\|_{L^{\infty}(\Omega_{t})}\leq C\left(\|\phi\|_{**}+\|h\|_{*}\right). \tag{3.11}\]
Indeed, we take
\[\widetilde{\phi}(y)=\left(\|\phi\|_{**}+\|h\|_{*}\right)\psi(y),\qquad\forall \ y\in\overline{\widetilde{\Omega}}_{t}=\overline{B}_{2d/\varepsilon_{0}}(q^ {\prime})\setminus\left[\bigcup_{i=1}^{m}B_{R_{1}\gamma_{i}}(\xi_{i}^{\prime} )\cup B_{R_{1}\rho_{0}v_{0}/\varepsilon_{0}}(q^{\prime})\right].\]
For \(y\in B_{2d/\varepsilon_{0}}(q^{\prime})\setminus\big{[}\bigcup_{i=1}^{m}B_{R_ {1}\gamma_{i}}(\xi_{i}^{\prime})\cup B_{R_{1}\rho_{0}v_{0}/\varepsilon_{0}}(q ^{\prime})\big{]}\),
\[\mathcal{L}\big{(}\widetilde{\phi}\pm\phi\big{)}(y)\geq C_{1}\|h\|_{*}\left\{ \left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\frac{1}{\big{|}\frac{ \varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{4+2\widetilde{\alpha}}}+\sum_{i=1} ^{m}\frac{1}{\gamma_{i}^{2}}\frac{1}{\big{|}\frac{y-\xi_{i}^{\prime}}{\gamma_ {i}}\big{|}^{4+2\widetilde{\alpha}}}+\varepsilon_{0}^{2}\right\}\pm h(y)\geq|h (y)|\pm h(y)\geq 0,\]
and for \(y\in\bigcup_{i=1}^{m}\partial B_{R_{1}\gamma_{i}}(\xi_{i}^{\prime})\cup\partial B _{R_{1}\rho_{0}v_{0}/\varepsilon_{0}}(q^{\prime})\cup\partial B_{2d/\varepsilon _{0}}(q^{\prime})\),
\[\big{(}\widetilde{\phi}\pm\phi\big{)}(y)\geq\|\phi\|_{**}\oplus\phi(y)\geq|\phi (y)|\pm\phi(y)\geq 0.\]
By the above maximum principle we obtain that \(|\phi|\leq\tilde{\phi}\) in \(\widetilde{\Omega}_{t}=\Omega_{t}\setminus\big{[}\bigcup_{i=1}^{m}B_{R_{1} \gamma_{i}}(\xi_{i}^{\prime})\cup B_{R_{1}\rho_{0}v_{0}/\varepsilon_{0}}(q^{ \prime})\cup B_{2d/\varepsilon_{0}}^{c}(q^{\prime})\big{]}\), which implies that estimate (3.11) holds.
We prove the lemma by contradiction. Assume that there are sequences of parameters \(t_{n}\to+\infty\), points \(\xi^{n}=(\xi_{1}^{n},\ldots,\xi_{m}^{m})\in\mathcal{O}_{t_{n}}(q)\), functions \(h_{n}\), \(W_{n}\) and associated solutions \(\phi_{n}\) of equation (3.7) with orthogonality conditions (3.8) such that
\[\|\phi_{n}\|_{L^{\infty}(\Omega_{t_{n}})}=1\qquad\text{ but }\qquad\|h_{n}\|_{*}\to 0, \quad\text{ as }\ n\to+\infty. \tag{3.12}\]
Consider
\[\widehat{\phi}_{q^{c}}^{n}(x)=\phi_{n}\big{(}x/\varepsilon_{0}^{n}\big{)}, \widehat{h}_{q^{c}}^{n}(x)=h_{n}\big{(}x/\varepsilon_{0}^{n}\big{)} \qquad\text{ for all }\ x\in\Omega\setminus B_{2d}(q),\]
and
\[\widehat{\phi}_{q}^{n}(z)=\phi_{n}\big{(}\big{(}\rho_{0}^{n}v_{0}^{n}z+q\big{)} /\varepsilon_{0}^{n}\big{)}, \widehat{h}_{q}^{n}(z)=h_{n}\big{(}(\rho_{0}^{n}v_{0}^{n}z+q)/ \varepsilon_{0}^{n}\big{)},\]
and for all \(i=1,\ldots,m\),
\[\widehat{\phi}_{i}^{n}(z)=\phi_{n}\big{(}\gamma_{i}^{n}z+(\xi_{i}^{n})^{\prime} \big{)}, \widehat{h}_{i}^{n}(z)=h_{n}\big{(}\gamma_{i}^{n}z+(\xi_{i}^{n})^{\prime} \big{)},\]
where \(\mu^{n}=\big{(}\mu_{0}^{n},\mu_{1}^{n},\ldots,\mu_{m}^{n}\big{)}\), \(\varepsilon_{0}^{n}=\exp\big{\{}-\frac{1}{2}t_{n}\big{\}}\), \(\varepsilon_{i}^{n}=\exp\big{\{}-\frac{1}{2}t_{n}\phi_{1}(\xi_{i}^{n})\big{\}}\), \(\rho_{0}^{n}=(\varepsilon_{0}^{n})^{\frac{1}{1+\widetilde{\alpha}}}=\exp\big{\{}- \frac{1}{2(1+\widetilde{\alpha})}t_{n}\big{\}}\), \(v_{0}^{n}=(\mu_{0}^{n})^{\frac{1}{1+\widetilde{\alpha}}}\) and \(\gamma_{i}^{n}=\frac{1}{\varepsilon_{0}^{n}}\varepsilon_{i}^{n}\mu_{i}^{n}=\mu_{i} ^{n}\exp\big{\{}-\frac{1}{2}t_{n}\big{[}\phi_{1}(\xi_{i}^{n})-1\big{\}}\big{]}\). Notice first that
\[h_{n}(y)=\big{(}-\Delta_{a(\varepsilon_{0}^{n}y)}\phi_{n}-W_{n}\phi_{n}\big{)} \big{|}_{y=x/\varepsilon_{0}^{n}}=(\varepsilon_{0}^{n})^{2}\left[-\Delta_{a(x)} \widehat{\phi}_{q^{c}}^{n}-(\varepsilon_{0}^{n})^{-2}\widehat{W}_{n}\widehat{ \phi}_{q^{c}}^{n}\right](x),\]
where
\[\widehat{W}_{n}(x)=W_{n}(x/\varepsilon_{0}^{n}).\]
Applying the expansion of \(W_{n}\) in (2.30), we find that \(\widehat{\phi}_{q^{\varepsilon}}^{n}(x)\) satisfies
\[\left\{\begin{aligned} &-\Delta_{a(x)}\widehat{\phi}_{q^{\varepsilon}}^{n}(x)+O \left(\frac{e^{-t_{n}\phi_{1}(x)}}{|x-q|^{4+2\alpha}}\prod_{i=1}^{m}\frac{1}{|x -\xi_{i}^{n}|^{4}}\right)\widehat{\phi}_{q^{\varepsilon}}^{n}(x)=\left(\frac{1 }{\varepsilon_{0}^{n}}\right)^{2}\widehat{h}_{q^{\varepsilon}}^{n}(x)\quad \text{ in }\quad\Omega\setminus B_{2d}(q),\\ &\widehat{\phi}_{q^{\varepsilon}}^{n}(x)=0\end{aligned}\right.\]
Thanks to the definition of the \(\|\cdot\|_{*}\)-norm in (3.5), we find that \(\big{(}\frac{1}{\varepsilon_{0}^{n}}\big{)}^{2}\big{|}\widehat{h}_{q^{ \varepsilon}}^{n}(x)\big{|}\leq C\|h_{n}\|_{*}\to 0\) uniformly in \(\Omega\setminus B_{2d}(q)\). By elliptic estimates, \(\widehat{\phi}_{q^{\varepsilon}}^{n}\) converges uniformly in \(\Omega\setminus B_{2d}(q)\) to a trivial solution \(\widehat{\phi}_{q^{\varepsilon}}^{\infty}\), namely \(\widehat{\phi}_{q^{\varepsilon}}^{\infty}\equiv 0\) in \(\Omega\setminus B_{2d}(q)\).
Next, we observe that
\[h_{n}(y)=\big{(}-\Delta_{a(\varepsilon_{0}^{n}y)}\phi_{n}-W_{n}\phi_{n}\big{)} \,\bigg{|}_{y=\frac{\rho_{0}^{n}\upsilon_{0}^{n}+y}{\varepsilon_{0}^{n}}}= \left(\frac{\varepsilon_{0}^{n}}{\rho_{0}^{n}v_{0}^{n}}\right)^{2}\left[- \Delta_{\widehat{a}_{n}(z)}\widehat{\phi}_{q}^{n}-\left(\frac{\rho_{0}^{n}v_{0 }^{n}}{\varepsilon_{0}^{n}}\right)^{2}\widehat{W}_{n}\widehat{\phi}_{q}^{n} \right](z),\]
where
\[\widehat{a}_{n}(z)=a\big{(}\rho_{0}^{n}v_{0}^{n}z+q\big{)},\qquad\qquad\qquad \widehat{W}_{n}(z)=W_{n}\big{(}(\rho_{0}^{n}v_{0}^{n}z+q)/\varepsilon_{0}^{n} \big{)}.\]
Using the expansion of \(W_{n}\) in (2.28), we find that \(\widehat{\phi}_{q}^{n}(z)\) solves
\[-\Delta_{\widehat{a}_{n}(z)}\widehat{\phi}_{q}^{n}(z)-\frac{8(1+\alpha)^{2}|z |^{2\alpha}}{(1+|z|^{2(1+\alpha)})^{2}}\Big{[}1+O\left(|\rho_{0}^{n}v_{0}^{n} z|^{\sigma}\right)+o\left(1\right)\Big{]}\widehat{\phi}_{q}^{n}(z)=\left(\frac{ \rho_{0}^{n}v_{0}^{n}}{\varepsilon_{0}^{n}}\right)^{2}\widehat{h}_{q}^{n}(z)\]
for any \(z\in B_{R_{0}+2}(0)\). Owing to the definition of the \(\|\cdot\|_{*}\)-norm in (3.5), we have that for any \(\theta\in\big{(}1,-1/\hat{\alpha}\big{)}\), \(\big{(}\frac{\rho_{0}^{n}v_{0}^{n}}{\varepsilon_{0}^{n}}\big{)}^{2}\widehat{h} _{q}^{n}\to 0\) in \(L^{\theta}\big{(}B_{R_{0}+2}(0)\big{)}\). Since \(\frac{8(1+\alpha)^{2}|z|^{2\alpha}}{(1+|z|^{2(1+\alpha)})^{2}}\) is bounded in \(L^{\theta}\big{(}B_{R_{0}+2}(0)\big{)}\), standard elliptic regularity implies that \(\widehat{\phi}_{q}^{n}\) converges uniformly over compact subsets near the origin to a bounded solution \(\widehat{\phi}_{q}^{\infty}\) of equation
\[\Delta\phi+\frac{8(1+\alpha)^{2}|z|^{2\alpha}}{(1+|z|^{2(1+\alpha)})^{2}}\phi= 0\quad\text{ in }\,\mathbb{R}^{2}, \tag{3.13}\]
which satisfies
\[\int_{\mathbb{R}^{2}}\chi\mathcal{Z}_{q}\widehat{\phi}_{q}^{\infty}=0. \tag{3.14}\]
By the result of [29, 30, 31], \(\widehat{\phi}_{q}^{\infty}\) is proportional to \(\mathcal{Z}_{q}\). Since \(\int_{\mathbb{R}^{2}}\chi\mathcal{Z}_{q}^{2}>0\), by (3.14) we find that \(\widehat{\phi}_{q}^{\infty}\equiv 0\) in \(B_{R_{1}}(0)\).
Finally, for each \(i\in\{1,\ldots,m\}\), we get
\[h_{n}(y)=\big{(}-\Delta_{a(\varepsilon_{0}^{n}y)}\phi_{n}-W_{n}\phi_{n}\big{)} \big{|}_{y=\gamma_{i}^{n}z+(\xi_{i}^{n})^{\prime}}=(\gamma_{i}^{n})^{-2} \left[-\Delta_{\widehat{a}_{n}(z)}\widehat{\phi}_{i}^{n}-(\gamma_{i}^{n})^{2} \widehat{W}_{n}\widehat{\phi}_{i}^{n}\right](z),\]
where
\[\widehat{a}_{n}(z)=a\big{(}\varepsilon_{0}^{n}\gamma_{i}^{n}z+\xi_{i}^{n} \big{)},\qquad\qquad\qquad\widehat{W}_{n}(z)=W_{n}\big{(}\gamma_{i}^{n}z+( \xi_{i}^{n})^{\prime}\big{)}.\]
Employing the expansion of \(W_{n}\) in (2.29) and elliptic regularity, we have that for each \(i\in\{1,\ldots,m\}\), \(\widehat{\phi}_{i}^{n}\) converges uniformly over compact subsets near the origin to a bounded solution \(\widehat{\phi}_{i}^{\infty}\) of equation
\[\Delta\phi+\frac{8}{(1+|z|^{2})^{2}}\phi=0\quad\text{ in }\,\mathbb{R}^{2}, \tag{3.15}\]
which satisfies
\[\int_{\mathbb{R}^{2}}\chi\mathcal{Z}_{j}\widehat{\phi}_{i}^{\infty}=0\quad\text { for }\,j=0,\,1,\,2. \tag{3.16}\]
Thus by the result of [32, 33], \(\widehat{\phi}_{i}^{\infty}\) must be a linear combination of \(\mathcal{Z}_{j}\), \(j=0,1,2\). But \(\int_{\mathbb{R}^{2}}\chi\mathcal{Z}_{j}^{2}>0\) and \(\int_{\mathbb{R}^{2}}\chi\mathcal{Z}_{j}\mathcal{Z}_{l}=0\) for \(j\neq l\). Hence (3.16) implies \(\widehat{\phi}_{i}^{\infty}\equiv 0\) in \(B_{R_{1}}(0)\). As a result, by definition (3.10) we obtain \(\lim_{n\to+\infty}\|\phi_{n}\|_{**}=0\). But (3.11) and (3.12) tell us \(\liminf_{n\to+\infty}\|\phi_{n}\|_{**}>0\), which is a contradiction.
**Step 3:** Proving an a priori estimate for solutions to (3.7) that satisfy orthogonality conditions with respect to \(Z_{ij}\) for \(j=1,2\) only.
**Lemma 3.3**.: _For \(t\) large enough, if \(\phi\) solves (3.7) and satisfies_
\[\int_{\Omega_{t}}\chi_{i}Z_{ij}\phi=0\quad\ \ \forall\ i=1,\ldots,m,\ j=1,2, \tag{3.17}\]
_then_
\[\|\phi\|_{L^{\infty}(\Omega_{t})}\leq Ct\,\|h\|_{*}, \tag{3.18}\]
_where \(C>0\) is independent of \(t\)._
Proof.: Let \(R>R_{0}+1\) be a large but fixed number. We consider the functions
\[\widetilde{Z}_{q}(y)=Z_{q}(y)-\frac{\varepsilon_{0}}{\rho_{0}v_{0}}+a_{q}G( \varepsilon_{0}y,q),\qquad\qquad\widetilde{Z}_{i0}(y)=Z_{i0}(y)-\frac{1}{ \gamma_{i}}+a_{i0}G(\varepsilon_{0}y,\xi_{i}), \tag{3.19}\]
where
\[a_{q}=\frac{\varepsilon_{0}}{\rho_{0}v_{0}\big{[}H(q,q)-4\log(\rho_{0}v_{0}R) \big{]}},\qquad\qquad a_{i0}=\frac{1}{\gamma_{i}\big{[}H(\xi_{i},\xi_{i})-4 \log(\varepsilon_{0}\gamma_{i}R)\big{]}}. \tag{3.20}\]
From estimates (2.22)-(2.23), we obtain
\[C_{1}|\log\varepsilon_{0}|\leq-\log(\rho_{0}v_{0}R)\leq C_{2}|\log\varepsilon _{0}|,\qquad\qquad C_{1}|\log\varepsilon_{i}|\leq-\log(\varepsilon_{0}\gamma_{i }R)\leq C_{2}|\log\varepsilon_{i}|, \tag{3.21}\]
and
\[\widehat{Z}_{q}(y)=O\left(\frac{\varepsilon_{0}G(\varepsilon_{0}y,q)}{\rho_{ 0}v_{0}|\log\varepsilon_{0}|}\right),\qquad\qquad\widehat{Z}_{i0}(y)=O\left( \frac{G(\varepsilon_{0}y,\xi_{i})}{\gamma_{i}|\log\varepsilon_{i}|}\right). \tag{3.22}\]
Let \(\eta_{1}\) and \(\eta_{2}\) be radial smooth cut-off functions in \(\mathbb{R}^{2}\) such that
\[0\leq\eta_{1}\leq 1;\qquad\qquad\eta_{1}\equiv 1\ \text{in}\,B_{R}(0); \qquad\eta_{1}\equiv 0\ \text{in}\,\mathbb{R}^{2}\setminus B_{R+1}(0);\]
\[0\leq\eta_{2}\leq 1;\qquad\qquad\eta_{2}\equiv 1\ \text{in}\,B_{3d}(0); \qquad\eta_{2}\equiv 0\ \text{in}\,\mathbb{R}^{2}\setminus B_{6d}(0).\]
Without loss of generality we assume that \(d>0\) is a small but fixed number independent of \(t\) such that \(B_{9d}(q)\subset\Omega\). Set
\[\eta_{q1}(y)=\eta_{1}\left(\frac{\big{|}\varepsilon_{0}y-q\big{|}}{\rho_{0}v_ {0}}\right),\qquad\qquad\eta_{i1}(y)=\eta_{1}\left(\frac{\big{|}y-\xi_{i}^{ \prime}\big{|}}{\gamma_{i}}\right), \tag{3.23}\]
and
\[\eta_{q2}(y)=\eta_{2}\left(\varepsilon_{0}\big{|}y-q^{\prime}\big{|}\right), \qquad\qquad\eta_{i2}(y)=\eta_{2}\left(\varepsilon_{0}\big{|}y-\xi_{i}^{ \prime}\big{|}\right), \tag{3.24}\]
and define the two test functions
\[\widetilde{Z}_{q}=\eta_{q1}Z_{q}+(1-\eta_{q1})\eta_{q2}\widehat{Z}_{q},\qquad \qquad\qquad\widetilde{Z}_{i0}=\eta_{i1}Z_{i0}+(1-\eta_{i1})\eta_{i2}\widehat{ Z}_{i0}. \tag{3.25}\]
Given \(\phi\) satisfying (3.7) and (3.17), we modify it so that the extra orthogonality conditions with respect to \(Z_{q}\) and \(Z_{i0}\)'s hold. We set
\[\widetilde{\phi}=\phi+d_{q}\widetilde{Z}_{q}+\sum_{i=1}^{m}d_{i}\widetilde{Z} _{i0}+\sum_{i=1}^{m}\sum_{j=1}^{2}e_{ij}\chi_{i}Z_{ij}, \tag{3.26}\]
and adjust the coefficients \(d_{q},\,d_{i}\) and \(e_{ij}\) such that \(\widetilde{\phi}\) satisfies the orthogonality condition
\[\int_{\Omega_{t}}\chi_{q}Z_{q}\widetilde{\phi}=0\qquad\text{ and }\qquad\int_{\Omega_{t}}\chi_{i}Z_{ij}\widetilde{\phi}=0\quad\text{ for all }\,i=1,\ldots,m,\ j=0,1,2. \tag{3.27}\]
Then
\[\mathcal{L}(\widetilde{\phi})=h+d_{q}\mathcal{L}(\widetilde{Z}_{q})+\sum_{i=1} ^{m}d_{i}\mathcal{L}(\widetilde{Z}_{i0})+\sum_{i=1}^{m}\sum_{j=1}^{2}e_{ij} \mathcal{L}(\chi_{i}Z_{ij})\quad\text{ in }\ \Omega_{t}. \tag{3.28}\]
If (3.27) holds, the previous lemma allows us to conclude
\[\|\widetilde{\phi}\|_{L^{\infty}(\Omega_{t})}\leq C\left[\|h\|_{*}+|d_{q}|\big{\|} \mathcal{L}(\widetilde{Z}_{q})\big{\|}_{*}+\sum_{i=1}^{m}|d_{i}|\big{\|} \mathcal{L}(\widetilde{Z}_{i0})\big{\|}_{*}+\sum_{i=1}^{m}\sum_{j=1}^{2}|e_{ij }|\big{\|}\mathcal{L}(\chi_{i}Z_{ij})\big{\|}_{*}\right]. \tag{3.29}\]
Furthermore, using the definition of \(\widetilde{\phi}\) again and the fact that
\[\big{\|}\widetilde{Z}_{q}\big{\|}_{L^{\infty}(\Omega_{t})}\leq\frac{C\varepsilon _{0}}{\rho_{0}v_{0}},\qquad\qquad\big{\|}\widetilde{Z}_{i0}\big{\|}_{L^{ \infty}(\Omega_{t})}\leq\frac{C}{\gamma_{i}},\qquad\qquad\big{\|}\chi_{i}Z_{ ij}\big{\|}_{L^{\infty}(\Omega_{t})}\leq\frac{C}{\gamma_{i}}, \tag{3.30}\]
estimate (3.18) is a direct consequence of the following two claims:
**Claim 1**.: _The coefficients \(d_{q}\), \(d_{i}\) and \(e_{ij}\) are well defined and_
\[\big{\|}\mathcal{L}(\widetilde{Z}_{q})\big{\|}_{*}\leq\frac{C\varepsilon_{0} \log t}{\rho_{0}v_{0}|\log\varepsilon_{0}|}, \tag{3.31}\]
_and_
\[\big{\|}\mathcal{L}(\chi_{i}Z_{ij})\big{\|}_{*}\leq\frac{C}{\gamma_{i}}, \qquad\qquad\qquad\big{\|}\mathcal{L}(\widetilde{Z}_{i0})\big{\|}_{*}\leq \frac{C\log t}{\gamma_{i}|\log\varepsilon_{i}|}. \tag{3.32}\]
**Claim 2**.: _The following bounds hold:_
\[|d_{q}|\leq C\frac{\rho_{0}v_{0}|\log\varepsilon_{0}|}{\varepsilon_{0}}\|h\|_ {*},\qquad\qquad|d_{i}|\leq C\gamma_{i}|\log\varepsilon_{i}|\|h\|_{*},\qquad \qquad|e_{ij}|\leq C\gamma_{i}\log t\,\|h\|_{*}. \tag{3.33}\]
**Proof of Claim 1.** First, we find \(d_{q}\), \(d_{i}\) and \(e_{ij}\). Testing (3.26) against \(\chi_{i}Z_{ij}\) and using orthogonality condition (3.27) for \(j=1,2\) and the fact that \(\chi_{i}\chi_{k}\equiv 0\) if \(i\neq k\), we readily find
\[e_{ij}=\left(-d_{q}\int_{\Omega_{t}}\chi_{i}Z_{ij}\widetilde{Z}_{q}-\sum_{k \neq i}^{m}d_{k}\int_{\Omega_{t}}\chi_{i}Z_{ij}\widetilde{Z}_{k0}\right)\bigg{/} \int_{\Omega_{t}}\chi_{i}^{2}Z_{ij}^{2},\ \ \ \ i=1,\ldots,m,\ j=1,2. \tag{3.34}\]
Note that \(\int_{\Omega_{t}}\chi_{i}^{2}Z_{ij}^{2}=c>0\) for all \(i\), \(j\), and
\[\int_{\Omega_{t}}\chi_{i}Z_{ij}\widetilde{Z}_{q}=O\left(\frac{\varepsilon_{0} \gamma_{i}\log t}{\rho_{0}v_{0}|\log\varepsilon_{0}|}\right),\qquad\quad\int _{\Omega_{t}}\chi_{i}Z_{ij}\widetilde{Z}_{k0}=O\left(\frac{\gamma_{i}\log t}{ \gamma_{k}|\log\varepsilon_{k}|}\right),\ \ \ \ k\neq i.\]
Then
\[|e_{ij}|\leq C\left(|d_{q}|\frac{\varepsilon_{0}\gamma_{i}\log t}{\rho_{0}v_{ 0}|\log\varepsilon_{0}|}+\sum_{k\neq i}^{m}|d_{k}|\frac{\gamma_{i}\log t}{ \gamma_{k}|\log\varepsilon_{k}|}\right). \tag{3.35}\]
We only need to consider \(d_{q}\) and \(d_{i}\). Testing (3.26) against \(\chi_{q}Z_{q}\) and \(\chi_{k}Z_{k0}\), respectively, and using the orthogonality conditions in (3.27) for \(q\) and \(j=0\), we obtain a system of \(\mathcal{D}=(d_{q},d_{1},\ldots,d_{m})\),
\[\begin{split} d_{q}\int_{\Omega_{t}}\chi_{q}Z_{q}\widetilde{Z}_{q }+\sum_{i=1}^{m}d_{i}\int_{\Omega_{t}}\chi_{q}Z_{q}\widetilde{Z}_{i0}& =-\int_{\Omega_{t}}\chi_{q}Z_{q}\phi,\\ d_{q}\int_{\Omega_{t}}\chi_{k}Z_{k0}\widetilde{Z}_{q}+\sum_{i=1 }^{m}d_{i}\int_{\Omega_{t}}\chi_{k}Z_{k0}\widetilde{Z}_{i0}&=- \int_{\Omega_{t}}\chi_{k}Z_{k0}\phi,\qquad\forall\ \ k=1,\ldots,m.\end{split} \tag{3.36}\]
But
\[\int_{\Omega_{t}}\chi_{q}Z_{q}\widetilde{Z}_{q}=\int_{\Omega_{t}}\chi_{q}Z_{q} ^{2}=C_{1}>0,\qquad\qquad\int_{\Omega_{t}}\chi_{q}Z_{q}\widetilde{Z}_{i0}=O\left( \frac{\rho_{0}v_{0}\log t}{\varepsilon_{0}\gamma_{i}|\log\varepsilon_{i}|} \right),\]
and
\[\int_{\Omega_{t}}\chi_{k}Z_{k0}\widetilde{Z}_{q}=O\left(\frac{\varepsilon_{0} \gamma_{k}\log t}{\rho_{0}v_{0}|\log\varepsilon_{0}|}\right),\qquad\int_{\Omega _{t}}\chi_{k}Z_{k0}\widetilde{Z}_{k0}=C_{2}>0,\qquad\int_{\Omega_{t}}\chi_{k}Z_{ k0}\widetilde{Z}_{i0}=O\left(\frac{\gamma_{k}\log t}{\gamma_{i}|\log \varepsilon_{i}|}\right),\ \ \ i\neq k.\]
Let us denote the system (3.36) as \(\mathcal{MD}=\mathcal{S}\), where \(\mathcal{S}\) is the right-hand member and \(\mathcal{M}\) is the coefficient matrix. By the above estimates, \(\mathcal{P}^{-1}\mathcal{MP}\) is diagonally dominant and then invertible, where \(\mathcal{P}=\mathrm{diag}\left(\rho_{0}v_{0}/\varepsilon_{0},\,\gamma_{1},\, \dots,\,\gamma_{m}\right)\). Hence \(\mathcal{M}\) is also invertible and \(\mathcal{D}=(d_{q},d_{1},\dots,d_{m})\) is well defined.
Let us prove now inequality (3.31). Consider four regions
\[\Omega_{1}=\left\{\left|\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}} \right|\leq R\right\}, \Omega_{2}=\left\{R<\left|\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}} \right|\leq R+1\right\},\] \[\Omega_{3}=\left\{R+1<\left|\frac{\varepsilon_{0}y-q}{\rho_{0}v_ {0}}\right|\leq\frac{3d}{\rho_{0}v_{0}}\right\}, \Omega_{4}=\left\{\frac{3d}{\rho_{0}v_{0}}<\left|\frac{\varepsilon_{0}y-q}{ \rho_{0}v_{0}}\right|\leq\frac{6d}{\rho_{0}v_{0}}\right\}.\]
Notice first that (3.2) and (3.3) imply
\[\Delta_{a(\varepsilon_{0}y)}Z_{q}+\left(\frac{\varepsilon_{0}}{ \rho_{0}v_{0}}\right)^{2}\frac{8(1+\alpha)^{2}\big{|}\frac{\varepsilon_{0}y-q }{\rho_{0}v_{0}}\big{|}^{2\alpha}}{\left(1+\big{|}\frac{\varepsilon_{0}y-q}{ \rho_{0}v_{0}}\big{|}^{2(1+\alpha)}\right)^{2}}Z_{q} =\varepsilon_{0}\nabla\log a(\varepsilon_{0}y)\nabla Z_{q}\] \[=O\left(\varepsilon_{0}\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0} }\right)^{2}\left|\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\right|^{2\alpha} \left[1+\frac{|\varepsilon_{0}y-q|}{\rho_{0}v_{0}}\right]^{-3-4\alpha}\right), \tag{3.37}\]
and then, in \(\Omega_{1}\cup\Omega_{2}\), by (2.28),
\[\mathcal{L}(Z_{q}) =\left[-\Delta_{a(\varepsilon_{0}y)}Z_{q}-\left(\frac{ \varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\frac{8(1+\alpha)^{2}\big{|}\frac{ \varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}}{\left(1+\big{|}\frac{ \varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\alpha)}\right)^{2}}Z_{q} \right]+\left[\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\frac{8(1+ \alpha)^{2}\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}} {\left(1+\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\alpha) }\right)^{2}}-W\right]Z_{q}\] \[=\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{3}\frac{8(1+ \alpha)^{2}\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}}{ \left(1+\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\alpha)} \right)^{2}}O\left(|\varepsilon_{0}y-q|^{\sigma}\right)+O\left(\varepsilon_{0} \left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\left|\frac{\varepsilon _{0}y-q}{\rho_{0}v_{0}}\right|^{2\alpha}\left[1+\frac{|\varepsilon_{0}y-q|}{ \rho_{0}v_{0}}\right]^{-3-4\alpha}\right). \tag{3.38}\]
Hence in \(\Omega_{1}\),
\[\mathcal{L}(\widetilde{Z}_{q})=\mathcal{L}(Z_{q})=\left(\frac{\varepsilon_{0}}{ \rho_{0}v_{0}}\right)^{3}\left|\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\right|^ {2\alpha}\Big{[}O\left(\varepsilon_{0}^{\sigma}|y-q^{\prime}|^{\sigma}\right) +O\left(\rho_{0}v_{0}\right)\Big{]}. \tag{3.39}\]
In \(\Omega_{2}\),
\[\mathcal{L}(\widetilde{Z}_{q}) =\eta_{q1}\mathcal{L}(Z_{q})+(1-\eta_{q1})\mathcal{L}(\widehat{Z}_ {q})-2\nabla\eta_{q1}\nabla(Z_{q}-\widehat{Z}_{q})-(Z_{q}-\widehat{Z}_{q}) \Delta_{a(\varepsilon_{0}y)}\eta_{q1}\] \[=\mathcal{L}(Z_{q})+(1-\eta_{q1})W(Z_{q}-\widehat{Z}_{q})-2\nabla \eta_{q1}\nabla(Z_{q}-\widehat{Z}_{q})-(Z_{q}-\widehat{Z}_{q})\Delta_{a( \varepsilon_{0}y)}\eta_{q1}. \tag{3.40}\]
Note that
\[Z_{q}-\widehat{Z}_{q}=\frac{\varepsilon_{0}}{\rho_{0}v_{0}}-a_{q}G(\varepsilon_ {0}y,q)=\frac{\varepsilon_{0}}{\rho_{0}v_{0}\big{[}H(q,q)-4\log(\rho_{0}v_{0}R) \big{]}}\left[4\log\frac{|\varepsilon_{0}y-q|}{R\rho_{0}v_{0}}+O\left( \varepsilon_{0}^{\sigma}|y-q^{\prime}|^{\sigma}\right)\right], \tag{3.41}\]
and then, in \(\Omega_{2}\),
\[|Z_{q}-\widehat{Z}_{q}|=O\left(\frac{\varepsilon_{0}}{R\rho_{0}v_{0}|\log \varepsilon_{0}|}\right), \left|\nabla\big{(}Z_{q}-\widehat{Z}_{q}\big{)}\right|=O\left(\frac{ \varepsilon_{0}^{2}}{R\rho_{0}^{2}v_{0}^{2}|\log\varepsilon_{0}|}\right). \tag{3.42}\]
Moreover \(|\nabla\eta_{q1}|=O\big{(}\varepsilon_{0}/(\rho_{0}v_{0})\big{)}\) and \(|\Delta_{a(\varepsilon_{0}y)}\eta_{q1}|=O\big{(}\varepsilon_{0}^{2}/(\rho_{0}^{2 }v_{0}^{2})\big{)}\). These, together with (2.28) and (3.38), gives that in \(\Omega_{2}\),
\[\mathcal{L}(\widetilde{Z}_{q})=O\left(\frac{\varepsilon_{0}^{3}}{R\rho_{0}^{3}v_ {0}^{3}|\log\varepsilon_{0}|}\right). \tag{3.43}\]
In \(\Omega_{3}\), by (3.19), (3.25) and (3.37),
\[\mathcal{L}(\widetilde{Z}_{q})=\mathcal{L}(\widehat{Z}_{q}) =\mathcal{L}(Z_{q})-\mathcal{L}(Z_{q}-\widehat{Z}_{q})\] \[\equiv\mathcal{A}_{1}+\mathcal{A}_{2}+O\left(\varepsilon_{0} \left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\left[1+\frac{| \varepsilon_{0}y-q|}{\rho_{0}v_{0}}\right]^{-3-2\alpha}\right),\]
where
\[\mathcal{A}_{1}=\left[\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2} \frac{8(1+\alpha)^{2}\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2 \alpha}}{\left(1+\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+ \alpha)}\right)^{2}}-W\right]Z_{q}\hskip 28.452756pt\text{and}\hskip 28.452756pt \mathcal{A}_{2}=W\left[\frac{\varepsilon_{0}}{\rho_{0}v_{0}}-a_{q}G( \varepsilon_{0}y,q)\right].\]
For the estimates of these two terms, we split \(\Omega_{3}\) into some subregions:
\[\Omega_{q}=\left\{\,R+1<\left|\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\right| \leq\frac{1}{\rho_{0}v_{0}t^{2\beta}}\,\right\},\]
From (2.28) and (2.30), we obtain
\[\mathcal{A}_{1}=\begin{cases}\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}} \right)^{3}\frac{8(1+\alpha)^{2}\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0 }}\big{|}^{2\alpha}}{\left(1+\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}} \big{|}^{2(1+\alpha)}\right)^{2}}\left[\,O\left(\varepsilon_{0}^{\sigma}|y-q^ {\prime}|^{\sigma}\right)+O\left(\rho_{0}^{\sigma}v_{0}^{\sigma}\right)+\sum _{j=1}^{m}O\left(\varepsilon_{j}^{\sigma}\mu_{j}^{\sigma}\right)\right]&\text{ in }\ \Omega_{q},\\ O\left(\frac{\varepsilon_{0}^{5}}{\rho_{0}v_{0}}\mu_{0}^{2}t^{4\beta(2+\alpha)}\right)+O \left(\frac{\varepsilon_{0}^{3}}{\rho_{0}v_{0}}e^{-t\phi_{1}(\varepsilon_{0}y )}t^{4\beta(2m+2+\alpha)}\right)&\text{ in }\ \widetilde{\Omega}_{3}.\end{cases}\]
Moreover, by (3.21) and (3.41),
\[\mathcal{A}_{2}=\begin{cases}\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}} \right)^{3}\frac{8(1+\alpha)^{2}\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0 }}\big{|}^{2\alpha}}{\left(1+\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}} \big{|}^{2(1+\alpha)}\right)^{2}}O\left(\frac{\log|\varepsilon_{0}y-q|-\log(R \rho_{0}v_{0})+\varepsilon_{0}^{\sigma}|y-q^{\prime}|^{\sigma}}{|\log \varepsilon_{0}|}\right)&\text{ in }\Omega_{q},\\ O\left(\frac{\varepsilon_{0}^{3}}{\rho_{0}v_{0}}t^{4\beta(2m+2+\alpha)}e^{-t \phi_{1}(\varepsilon_{0}y)}\right)&\text{ in }\ \widetilde{\Omega}_{3}.\end{cases}\]
Then in \(\Omega_{q}\cup\widetilde{\Omega}_{3}\),
\[\mathcal{L}(\widetilde{Z}_{q})=\mathcal{L}(\widehat{Z}_{q})=\left(\frac{ \varepsilon_{0}}{\rho_{0}v_{0}}\right)^{3}\frac{8(1+\alpha)^{2}\big{|}\frac{ \varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}}{\left(1+\big{|}\frac{ \varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\alpha)}\right)^{2}}O\left( \frac{\log|\varepsilon_{0}y-q|-\log(R\rho_{0}v_{0})}{|\log\varepsilon_{0}|} \right). \tag{3.44}\]
In \(\Omega_{3,k}\) with \(k=1,\ldots,m\), by (2.29), (3.22) and (3.37),
\[\mathcal{L}(\widetilde{Z}_{q}) =\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\frac{8(1+ \alpha)^{2}\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}}{ \left(1+\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\alpha)} \right)^{2}}Z_{q}-\left[\Delta_{a(\varepsilon_{0}y)}Z_{q}+\left(\frac{ \varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\frac{8(1+\alpha)^{2}\big{|}\frac{ \varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}}{\left(1+\big{|}\frac{ \varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\alpha)}\right)^{2}}Z_{q}\right] -W\widehat{Z}_{q}\] \[=\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\frac{8(1+ \alpha)^{2}\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}}{ \left(1+\big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2(1+\alpha)} \right)^{2}}Z_{q}+O\left(\frac{1}{\gamma_{k}^{2}}\frac{8}{\left(1+\big{|} \frac{y-\varepsilon_{k}^{\prime}}{\gamma_{k}}\big{|}^{2}\right)^{2}}\cdot \frac{\varepsilon_{0}G(\varepsilon_{0}y,q)}{\rho_{0}v_{0}|\log\varepsilon_{0}|}\right)\] \[=O\left(\frac{1}{\gamma_{k}^{2}}\frac{8}{\left(1+\big{|}\frac{y- \varepsilon_{k}^{\prime}}{\gamma_{k}}\big{|}^{2}\right)^{2}}\cdot\frac{ \varepsilon_{0}\log t}{\rho_{0}v_{0}|\log\varepsilon_{0}|}\right). \tag{3.45}\]
In \(\Omega_{4}\), thanks to \(|\nabla\eta_{q2}|=O\left(\varepsilon_{0}\right)\), \(|\Delta_{a(\varepsilon_{0}y)}\eta_{q2}|=O\left(\varepsilon_{0}^{2}\right)\),
\[|\widehat{Z}_{q}|=O\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}|\log\varepsilon_{0} |}\right)\hskip 56.905512pt\text{and}\hskip 56.905512pt|\nabla\widehat{Z}_{q}|=O\left( \frac{\varepsilon_{0}^{2}}{\rho_{0}v_{0}|\log\varepsilon_{0}|}\right), \tag{3.46}\]
by (3.37) we get
\[\mathcal{L}(\widetilde{Z}_{q}) =-\eta_{q2}\Delta_{a(\varepsilon_{0}y)}Z_{q}-\eta_{q2}W\widetilde{Z} _{q}-2\nabla\eta_{q2}\nabla\widehat{Z}_{q}-\widehat{Z}_{q}\Delta_{a(\varepsilon_ {0}y)}\eta_{q2}\] \[=-\eta_{q2}W\widehat{Z}_{q}+O\left(\left(\frac{\varepsilon_{0}}{ \rho_{0}v_{0}}\right)^{3}\left(\frac{|\varepsilon_{0}y-q|}{\rho_{0}v_{0}} \right)^{-4-2\alpha}+\varepsilon_{0}\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0} }\right)^{2}\left(\frac{|\varepsilon_{0}y-q|}{\rho_{0}v_{0}}\right)^{-3-2 \alpha}+\frac{\varepsilon_{0}^{3}}{\rho_{0}v_{0}|\log\varepsilon_{0}|}\right). \tag{3.47}\]
From the previous choice of the number \(d\) we get that for any \(y\in\Omega_{4}\) and any \(k=1,\ldots,m\),
\[|y-\xi_{k}^{\prime}|\geq|y-q^{\prime}|-|q^{\prime}-\xi_{k}^{\prime}|\geq\frac {3d}{\varepsilon_{0}}-\frac{d}{\varepsilon_{0}}=\frac{2d}{\varepsilon_{0}}> \frac{1}{\varepsilon_{0}t^{2\beta}}.\]
This combined with (2.30) gives
\[W=O\left(\varepsilon_{0}^{2}e^{-t\phi_{1}(\varepsilon_{0}y)}\right)\quad \text{ in }\ \Omega_{4}. \tag{3.48}\]
Hence in this region,
\[\mathcal{L}(\widetilde{Z}_{q})=O\left(\frac{\varepsilon_{0}^{3}}{\rho_{0}v_{0 }|\log\varepsilon_{0}|}\right). \tag{3.49}\]
Combing (3.39), (3.43), (3.44), (3.45) and (3.49) with the definition of the \(\|\cdot\|_{*}\)-norm in (3.5), we conclude
\[\big{\|}\mathcal{L}(\widetilde{Z}_{q})\big{\|}_{*}=O\left(\frac{\varepsilon_{ 0}\log t}{\rho_{0}v_{0}|\log\varepsilon_{0}|}\right).\]
The inequalities in (3.32) are easy to get as they are very similar to the consideration of inequality (3.31). We leave the details for readers.
**Proof of Claim 2.** Testing equation (3.28) against \(a(\varepsilon_{0}y)\widetilde{Z}_{q}\) and using estimates (3.29)-(3.30), we can derive that
\[d_{q}\int_{\Omega_{t}} a(\varepsilon_{0}y)\widetilde{Z}_{q}\mathcal{L}( \widetilde{Z}_{q})+\sum_{k=1}^{m}d_{k}\int_{\Omega_{t}}a(\varepsilon_{0}y) \widetilde{Z}_{q}\mathcal{L}(\widetilde{Z}_{k0})\] \[= -\int_{\Omega_{t}}a(\varepsilon_{0}y)h\widetilde{Z}_{q}+\int_{ \Omega_{t}}a(\varepsilon_{0}y)\widetilde{\phi}\mathcal{L}(\widetilde{Z}_{q}) -\sum_{k=1}^{m}\sum_{l=1}^{2}e_{kl}\int_{\Omega_{t}}a(\varepsilon_{0}y)\chi_{ k}Z_{kl}\mathcal{L}(\widetilde{Z}_{q})\] \[\leq \frac{C\varepsilon_{0}}{\rho_{0}v_{0}}\|h\|_{*}+C\big{\|}\mathcal{ L}(\widetilde{Z}_{q})\big{\|}_{*}\left(\big{\|}\widetilde{\phi}\big{\|}_{L^{ \infty}(\Omega_{t})}+\sum_{k=1}^{m}\sum_{l=1}^{2}\frac{1}{\gamma_{k}}|e_{kl}|\right)\] \[\leq \frac{C\varepsilon_{0}}{\rho_{0}v_{0}}\|h\|_{*}+C\big{\|}\mathcal{ L}(\widetilde{Z}_{q})\big{\|}_{*}\left[\|h\|_{*}+|d_{q}\big{\|}\big{\|} \mathcal{L}(\widetilde{Z}_{q})\big{\|}_{*}+\sum_{k=1}^{m}|d_{k}|\big{\|} \mathcal{L}(\widetilde{Z}_{k0})\big{\|}_{*}+\sum_{k=1}^{m}\sum_{l=1}^{2}\big{|} e_{kl}|\left(\frac{1}{\gamma_{k}}+\big{\|}\mathcal{L}(\chi_{k}Z_{kl})\big{\|}_{*}\right) \right],\]
where we have applied that
\[\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\int_{\Omega_{t}}\frac{ \big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}}{\big{(}1+ \big{|}\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\big{|}\big{)}^{4+2\hat{\alpha}+ 2\alpha}dy}\leq C\quad\text{ and }\quad\frac{1}{\gamma_{i}^{2}}\int_{\Omega_{t}}\frac{1}{\big{(}1+\big{|} \frac{y-\xi_{i}^{\prime}}{\gamma_{i}}\big{|}\big{)}^{4+2\hat{\alpha}}}dy\leq C,\ \ i=1,\ldots,m. \tag{3.50}\]
But estimate (3.35) and Claim 1 imply
\[|d_{q}|\left|\int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{q}\mathcal{L}( \widetilde{Z}_{q})\right|\leq\frac{C\varepsilon_{0}}{\rho_{0}v_{0}}\|h\|_{*}+ \frac{C\varepsilon_{0}\log^{2}t}{\rho_{0}v_{0}|\log\varepsilon_{0}|}\left[ \frac{\varepsilon_{0}|d_{q}|}{\rho_{0}v_{0}|\log\varepsilon_{0}|}+\sum_{k=1}^{m }\frac{|d_{k}|}{\gamma_{k}|\log\varepsilon_{k}|}\right]+\sum_{k=1}^{m}\Big{|}d_ {k}\int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}( \widetilde{Z}_{q})\right|. \tag{3.51}\]
Similarly, testing (3.28) against \(a(\varepsilon_{0}y)\widetilde{Z}_{i0}\) and using (3.29), (3.30), (3.35) and Claim 1, we obtain
\[|d_{i}|\left|\int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{i0 }\mathcal{L}(\widetilde{Z}_{i0})\right| \leq\frac{C\|h\|_{*}}{\gamma_{i}}+\frac{C\log^{2}t}{\gamma_{i}| \log\varepsilon_{i}|}\left[\frac{\varepsilon_{0}|d_{q}|}{\rho_{0}v_{0}|\log \varepsilon_{0}|}+\sum_{k=1}^{m}\frac{|d_{k}|}{\gamma_{k}|\log\varepsilon_{k}| }\right]+\left|d_{q}\int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{i0} \mathcal{L}(\widetilde{Z}_{q})\right|\] \[\quad+\sum_{k\neq i}^{m}\left|d_{k}\int_{\Omega_{t}}a(\varepsilon _{0}y)\widetilde{Z}_{k0}\mathcal{L}(\widetilde{Z}_{i0})\right|. \tag{3.52}\]
To achieve the estimates of \(d_{q}\), \(d_{i}\) and \(e_{ij}\) in (3.33), we have the following claim.
**Claim 3**.: _If \(d\) is sufficiently small, but \(R\) is sufficiently large, then we have that for any \(i,k=1,\ldots,m\) with \(i\neq k\),_
\[\int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{i0}\mathcal{L}( \widetilde{Z}_{i0}) =\frac{2\pi a(\xi_{i})}{\gamma_{i}^{2}|\log\varepsilon_{i}|}\left[1+O \left(\frac{1}{R^{2}}\right)\right], \int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}( \widetilde{Z}_{i0}) =O\left(\frac{\log^{2}t}{\gamma_{i}\gamma_{k}|\log\varepsilon_{i}| |\log\varepsilon_{k}|}\right), \tag{3.53}\]
_and_
\[\int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{q}\mathcal{L}( \widetilde{Z}_{q}) =\frac{2\pi(1+\alpha)a(q)}{|\log\varepsilon_{0}|}\left(\frac{ \varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\left[1+O\left(\frac{1}{R^{2(1+ \alpha)}}\right)\right], \int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}( \widetilde{Z}_{q}) =O\left(\frac{\varepsilon_{0}\log^{2}t}{\rho_{0}v_{0}\gamma_{k}| \log\varepsilon_{0}|\log\varepsilon_{k}|}\right). \tag{3.54}\]
Indeed, once Claim 3 is valid, then replacing (3.53) and (3.54) in (3.52) and (3.51), respectively, we conclude
\[\frac{\varepsilon_{0}|d_{q}|}{\rho_{0}v_{0}|\log\varepsilon_{0}|}\leq C\|h\|_{ *}+\frac{C\log^{2}t}{|\log\varepsilon_{0}|}\left(\frac{\varepsilon_{0}|d_{q} |}{\rho_{0}v_{0}|\log\varepsilon_{0}|}+\sum_{k=1}^{m}\frac{|d_{k}|}{\gamma_{ k}|\log\varepsilon_{k}|}\right), \tag{3.55}\]
and for any \(i=1,\ldots,m\),
\[\frac{|d_{i}|}{\gamma_{i}|\log\varepsilon_{i}|}\leq C\|h\|_{*}+\frac{C\log^{2 }t}{|\log\varepsilon_{i}|}\left(\frac{\varepsilon_{0}|d_{q}|}{\rho_{0}v_{0}| \log\varepsilon_{0}|}+\sum_{k=1}^{m}\frac{|d_{k}|}{\gamma_{k}|\log\varepsilon_ {k}|}\right). \tag{3.56}\]
As a result, using linear algebra arguments for (3.55)-(3.56), by (2.7) we can prove Claim 2 for \(d_{q}\) and \(d_{i}\), and then complete the proof by inequality (3.35).
**Proof of Claim 3.** Let us first establish the validity of the two expansions in (3.54). We decompose
\[\int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{q}\mathcal{L}(\widetilde{Z} _{q})=\sum_{l=1}^{4}\int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{q} \mathcal{L}(\widetilde{Z}_{q})\equiv\sum_{l=1}^{4}I_{l}.\]
From (3.3) and (3.39) we obtain
\[I_{1}=\int_{\Omega_{1}}a(\varepsilon_{0}y)Z_{q}\mathcal{L}(Z_{q})=\int_{ \Omega_{1}}\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{4}\left|\frac{ \varepsilon_{0}y-q}{\rho_{0}v_{0}}\right|^{2\alpha}\left[O\left(\varepsilon_{ 0}^{\sigma}|y-q^{\prime}|^{\sigma}\right)+O\left(\rho_{0}v_{0}\right)\right]= \left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}O\left(\rho_{0}^{ \sigma}v_{0}^{\sigma}\right)\]
From (3.22), (3.44) and (3.45), we deduce
\[I_{3} =\int_{\Omega_{q}\cup\widetilde{\Omega_{3}}}a(\varepsilon_{0}y) \widehat{Z}_{q}\mathcal{L}(\widehat{Z}_{q})+\sum_{k=1}^{m}\int_{\Omega_{3,k}} a(\varepsilon_{0}y)\widehat{Z}_{q}\mathcal{L}(\widehat{Z}_{q})\] \[=\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}O\left( \int_{R+1}^{3d/(\rho_{0}v_{0})}\frac{r^{1+2\alpha}\log(r/R)}{(1+r^{2(1+\alpha) })^{2}}\frac{\log(\rho_{0}v_{0}r)}{|\log\varepsilon_{0}|^{2}}dr+\sum_{k=1}^{m }\int_{0}^{1/(\varepsilon_{0}\gamma_{k}t^{2\beta})}\frac{r}{(1+r^{2})^{2}} \frac{\log^{2}t}{|\log\varepsilon_{0}|^{2}}dr\right)\] \[=\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\left[O \left(\frac{1}{R^{2(1+\alpha)}|\log\varepsilon_{0}|}\right)+O\left(\frac{\log^{2 }t}{|\log\varepsilon_{0}|^{2}}\right)\right].\]
From (3.46) and (3.49), we derive that
\[I_{4}=\int_{\Omega_{4}}a(\varepsilon_{0}y)\eta_{q2}\widehat{Z}_{q}\mathcal{L}( \widetilde{Z}_{q})=\int_{\left\{\frac{\alpha d}{\rho_{0}v_{0}}\subset\left| \frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\right|\leq\frac{\alpha d}{\rho_{0}v_{ 0}}\right\}}O\left(\frac{\varepsilon_{0}^{4}}{\rho_{0}^{2}v_{0}^{2}|\log \varepsilon_{0}|^{2}}\right)dy=O\left(\frac{\varepsilon_{0}^{2}}{\rho_{0}^{2}v_ {0}^{2}|\log\varepsilon_{0}|^{2}}\right).\]
As for \(I_{2}\), by (3.40) we get
\[I_{2}= -\int_{\Omega_{2}}a(\varepsilon_{0}y)\widetilde{Z}_{q}(Z_{q}- \widehat{Z}_{q})\Delta_{a(\varepsilon_{0}y)}\eta_{q1}-2\int_{\Omega_{2}}a( \varepsilon_{0}y)\widetilde{Z}_{q}\nabla\eta_{q1}\nabla(Z_{q}-\widehat{Z}_{q})\] \[+\int_{\Omega_{2}}a(\varepsilon_{0}y)\widetilde{Z}_{q}\big{[} \mathcal{L}(Z_{q})+(1-\eta_{q1})W(Z_{q}-\widehat{Z}_{q})\big{]}.\]
Integrating by parts the first term and using estimates (2.28), (3.37) and (3.42) for the last term, we obtain
\[I_{2}= -\int_{\Omega_{2}}a(\varepsilon_{0}y)\widehat{Z}_{q}\nabla\eta_ {q1}\nabla(Z_{q}-\widehat{Z}_{q})+\int_{\Omega_{2}}a(\varepsilon_{0}y)(Z_{q} -\widehat{Z}_{q})^{2}|\nabla\eta_{q1}|^{2}\] \[+\int_{\Omega_{2}}a(\varepsilon_{0}y)(Z_{q}-\widehat{Z}_{q}) \nabla\eta_{q1}\nabla\widehat{Z}_{q}+\left(\frac{\varepsilon_{0}}{\rho_{0}v_{ 0}}\right)^{2}O\left(\frac{1}{R^{3+2\alpha}|\log\varepsilon_{0}|}\right)\] \[=I_{21}+I_{22}+I_{23}+\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0} }\right)^{2}O\left(\frac{1}{R^{3+2\alpha}|\log\varepsilon_{0}|}\right).\]
From (3.2), (3.3), (3.23) and (3.42) we find \(|\nabla\eta_{q1}|=O\big{(}\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\big{)}\) and \(|\nabla\widehat{Z}_{q}|=O\big{(}\frac{\varepsilon_{0}^{2}}{R^{3+2\alpha}\rho_ {0}^{2}v_{0}^{2}}\big{)}\) in \(\Omega_{2}\). Furthermore,
\[I_{22}=O\left(\frac{\varepsilon_{0}^{2}}{R\rho_{0}^{2}v_{0}^{2}|\log \varepsilon_{0}|^{2}}\right)\qquad\qquad\text{and}\qquad\qquad I_{23}=O\left( \frac{\varepsilon_{0}^{2}}{R^{3+2\alpha}\rho_{0}^{2}v_{0}^{2}|\log\varepsilon _{0}|}\right).\]
Since \(a(\varepsilon_{0}y)=a(q)\big{[}1+O(\varepsilon_{0}|y-q^{\prime}|)\big{]}\) and \(\widehat{Z}_{q}=Z_{q}\big{[}1+O\big{(}\frac{1}{R|\log\varepsilon_{0}|}\big{)} \big{]}\) in \(\Omega_{2}\), by (2.8), (2.22), (3.2), (3.3), (3.23) and (3.41) we conclude
\[I_{21}= -\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{3}\frac{1}{ H(q,q)-4\log(\rho_{0}v_{0}R)}\int_{\left\{R<\left|\frac{\varepsilon_{0}y-q}{\rho_{0}v_ {0}}\right|\leq R+1\right\}}\frac{a(\varepsilon_{0}y)}{|y-q^{\prime}|}\mathcal{Z }_{q}\left(\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\right)\eta_{1}^{\prime} \left(\frac{|\varepsilon_{0}y-q|}{\rho_{0}v_{0}}\right)\big{(}4+o(1)\big{)}dy\] \[= -\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\frac{8\pi a (q)}{H(q,q)-4\log(\rho_{0}v_{0}R)}\int_{R}^{R+1}\eta_{1}^{\prime}(r)\left[1+O \left(\frac{1}{r^{2(1+\alpha)}}\right)\right]dr\] \[= \frac{2\pi(1+\alpha)a(q)}{|\log\varepsilon_{0}|}\left(\frac{ \varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\left[1+O\left(\frac{1}{R^{2(1+ \alpha)}}\right)\right].\]
Combining all these estimates, we have that for \(R\) and \(t\) large enough, and \(d\) small enough,
\[\int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{q}\mathcal{L}(\widetilde{Z} _{q})=\frac{2\pi(1+\alpha)a(q)}{|\log\varepsilon_{0}|}\left(\frac{ \varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\left[1+O\left(\frac{1}{R^{2(1+ \alpha)}}\right)\right]. \tag{3.57}\]
According to (3.51), we only need to consider \(\int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}(\widetilde{Z} _{q})\) for all \(k\). By the above estimates of \(\mathcal{L}(\widetilde{Z}_{q})\) and \(\widetilde{Z}_{k0}\), we have clearly
\[\int_{\Omega_{1}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}( \widetilde{Z}_{q})=O\left(\frac{\varepsilon_{0}\rho_{0}^{\prime}v_{0}^{\prime} \log t}{\rho_{0}v_{0}\gamma_{k}|\log\varepsilon_{k}|}\right),\qquad\qquad\qquad \int_{\Omega_{2}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}(\widetilde{Z} _{q})=O\left(\frac{\varepsilon_{0}\log t}{\rho_{0}v_{0}\gamma_{k}|\log\varepsilon_{0 }||\log\varepsilon_{k}|}\right),\] \[\int_{\Omega_{4}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}( \widetilde{Z}_{q})=O\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}\gamma_{k}|\log \varepsilon_{0}||\log\varepsilon_{k}|}\right),\qquad\qquad\int_{\Omega_{q}\cup \widetilde{\Omega}_{3}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}( \widetilde{Z}_{q})=O\left(\frac{\varepsilon_{0}\log t}{\rho_{0}v_{0}\gamma_{k}| \log\varepsilon_{0}||\log\varepsilon_{k}|}\right),\]
and
\[\int_{\Omega_{3}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}(\widetilde{Z}_{q} )=O\left(\frac{\varepsilon_{0}\log^{2}t}{\rho_{0}v_{0}\gamma_{k}|\log \varepsilon_{0}||\log\varepsilon_{k}|}\right)\qquad\text{for all }\,l\neq k.\]
It remains to calculate the integral over \(\Omega_{3,k}\). Using (3.25) and an integration by parts, we obtain
\[\int_{\Omega_{3,k}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}(\widetilde{Z }_{q})=\int_{\Omega_{3,k}}a(\varepsilon_{0}y)\widehat{Z}_{q}\mathcal{L}( \widetilde{Z}_{k0})-\int_{\partial\Omega_{3,k}}a(\varepsilon_{0}y)\widehat{Z}_ {k0}\frac{\partial\widehat{Z}_{q}}{\partial\nu}+\int_{\partial\Omega_{3,k}}a( \varepsilon_{0}y)\widehat{Z}_{q}\frac{\partial\widehat{Z}_{k0}}{\partial\nu}.\]
Let us decompose
\[\int_{\Omega_{3,k}}a(\varepsilon_{0}y)\widehat{Z}_{q}\mathcal{L}(\widetilde{Z} _{k0})=\left(\int_{\left\{|y-\xi^{\prime}_{k}|\leq\gamma_{k}R\right\}}+\int_{ \left\{\gamma_{k}R<|y-\xi^{\prime}_{k}|\leq\gamma_{k}(R+1)\right\}}+\int_{ \left\{\gamma_{k}(R+1)<|y-\xi^{\prime}_{k}|\leq 1/(\varepsilon_{0}t^{2\beta}) \right\}}\right)a(\varepsilon_{0}y)\widehat{Z}_{q}\mathcal{L}(\widetilde{Z}_{ k0}).\]
In a straightforward but tedious way, we can compute that for \(|y-\xi^{\prime}_{k}|\leq\gamma_{k}R\),
\[\mathcal{L}(\widetilde{Z}_{k0})=\mathcal{L}(Z_{k0})=O\left(\rho_{0}^{\sigma} v_{0}^{\sigma}/\gamma_{k}^{3}\right)+\sum_{j=1}^{m}O\left(\varepsilon_{j}^{ \sigma}\mu_{j}^{\sigma}/\gamma_{k}^{3}\right),\]
for \(\gamma_{k}R<|y-\xi^{\prime}_{k}|\leq\gamma_{k}(R+1)\),
\[\mathcal{L}(\widetilde{Z}_{k0})=O\left(\frac{1}{R\gamma_{k}^{3}|\log \varepsilon_{k}|}\right),\]
and for \(\gamma_{k}(R+1)<|y-\xi^{\prime}_{k}|\leq 1/(\varepsilon_{0}t^{2\beta})\),
\[\mathcal{L}(\widetilde{Z}_{k0})=\mathcal{L}(\widehat{Z}_{k0})=O\left(\frac{ \log|y-\xi^{\prime}_{k}|-\log(R\gamma_{k})}{\left(1+\left|\frac{y-\xi^{\prime }_{k}}{\gamma_{k}}\right|^{2}\right)^{2}}\cdot\frac{1}{\gamma_{k}^{3}|\log \varepsilon_{k}|}\right).\]
These combined the estimate of \(\widehat{Z}_{q}\) in (3.22) give
\[\int_{\Omega_{3,k}}a(\varepsilon_{0}y)\widehat{Z}_{q}\mathcal{L}(\widetilde{Z }_{k0})=O\left(\frac{\varepsilon_{0}\log t}{\rho_{0}v_{0}\gamma_{k}|\log \varepsilon_{0}||\log\varepsilon_{k}|}\right).\]
As on \(\partial\Omega_{3,k}\), by (2.2) and (3.22) we know
\[\widehat{Z}_{q}=O\left(\frac{\varepsilon_{0}\log t}{\rho_{0}v_{0}|\log \varepsilon_{0}|}\right), \left|\nabla\widehat{Z}_{q}\right|=O\left(\frac{\varepsilon_{0}^{2}t^{ \beta}}{\rho_{0}v_{0}|\log\varepsilon_{0}|}\right),\]
and
\[\widehat{Z}_{k0}=O\left(\frac{\log t}{\gamma_{k}|\log\varepsilon_{k}|}\right), \left|\nabla\widehat{Z}_{k0}\right|=O\left(\frac{\varepsilon_{0}t^{2 \beta}}{\gamma_{k}|\log\varepsilon_{k}|}\right).\]
Thus
\[\int_{\Omega_{3,k}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}(\widetilde {Z}_{q})=O\left(\frac{\varepsilon_{0}\log t}{\rho_{0}v_{0}\gamma_{k}|\log \varepsilon_{0}||\log\varepsilon_{k}|}\right).\]
By the above estimates, we readily have
\[\int_{\Omega_{t}}a(\varepsilon_{0}y)\widetilde{Z}_{k0}\mathcal{L}(\widetilde{Z }_{q})=O\left(\frac{\varepsilon_{0}\log^{2}t}{\rho_{0}v_{0}\gamma_{k}|\log \varepsilon_{0}||\log\varepsilon_{k}|}\right),\ \ \ \ k=1,\ldots,m. \tag{3.58}\]
The two expansions in (3.53) are easy to get as they are very similar to the above consideration for the two expansions in (3.54). We leave the details for readers.
**Step 4:** Proof of Proposition 3.1. Let us first establish the validity of the a priori estimate (3.6). From the previous lemma and the fact that \(\|\chi_{i}Z_{ij}\|_{*}=O(\gamma_{i})\), we give
\[\|\phi\|_{L^{\infty}(\Omega_{t})}\leq Ct\left(\|h\|_{*}+\sum_{i=1}^{m}\sum_{j= 1}^{2}\gamma_{i}|c_{ij}|\right), \tag{3.59}\]
hence it is sufficient to estimate the size of the constants \(c_{ij}\). Let us consider the cut-off function \(\eta_{i2}\) introduced in (3.24). Testing (3.1) against \(a(\varepsilon_{0}y)\eta_{i2}Z_{ij}\), \(i=1,\ldots,m\) and \(j=1,2\), we obtain
\[\int_{\Omega_{t}}a(\varepsilon_{0}y)\phi\mathcal{L}(\eta_{i2}Z_{ij})=\int_{ \Omega_{t}}a(\varepsilon_{0}y)h\eta_{i2}Z_{ij}+\sum_{k=1}^{m}\sum_{l=1}^{2}c_{ kl}\int_{\Omega_{t}}\chi_{k}Z_{kl}\eta_{i2}Z_{ij}. \tag{3.60}\]
For any \(i=1,\ldots,m\) and \(j=1,2\),
\[\Delta_{a(\varepsilon_{0}y)}Z_{ij}+\frac{1}{\gamma_{i}^{2}}\frac{8}{\left(1+ \left|\frac{y-\xi_{i}^{\prime}}{\gamma_{i}}\right|^{2}\right)^{2}}Z_{ij}= \varepsilon_{0}\nabla\log a(\varepsilon_{0}y)\nabla Z_{ij}=O\left(\frac{ \varepsilon_{0}}{\gamma_{i}^{2}}\left[1+\frac{|y-\xi_{i}^{\prime}|}{\gamma_{i }}\right]^{-2}\right).\]
Then
\[\mathcal{L}(\eta_{i2}Z_{ij})= \,\eta_{i2}\mathcal{L}(Z_{ij})-Z_{ij}\Delta_{a(\varepsilon_{0}y) }\eta_{i2}-2\nabla\eta_{i2}\nabla Z_{ij}\] \[= \left[\frac{1}{\gamma_{i}^{2}}\frac{8}{\left(1+\left|\frac{y- \xi_{i}^{\prime}}{\gamma_{i}}\right|^{2}\right)^{2}}-W\right]\eta_{i2}Z_{ij}- \eta_{i2}\left[\Delta_{a(\varepsilon_{0}y)}Z_{ij}+\frac{1}{\gamma_{i}^{2}} \frac{8}{\left(1+\left|\frac{y-\xi_{i}^{\prime}}{\gamma_{i}}\right|^{2} \right)^{2}}Z_{ij}\right]+O\left(\varepsilon_{0}^{3}\right)\] \[\equiv \,\mathcal{B}_{ij}+O\left(\frac{\varepsilon_{0}}{\gamma_{i}^{2}} \left[1+\frac{|y-\xi_{i}^{\prime}|}{\gamma_{i}}\right]^{-2}\right)+O\left( \varepsilon_{0}^{3}\right),\]
where
\[\mathcal{B}_{ij}=\left[\frac{1}{\gamma_{i}^{2}}\frac{8}{\left(1+\left|\frac{y -\xi_{i}^{\prime}}{\gamma_{i}}\right|^{2}\right)^{2}}-W\right]\eta_{i2}Z_{ij}.\]
For the estimate of \(\mathcal{B}_{ij}\), we decompose \(\mathrm{supp}(\eta_{i2})\) to some subregions:
\[\widehat{\Omega}_{q}=\mathrm{supp}(\eta_{i2})\bigcap\left\{|y-q^{ \prime}|\leq 1/(\varepsilon_{0}t^{2\beta})\right\},\qquad\qquad\widehat{ \Omega}_{k1}=\mathrm{supp}(\eta_{i2})\bigcap\left\{|y-\xi_{k}^{\prime}|\leq 1/( \varepsilon_{0}t^{2\beta})\right\},\ \ k=1,\ldots,m,\] \[\widehat{\Omega}_{2}=\mathrm{supp}(\eta_{i2})\setminus\left[ \bigcup_{k=1}^{m}\widehat{\Omega}_{k1}\cup\widehat{\Omega}_{q}\right],\]
where \(\mathrm{supp}(\eta_{i2})=\{|y-\xi_{i}^{\prime}|\leq 6d/\varepsilon_{0}\}\). Notice that, by (2.2),
\[|y-\xi_{i}^{\prime}|\geq|\xi_{i}^{\prime}-q^{\prime}|-|y-q^{\prime}|\geq|\xi_ {i}^{\prime}-q^{\prime}|-\frac{1}{\varepsilon_{0}t^{2\beta}}\geq\frac{1}{ \varepsilon_{0}t^{\beta}}\left(1-\frac{1}{t^{\beta}}\right)>\frac{1}{2 \varepsilon_{0}t^{\beta}} \tag{3.61}\]
uniformly in \(\widehat{\Omega}_{q}\), and
\[|y-\xi_{i}^{\prime}|\geq|\xi_{i}^{\prime}-\xi_{k}^{\prime}|-|y-\xi_{k}^{ \prime}|\geq|\xi_{i}^{\prime}-\xi_{k}^{\prime}|-\frac{1}{\varepsilon_{0}t^{2 \beta}}\geq\frac{1}{\varepsilon_{0}t^{\beta}}\left(1-\frac{1}{t^{\beta}}\right) >\frac{1}{2\varepsilon_{0}t^{\beta}} \tag{3.62}\]
uniformly in \(\widehat{\Omega}_{k1}\) with \(k\neq i\). From expansions (2.28)-(2.30) of \(W\) and definition (3.4) of \(Z_{ij}\) we get, in \(\widehat{\Omega}_{i1}\),
\[\mathcal{B}_{ij}=\frac{1}{\gamma_{i}^{3}}\frac{8}{\left(1+\left|\frac{y-\xi_{i }^{\prime}}{\gamma_{i}}\right|^{2}\right)^{5/2}}\left[O\left(\varepsilon_{0}^{ \sigma}|y-\xi_{i}^{\prime}|^{\sigma}\right)+O\left(\rho_{0}^{\sigma}v_{0}^{ \sigma}\right)+\sum_{j=1}^{m}O\left(\varepsilon_{j}^{\sigma}\mu_{j}^{\sigma} \right)\right],\]
and in \(\widehat{\Omega}_{q}\), by (3.61),
\[\mathcal{B}_{ij}=\left[O\left(\frac{\gamma_{i}^{2}}{|y-\xi_{i}^{\prime}|^{4}} \right)+O\left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}O\left(\frac{8( 1+\alpha)^{2}\left|\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\right|^{2\alpha}}{ \left(1+\left|\frac{\varepsilon_{0}y-q}{\rho_{0}v_{0}}\right|^{2(1+\alpha)} \right)^{2}}\right)\right]O\left(\varepsilon_{0}t^{\beta}\right),\]
and in \(\widehat{\Omega}_{k1}\), \(k\neq i\), by (3.62),
\[\mathcal{B}_{ij}=\left[O\left(\frac{\gamma_{i}^{2}}{|y-\xi_{i}^{\prime}|^{4}} \right)+O\left(\frac{1}{\gamma_{k}^{2}}\frac{8}{\left(1+\left|\frac{y-\xi_{k}^{ \prime}}{\gamma_{k}}\right|^{2}\right)^{2}}\right)\right]O\left(\varepsilon_{0} t^{\beta}\right),\]
and in \(\widehat{\Omega}_{2}\),
\[\mathcal{B}_{ij}=O\left(\varepsilon_{0}^{3}\varepsilon_{i}^{2}\mu_{i}^{2}t^{10 \beta}\right)+O\left(\varepsilon_{0}^{3}t^{2\beta(4m+5+2\alpha)}e^{-t\phi_{1}( \varepsilon_{0}y)}\right).\]
So,
\[\left|\int_{\Omega_{t}}a(\varepsilon_{0}y)\phi\mathcal{L}(\eta_{i2}Z_{ij}) \right|\leq C\frac{1}{\gamma_{i}}\max\big{\{}(\rho_{0}v_{0})^{\sigma},( \varepsilon_{1}\mu_{1})^{\sigma},\ldots,(\varepsilon_{m}\mu_{m})^{\sigma} \big{\}}\|\phi\|_{L^{\infty}(\Omega_{t})}.\]
On the other hand, since \(\|\eta_{i2}Z_{ij}\|_{L^{\infty}(\Omega_{t})}\leq C\gamma_{i}^{-1}\), by (3.5) and (3.50) we know that
\[\int_{\Omega_{t}}a(\varepsilon_{0}y)h\eta_{i2}Z_{ij}=O\left(\frac{\|h\|_{*}}{ \gamma_{i}}\right).\]
Now, if \(k=i\), by (3.2), (3.3) and (3.4),
\[\int_{\Omega_{t}}\chi_{k}Z_{kl}\eta_{k2}Z_{kj}=\int_{\mathbb{R}^{2}}\chi(|z|) \mathcal{Z}_{l}(z)\mathcal{Z}_{j}(z)dz=C\delta_{lj},\]
while if \(k\neq i\), by (3.62),
\[\int_{\Omega_{t}}\chi_{k}Z_{kl}\eta_{i2}Z_{ij}=O\left(\gamma_{k}\varepsilon_{0 }t^{\beta}\right).\]
Using the above estimates in (3.60), we find
\[|c_{ij}|\leq C\left(\frac{1}{\gamma_{i}}\max\big{\{}(\rho_{0}v_{0})^{\sigma},( \varepsilon_{1}\mu_{1})^{\sigma},\ldots,(\varepsilon_{m}\mu_{m})^{\sigma} \big{\}}\|\phi\|_{L^{\infty}(\Omega_{t})}+\frac{1}{\gamma_{i}}\|h\|_{*}+\sum_ {k\neq i}^{m}\sum_{l=1}^{2}\gamma_{k}\varepsilon_{0}t^{\beta}|c_{kl}|\right),\]
and then
\[|c_{ij}|\leq C\frac{1}{\gamma_{i}}\Big{(}\max\big{\{}(\rho_{0}v_{0})^{\sigma},(\varepsilon_{1}\mu_{1})^{\sigma},\ldots,(\varepsilon_{m}\mu_{m})^{\sigma} \big{\}}\|\phi\|_{L^{\infty}(\Omega_{t})}+\|h\|_{*}\Big{)}.\]
Putting this estimate in (3.59), we conclude the validity of (3.6).
Let us consider the Hilbert space
\[H_{\xi}=\left\{\phi\in H_{0}^{1}(\Omega_{t})\,\Big{|}\int_{\Omega_{t}}\chi_{i} Z_{ij}\phi=0\quad\forall\ i=1,\ldots,m,\ j=1,2\right\}\]
with the norm \(\|\phi\|_{H_{\xi}}=\|\nabla\phi\|_{L^{2}(\Omega_{t})}\). Equation (3.1) is equivalent to find \(\phi\in H_{\xi}\), such that
\[\int_{\Omega_{t}}a(\varepsilon_{0}y)\nabla\phi\nabla\psi-\int_{\Omega_{t}}a( \varepsilon_{0}y)W\phi\psi=\int_{\Omega_{t}}a(\varepsilon_{0}y)h\psi,\ \ \ \ \ \forall\ \psi\in H_{\xi}.\]
By Fredholm's alternative this is equivalent to the uniqueness of solutions to this problem, which is guaranteed by (3.6).
**Lemma 3.4**.: _For any integer \(m\geq 1\), the operator \(\mathcal{T}\) is differentiable with respect to the variables \(\xi=(\xi_{1},\ldots,\xi_{m})\) in \(\mathcal{O}_{t}(q)\), precisely for any \(k=1,\ldots,m\) and \(l=1,2\),_
\[\|\partial_{\xi_{kl}^{\prime}}\mathcal{T}(h)\|_{L^{\infty}(\Omega_{t})}\leq Ct ^{2}\|h\|_{*}. \tag{3.63}\]
To prove this lemma, we give the following estimate.
**Lemma 3.5**.: _For any \(0<\sigma<1\) and \(\xi=(\xi_{1},\ldots,\xi_{m})\in\mathcal{O}_{t}(q)\), then we have that for any \(k=1,\ldots,m\) and \(l=1,2\),_
\[\partial_{\xi_{kl}^{\prime}}H_{i}(x)=O\left((\varepsilon_{k}\mu_{k})^{\sigma} \right),\]
_uniformly in \(\overline{\Omega}\), where \(H_{i}\), \(i=0,1,\ldots,m\), is defined as the solution of equation (2.10)._
Proof.: Differentiating equation (2.10) of \(H_{0}\)'s with respect to \(\xi_{kl}\), we obtain
\[\begin{cases}-\Delta_{a}\big{(}\partial_{\xi_{kl}}H_{0}\big{)}=\nabla\log a(x) \nabla\big{(}\partial_{\xi_{kl}}u_{0}\big{)}&\text{in}\ \ \Omega,\\ \partial_{\xi_{kl}}H_{0}=-\partial_{\xi_{kl}}u_{0}&\text{on}\ \,\partial \Omega,\end{cases}\]
where
\[\partial_{\xi_{kl}}u_{0}=2\partial_{\xi_{kl}}\log\mu_{0}-\frac{4\varepsilon_{0 }^{2}\mu_{0}^{2}\partial_{\xi_{kl}}\log\mu_{0}}{\varepsilon_{0}^{2}\mu_{0}^{2 }+|x-q|^{2(1+\alpha)}}\qquad\text{and}\qquad\big{|}\nabla\big{(}\partial_{\xi _{kl}}u_{0}\big{)}\big{|}\leq 4\varepsilon_{0}^{2}\mu_{0}^{2}\big{|}\partial_{ \xi_{kl}}\log\mu_{0}\big{|}\frac{2(1+\alpha)|x-q|^{1+2\alpha}}{(\varepsilon_{ 0}^{2}\mu_{0}^{2}+|x-q|^{2(1+\alpha)})^{2}}.\]
Clearly,
\[\big{\|}\partial_{\xi_{kl}}u_{0}\big{\|}_{C^{2}(\partial\Omega)}\leq C\big{|} \partial_{\xi_{kl}}\log\mu_{0}\big{|}\leq Ct^{\beta}.\]
For any \(\max\{1,\,2/(3+2\alpha)\}<p<2\), by the change of variables \(\rho_{0}v_{0}z=x-q\) we give
\[\int_{\Omega}\left|\frac{|x-q|^{1+2\alpha}}{(\varepsilon_{0}^{2}\mu_{0}^{2}+| x-q|^{2(1+\alpha)})^{2}}\right|^{p}dx=\int_{\Omega_{\rho_{0}v_{0}}}\frac{|z|^{(1+2 \alpha)p}}{(1+|z|^{2(1+\alpha)})^{2p}}(\rho_{0}v_{0})^{2-(3+2\alpha)p}dz=O \left(\frac{1}{(\rho_{0}v_{0})^{(3+2\alpha)p-2}}\right),\]
where \(\Omega_{\rho_{0}v_{0}}=\frac{1}{\rho_{0}v_{0}}(\Omega-\{q\})\). Then
\[\big{\|}\nabla\log a(x)\nabla\big{(}\partial_{\xi_{kl}}u_{0}\big{)}\big{\|}_{ L^{p}(\Omega)}\leq C\,\big{\|}\nabla\big{(}\partial_{\xi_{kl}}u_{0}\big{)} \big{\|}_{L^{p}(\Omega)}\leq C\frac{\varepsilon_{0}^{2}\mu_{0}^{2}\big{|} \partial_{\xi_{kl}}\log\mu_{0}\big{|}}{(\rho_{0}v_{0})^{(3+2\alpha)p-2}/p} \leq Ct^{\beta}(\rho_{0}v_{0})^{(2-p)/p}.\]
Applying \(L^{p}\) theory,
\[\big{\|}\partial_{\xi_{kl}}H_{0}\big{\|}_{W^{2,p}(\Omega)}\leq C\left(\big{\|} \Delta_{a}\big{(}\partial_{\xi_{kl}}H_{0}\big{)}\big{\|}_{L^{p}(\Omega)}+ \big{\|}\partial_{\xi_{kl}}H_{0}\big{\|}_{C^{2}(\partial\Omega)}\right)\leq Ct ^{\beta}.\]
By Sobolev embedding, we find that for any \(0<\gamma<2-(2/p)\),
\[\big{\|}\partial_{\xi_{kl}}H_{0}\big{\|}_{C^{\gamma}(\overline{\Omega})}\leq Ct ^{\beta}.\]
Similarly, using the above arguments for each \(H_{i}\), \(i=1,\ldots,m\) again, we readily get
\[\big{\|}\partial_{\xi_{kl}}H_{k}\big{\|}_{C^{\gamma}(\overline{\Omega})}\leq C (\varepsilon_{k}\mu_{k})^{(2-2p)/p}\qquad\quad\text{but}\qquad\big{\|} \partial_{\xi_{kl}}H_{i}\big{\|}_{C^{\gamma}(\overline{\Omega})}\leq Ct^{ \beta}\qquad\forall\ i\neq k.\]
By (2.1), (2.7), (2.8), (2.22), (2.23) and \(\xi^{\prime}=\xi/\varepsilon_{0}\), the lemma is then proven.
**Proof of Lemma 3.4.** Differentiating (3.1) with respect to \(\xi^{\prime}_{kl}\), formally \(Z=\partial_{\xi^{\prime}_{kl}}\phi\) should satisfy
\[\begin{cases}\mathcal{L}(Z)=\phi\,\partial_{\xi^{\prime}_{kl}}W+\frac{1}{a( \varepsilon_{0}y)}\sum_{i=1}^{m}\sum_{j=1}^{2}\Big{[}c_{ij}\partial_{\xi^{ \prime}_{kl}}(\chi_{i}Z_{ij})+\widetilde{c}_{ij}\chi_{i}Z_{ij}\Big{]}&\text{in} \ \ \Omega_{t},\\ Z=0&\text{on}\ \,\partial\Omega_{t},\\ \int_{\Omega_{t}}\chi_{i}Z_{ij}Z=-\int_{\Omega_{t}}\phi\partial_{\xi^{\prime} _{kl}}(\chi_{i}Z_{ij})&\forall\ i=1,\ldots,m,\ j=1,2,\end{cases}\]
where (still formally) \(\widetilde{c}_{ij}=\partial_{\xi^{\prime}_{kl}}(c_{ij})\). We consider the constants \(b_{ij}\) defined as
\[b_{ij}\int_{\Omega_{t}}\chi_{i}^{2}|Z_{ij}|^{2}=\int_{\Omega_{t}}\phi\, \partial_{\xi^{\prime}_{kl}}(\chi_{i}Z_{ij}),\]
and set
\[\widetilde{Z}=Z+\sum_{i=1}^{m}\sum_{j=1}^{2}b_{ij}\chi_{i}Z_{ij}.\]
Then
\[\begin{cases}\mathcal{L}(\widetilde{Z})=f+\frac{1}{a(\varepsilon_{0}y)}\sum_{i=1} ^{m}\sum_{j=1}^{2}\widetilde{c}_{ij}\chi_{i}Z_{ij}&\text{in}\quad\Omega_{t},\\ \widetilde{Z}=0&\text{on}\quad\partial\Omega_{t},\\ \int_{\Omega_{t}}\chi_{i}Z_{ij}\widetilde{Z}=0&\forall\ i=1,\dots,m,\ j=1,2, \end{cases}\]
where
\[f=\phi\,\partial_{\xi_{kl}^{\prime}}W+\sum_{i=1}^{m}\sum_{j=1}^{2}b_{ij} \mathcal{L}(\chi_{i}Z_{ij})+\frac{1}{a(\varepsilon_{0}y)}\sum_{i=1}^{m}\sum_{ j=1}^{2}c_{ij}\partial_{\xi_{kl}^{\prime}}(\chi_{i}Z_{ij}).\]
The result of Proposition 3.1 implies that this equation has a unique solution \(\widetilde{Z}\) and \(\widetilde{c}_{ij}\), and thus \(\partial_{\xi_{kl}^{\prime}}\mathcal{T}(h)=\mathcal{T}(f)-\sum_{i=1}^{m}\sum_ {j=1}^{2}b_{ij}\chi_{i}Z_{ij}\) is well defined. Moreover,
\[\|\partial_{\xi_{kl}^{\prime}}T(h)\|_{L^{\infty}(\Omega_{t})}\leq\|\mathcal{T }(f)\|_{L^{\infty}(\Omega_{t})}+C\sum_{i=1}^{m}\sum_{j=1}^{2}\frac{1}{\gamma_ {i}}|b_{ij}|\leq Ct\|f\|_{*}+C\sum_{i=1}^{m}\sum_{j=1}^{2}\frac{1}{\gamma_{i}} |b_{ij}|. \tag{3.64}\]
To prove estimate (3.63), we first estimate \(\partial_{\xi_{kl}^{\prime}}W\). Notice that \(\partial_{\xi_{kl}^{\prime}}W=W\partial_{\xi_{kl}^{\prime}}V\). Obviously, by (2.28), (2.29), (2.30) and (3.5) we have that \(\|W\|_{*}=O\left(1\right)\). Moreover, thanks to Lemma 3.5, by (2.4), (2.9), (2.22), (3.2) and (3.4) we can directly check that
\[\partial_{\xi_{kl}^{\prime}}V(y)=Z_{kl}(y)+O\left((\varepsilon_{k}\mu_{k})^{ \sigma}\right). \tag{3.65}\]
This, together with the fact that \(\frac{1}{\gamma_{k}}\leq C\) uniformly on \(t\), immediately implies
\[\|\partial_{\xi_{kl}^{\prime}}V\|_{L^{\infty}(\Omega_{t})}=O\left(1\right) \qquad\text{ and }\qquad\|\partial_{\xi_{kl}^{\prime}}W\|_{*}=O\left(1\right).\]
Next, by definitions (3.2) and (3.4), a straightforward but tedious computation shows that
\[\|\partial_{\xi_{kl}^{\prime}}(\chi_{i}Z_{ij})\|_{*}=\begin{cases}O\left(\left| \partial_{\xi_{kl}^{\prime}}\gamma_{i}\right|\right)&\text{if }\ i\neq k,\\ O\left(1\right)&\text{if }\ i=k,\end{cases}\qquad\qquad\qquad|b_{ij}|= \begin{cases}O\left(|\partial_{\xi_{kl}^{\prime}}\gamma_{i}|\right)\|\phi\|_{L^ {\infty}(\Omega_{t})}&\text{if }\ i\neq k,\\ O\left(1\right)\|\phi\|_{L^{\infty}(\Omega_{t})}&\text{if }\ i=k.\end{cases}\]
Furthermore, using (3.6), (3.32) and the fact that \(\left|\partial_{\xi_{kl}^{\prime}}\gamma_{i}\right|=O(\varepsilon_{0}^{\sigma})\) uniformly on \(t\), we find
\[\|f\|_{*}\leq Ct\|h\|_{*}\qquad\qquad\text{and}\qquad\qquad|b_{ij}|\leq Ct\|h \|_{*}.\]
Substituting these estimates into (3.64), we then prove (3.63).
## 4. The intermediate nonlinear problem
In order to solve problem (2.16), we shall solve first the intermediate nonlinear problem: for any integer \(m\geq 1\) and any points \(\xi=(\xi_{1},\dots,\xi_{m})\in\mathcal{O}_{t}(q)\), we find a function \(\phi\) and scalars \(c_{ij}\), \(i=1,\dots,m\), \(j=1,2\), such that
\[\begin{cases}\mathcal{L}(\phi)=-\Delta_{a(\varepsilon_{0}y)}\phi-W\phi=E+N( \phi)+\frac{1}{a(\varepsilon_{0}y)}\sum_{i=1}^{m}\sum_{j=1}^{2}c_{ij}\chi_{i}Z _{ij}&\text{in}\ \ \Omega_{t},\\ \phi=0&\text{on}\ \,\partial\Omega_{t},\\ \int_{\Omega_{t}}\chi_{i}Z_{ij}\phi=0&\forall\ i=1,\dots,m,\ j=1,2, \end{cases} \tag{4.1}\]
where \(W\) is as in (2.28), (2.29) and (2.30), and \(E\), \(N(\phi)\) are given by (2.18) and (2.19), respectively.
**Proposition 4.1**.: _Let \(m\) be a positive integer and \(0<\sigma<\min\{1/2,\,1-1/(2\beta),\,2(1+\alpha)\}\). Then there exist constants \(t_{m}>1\) and \(C>0\) such that for any \(t>t_{m}\) and any points \(\xi=(\xi_{1},\ldots,\xi_{m})\in\mathcal{O}_{t}(q)\), problem (4.1) admits a unique solution \(\phi\in L^{\infty}(\Omega_{t})\), and scalars \(c_{ij}\in\mathbb{R}\), \(i=1,\ldots,m\), \(j=1,2\), such that_
\[\|\phi\|_{L^{\infty}(\Omega_{t})}\leq Ct\max\big{\{}(\rho_{0}v_{0})^{\min\{ \sigma,2(\alpha-\hat{\alpha})\}},\,(\varepsilon_{1}\mu_{1})^{\sigma},\,\ldots, \,(\varepsilon_{m}\mu_{m})^{\sigma},\,\|e^{-\frac{1}{2}t\phi_{1}}\|_{L^{\infty }(\Omega_{t})}\big{\}}. \tag{4.2}\]
_Furthermore, the map \(\xi^{\prime}\mapsto\phi_{\xi^{\prime}}\in C(\overline{\Omega}_{t})\) is \(C^{1}\), precisely for any \(k=1,\ldots,m\) and \(l=1,2\),_
\[\|\partial_{\xi^{\prime}_{kl}}\phi\|_{L^{\infty}(\Omega_{t})}\leq Ct^{2}\max \big{\{}(\rho_{0}v_{0})^{\min\{\sigma,2(\alpha-\hat{\alpha})\}},\,(\varepsilon _{1}\mu_{1})^{\sigma},\,\ldots,\,(\varepsilon_{m}\mu_{m})^{\sigma},\,\|e^{- \frac{1}{2}t\phi_{1}}\|_{L^{\infty}(\Omega_{t})}\big{\}}, \tag{4.3}\]
_where \(\xi^{\prime}:=(\xi^{\prime}_{1},\ldots,\xi^{\prime}_{m})=(\frac{1}{ \varepsilon_{0}}\xi_{1},\ldots,\frac{1}{\varepsilon_{0}}\xi_{m})\)._
Proof.: Proposition 3.1 and Lemma 3.4 allow us to apply the contraction mapping theorem and the implicit function theorem to find a unique solution for problem (4.1) satisfying (4.2)-(4.3). Since it is a standard procedure, we shall not present the details here, see Lemmas 4.1-4.2 in [34] for a similar proof. But we just mention that \(\|N(\phi)\|_{*}\leq C\|\phi\|_{L^{\infty}(\Omega_{t})}^{2}\), \(\|E\|_{*}\leq C\max\{(\rho_{0}v_{0})^{\min\{\sigma,2(\alpha-\hat{\alpha})\}},\,(\varepsilon_{1}\mu_{1})^{\sigma},\ldots,(\varepsilon_{m}\mu_{m})^{\sigma}, \|e^{-\frac{1}{2}t\phi_{1}}\|_{L^{\infty}(\Omega_{t})}\}\) and \(\|\partial_{\xi^{\prime}_{kl}}E\|_{*}\leq C\max\{(\rho_{0}v_{0})^{\min\{ \sigma,2(\alpha-\hat{\alpha})\}},(\varepsilon_{1}\mu_{1})^{\sigma},\ldots,( \varepsilon_{m}\mu_{m})^{\sigma},\|e^{-\frac{1}{2}t\phi_{1}}\|_{L^{\infty}( \Omega_{t})}\}\) due to (2.24)-(2.26) and (3.65).
## 5. The reduced problem: A maximization procedure
In this section we study a maximization problem. Let us consider the energy functional associated to problem (1.6)
\[J_{t}(u)=\frac{1}{2}\int_{\Omega}a(x)|\nabla u|^{2}-\int_{\Omega}a(x)k(x)|x-q| ^{2\alpha}e^{-t\phi_{1}}e^{u},\ \ \ \ u\in H^{1}_{0}(\Omega). \tag{5.1}\]
For any points \(\xi=(\xi_{1},\ldots,\xi_{m})\in\mathcal{O}_{t}(q)\), we introduce the reduced energy \(\mathcal{F}_{t}:\mathcal{O}_{t}(q)\to\mathbb{R}\) by
\[\mathcal{F}_{t}(\xi)=J_{t}\big{(}u(\xi)\big{)}=J_{t}\big{(}U(\xi)+\tilde{\phi} (\xi)\big{)}, \tag{5.2}\]
where \(U(\xi)\) is the approximation defined in (2.9) and \(\tilde{\phi}(\xi)(x)=\phi(\frac{\xi}{\varepsilon_{0}},\frac{x}{\varepsilon_{0 }})\), \(x\in\Omega\), with \(\phi=\phi_{\xi^{\prime}}\) the unique solution to problem (4.1) given by Proposition 4.1. Define
\[\mathcal{M}_{m}^{t}=\max_{\xi=(\xi_{1},\ldots,\xi_{m})\in\mathcal{O}_{t}(q)} \mathcal{F}_{t}(\xi).\]
Clearly, for any \(t\) large enough, the map \(\mathcal{F}_{t}(\xi)\) is differential in \(\xi\) and hence the maximization problem has a solution over \(\mathcal{O}_{t}(q)\).
**Proposition 5.1**.: _The maximization problem_
\[\max_{(\xi_{1},\ldots,\xi_{m})\in\mathcal{O}_{t}(q)}\mathcal{F}_{t}(\xi_{1}, \ldots,\xi_{m}) \tag{5.3}\]
_has a solution \(\xi_{t}=(\xi_{1,t},\ldots,\xi_{m,t})\in\mathcal{O}_{t}^{\sigma}(q)\), i.e., the interior of \(\mathcal{O}_{t}(q)\)._
Proof.: **Step 1:** With the choices for the parameters \(\mu_{0}\) and \(\mu_{i}\), \(i=1,\ldots,m\), respectively given by (2.20) and (2.21), let us claim that the following expansion holds
\[J_{t}\big{(}U(\xi)\big{)}=8\pi(1+\alpha)ta(q)+8\pi t\sum_{i=1}^{m}a(\xi_{i}) \phi_{1}(\xi_{i})+16\pi(2+\alpha)\sum_{i=1}^{m}a(\xi_{i})\log|\xi_{i}-q|+16\pi \sum_{i\neq j}^{m}a(\xi_{i})\log|\xi_{i}-\xi_{j}|+O \tag{5.4}\]
uniformly for all points \(\xi=(\xi_{1},\ldots,\xi_{m})\in\mathcal{O}_{t}(q)\) and for all \(t\) large enough. In fact, observe that by (2.9) and (2.10),
\[\frac{1}{2}\int_{\Omega}a(x)|\nabla U|^{2} =\frac{1}{2}\int_{\Omega}\big{[}-\nabla\big{(}a(x)\nabla U_{0} \big{)}\big{]}\,U_{0}+\sum_{j=1}^{m}\int_{\Omega}\big{[}-\nabla\big{(}a(x) \nabla U_{0}\big{)}\big{]}\,U_{j}+\frac{1}{2}\sum_{i,j=1}^{m}\int_{\Omega} \big{[}-\nabla\big{(}a(x)\nabla U_{i}\big{)}\big{]}\,U_{j}\] \[=-\frac{1}{2}\int_{\Omega}a(x)\big{(}u_{0}+H_{0}\big{)}\Delta u_{ 0}-\sum_{j=1}^{m}\int_{\Omega}a(x)\big{(}u_{j}+H_{j}\big{)}\Delta u_{0}-\frac{ 1}{2}\sum_{i,j=1}^{m}\int_{\Omega}a(x)\big{(}u_{j}+H_{j}\big{)}\Delta u_{i}. \tag{5.5}\]
Let us analyze the behavior of the first term. By (2.4), (2.5) and (2.11) we get
\[-\int_{\Omega}a(x)\big{(}u_{0}+H_{0}\big{)}\Delta u_{0}=\int_{\Omega}\frac{8 \varepsilon_{i}^{2}\mu_{0}^{2}(1+\alpha)^{2}a(x)|x-q|^{2\alpha}}{(\varepsilon _{0}^{2}\mu_{0}^{2}+|x-q|^{2(1+\alpha)})^{2}}\left[\log\frac{1}{(\varepsilon_ {0}^{2}\mu_{0}^{2}+|x-q|^{2(1+\alpha)})^{2}}+(1+\alpha)H(x,q)+O\left(\rho_{0} ^{\sigma}v_{0}^{\sigma}\right)\right]\!.\]
Making the change of variables \(\rho_{0}v_{0}z=x-q\), we can derive that
\[-\int_{\Omega}a(x)\big{(}u_{0}+H_{0}\big{)}\Delta u_{0}=\int_{\Omega_{\rho_{0 }v_{0}}}\frac{8(1+\alpha)^{2}a(q+\rho_{0}v_{0}z)|z|^{2\alpha}}{(1+|z|^{2(1+ \alpha)})^{2}}\left[\log\frac{(\varepsilon_{0}\mu_{0})^{-4}}{(1+|z|^{2(1+ \alpha)})^{2}}+(1+\alpha)H(q,q)+O\left(\rho_{0}^{\sigma}v_{0}^{\sigma}|z|^{ \sigma}+\rho_{0}^{\sigma}v_{0}^{\sigma}\right)\right],\]
where \(\Omega_{\rho_{0}v_{0}}=\frac{1}{\rho_{0}v_{0}}(\Omega-\{q\})\). But
\[\int_{\Omega_{\rho_{0}v_{0}}}a(q+\rho_{0}v_{0}z)\frac{8(1+\alpha)^{2}|z|^{2 \alpha}}{(1+|z|^{2(1+\alpha)})^{2}}=8\pi(1+\alpha)a(q)+O\left(\rho_{0}v_{0} \right),\]
and
\[\int_{\Omega_{\rho_{0}v_{0}}}a(q+\rho_{0}v_{0}z)\frac{8(1+\alpha)^{2}|z|^{2 \alpha}}{(1+|z|^{2(1+\alpha)})^{2}}\log\frac{1}{(1+|z|^{2(1+\alpha)})^{2}}=-1 6\pi(1+\alpha)a(q)+O(\rho_{0}v_{0}).\]
Then
\[-\int_{\Omega}a(x)\big{(}u_{0}+H_{0}\big{)}\Delta u_{0}=8\pi(1+\alpha)a(q) \big{[}(1+\alpha)H(q,q)-2-4\log(\varepsilon_{0}\mu_{0})\big{]}+O\left(\rho_{0 }^{\sigma}v_{0}^{\sigma}\right). \tag{5.6}\]
For the second term of (5.5), by (2.2), (2.4), (2.5), (2.12) and the change of variables \(\rho_{0}v_{0}z=x-q\) we get that for any \(j=1,\ldots,m\),
\[-\int_{\Omega}a(x)\big{(}u_{j}+H_{j}\big{)}\Delta u_{0}= \int_{\Omega}\frac{8\varepsilon_{0}^{2}\mu_{0}^{2}(1+\alpha)^{2}a (x)|x-q|^{2\alpha}}{(\varepsilon_{0}^{2}\mu_{0}^{2}+|x-q|^{2(1+\alpha)})^{2}} \left[\log\frac{1}{(\varepsilon_{j}^{2}\mu_{j}^{2}+|x-\xi_{j}|^{2})^{2}}+H(x, \xi_{j})+O\left(\varepsilon_{j}^{\sigma}\mu_{j}^{\sigma}\right)\right]dx\] \[= \int_{\Omega_{\rho_{0}v_{0}}}\frac{8(1+\alpha)^{2}a(q+\rho_{0}v_{ 0}z)|z|^{2\alpha}}{(1+|z|^{2(1+\alpha)})^{2}}\left[\log\frac{1}{|q-\xi_{j}|^{4 }}+H(q,\xi_{j})+O\left(\rho_{0}^{\sigma}v_{0}^{\sigma}|z|^{\sigma}\right)\right.\] \[\left.\quad+O\left(\rho_{0}v_{0}q^{\beta}|z|\right)+O\left( \varepsilon_{j}^{\sigma}\mu_{j}^{\sigma}\right)\right]dz\] \[= 8\pi(1+\alpha)a(q)G(q,\xi_{j})+O\left(\rho_{0}^{\sigma}v_{0}^{ \sigma}\right)+O\left(\varepsilon_{j}^{\sigma}\mu_{j}^{\sigma}\right). \tag{5.7}\]
As for the last term of (5.5), by (2.4), (2.6), (2.12) and the change of variables \(\varepsilon_{i}\mu_{i}z=x-\xi_{i}\) we observe that for any \(i\), \(j=1,\ldots,m\),
\[-\int_{\Omega}a(x)\big{(}u_{j}+H_{j}\big{)}\Delta u_{i}= \int_{\Omega}\frac{8\varepsilon_{i}^{2}\mu_{i}^{2}a(x)}{(\varepsilon _{i}^{2}\mu_{i}^{2}+|x-\xi_{i}|^{2})^{2}}\left[\log\frac{1}{(\varepsilon_{j}^{2} \mu_{j}^{2}+|x-\xi_{j}|^{2})^{2}}+H(x,\xi_{j})+O(\varepsilon_{j}^{\sigma}\mu_{j }^{\sigma})\right]dx\] \[= \int_{\Omega_{\varepsilon_{i}\mu_{i}}}\frac{8a(\xi_{i}+\varepsilon_ {i}\mu_{i}z)}{(1+|z|^{2})^{2}}\left[\log\frac{1}{(\varepsilon_{j}^{2}\mu_{j}^{2}+| \xi_{i}-\xi_{j}+\varepsilon_{i}\mu_{i}z|^{2})^{2}}+H(\xi_{i},\xi_{j})+O\left( \varepsilon_{i}^{\sigma}\mu_{i}^{\sigma}|z|^{\sigma}\right)+O\left(\varepsilon_{j}^ {\sigma}\mu_{j}^{\sigma}\right)\right]dz,\]
where \(\Omega_{\varepsilon_{i}\mu_{i}}=\frac{1}{\varepsilon_{i}\mu_{i}}(\Omega-\{\xi_{i}\})\). Note that
\[\int_{\Omega_{\varepsilon_{i}\mu_{i}}}a(\xi_{i}+\varepsilon_{i}\mu_{i}z)\frac{8}{(1+ |z|^{2})^{2}}=8\pi a(\xi_{i})+O(\varepsilon_{i}\mu_{i}),\]
and
\[\int_{\Omega_{t_{i}\mu_{i}}}a(\xi_{i}+\varepsilon_{i}\mu_{i}z)\frac{8}{(1+|z|^{2} )^{2}}\log\frac{1}{(1+|z|^{2})^{2}}=-16\pi a(\xi_{i})+O(\varepsilon_{i}\mu_{i}).\]
Then for all \(i\), \(j=1,\ldots,m\),
\[-\int_{\Omega}a(x)\big{(}u_{j}+H_{j}\big{)}\Delta u_{i}=\begin{cases}8\pi a( \xi_{i})\big{[}H(\xi_{i},\xi_{i})-2-4\log(\varepsilon_{i}\mu_{i})\big{]}+O \left(\varepsilon_{i}^{\sigma}\mu_{i}^{\sigma}\right)&\forall\ i=j,\\ 8\pi a(\xi_{i})G(\xi_{i},\xi_{j})+O\left(\varepsilon_{i}^{\sigma}\mu_{i}^{ \sigma}+\varepsilon_{j}^{\sigma}\mu_{j}^{\sigma}\right)&\forall\ i\neq j.\end{cases} \tag{5.8}\]
On the other hand, by (2.14), (2.15), (2.17) and the change of variables \(x=\varepsilon_{0}y=e^{-\frac{1}{2}t}y\), we obtain
\[\int_{\Omega}a(x)k(x)|x-q|^{2\alpha}e^{-t\phi_{1}}e^{U}= \int_{\Omega_{t}}a(\varepsilon_{0}y)k(\varepsilon_{0}y)| \varepsilon_{0}y-q|^{2\alpha}e^{-t\big{[}\phi_{1}(\varepsilon_{0}y)-1\big{]}}e ^{U(\varepsilon_{0}y)-2t}dy\] \[= \left(\int_{\Omega_{t}\setminus\big{[}\bigcup_{i=1}^{m}B_{\frac{ 1}{\varepsilon_{0}t^{2\beta}}}(\xi_{i}^{\prime})\cup B_{\frac{1}{\varepsilon_ {0}t^{2\beta}}}(q^{\prime})\big{]}}+\int_{B_{\frac{1}{\varepsilon_{0}t^{2 \beta}}}(q^{\prime})}+\sum_{i=1}^{m}\int_{B_{\frac{1}{\varepsilon_{0}t^{2 \beta}}}(\xi_{i}^{\prime})}\right)a(\varepsilon_{0}y)Wdy.\]
By (2.28), (2.29) and (2.30) we obtain
\[\int_{\Omega_{t}\setminus\big{[}\bigcup_{i=1}^{m}B_{\frac{1}{ \varepsilon_{0}t^{2\beta}}}(\xi_{i}^{\prime})\cup B_{\frac{1}{\varepsilon_{0}t ^{2\beta}}}(q^{\prime})\big{]}}a(\varepsilon_{0}y)Wdy= \int_{\Omega_{t}\setminus\big{[}\bigcup_{i=1}^{m}B_{\frac{1}{ \varepsilon_{0}t^{2\beta}}}(\xi_{i}^{\prime})\cup B_{\frac{1}{\varepsilon_{0}t ^{2\beta}}}(q^{\prime})\big{]}}O\left(\frac{\varepsilon_{0}^{2}e^{-t\phi_{1}( \varepsilon_{0}y)}}{|\varepsilon_{0}y-q|^{4+2\alpha}}\prod_{i=1}^{m}\frac{1}{| \varepsilon_{0}y-\xi_{i}|^{4}}\right)dy\] \[= \,O\left(1\right),\]
and
\[\int_{B_{\frac{1}{\varepsilon_{0}t^{2\beta}}}(q^{\prime})}a( \varepsilon_{0}y)Wdy= \int_{B_{\frac{1}{\varepsilon_{0}t^{2\beta}}}(q^{\prime})} \left(\frac{\varepsilon_{0}}{\rho_{0}v_{0}}\right)^{2}\frac{8(1+\alpha)^{2}a( \varepsilon_{0}y)}{\big{(}1+\big{|}\frac{y-q}{\rho_{0}v_{0}}\big{|}^{2\alpha}} \big{[}1+O\left(\varepsilon_{0}^{\sigma}|y-q^{\prime}|^{\sigma}\right)+o \left(1\right)\big{]}dy\] \[= \,8\pi(1+\alpha)a(q)+o(1),\]
and for any \(i=1,\ldots,m\),
\[\int_{B_{\frac{1}{\varepsilon_{0}t^{2\beta}}}(\xi_{i}^{\prime})}a( \varepsilon_{0}y)Wdy=\]
Then
\[\int_{\Omega}a(x)k(x)|x-q|^{2\alpha}e^{-t\phi_{1}}e^{U}=O\left(1\right). \tag{5.9}\]
Hence by (5.1), (5.5)-(5.9) we conclude that
\[J_{t}\left(U(\xi)\right)=-16\pi(1+\alpha)a(q)\log(\varepsilon_{0}\mu_{0})-16 \pi\sum_{i=1}^{m}a(\xi_{i})\log(\varepsilon_{i}\mu_{i})+8\pi(1+\alpha)\sum_{i=1 }^{m}a(q)G(q,\xi_{i})+4\pi\sum_{i\neq j}^{m}a(\xi_{i})G(\xi_{i},\xi_{j})+O(1),\]
which, together with the definitions of \(\varepsilon_{0}\), \(\varepsilon_{i}\) in (2.7) and the choices of \(\mu_{0}\), \(\mu_{i}\) in (2.20)-(2.21), implies that expansion (5.4) holds.
**Step 2:** For any \(t\) large enough, we claim that the following expansion holds
\[\mathcal{F}_{t}(\xi)=J_{t}\big{(}U(\xi)\big{)}+o(1) \tag{5.10}\]
uniformly on points \(\xi=(\xi_{1},\ldots,\xi_{m})\in\mathcal{O}_{t}(q)\). Indeed, if we define
\[I_{t}(\omega)=\frac{1}{2}\int_{\Omega_{t}}a(\varepsilon_{0}y)|\nabla\omega|^{ 2}-\int_{\Omega_{t}}|\varepsilon_{0}y-q|^{2\alpha}a(\varepsilon_{0}y)\kappa( y,t)e^{\omega},\ \ \ \ \omega\in H_{0}^{1}(\Omega_{t}), \tag{5.11}\]
then by (2.7) and (2.15),
\[\mathcal{F}_{t}(\xi)-J_{t}\big{(}U(\xi)\big{)}=I_{t}\big{(}V(\xi^{\prime})+ \phi_{\xi^{\prime}}\big{)}-I_{t}\big{(}V(\xi^{\prime})\big{)}.\]
Using \(DI_{t}(V+\phi_{\xi^{\prime}})[\phi_{\xi^{\prime}}]=0\), a Taylor expansion and an integration by parts give
\[\mathcal{F}_{t}(\xi)-J_{t}\big{(}U(\xi)\big{)}= \int_{0}^{1}D^{2}I_{t}(V+\tau\phi_{\xi^{\prime}})[\phi_{\xi^{ \prime}}]^{2}(1-\tau)d\tau\] \[= \int_{0}^{1}\left\{\int_{\Omega_{t}}a(\varepsilon_{0}g)\Big{[} \big{(}N(\phi_{\xi^{\prime}})+E\big{)}\phi_{\xi^{\prime}}+W\big{(}1-e^{\tau \phi_{\xi^{\prime}}}\big{)}\phi_{\xi^{\prime}}^{2}\Big{]}\right\}(1-\tau)d\tau.\]
From the estimates in Lemma 3.4 and Proposition 4.1, we find
\[\mathcal{F}_{t}(\xi)-J_{t}\left(U(\xi)\right)=O\left(t\max\Big{\{}\big{(}\rho_ {0}v_{0}\big{)}^{\min\{2\sigma,4(\alpha-\dot{\alpha})\}},\,(\varepsilon_{1} \mu_{1})^{2\sigma},\,\ldots,\,(\varepsilon_{m}\mu_{m})^{2\sigma},\,\|e^{-t \phi_{1}}\|_{L^{\infty}(\Omega_{t})}\Big{\}}\right)=o\left(1\right).\]
The continuity in \(\xi\) of this expression is inherited from that of \(\phi_{\xi^{\prime}}\) in the \(L^{\infty}\) norm.
**Step 3:** Proof of Proposition 5.1. Let \(\xi_{t}=(\xi_{1,t},\ldots,\xi_{m,t})\) be the maximizer of \(\mathcal{F}_{t}\) over \(\mathcal{O}_{t}(q)\). We need to prove that \(\xi_{t}\) belongs to the interior of \(\mathcal{O}_{t}(q)\). First, we obtain a lower bound for \(\mathcal{F}_{t}\) over \(\mathcal{O}_{t}(q)\). Let us fix the point \(q\) as an isolated local maximum point of \(a(x)\phi_{1}\) in \(\Omega\) and set
\[\xi_{i}^{0}=q+\frac{1}{\sqrt{t}}\widehat{\xi}_{i},\]
where \(\widehat{\xi}=(\widehat{\xi}_{1},\ldots,\widehat{\xi}_{m})\) forms an \(m\)-regular polygon in \(\mathbb{R}^{2}\). Obviously, \(\xi^{0}=(\xi_{1}^{0},\ldots,\xi_{m}^{0})\in\mathcal{O}_{t}(q)\) because \(\beta>1\) and \(a(\xi_{i}^{0})\phi_{1}(\xi_{i}^{0})=a(q)\phi_{1}(q)+O(t^{-1})\). Then
\[\max_{\xi\in\mathcal{O}_{t}(q)}\mathcal{F}_{t}(\xi) \geq 8\pi(1+\alpha)ta(q)+8\pi t\sum_{i=1}^{m}a(\xi_{i}^{0})\phi_{1 }(\xi_{i}^{0})+16\pi(2+\alpha)\sum_{i=1}^{m}a(\xi_{i}^{0})\log|\xi_{i}^{0}-q|+ 16\pi\sum_{i\neq j}^{m}a(\xi_{i}^{0})\log|\xi_{i}^{0}-\xi_{j}^{0}|+O\left(1\right)\] \[\geq 8\pi(m+1+\alpha)a(q)t-8\pi(m+1+\alpha))\big{[}a(\xi_{1}^{0}) +\cdots+a(\xi_{m}^{0})\big{]}\log t+O(1). \tag{5.12}\]
Next, we suppose \(\xi_{t}=(\xi_{1,t},\ldots,\xi_{m,t})\in\partial\mathcal{O}_{t}(q)\). Then there exist four possibilities:
* There exists an \(i_{0}\) such that \(\xi_{i_{0},t}\in\partial B_{d}(q)\), in which case, \(a(\xi_{i_{0},t})\phi_{1}(\xi_{i_{0},t})\leq a(q)\phi_{1}(q)-d_{0}\) for some \(d_{0}>0\) independent of \(t\);
* There exists an \(i_{0}\) such that \(a(\xi_{i_{0},t})\phi_{1}(\xi_{i_{0},t})=a(q)\phi_{1}(q)-\frac{1}{\sqrt{t}}\);
* There exist indices \(i_{0}\), \(j_{0}\), \(i_{0}\neq j_{0}\) such that \(|\xi_{i_{0},t}-\xi_{j_{0},t}|=t^{-\beta}\);
* There exists an \(i_{0}\) such that \(|\xi_{i_{0},t}-q|=t^{-\beta}\).
For the first case, we have
\[\max_{\xi\in\mathcal{O}_{t}(q)}\mathcal{F}_{t}(\xi)=\mathcal{F}_{ t}(\xi_{t}) \leq 8\pi(1+\alpha)ta(q)+8\pi t\left[(m-1)a(q)\phi_{1}(q)+a(q)\phi_{ 1}(q)-d_{0}\right]+O\big{(}\log t\big{)}\] \[=8\pi(m+1+\alpha)a(q)t-8\pi d_{0}t+O\big{(}\log t\big{)}, \tag{5.13}\]
which contradicts to (5.12). This shows that \(a(\xi_{i,t})\phi_{1}(\xi_{i,t})\to a(q)\phi_{1}(q)\). Using the assumption of \(a(x)\phi_{1}\) over \(\overline{\Omega}\), we deduce \(\xi_{i,t}\to q\) for all \(i=1,\ldots,m\).
For the second case, we have
\[\max_{\xi\in\mathcal{O}_{t}(q)}\mathcal{F}_{t}(\xi)=\mathcal{F}_{ t}(\xi_{t}) \leq 8\pi(1+\alpha)ta(q)+8\pi t\left[(m-1)a(q)\phi_{1}(q)+a(q)\phi_{1 }(q)-\frac{1}{\sqrt{t}}\right]+O\big{(}\log t\big{)}\] \[=8\pi(m+1+\alpha)a(q)t-8\pi\sqrt{t}+O\big{(}\log t\big{)}, \tag{5.14}\]
which contradicts to (5.12). For the third case, we have
\[\max_{\xi\in\mathcal{O}_{t}(q)}\mathcal{F}_{t}(\xi)=\mathcal{F}_{ t}(\xi_{t})\leq 8\pi(m+1+\alpha)ta(q)-16\pi\beta a(\xi_{i_{0},t})\log t+O\big{(}1\big{)}. \tag{5.15}\]
For the last case, we have
\[\max_{\xi\in\mathcal{O}_{t}(q)}\mathcal{F}_{t}(\xi)=\mathcal{F}_{ t}(\xi_{t})\leq 8\pi(m+1+\alpha)ta(q)-16\pi(2+\alpha)\beta a(\xi_{i_{0},t})\log t+O \big{(}1\big{)}. \tag{5.16}\]
Combining (5.15)-(5.16) with (5.12), we give
\[16\pi(2+\alpha)\beta a(\xi_{i_{0},t})\log t+O\big{(}1\big{)}\leq 8\pi(m+1+ \alpha))\big{[}a(\xi_{1}^{0})+\cdots+a(\xi_{m}^{0})\big{]}\log t+O(1), \tag{5.17}\]
which is impossible by the choice of \(\beta\) in (2.3).
## 6. Proof of Theorem 1.1
**Proof of Theorem 1.1.** From Proposition 4.1 it follows that for any points \(\xi=(\xi_{1},\ldots,\xi_{m})\in\mathcal{O}_{t}(q)\) and any \(t\) large enough, there is a function \(\phi_{\xi^{\prime}}\), which solves
\[-\Delta_{a(\varepsilon_{0}y)}\big{(}V(\xi^{\prime})+\phi_{\xi^{ \prime}})-|\varepsilon_{0}y-q|^{2\alpha}\kappa(y,t)e^{V(\xi^{\prime})+\phi_{ \xi^{\prime}}}=\frac{1}{a(\varepsilon_{0}y)}\sum_{i=1}^{m}\sum_{j=1}^{2}c_{ij }(\xi^{\prime})\chi_{i}Z_{ij},\qquad\int_{\Omega_{t}}\chi_{i}Z_{ij}\phi_{\xi^ {\prime}}=0\]
for some coefficients \(c_{ij}(\xi^{\prime})\), \(i=1,\ldots,m\), \(j=1,2\). So, in order to find a solution to equation (2.13) and hence to the original problem (1.6), we need to match \(\xi^{\prime}\) with the coefficients \(c_{ij}(\xi^{\prime})\) such that
\[c_{ij}(\xi^{\prime})=0\quad\text{ for all }\,i=1,\ldots,m,\ j=1,2. \tag{6.1}\]
According to Proposition 5.1, there exits a \(\xi_{t}=(\xi_{1,t},\ldots,\xi_{m,t})\in\mathcal{O}_{t}^{\alpha}(q)\) that achieves the maximum for the maximization problem (5.3). Let \(\omega_{t}=V(\xi_{t}^{\prime})+\phi_{\xi_{t}^{\prime}}\). Then we have
\[\partial_{\xi_{kl}}\mathcal{F}_{t}(\xi_{t})=0\quad\text{ for all }\,k=1,\ldots,m,\ l=1,2. \tag{6.2}\]
From (5.1), (5.2) and (5.11) we get
\[\partial_{\xi_{kl}}\mathcal{F}_{t}(\xi_{t})= \,\partial_{\xi_{kl}}J_{t}\big{(}U(\xi_{t})+\tilde{\phi}(\xi_{t}) \big{)}=\,\frac{1}{\varepsilon_{0}}\partial_{\xi_{kl}^{\prime}}I_{t}\big{(}V( \xi_{t}^{\prime})+\phi_{\xi_{t}^{\prime}}\big{)}\] \[= \,\frac{1}{\varepsilon_{0}}\left\{\int_{\Omega_{t}}a( \varepsilon_{0}y)\nabla\omega_{t}\nabla\Big{[}\partial_{\xi_{kl}^{\prime}}V( \xi_{t}^{\prime})+\partial_{\xi_{kl}^{\prime}}\phi_{\xi_{t}^{\prime}}\Big{]}- \int_{\Omega_{t}}a(\varepsilon_{0}y)|\varepsilon_{0}y-q|^{2\alpha}\kappa(y,t) e^{\omega_{t}}\left[\partial_{\xi_{kl}^{\prime}}V(\xi_{t}^{\prime})+\partial_{\xi_{kl}^{ \prime}}\phi_{\xi_{t}^{\prime}}\right]\right\}.\]
Then for all \(k=1,\ldots,m\) and \(l=1,2\),
\[\sum_{i=1}^{m}\sum_{j=1}^{2}c_{ij}(\xi_{t}^{\prime})\int_{\Omega_{t}}\chi_{i}Z _{ij}\left[\partial_{\xi_{kl}^{\prime}}V(\xi_{t}^{\prime})+\partial_{\xi_{kl}^ {\prime}}\phi_{\xi_{t}^{\prime}}\right]=0.\]
Notice that \(\partial_{\xi_{kl}^{\prime}}V(\xi_{t}^{\prime})=Z_{kl}+o\big{(}1\big{)}\) and \(\partial_{\xi_{kl}^{\prime}}\phi_{\xi_{t}^{\prime}}=o(1)\) with \(o(1)\) sufficiently small in the sense of the \(L^{\infty}\) norm as \(t\to+\infty\). Therefore, we find that \(D_{\xi}\mathcal{F}_{t}(\xi)=0\) implies the validity of a system of equations of the form
\[\sum_{i=1}^{m}\sum_{j=1}^{2}c_{ij}(\xi_{t}^{\prime})\int_{\Omega_{t}}\chi_{i}Z _{ij}\big{[}Z_{kl}(y)+o\left(1\right)\big{]}=0,\qquad\ k=1,\ldots,m,\ l=1,2.\]
Clearly, the matrix of this system is diagonal dominant and then \(c_{ij}(\xi_{t}^{\prime})=0\) for all \(i=1,\ldots,m\), \(j=1,2\). As a result, we find a solution \(u_{t}\) of problem (1.6) in the form \(U(\xi_{t})+\tilde{\phi}(\xi_{t})\) with the qualitative properties as predicted in Theorem 1.1.
|
2302.11593 | Quantum spherical codes | We introduce a framework for constructing quantum codes defined on spheres by
recasting such codes as quantum analogues of the classical spherical codes. We
apply this framework to bosonic coding, obtaining multimode extensions of the
cat codes that can outperform previous constructions while requiring a similar
type of overhead. Our polytope-based cat codes consist of sets of points with
large separation that at the same time form averaging sets known as spherical
designs. We also recast concatenations of CSS codes with cat codes as quantum
spherical codes, revealing a new way to autonomously protect against dephasing
noise. | Shubham P. Jain, Joseph T. Iosue, Alexander Barg, Victor V. Albert | 2023-02-22T19:00:11Z | http://arxiv.org/abs/2302.11593v2 | # Quantum spherical codes
###### Abstract
We introduce a framework for constructing quantum codes defined on spheres by recasting such codes as quantum analogues of the classical spherical codes. We apply this framework to bosonic coding, obtaining multimode extensions of the cat codes that can outperform previous constructions while requiring a similar type of overhead. Our polytope-based cat codes consist of sets of points with large separation that at the same time form averaging sets known as spherical designs. We also recast concatenations of qubit CSS codes with cat codes as quantum spherical codes.
+
Footnote †: preprint: APS/123-QED
Bosonic (a.k.a. oscillator) codes [1] offer alternative qubit blueprints that are compatible with continuous-variable (CV) quantum platforms [1; 2; 3; 4; 5; 6; 7; 8; 9] and that can reduce overhead by offering an extra layer of protection [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. Qubits defined on a few bosonic modes or more exotic spaces [25] are likely to prove useful as control of quantum systems improves, but the field remains relatively under-explored [26] in part because structures and intuition from qubit-based coding theory need not apply.
We develop a framework that yields generalizations of a class of bosonic codes called the _cat codes_[27; 28] and unifies such codes with several others. Our key observation is that all such codes are particular instances of quantum versions of _spherical codes_[29; 30], a family well known in classical coding theory. We overview the framework and demonstrate its utility with several new multimode cat codes. A rigorous study of general features is left to a companion follow-up work.
**General codes on the sphere** Codewords of qubit codes [1] are quantum superpositions of bit-strings. By analogy, we start with a spherical code, which is a set, or constellation, of points on the unit sphere. To construct a _quantum spherical code_, or _QSC_, we take a collection \(\{\mathcal{C}_{k}\}_{k=1}^{K}\) of _codeword constellations_, each of which gives rise to a codeword of the QSC obtained by taking a quantum superposition of all points \(\mathbf{x}\in\mathcal{C}_{k}\). We consider uniform superpositions here, leaving more general codes to future work. Taken together, the codeword constellations yield the _code constellation_, \(\mathcal{C}=\bigcup_{k=1}^{K}\mathcal{C}_{k}\).
In the electromagnetic setting, spherical codes protect classical information against signal fluctuations during transmission, which correspond to small shifts acting on points in the constellation. A code's ability to protect against such errors can be quantified by the minimum (squared) Euclidean distance \(d_{E}\) between any pair of distinct points. QSCs naturally inherit \(d_{E}\) as a figure of merit for protecting against such "bit-flip" noise.
Since QSCs store quantum information, they also suffer from "phase" noise, which comes from, e.g., fiber attenuation. Such noise can be expressed in terms of "potential-energy" functions on the sphere whose evaluation can be used to distinguish codeword constellations (cf. [25; Sec. VI.B; 31]). If the average of a function over points in a constellation \(\mathcal{C}_{k}\) depends on \(k\), then the function's underlying physical process causes an undetectable "phase" error.
An \((n,K,d_{E})\) spherical code contains \(K\) points on the \(n\)-dimensional unit sphere such that the squared Euclidean distance between any two points is at least \(d_{E}\). An \((\!(n,K,d_{E},\ldots)\!)\) QSC is a \(K\)-dimensional subspace of a quantum system's vector space whose states are labeled by points on an \(n\)-dimensional (real or complex) unit sphere, and whose protection against rotations is quantified by \(d_{E}\). Protection against "phase" noise is designated by the proxy "\(\ldots\)" because the notion of a "phase-flip" distance depends on the physical system embedding the QSC.
In principle, the above framework applies to any quan
Figure 1: Quantum spherical codewords are quantum superpositions of constellations on a sphere. Codeword constellations can form the vertices of a polytope and unite to form a code polytope compound. Projections of polytope compounds are shown for the (**a**) cat, (**b**) Möbius-Kantor, and (**c**) Hessian quantum spherical codes, with codeword constellation points colored either green, red, or purple.
tum state space parameterized by points on a sphere. CV systems [32; 33] admit several such spaces, and there already exist examples of QSCs expressed using CV coherent states [27; 28; 34] and pair-coherent states [35]. Collective atomic systems described by spin-coherent states as well as rotational state spaces of diatomic molecules also admit QSCs, namely, various large-spin codes [36] and diatomic molecular codes [25, Sec. VI], respectively. We focus on coherent-state QSCs because such codes naturally generalize the cat codes, and error-correction procedures for these new multimode cat codes require a similar type of overhead as what has already been realized [1; 2; 3; 4; 5]. We note that the discussion below can be modified to apply to other manifestations of QSCs.
Coherent-state formalismA single-mode _coherent state_ is a quantum representation of a standing wave of a fixed-frequency signal. An \(n\)-mode coherent state \(|\mathbf{\alpha}\rangle\) is parameterized by a complex \(n\)-dimensional point \(\mathbf{\alpha}\). The point's norm \(\|\mathbf{\alpha}\|^{2}\) corresponds to the state's energy, and points of all states with a fixed energy \(\bar{\mathrm{s}}\) form a complex \(n\)-sphere, \(\Omega_{n}=\{\mathbf{\alpha}\in\mathbb{C}^{n}\,,\,\|\mathbf{\alpha}\|^{2}=\bar{\mathrm{ s}}\}\).
Coherent-state QSCs consist of codeword constellations \(\mathcal{C}_{k}\) of \(|\mathcal{C}_{k}|\) points picked from the \(n\)-sphere and superimposed to form logical codewords,
\[|\mathcal{C}_{k}\rangle\sim\frac{1}{\sqrt{|\mathcal{C}_{k}|}}\sum_{\mathbf{ \alpha}\in\mathcal{C}_{k}}|\sqrt{\bar{\mathrm{s}}}\mathbf{\alpha}\rangle\, \tag{1}\]
where we restrict codeword constellations to lie on the _unit_\(n\)-sphere and relegate the overall scaling to \(\bar{\mathrm{s}}\). An example to keep in mind is the four-component cat code defined by \(\mathcal{C}_{0}=\{(1),(-1)\}\) and \(\mathcal{C}_{1}=\{(\mathrm{i}),(-\mathrm{i})\}=\mathrm{i}\mathcal{C}_{0}\).
The normalization in Eq. (1) is valid asymptotically as \(\bar{\mathrm{s}}\to\infty\) because coherent states are not quite orthogonal due to the uncertainty principle,
\[|\langle\mathbf{\alpha}|\mathbf{\beta}\rangle|^{2}=\exp\left(-\bar{\mathrm{s}}\|\mathbf{ \alpha}-\mathbf{\beta}\|^{2}\right)\leq\exp\left(-\bar{\mathrm{s}}d_{E}\right). \tag{2}\]
The above "quantum corrections" for two coherent states of a code are suppressed exponentially with the energy \(\bar{\mathrm{s}}\) and the minimum distance between two points in the code's constellation \(\mathcal{C}=\bigcup_{k}\mathcal{C}_{k}\),
\[d_{E}=\min_{\mathbf{\alpha},\mathbf{\beta}\in\mathcal{C}}\|\mathbf{\alpha}-\mathbf{\beta}\|^{ 2}. \tag{3}\]
Since \(d_{E}\) sets the scale of resolution of the constellation points, we refer to it as the _resolution_ from now on.
Coherent states are subjected to two essentially different types of distortion -- angular dephasing due to fluctuations in a mode's frequency and changes in the mode's excitations [37, Sec. II.A]. These induce "bit" and "phase" noise on QSCs, respectively. The corresponding relevant noise operators are passive linear-optical transformations and products of modal ladder operators \(\{a_{j},a_{j}^{\dagger}\}_{j=1}^{n}\), whose commutator is \([a_{j},a_{\ell}^{\dagger}]=\mathrm{i}\delta_{j\ell}\). Products of transformations and ladder operators can be used to express any physical noise channel [38, Eq. (39)].
Transformations on \(n\) modes are parameterized by the unitary group \(\mathsf{U}(n)\)[33, Sec. 5.1.2]. A transformation \(U_{\mathbf{R}}\) corresponding to the \(n\)-dimensional unitary matrix \(\mathbf{R}\) rotates a coherent state \(|\mathbf{\alpha}\rangle\) into \(|\mathbf{R}\mathbf{\alpha}\rangle\). If the rotation satisfies \(\|\mathbf{R}\mathbf{\alpha}-\mathbf{\alpha}\|^{2}<d_{E}\), the transformation is detectable in the \(\bar{\mathrm{s}}\to\infty\) limit. Codes with larger resolution protect against larger sets of transformations.
A general ladder error,
\[L_{\mathbf{p},\mathbf{q}}(\mathbf{a}^{\dagger},\mathbf{a})=\prod_{j=1}^{n}a_{j}^{\dagger p_{j} }a_{j}^{q_{j}}\, \tag{4}\]
is a monomial in the operators \((a_{1},a_{2},\cdots,a_{n})=\mathbf{a}\) and their adjoints. It is parameterized by non-negative integer vectors \(\mathbf{p}=(p_{1},p_{2},\cdots,p_{n})\) and \(\mathbf{q}=(q_{1},q_{2},\cdots,q_{n})\) quantifying how many energy carriers (e.g., photons or phonons) are gained and lost in each mode, respectively.
Lowering operators \(a_{j}\) are "diagonal" in the coherent-state basis, satisfying \(a_{j}|\mathbf{\alpha}\rangle=\alpha_{j}|\mathbf{\alpha}\rangle\), where \(\alpha_{j}\) is the \(j\)th component of \(\mathbf{\alpha}\). This "diagonality" relation and its adjoint imply that the expectation value of a ladder error over the \(k\)th codeword (1) reduces to the average of the operator's corresponding monomial over \(\mathcal{C}_{k}\),
\[\langle\mathcal{C}_{k}|L_{\mathbf{p},\mathbf{q}}(\mathbf{a}^{\dagger},\mathbf{a})|\mathcal{C} _{k}\rangle\sim\frac{\bar{\mathrm{s}}^{|\mathbf{p}+\mathbf{q}|/2}}{|\mathcal{C}_{k}|} \sum_{\mathbf{\alpha}\in\mathcal{C}_{k}}L_{\mathbf{p},\mathbf{q}}(\mathbf{\alpha}^{\star},\mathbf{ \alpha})\, \tag{5}\]
where the one-norm \(|\mathbf{p}+\mathbf{q}|\) is the degree of \(L_{\mathbf{p},\mathbf{q}}(\mathbf{\alpha}^{\star},\mathbf{\alpha})\). A ladder error can be detected whenever the above average is _independent_ of \(k\)[39].
Polytope QSCsWe have found numerous QSCs whose constellations form vertices of real [40] or complex [41; 42] polytopes [5]. Polytope vertices are both sufficiently well-separated and uniform, providing protection against both types of noise. Code and polytope tables for the two cases are in Appxs. A and B, respectively.
We characterize ladder-error protection of polytope QSCs with three "distances": \(d_{\downarrow}\), \(t_{\downarrow}\), and \(d_{\uparrow}^{\star}\). The first is the number of detectable losses (plus one), signifying that any pure-loss ladder error \(L_{\mathbf{p}=\mathbf{0},\mathbf{q}}\) with \(|\mathbf{q}|<d_{\downarrow}\) is detectable. Similarly, \(t_{\downarrow}\) is the number of _correctable_ losses (plus one), signifying that any ladder error with \(|\mathbf{p}|,|\mathbf{q}|<t_{\downarrow}\) is detectable. The _degree distance_\(d_{\uparrow}\) signifies that the code detects ladder errors with degree \(|\mathbf{p}+\mathbf{q}|<d_{\uparrow}\). These three parameters satisfy
\[\lfloor(d_{\uparrow}+1)/2\rfloor\leq t_{\downarrow}\leq d_{\uparrow}\leq d_{\downarrow} \tag{6}\]
and can vary quite significantly.
Our notation for an \(n\)-mode polytope QSC with \(K\) logical codewords is \((\!(n,K,d_{E},d_{\uparrow})\!)\) or, more generally, \((\!(n,K,d_{E},\langle t_{\downarrow},d_{\uparrow},d_{\downarrow})\!)\). The four-component cat code is a \((\!(1,2,2.0,\langle 2,2,2\rangle)\!)\) QSC, detecting \(d_{\downarrow}-1=1\) loss error while sporting the relatively high resolution of \(2.0\). Since it can detect one gain simultaneously with one loss, this code also corrects \(t_{\downarrow}-1=1\) loss error.
Each codeword constellation of the four-component cat code is a line segment, and the code constellation forms the vertices of a square. More generally, codeword constellations of the \(2p\)-component \((\!(1,2,4\sin^{2}\frac{\pi}{2p},\langle p,p,p\rangle)\!)\)
cat code are two \(p\)-gons whose vertices interleave for maximal resolution. There is a tradeoff between loss protection and resolution, with the latter of order \(O(1/p^{2})\) for a large number \(p-1\) of correctable losses. Utilizing higher dimensions, we pick other complex polytopes that maintain the same resolution while offering increased loss protection over the cat codes.
A simple code straddling the \(p=2,3\) cat codes in terms of performance is the \((\!(2,2,1.5,\langle 2,3,3\rangle)\!)\)_simplex code_,
\[\mathcal{C}_{0}=\left\{\tfrac{1}{\sqrt{2}}(\omega^{\mu},\omega^{2\mu})\,|\, \mu\in\mathbb{Z}_{\mathsf{g}}\right\}=-\mathcal{C}_{1}\, \tag{7}\]
where \(\omega=e^{i\frac{2\pi}{\mathsf{C}}}\). This code admits a lower resolution than the \(p=2\) cat code, but detects one more loss in any of the two modes. Equivalently, it admits a higher resolution than the \(p=3\) cat code's resolution of unity, but corrects one fewer loss. Simplices exist in any dimension, yielding the infinite \((\!(n,2,2-1/n,3))\) QSC family that approaches the resolution of the \(p=2\) cat code with increasing \(n\) while detecting one more loss in any mode.
The _Mobius-Kantor_\((\!(2,3,1.0,\langle 3,4,4\rangle)\!)\) code maintains the resolution of the \(p=3\) cat code while adding one more logical state and detecting one more loss. Each of its three codeword constellations form the 8 vertices of a Mobius-Kantor polygon (3{3}3 in Coxeter notation; see Appx. B), and such polygons combine to form the 24 vertices of a 3{4}3 polygon. This code corrects one more loss than the 2T-qutrit [34], a \((\!(2,3,1.0,\langle 2,4,4\rangle)\!)\) QSC whose codeword constellations each make up the 8 vertices of a complex octagon 2{4}4. These two codes differ despite the fact that both code constellations map to the vertices of the _same_ real 4D polytope via the mapping \((x+iy,z+iw)\to(x,y,z,w)\), demonstrating subtleties in using real polytopes to define complex QSCs.
Codeword constellations of the powerful \((\!(3,2,1.0,\langle 4,5,9\rangle\!))\)_Hessian code_ consist of the 27 vertices of a Hessian polytope,
\[\mathcal{C}_{0}=\left\{\tfrac{1}{\sqrt{2}}(\eta^{\mu},-\eta^{\nu},0)\cup \text{perms.}\,|\,\mu,\nu\in\mathbb{Z}_{3}\right\}=-\mathcal{C}_{1}\, \tag{8}\]
where \(\eta=e^{i\frac{2\pi}{\mathsf{C}}}\), and "perms." is shorthand for the two cyclic permutations of the vector to the left for each \(\mu,\nu\). This code corrects as many losses as the \(p=4\) cat code, but has the resolution of the \(p=3\) cat code. Moreover, it can detect up to 8 losses, a feature available only to the \(p\geq 9\) cat codes.
There is a \((\!(2,2,2-\sqrt{2},\langle 5,6,12\rangle\!))\) code that maintains the same resolution as the \(p=4\) cat code, but corrects one more and detects 8 more losses. Its codeword constellations each form the 24 vertices of a 4{3}4 polygon, combining into a 48-vertex 2{6}4 polygon.
An overachieving cousin of the above code is the \((\!(4,2,2-\sqrt{2},\langle 6,8,12\rangle\!))\)_Witting code_, consisting of two Witting polytopes with 240 vertices each. This code corrects as many losses as a \(p=6\) cat code, has the resolution of a \(p=4\) cat code, and detects up to 11 losses. It is the first member of the infinite \((\!(2^{r},2,2-\sqrt{2},8)\!)\) family of codes that are based on orbits of the real Clifford group [43; 44; 45; 46].
A lower bound on \(d_{\ddagger}\) for Clifford, simplex, or other QSCs can be obtained whenever their codeword constellations form designs [47]. A constellation \(\mathcal{C}_{k}\) is a _complex spherical design_[48] of strength \(\tau\) if averages of monomials \(L_{\mathbf{p},\mathbf{q}}\) of total degree \(|\mathbf{p}+\mathbf{q}|\leq\tau\) over \(\mathcal{C}_{k}\) (1) are equal to those over the entire unit \(n\)-sphere,
\[\frac{1}{|\mathcal{C}_{k}|}\sum_{\mathbf{\alpha}\in\mathcal{C}_{k}}L_{\mathbf{p},\mathbf{q }}(\mathbf{\alpha}^{*},\mathbf{\alpha})=\int_{\Omega_{n}}d\mathbf{\alpha}L_{\mathbf{p},\mathbf{q}} (\mathbf{\alpha}^{*},\mathbf{\alpha}). \tag{9}\]
Design strength is preserved under unitary rotations \(\mathbf{R}\), so codeword constellations \(\mathcal{C}_{k}=\mathbf{R}_{k}\mathcal{C}_{0}\) consisting of rotated versions of a complex spherical \(\tau\)-design \(\mathcal{C}_{0}\) yield a QSC whose degree distance is at least \(\tau+1\). In this way, construction of good QSCs can be accomplished by analyzing the problem of finding well-separated spherical designs \(\mathcal{C}_{0}\) of high strength coupled with a choice of rotations \(\{\mathbf{R}_{k}\}_{k=1}^{K}\) that permits to control the resolution \(d_{E}\) of the code constellation \(\bigcup_{k=1}^{K}\mathbf{R}_{k}\mathcal{C}_{0}\) while achieving high logical dimension \(K\).
**CSS-based QSCs** Concatenations of CSS codes [49; 50; 51] with the two-component cat code [27], \(\mathcal{C}_{0}=\{(+1)\}=-\mathcal{C}_{1}\), can also be interpreted as QSCs, albeit with a weight-based notion of ladder-error protection. Such codes are actively studied [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24], but have so far been interpreted in the framework of the outer qubit code and not in terms of underlying modal degrees of freedom. Our interpretation parallels a standard way to construct (classical) spherical codes by mapping binary codes to the (real) sphere [29, Sec. 2.5; 30, Sec. 1.2].
A \([[n,k,(d_{X},d_{Z})]]\) qubit CSS code is constructed from two binary linear codes with distances \(d_{X}\) and \(d_{Z}\), guaranteeing detection of Pauli \(X\)-type and \(Z\)-type errors with weights less than the distances, respectively. Its codewords are equal superpositions of multi-qubit states labeled by binary strings. Concatenation is equivalent to mapping each binary string into a point on the \(n\)-sphere via the coordinate-wise antipodal mapping \(0\to+1\) and \(1\to-1\). This yields an \((\!(n,2^{k},d_{E}=4d_{X}/n,w_{\ddagger}=d_{Z})\!)\) QSC that detects all errors \(L_{\mathbf{p},\mathbf{q}}\) with Hamming weight \(\Delta(\mathbf{p}+\mathbf{q})<w_{\ddagger}\) (see Appx. C). Asymptotically good qubit CSS codes thus yield QSCs whose distances \(d_{E},w_{\ddagger}\) are both separated from \(0\) as \(n\to\infty\).
**X-type gates & stabilizers** Rotations on the \(n\)-sphere provide groups of \(X\)-type logical gates and stabilizers for QSCs. Elements of a _logical group_\(\mathsf{G}\) permute codeword constellations. Elements of a _stabilizer subgroup_\(\mathsf{H}\subset\mathsf{G}\) permute points within each constellation, thereby leaving codewords invariant. Rotations are realized by passive linear-optical transformations using [33, Eq. (3.24)]. Rotation-based gates are noise-bias preserving [52] in that they do not convert rotations into losses.
For cat codes with \(2p\) components, \(\mathcal{C}_{0}=\{(\zeta^{2j})\,|\,j\in\mathbb{Z}_{\mathsf{p}}\}=\zeta\mathcal{C} _{1}\) with \(\zeta=e^{i\frac{\pi}{p}}\), the 1D rotation \(\zeta\) permutes the two constellations, while powers of \(\zeta^{2}\) leave each constellation invariant. These rotations generate \(\mathsf{H}=\mathbb{Z}_{\mathsf{p}}\subset\mathsf{G}=\mathbb{Z}_{2\mathsf{p}}\) and are realized by transformations \(\zeta^{a^{\dagger}a}\) and \(\zeta^{2a^{\dagger}a}\).
Simplex constellations (7) can be permuted with the
\(-\left(\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}\right)\) rotation and are invariant under powers of \(\omega\left(\begin{smallmatrix}1&0\\ 0&\omega\end{smallmatrix}\right)\), corresponding to the groups \(\mathbb{Z}_{\mathfrak{s}}\subset\mathbb{Z}_{\mathfrak{s}}\times\mathbb{Z}_{2}\), respectively. The latter group is generated by the two-mode transformations \((-1)^{a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}}\) and \(\omega^{a_{1}^{\dagger}a_{1}+2a_{2}^{\dagger}a_{2}}\).
The stabilizer group for the Hessian code (8) is \(\mathsf{He_{3}}=\langle\eta,X,Z\rangle\), the 27-element qutrit Pauli/Heisenberg group consisting of powers of \(\eta\) and the \(X,Z\) qutrit Pauli matrices. Appending by the logical-\(X\) rotation \(-I\), where \(I\) is the 3-by-3 identity, yields the logical group \(\mathsf{He_{3}}\times\mathbb{Z}_{2}\). These groups are realized by phase-shifters and SWAP gates. Larger \(\mathsf{H}\subset\mathsf{G}\) can be picked using the fact that all constellations form polytopes. The largest, \(3[3]3[3]3\subset 2[4]3[3]3\), form the 648-element and 1296-element symmetry groups of the Hessian and double-Hessian polytopes. These offer other ways to implement the logical-\(X\) Pauli gate, but do not yield other gate types.
Qudit QSCs offer larger logical-gate groups. The two groups are \(\mathbb{Z}_{2}\subset 2\mathsf{I}\) for the 24-cell (\((2,5,0.382,\langle 4,6,8\rangle)\)) real polytope code, with the former generated by the 5-by-5 matrix \(-I\), and the latter the binary icosahedral group 2\(\mathsf{I}\). Since the stabilizer group acts trivially, the logical group acts on the 5 codewords as a 5D permutation representation of the icosahedral group \(\mathsf{I}=2\mathsf{I}/\mathbb{Z}_{2}\).
CSS-based QSCs inherit logical-\(X\) stabilizers (gates) by mapping each \(X\)-type stabilizer (logical Pauli) to a transversal linear-optical transformation via the component-wise mapping \(\sigma_{x}\rightarrow(-1)^{a^{\dagger}a}\). For example, the \(\sigma_{x}^{\otimes 4}\) stabilizer of the \([[\mathsf{a},2,2]]\) code is mapped to the joint parity \(\bigotimes_{j=1}^{4}(-1)^{a_{j}^{\dagger}a_{j}}\).
Z-type gates & stabilizersThe \(Z\)-type "stabilizer" for \(2p\)-component cat codes is \(F(a)=a^{2p}-\mathbb{N}^{p}\), which annihilates each point in the dilated code constellation \(\sqrt{\mathbb{N}}\mathcal{C}\). The corresponding polynomial \(F(\alpha)\) can be thought of as a potential on the sphere that is minimized only at the code-constellation points [53].
Polytope QSCs can require multiple polynomials to be stabilized. Simplex codes (7) are stabilized by \(F_{1}=a_{1}^{2}a_{2}^{4}-\bar{\mathbb{N}}^{3}\) and \(F_{2}=a_{1}^{3}a_{2}-\bar{\mathbb{N}}^{2}\). Hessian codewords (8) are stabilized by the \(F_{1}=a_{1}a_{2}a_{3}\), \(F_{2}=a_{1}^{3}+a_{2}^{3}+a_{3}^{3}\), and \(F_{3}=a_{1}^{6}+a_{2}^{6}+a_{3}^{6}+\bar{\mathbb{N}}^{3}/4\). The degree of \(F_{1,2}\) is lower than the code's degree distance (\(d_{\frac{1}{2}}=5\)) and detectable-loss distance (\(d_{\frac{1}{2}}=9\)), unlike for the cat codes. This effect is also manifest in QLDPC codes, which admit low-weight check operators but can have larger distances.
Stabilizer polynomials commute with logical transformations \(U_{\mathbf{R}}\) for any \(\mathbf{R}\) in the logical group and can be obtained by averaging ladder operators (4) over the symmetry group of the code constellation's polytope.
Other polynomials act as logical gates on QSCs, evaluating to the same value for all points in \(\mathcal{C}_{k}\) in a way that depends on \(k\). For the cat codes, \(G=a^{p}\) evaluates to \(\pm\bar{\mathbb{N}}^{p/2}\) on the two codewords, respectively, yielding a logical-\(Z\) gate. The monomial \(G=a_{1}a_{2}^{2}\) projects to a logical-\(Z\) gate within the simplex codespace. The smallest loss-only \(Z\)-gate of the Hessian code is \(G_{1}=a_{1}^{3}a_{2}^{6}\) or its two cyclic permutations, and only a permutation-symmetric combination of all three operators commutes with the stabilizer group. A lower-degree monomial \(G_{2}=a_{1}^{4}a_{1}a_{2}^{3}\) realizes another \(Z\)-gate with the help of gain operators. Combinations \(G_{j}+G_{j}^{\dagger}\) generate logical \(Z\)-rotations within the \(F\)-annihilated subspace [53], and have been realized for \(p=2\) cat codes [4].
CSS-based QSCs inherit gates/stabilizers by mapping each \(Z\)-type gate/stabilizer to a monomial via the component-wise mapping \(\sigma_{z}\to a\). For example, the \([[4,2,2]]\) code's \(\sigma_{z}\otimes\sigma_{z}\otimes I\otimes I\) gate is mapped to \(a_{1}a_{2}\).
Correcting errorsProtection against rotation-based noise for \(2p\)-component cat codes is done passively using a Lindbladian whose jump operator is the \(Z\)-type stabilizer \(F\)[53] and/or a Hamiltonian \(F^{\dagger}F\)[54, 55]. Both techniques have been realized for \(p=2\)[5, 3]. General QSCs admit the same type of passive protection but require several \(F_{j}\)'s.
Ladder errors (4) map the \(k\)th codeword (1) into error states in \(\operatorname{span}\{|\mathbf{\alpha}\rangle,\mathbf{\alpha}\in\mathcal{C}_{k}\}\). The stabilizer group \(\mathsf{H}\) splits up into several irreducible representations (irreps) acting on this span. Ladder-error protection is done by measuring syndromes associated with irreps and mapping back into the codespace. In order for correction to be possible, the stabilizer group has to be able to resolve all error spaces associated with a given error set.
The 4-component cat-code stabilizer is the parity \((-1)^{a^{\dagger}a}\). Its eigenvalues correspond to the two irreps of \(\mathsf{H}=\mathsf{Z}_{2}\), distinguishing between no error and a single loss \(a\). This technique [1] led to the first demonstration of break-even QEC using \(p=2\) cat codes [2]. Similar multimode parities detect \(X\)-errors for CSS-based QSCs.
For the simplex code (7), eigenvalues of the two-mode stabilizer \(\omega^{a_{1}^{\dagger}a_{1}+2a_{2}^{\dagger}a_{2}}\) label the five irreps of \(\mathsf{Z}_{\mathfrak{s}}\). They allow for correction of \(\{a_{1},a_{2},a_{1}a_{2},a_{2}^{2}\}\), falling short of correcting all two-mode losses due to \(a_{1}^{2}\) not being simultaneously correctable with \(a_{2}\).
For the Hessian code (8), the transformations realizing \(\mathsf{He_{3}}\) can be measured to resolve the group's 11 irreps. The general procedure for this and other non-Abelian codes resembles that of molecular codes [25, Sec. V.D].
Discussion & conclusionWe introduce a framework for constructing quantum analogs of the classical spherical codes, encapsulating several physically relevant quantum coding schemes for bosonic, spin, and molecular systems. We apply our framework to obtain multimode cat codes based on polytopes, CSS qubit codes, and classical codes; some of these outperform previous cat-code constructions [27, 34, 28].
There are many other ways of constructing spherical codes, e.g., as group-orbit codes [56, 57, 58], as spherical embeddings of association schemes [30], through computer searches [59, 60], and many others [29, 30, 61], as well as ways of constructing spherical designs [62, 63, 64]. As such, we anticipate that this work will pave the way for many novel, well-protected, and experimentally feasible logical qubits.
## Acknowledgments
We thank Jonathan Conrad, Aurelie Denys, Michael Gullans, and Greg Kuperberg for helpful discussions. This work is supported in part by NSF QLCI grant OMA-2120757 and NSF grants CCF-2110113 (NSF-BSF) and CCF-2104489. JTI thanks the Joint Quantum Institute at the University of Maryland for support through a JQI fellowship. Our figures were drawn using Mathematica 13 following the prescription of Ref. [65]. Contributions to this work by NIST, an agency of the US government, are not subject to US copyright. Any mention of commercial products does not indicate endorsement by NIST. VVA thanks Ryhor Kandratsenia and Olga Albert for providing daycare support throughout this work.
## Appendix A Real-polytope QSCs
Codeword constellations \(\mathcal{C}_{k}\) of a real polytope QSC form the vertices of a real polytope. The figure that results from the union of all codeword polytopes is called a _polytope compound_, and its vertices form the code constellation \(\mathcal{C}\). Polytope QSCs can thus be constructed from established polytope compounds.
All regular real polytope compounds have been classified in three [66] and four [40, Table VII; 67] dimensions. Using those classifications, we collect what we believe are all QSCs whose codeword and code constellations both form convex regular real polytopes in Table 1. We have also included a few other QSCs constructed from non-regular polytopes. All polytopes used in our constructions are listed in Table 2.
The first column of the table lists the polytope whose vertices make up the codeword constellations \(\mathcal{C}_{k}\). All \(\mathcal{C}_{k}\) make up the same polytope for every code, with the exception being the "hyper-cube, -octahedron" code, in which \(\mathcal{C}_{0}\) (\(\mathcal{C}_{1}\)) makes up the vertices of a hyper-cube (hyper-octahedron).
Since the \(n\)-sphere is complex while the polytopes are real, we have to embed the polytopes into the sphere. For
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline logical constellation & code constellation & \(n\) & \(K\) & \(\langle t_{\downarrow},d_{\uparrow},d_{\downarrow}\rangle\) & \(d_{E}\) & related code \\ \hline line segment & \(2K\)-gon & \(1\) & \(K\) & \(\langle 2,2,2\rangle\) & \(4\sin^{2}\frac{\pi}{2K}\) & two-component cat qu\(K\)it \\ & icosahedron & \(2\) & \(6\) & \(2\) & \(1.106\) & \\ & dodecahedron & \(2\) & \(10\) & \(2\) & \(0.509\) & \\ & \(24\)-cell & \(2\) & \(12\) & \(2\) & \(1.000\) & \(\mathbb{Z}_{2}\subset 2\mathsf{T}\) group-GKP \\ & \(288\)-cell & \(2\) & \(24\) & \(2\) & \(0.586\) & \(\mathbb{Z}_{2}\subset 2\mathsf{O}\) group-GKP \\ & hyper-icosahedron & \(2\) & \(60\) & \(2\) & \(0.382\) & \(\mathbb{Z}_{2}\subset 2\mathsf{I}\) group-GKP \\ & hyper-dodecahedron & \(2\) & \(300\) & \(2\) & \(0.073\) & \\ & \(D\)-orthoplex & \(\lceil D/2\rceil\) & \(D\) & \(\langle 1,2,2\rangle\) & \(2.000\) & \(D=4\): \(\mathbb{Z}_{2}\subset\mathsf{Q}\) group-GKP \\ & \(D\)-cube & \(\lceil D/2\rceil\) & \(2^{D-1}\) & \(2\) & \(4/D\) & \\ \hline \(p\)-gon & \(Kp\)-gon & \(1\) & \(K\) & \(\langle p,p,p\rangle\) & \(4\sin^{2}\frac{\pi}{Kp}\) & \(p\)-component cat qu\(K\)it \\ \hline tetrahedron & dodecahedron & \(2\) & \(5\) & \(3\) & \(0.509\) & \\ \hline octahedron & \(5\)-octahedron & \(2\) & \(5\) & \(4\) & \(0.382\) & \\ \hline hyper-tetrahedron & hyper-dodecahedron & \(2\) & \(120\) & \(3\) & \(0.073\) & \\ \hline hyper-octahedron & \(24\)-cell & \(2\) & \(3\) & \(\langle 2,4,4\rangle\) & \(1.000\) & \(\mathsf{Q}\subset 2\mathsf{T}\) group-GKP, \(2\mathsf{T}\)-qutrit \\ & \(288\)-cell & \(2\) & \(6\) & \(\langle 2,4,4\rangle\) & \(0.586\) & \(\mathsf{Q}\subset 2\mathsf{O}\) group-GKP \\ & hyper-icosahedron & \(2\) & \(15\) & \(4\) & \(0.382\) & \(\mathsf{Q}\subset 2\mathsf{I}\) group-GKP \\ & hyper-dodecahedron & \(2\) & \(75\) & \(4\) & \(0.073\) & \\ \hline hyper-cube, -octahedron & \(24\)-cell & \(2\) & \(2\) & \(\langle 2,4,4\rangle\) & \(1.000\) & \\ \hline
24-cell & \(288\)-cell & \(2\) & \(2\) & \(\langle 5,6,12\rangle\) & \(0.586\) & \(2\mathsf{T}\subset 2\mathsf{O}\) group-GKP \\ & hyper-icosahedron & \(2\) & \(5\) & \(\langle 4,6,8\rangle\) & \(0.382\) & \(2\mathsf{T}\subset 2\mathsf{I}\) group-GKP \\ & hyper-dodecahedron & \(2\) & \(25\) & \(6\) & \(0.073\) & \\ \hline hyper-icosahedron & hyper-dodecahedron & \(2\) & \(5\) & \(12\) & \(0.073\) & \\ \hline \(D\)-simplex & \(D\)-bisimplex & \(\lceil D/2\rceil\) & \(2\) & \(\langle 2,3,3\rangle\) & \(2-2/D\) & \\ \((2^{r}-1)\)-simplex & \((2^{r}-1)\)-cube & \(2^{r-1}\) & \(2^{2^{r}-r-1}\) & \(3\) & \(4/(2^{r}-1)\) & shortened Hadamard \\ \hline \(D\)-demicube & \(D\)-cube & \(\lceil D/2\rceil\) & \(2\) & \(\min(4,D)\) & \(4/D\) & single parity-check \\ \hline \(2^{r}\)-orthoplex & \(2^{r}\)-cube & \(2^{r-1}\) & \(2^{2^{r}-r-1}\) & \(4\) & \(2^{2-r}\) & augmented Hadamard \\ \hline \hline \end{tabular}
\end{table}
Table 1: QSCs whose logical and code constellations both make up the vertices of a real polytope; \(D\geq 2\) corresponds to spatial dimension, and the parameter \(r\geq 2\).
even dimension \(D\), the standard method of doing this is via the mapping
\[\mathbb{R}^{D}\ni(x_{1},x_{2},\cdots,x_{D})\to(x_{1}+\mathrm{i}x_{2},x_{3}+ \mathrm{i}x_{4},\cdots,x_{D-1}+\mathrm{i}x_{D})\in\mathbb{C}^{D/2}. \tag{10}\]
Other mappings can be obtained by permuting the real coordinates. For odd \(D\), one has to embed the polytope into \(D+1\) dimensions and then apply a mapping like the one above. Convenient coordinates exist for polytopes embedded in higher dimensions, e.g., vertices of a \(D\)-simplex have coordinates \((1,1,\cdots 1,-D)\in\mathbb{R}^{D+1}\) and permutations thereof [30, Sec. 1.5]. Mappings into higher-dimensional spaces can also be used, e.g., the \(2p\)-component cat-code constellation can be mapped into \(\mathcal{C}=\{\zeta^{j}\boldsymbol{\alpha},j\in\mathbb{Z}_{\mathsf{2p}}\}\) for any \(n\)-dimensional unit vector \(\boldsymbol{\alpha}\). If one prefers to use real-valued vertices, then \(\mathbb{R}^{D}\) can be directly embedded into \(\mathbb{C}^{D}\).
The parameters \(t_{\downarrow},d_{\downarrow}\) can depend on which of the above mappings one uses; we calculate them numerically by evaluating Eq. (5) for a given \(\mathcal{C}\). A mapping-independent lower bound on the degree distance \(d_{\downarrow}\) can be obtained from the strength of the design formed by the codeword polytopes. Real polytope vertices can form (real) spherical designs [47], which are convertible into complex spherical designs via [48, Lemma 3.3]. The design strengths \(\tau\) of \(D\)-dimensional polytope vertices are listed in Table 2, column 5, yielding \(d_{\downarrow}\geq\tau+1\) for a code consisting of such polytopes. This bound appears to be tight for real polytopes and holds as long as the polytope formed by \(\mathcal{C}\) is the same dimension as those formed by each \(\mathcal{C}_{k}\). Otherwise, the codeword polytopes will not share a common sphere on which their vertices form designs. An exception to this restriction is for \(\mathcal{C}_{k}\) that are 1D line segments and is due to the fact that any pair of segments shares a common circle. The degree distance of a QSC consisting of segments is thus at least two.
Points on the real 4D sphere are in one-to-one correspondence with quaternions, which in turn parameterize the group \(\mathsf{SU}(2)\)[70]. Vertices of the hyper-octahedron, 24-cell, (disphenoidal) 288-cell, and hyper-icosahedron correspond to quaternions forming the quaternion \(\mathsf{Q}\), binary tetrahedral \(\mathsf{2T}\), binary octahedral \(\mathsf{2O}\), and binary icosahedral \(\mathsf{2I}\) subgroups, respectively. Polytope QSCs consisting of such polytopes thus are related to \(\mathsf{SU}(2)\) group-GKP codes [25]. The \(\mathsf{2T}\)-qutrit code [34] is similarly related to the \(\mathsf{Q}\subset\mathsf{2T}\subset\mathsf{SU}(2)\) group-GKP code, but the idea of using groups this way is limited to two modes because spheres in higher dimensions no longer correspond to groups.
An \([n,k]\) binary linear \(\mathbb{C}\) code \(C\) can be converted into a QSC by taking codeword constellations to be cosets of
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline polytope & dim & Schlafli/Coxeter & vertices & design & \(d_{E}\) & \(d_{E}\) (numerical) reference \\ \hline line segment & 1 & \(\{\ \}\) & 2 & 1 & 4 & 4.000 \\ \hline triangle & 2 & \(\{\ \}\) & 3 & 3 & 2 & 3 & 3.000 \\ square & 2 & \(\{\ \}\) & 4 & 4 & 3 & 2 & 2.000 \\ pentagon & 2 & \(\{\ \}\) & 5 & 5 & 4 & \((5-\sqrt{5})/2\) & 1.382 \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \(p\)-gon & 2 & \(\{p\}\) & \(p\) & \(p-1\) & \(4\sin^{2}\frac{\pi}{p}\) & \\ \hline tetrahedron & 3 & \(\{\ \}\) & 4 & 2 & 8/3 & 2.667 \\ octahedron & 3 & \(\{\ \}\) & 6 & 3 & 2 & 2.000 \\ cube & 3 & \(\{\ \}\) & 8 & 3 & 4/3 & 1.333 \\ icosahedron & 3 & \(\{\ \}\) & 12 & 5 & \(2-2\sqrt{5}/3\) & 1.106 \\ dodecahedron & 3 & \(\{\ \}\) & 20 & 5 & \(2-2\sqrt{5}/3\) & 0.509 \\ _5-octahedron_ & 3 & \(\{\ \}\) & 30 & 5 & \((3-\sqrt{5})/2\) & 0.382 [66] \\ \hline hyper-tetrahedron & 4 & \(\{\ \}\) & 3,3 & 5 & 2 & 5/2 & 2.500 & [68] \\ hyper-octahedron & 4 & \(\{\ \}\) & 3,3 & 4 & 8 & 3 & 2 & 2.000 & [68] \\ hyper-cube & 4 & \(\{\ \ \}\) & 4,3 & 3 & 16 & 3 & 1 & 1.000 & [68] \\
24-cell & 4 & \(\{\ \}\) & 3,4 & 3 & 24 & 5 & 1 & 1.000 & [68] \\ _288-cell_ & 4 & _o3m4m3o_ & 48 & 5 & \(2-\sqrt{2}\) & 0.586 & [69] \\ hyper-icosahedron & 4 & \(\{\ \}\) & \(\{\ \}\) & 120 & 11 & \((3-\sqrt{5})/2\) & 0.382 & [68] \\ hyper-dodechedron & 4 & \(\{\ \}\) & \(\{\ \}\) & 600 & 11 & \((7-3\sqrt{5})/4\) & 0.073 & [68] \\ \hline \(D\)-simplex & \(D\) & \(\{\ \}\) & \(\{\ \}\) & \(D+1\) & 2 & \(2+2/D\) & & [69] \\ \(D\)-_bisimplex_ & \(D\) & \(\{\ \&
in \(\mathbb{F}_{2}^{n}\) under the antipodal mapping. The table lists QSCs arising this way from the Hadamard \(\mathbb{R}\) and single parity-check \(\mathbb{R}\) codes. These codes all have non-trivial \(d_{\ddagger}\) because the cosets correspond to known polytope compounds when embedded into the sphere [42, pg. 287].
## Appendix B Complex-polytope QSCs
Complex polytopes are polytopes whose vertices are complex. As with real polytopes, there are a myriad polygons in the two complex dimensions, a handful of special polytopes in a few of the higher dimensions, and only two infinite families of non-real complex polytopes present in any dimension.
The two families are straightforward complex generalizations of the cube and orthoplex, respectively. A simple set of vertices of a real \(D\)-dimensional cube consists of \(2^{D}\) vectors with coordinates \(\pm 1\). The vertices of the _complex \((n,m)\)-cube_ (a.k.a. \(\gamma_{n}^{m}\)) consist of \(m^{n}\) complex vectors of dimension \(n\) with \(m\)th roots of unity at each coordinate. A similar generalization holds for the \((n,m)\)_-orthoplex_ (a.k.a. \(\beta_{n}^{m}\)), whose \(mn\) coordinates are \(n\)-dimensional vectors whose single nonzero entry is an \(m\)th root of unity.
A union of complex polytopes sharing a common center forms a _complex polytope compound_. Complex compounds yield complex QSCs whose code constellations are formed by the vertices of the compound and whose codeword constellations are formed by the vertices of the participating polytopes. Complex compounds have not been as thoroughly studied as their real counterparts, and most of our codes come from the handful of constructions from Refs. [42, 71, 72]. In Table 1, we collect the complex polytope QSCs that are the most interesting for a comparative study with the real polytope codes. All the polytopes used in our constructions are listed in Table 2.
Complex polygons yield several interesting QSCs not available in the real case. We mentioned already in the main text that multiple complex polytopes can reduce to the same real polytope when mapped into the reals. As another example, compounds consisting of \(5\{3\}5\) polygonal code constellations have exceptional loss detection capabilities, with \(d_{\downarrow}\) as high as \(30\), but suffer from low resolution. There are many more polygons, and we leave a more extensive list of complex polytope QSCs to a follow-up work.
Complex polytopes also offer interesting many-mode alternatives to cat codes. The tensor product of \(n\) single-mode 4-component cat codes is an \((\!(n,2^{n},2/n,(2,2,2)\!)\!)\) QSC whose code constellation can be thought of as an \((n,4)\)-cube, constructed as a Kronecker product of \(n\)\((1,4)\)-cubes. The resolution of this code decreases as order \(O(1/n)\), meaning that a constant energy per mode (usually picked to be \(\bar{\aleph}/n\approx 2\)[2, 16]) is required in order to be able to resolve
\begin{table}
\begin{tabular}{c c c c c c c} \hline logical const-n & code const-n & \(n\) & \(K\) & \(\langle t_{\downarrow},d_{\ddagger},d_{\downarrow}\rangle\) & \(d_{E}\) & related code \\ \hline Möbius-Kantor & \(2\{6\}3\) & \(2\) & \(2\) & \(\langle 3,4,6\rangle\) & \(0.845\) & \\ & \(3\{4\}3\) & \(2\) & \(3\) & \(\langle 3,4,4\rangle\) & \(1.000\) & \(\mathsf{Q}\subset 2\mathsf{T}\) group-GKP \\ & \(2\{8\}3\) & \(2\) & \(6\) & \(\langle 2,4,4\rangle\) & \(0.367\) & \\ \hline \((2,4)\)-orthoplex & \(4\{3\}4\) & \(2\) & \(3\) & \(\langle 2,4,4\rangle\) & \(1.000\) & \(\mathsf{Q}\subset 2\mathsf{T}\) group-GKP, \(2\mathsf{T}\)-qutrit \\ \hline \(3\{6\}2\) & \([2\,3\{6\}2]\) & \(2\) & \(2\) & \(\langle 4,4,4\rangle\) & \(0.211\) & \\ \hline \(4\{3\}4\) & \(2\{6\}4\) & \(2\) & \(2\) & \(\langle 5,6,12\rangle\) & \(0.586\) & \(2\mathsf{T}\subset 2\mathsf{O}\) group-GKP \\ \hline \(3\{4\}3\) & \(2\{8\}3\) & \(2\) & \(2\) & \(\langle 3,6,12\rangle\) & \(0.367\) & \\ \hline \(2\{6\}4\) & \([2\,2\{6\}4]\) & \(2\) & \(2\) & \(\langle 4,8,8\rangle\) & \(0.268\) & \\ \hline \(3\{5\}3\) & \(2\{10\}3\) & \(2\) & \(2\) & \(\langle 9,12,30\rangle\) & \(0.132\) & \\ \hline \(5\{3\}5\) & \(2\{6\}5\) & \(2\) & \(2\) & \(\langle 11,12,30\rangle\) & \(0.098\) & \\ & \(3\{4\}5\) & \(2\) & \(3\) & \(\langle 11,12,20\rangle\) & \(0.044\) & \\ \hline \((3,3)\)-orthoplex & rectified Hessian & \(3\) & \(8\) & \(\langle 2,3,3\rangle\) & \(1.000\) & \\ \hline \((3,6)\)-orthoplex & rectified Hessian & \(3\) & \(4\) & \(\langle 2,4,6\rangle\) & \(1.000\) & \\ \hline Hessian & double Hessian & \(3\) & \(2\) & \(\langle 4,5,9\rangle\) & \(1.000\) & \\ \hline \(\phantom{0}\)Witting & double Witting & \(4\) & \(2\) & \(\langle 6,8,12\rangle\) & \(0.586\) & Clifford group-orbit \\ \hline \((1,m)\)-cube & \((n,m)\)-cube & \(n\) & \(m^{n-1}\) & \(\langle 1,2,m\rangle\) & \(\frac{4}{n}\sin^{2}\frac{\pi}{m}\) & \\ \hline \((1,m)\)-orthoplex & \((n,m)\)-orthoplex & \(n\) & \(n\) & \(\langle 1,2,m\rangle\) & \(\min(2,4\sin^{2}\frac{\pi}{m})\) & \\ \hline \end{tabular}
\end{table}
Table 1: QSCs whose logical and code constellations both make up the vertices of a non-real complex polytope; \(n\geq 1\) corresponds to complex dimension. \(d_{\downarrow}=m\) for the \((n,m)\)-cube/orthoplex codes are conjectured based on numerical results.
codewords without substantial intrinsic memory error. On the other hand, the \((n,4)\)-orthoplex (\((n,n,2.0,\langle 1,2,4\rangle)\)) QSC, whose codeword constellations are \((1,4)\)-orthoplexes, maintains _constant_ resolution and has extra loss detection at the expense of a linear increase in the codespace dimension and no loss correction. It is an interesting open problem to find a QSC with \(K=O(n)\) that can correct one or more losses.
## Appendix C CSS-based QSCs
The antipodal mapping converts binary strings \(\mathbf{b}=(b_{1},b_{2},\cdots,b_{n})\) labeling \(n\)-qubit states into \(n\)-mode coherent states normalized to an energy of unity,
\[\mathbf{\alpha_{b}}=\left((-1)^{b_{1}},(-1)^{b_{2}},\cdots,(-1)^{b_{n}}\right)/ \sqrt{n}. \tag{10}\]
Using [73, Thm. 7.3], there exists a basis of codewords for an \([[n,k,(d_{X},d_{Z})]]\) CSS code that is labeled by length-\(k\) binary strings \(\mathbf{\ell}\) and that is expressed in terms of \(\mathds{C}_{Z}^{\perp}\), the dual of one of the underlying binary linear codes. Applying the antipodal mapping to the \(\mathbf{\ell}\)th element of such a basis yields a codeword for the corresponding QSC,
\[|\overline{\mathbf{\ell}}\rangle\sim\frac{1}{\sqrt{|\mathds{C}_{Z}^{\perp}|}}\sum _{\mathbf{c}\in\mathds{C}_{Z}^{z}}|\sqrt{\mathds{N}}\ \mathbf{\alpha_{\mathbf{\ell}+\mathbf{c}}}\rangle. \tag{11}\]
Phase-flip errorsUsing Eq. (5), the projection of a general ladder error acting a subset of modes \(\mathsf{S}\) into the QSC codespace is equivalent to a \(Z\)-type error,
\[L_{\mathbf{p},\mathbf{q}}^{(\mathsf{S})}=\prod_{j\in\mathsf{S}}a_{j}^{\dagger p_{j}}a _{j}^{q_{j}}\qquad\to\qquad\left(\frac{\mathds{N}}{n}\right)^{|\mathbf{p}+\mathbf{q} |/2}\prod_{j\in\mathsf{S}}Z_{j}^{p_{j}+q_{j}}\, \tag{12}\]
where we define \(Z_{j}|\sqrt{\mathds{N}}\mathbf{\alpha_{b}}\rangle=(-1)^{b_{j}}|\sqrt{\mathds{N}} \mathbf{\alpha_{b}}\rangle\). As long as the support size of the region \(\mathsf{S}\) is less than \(d_{Z}\), the distance of \(\mathsf{C}_{Z}\), the properties of CSS codes can be used to show that the above error is detectable. This means that any ladder error with Hamming weight \(\Delta(\mathbf{p}+\mathbf{q})<d_{Z}\) is detectable.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline polytope & dim & Schlafli/Coxeter & vertices & design & \(d_{E}\) & \(d_{E}\) (numerical) & reference \\ \hline Möbius-Kantor & 2 & 3\{3\}3 & 8 & 3 & 2 & 2.000 & [41, 71] \\ & 2 & 2\{6\}3 & 16 & 3 & \(2-2/\sqrt{3}\) & 0.845 & [41, 71] \\ & 2 & 3\{4\}3 & 24 & 5 & 1 & 1.000 & [41, 71] \\ & 2 & 4\{3\}4 & 24 & 5 & 1 & 1.000 & [41, 71] \\ & 2 & 3\{6\}2 & 24 & 3 & \((3-\sqrt{3})/2\) & 0.634 & [41, 71] \\ & 2 & 2\{6\}4 & 48 & 7 & \(2-\sqrt{2}\) & 0.586 & [41, 71] \\ & 2 & 2\{8\}3 & 48 & 5 & \(2-2\sqrt{2}/3\) & 0.367 & [41, 71] \\ & 2 & \([\text{$2$ 3\{6\}2$}]\) & 48 & 3 & \((3-\sqrt{3})/6\) & 0.211 & \\ & 2 & \([\text{$2$ 6\{4\}4$}]\) & 96 & 7 & \(2-\sqrt{3}\) & 0.268 & \\ & 2 & 3\{5\}3 & 120 & 11 & \((3-\sqrt{5})/2\) & 0.382 & [41, 71] \\ & 2 & 2\{10\}3 & 240 & 11 & \(2-\sqrt{2(3+\sqrt{5})/3}\) & 0.132 & [41, 71] \\ & 2 & 2\{6\}5 & 240 & 11 & \(2-\sqrt{(5+\sqrt{5})/2}\) & 0.098 & [41, 71] \\ & 2 & 3\{4\}5 & 360 & 11 & \(4\sin^{2}(\pi/30)\) & 0.044 & [41, 71] \\ \hline Hessian & 3 & 3\{3\}3\{3\}3 & 27 & 4 & 3/2 & 1.500 & [41] \\ double Hessian & 3 & 2\{4\}3\{3\}3 & 54 & 4 & 1 & 1.000 & [72] \\ rectified Hessian & 3 & 3\{3\}3\{4\}2 & 72 & 5 & 1 & 1.000 & [42] \\ \hline
\begin{tabular}{c c c c c c} \hline \hline \(\text{Witting}\) & 4 & 3\{3\}3\{3\}3 & 3240 & 7 & 1 & 1.000 & [41] \\ _double Witting_ & 4 & _[_2_ 3_ &_3_ &_3_ & \(3\) & \(3\) & \(3\) \\ \hline \((n,m)\)-cube & \(n\) & \(m\{4\}2\{3\}\cdots 2\{3\}2\) & \(m^{n}\) & \(\min(3,m-1)\) & \(\frac{4}{\hbar}\sin^{2}\frac{\pi}{m}\) & & [41] \\ \((n,m)\)-orthoplex & \(n\) & \(2\{3\}2\{3\}\cdots 2\{4\}m\) & \(nm\) & \(\min(3,m-1)\) & \(\min(2,4\sin^{2}\frac{\pi}{m})\) & & [41] \\ \hline \hline \end{tabular}
\end{table}
Table 2: Non-real polytope data used to construct QSCs in Table 11. Italicised polytopes are not regular.
Bit-flip errorsThe squared Euclidean distance between two code constellation elements \(\mathbf{\alpha_{b}}\) and \(\mathbf{\alpha_{c}}\) can be expressed in terms of the Hamming distance \(\Delta(\mathbf{b},\mathbf{c})\) between their corresponding binary strings,
\[\left\lVert\mathbf{\alpha_{b}}-\mathbf{\alpha_{c}}\right\rVert^{2} =2-2\mathbf{\alpha_{b}}\cdot\mathbf{\alpha_{c}} \tag{41a}\] \[=2-\frac{2}{n}\sum_{j=1}^{n}(-1)^{b_{j}+c_{j}}\] (41b) \[=2-\frac{2}{n}\sum_{j=1}^{n}+[n-\Delta(\mathbf{b},\mathbf{c})]-[\Delta( \mathbf{b},\mathbf{c})]\] (41c) \[=4\Delta(\mathbf{b},\mathbf{c})/n. \tag{41d}\]
This quantity is bounded by \(4d_{X}/n\), where \(d_{X}\) is the distance of the other underlying binary linear code \(\mathsf{C_{X}}\).
|
2308.15976 | The dawn is quiet here: Rise in [$α$/Fe] is a signature of massive
gas accretion that fueled proto-Milky Way | The proto-Milky Way epoch forms the earliest stars in our Galaxy and sets the
initial conditions for subsequent disk formation. Recent observations from
APOGEE and H3 surveys showed that the [$\alpha$/Fe] ratio slowly declined
between [Fe/H] $=-3$ and $-1.3$ until it reached the lowest value ($\sim 0.25$)
among the selected in situ metal-poor stars that most likely formed during the
proto-Galaxy epoch. [$\alpha$/Fe] rose to meet the traditional high value
commonly associated with the thick disk population at [Fe/H] $=-1$. It was
suggested that the rise in [$\alpha$/Fe] could be caused by an increase in the
star formation efficiency (SFE), known as the "simmering" phase scenario.
However, gas inflow also plays a vital role in shaping the star formation
history and chemical evolution of galaxies. We investigate this unexpected
[$\alpha$/Fe]-rise with a statistical experiment involving a galactic chemical
evolution (GCE). Our model has five free parameters: the mass of the initial
reservoir of the cold interstellar medium (ISM) at birth, the frequency of Type
Ia supernovae (SNe Ia), the cooling timescale of the warm ISM, the SFE, and the
inflow rate of fresh gas. The last two free parameters were allowed to change
after [$\alpha$/Fe] reached its lowest value, dividing the proto-Galaxy epoch
into two phases. We find that the rise in [$\alpha$/Fe] is caused by a large
inflow of fresh gas and conclude that the [$\alpha$/Fe]-rise is a signature of
the cold mode accretion whose materials formed the prototype Milky Way
preceding disk formation. Although the SFE is essential in regulating the
chemical evolution, it does not necessarily increase to facilitate the
[$\alpha$/Fe]-rise. | Boquan Chen, Yuan-Sen Ting, Michael Hayden | 2023-08-30T11:59:03Z | http://arxiv.org/abs/2308.15976v1 | The dawn is quiet here: Rise in [\(\alpha\)/Fe] is a signature of massive gas accretion that fueled proto-Milky Way
###### Abstract
The proto-Milky Way epoch forms the earliest stars in our Galaxy and sets the initial conditions for subsequent disk formation. Recent observations from APOGEE and H3 surveys showed that the [\(\alpha\)/Fe] ratio slowly declined between [Fe/H] = \(-3\) and \(-1.3\) until it reached the lowest value (\(\sim 0.25\)) among the selected in situ metal-poor stars that most likely formed during the proto-Galaxy epoch. [\(\alpha\)/Fe] rose to meet the traditional high value commonly associated with the thick disk population at [Fe/H] = \(-1\). It was suggested that the rise in [\(\alpha\)/Fe] could be caused by an increase in the star formation efficiency (SFE), known as the "simmering" phase scenario. However, gas inflow also plays a vital role in shaping the star formation history and chemical evolution of galaxies. We investigate this unexpected [\(\alpha\)/Fe]-rise with a statistical experiment involving a galactic chemical evolution (GCE). Our model has five free parameters: the mass of the initial reservoir of the cold interstellar medium (ISM) at birth, the frequency of Type Ia supernovae (SNe Ia), the cooling timescale of the warm ISM, the SFE, and the inflow rate of fresh gas. The last two free parameters were allowed to change after [\(\alpha\)/Fe] reached its lowest value, dividing the proto-Galaxy epoch into two phases. We find that the rise in [\(\alpha\)/Fe] is caused by a large inflow of fresh gas and conclude that the [\(\alpha\)/Fe]-rise is a signature of the cold mode accretion whose materials formed the prototype Milky Way preceding disk formation. Although the SFE is essential in regulating the chemical evolution, it does not necessarily increase to facilitate the [\(\alpha\)/Fe]-rise.
keywords: keyword1 - keyword2 - keyword3
## 1 Introduction
The Milky Way galaxy is a complex and dynamic system that has undergone a long and rich history of formation and evolution. One of the main goals of Galactic archaeology is to reconstruct this history by studying the properties of its stellar populations, especially the oldest and most pristine ones. Stars carry valuable information about the physical and chemical conditions of their birth environments. By measuring their photospheric elemental abundances, we can infer the nucleosynthesis processes that enriched the interstellar medium (ISM) at the time of their birth, the star formation rates (SFR), the mixing and transport mechanisms, and the merger events that shaped the Galaxy. However, it remains a challenge to identify the ancient in situ stars that formed in the pre-disk phase of our Galaxy. The elemental abundances of these stars are expected to reveal the initial conditions of our Galaxy that set the stage for disk formation.
Stellar photospheric elemental abundances are one of the most powerful tools for Galactic archaeology, as they reflect the gas conditions at the birth of stars and provide direct and robust constraints on the chemical evolution of the Galaxy (Freeman and Bland-Hawthorn, 2002). Different elements are produced by different sources, such as massive stars, Type Ia supernovae (SNe Ia), asymptotic giant branch (AGB) stars, or neutron star mergers, with different delay timescales and efficiencies (Kobayashi et al., 2006; Kobayashi et al., 2020). Certain elements, such as oxygen, neon, magnesium, silicon, sulfur, argon, calcium, and titanium, can be produced at early times in core-collapse events at relatively constant rates with iron. Stars with high [\(\alpha\)/Fe] tend to form early when core-collapse supernovae (CCSN) efficient at producing \(\alpha\)-elements dominate nucleosynthesis. As time passes and Type Ia supernovae "turn on," the [\(\alpha\)/Fe] ratio drops as [Fe/H] increases. The relative abundances of different elements can thus reflect the relative contributions of these separate sources as well as the time delay between their production and their incorporation into new generations of stars.
Massive disk galaxies like the Milky Way are expected to have an ancient, metal-poor, and centrally concentrated stellar population, reflecting the star formation and enrichment in the most massive progenitor components that formed the proto-Galaxy at high redshift. Hopkins et al. (2023) showed that a centrally concentrated mass profile is necessary for disk formation with Feedback In Realistic Environments (FIRE) simulation. Metal-poor stars are known to reside in the inner few kiloparsecs of the Milky Way (Garcia Perez et al., 2013; Arentsen et al., 2020, 2020), but the current data does not provide a comprehensive picture of this metal-poor "heart" of the Milky Way. However, recent observations taking advantage of the
XP spectra from _Gaia_ DR3 have revealed an extensive, ancient, and metal-poor population of stars in the inner Galaxy, representing a significant stellar mass (Rix et al., 2022). The early phases of the Milky Way's star formation and enrichment are reflected in the distribution of old and metal-poor stars, which can be a mix of those that formed within the main in situ over-densities of the proto-Galaxy and those that formed in distinct satellite galaxies that later merged with the main body (Horta et al., 2021). The distinction between in situ formation and accretion can be seen in the abundance patterns of the stars, although at very early epochs, the distinction may become blurry due to the rapid coalescence of comparable mass pieces in major mergers.
Recent observational evidence has shed light on the chemical evolution of the transition period when the disk started forming in the Milky Way. Belokurov & Kravtsov (2022) identified a metal-poor component in the Milky Way called _Aurora_ from the APOGEE survey. This component is kinematically hot, with an approximately isotropic velocity ellipsoid and a modest net rotation. They revealed that the in-situ stars in Aurora exhibit a large scatter in elemental abundance ratios and the median tangential velocity of the in-situ stars increases sharply with increasing metallicity when [Fe/H] is between -1.3 and -0.9. The chemical scatter suddenly drops after this period, signalling the formation of the disk in about one to two Gyr. They proposed that these observed trends in the Milky Way reflect generic processes during the early evolution of progenitors of Milky-Way-sized galaxies, including a period of chaotic pre-disk evolution and subsequent rapid disk formation. Interestingly, many of the most metal-poor in situ stars preceding the disk populations in their sample have lower [Mg/Fe] than the traditional high [Mg/Fe] associated with old stars in the Galaxy (see their figure 6 and 7).
Conroy et al. (2022) extended the search for in situ halo stars as metal-poor as [Fe/H] = -2.5 in the H3 survey (Conroy et al., 2019) and revealed that [\(\alpha\)/Fe] gradually declined at low metallicity and rose up around [Fe/H] = \(-\)1.3 (see their figure 1). (Rix et al., 2022) took advantage of the XP spectra from _Gaia_ DR3 and derived reliable metallicity estimates for about two million bright stars, including 18,000 stars with \(-\)2.7 \(<\) [M/H] \(<\)\(-\)1.5. This massive sample allowed them to present the most comprehensive collection of metal-poor in situ stars in the Milky Way. They showed that the observed [\(\alpha\)/Fe]-rise is robust even for stars on near-circular orbits in their sample supplemented by [Mg/Fe] from APOGEE (their figure 7). Despite using samples from different surveys and selection methods, all of their works showed an unexpected [\(\alpha\)/Fe]-rise between [Fe/H] = -1.3 and -1 where [Mg/Fe] temporarily drops to an intermediate value between the high- and low-[\(\alpha\)/Fe] sequence in the disk.
The decline in [\(\alpha\)/Fe] is expected in all galaxies in time as remnants from intermediate-mass stars (\(\sim 3-8\) M\({}_{\odot}\)) explode as SNe Ia and release iron-peak elements unless an increasing amount of massive stars are continually evolving as CCSNe and releasing \(\alpha\)-elements to balance [\(\alpha\)/Fe] due to the rarity of massive stars. However, it is surprising to witness an increase in [\(\alpha\)/Fe] after it has started to drop as shown in recent observations. This signals the introduction of a considerable amount of \(\alpha\)-elements into our Galaxy after SNe Ia have made an impact on the composition of the ISM. The "simmering" phase scenario was proposed by Conroy et al. (2022) to explain the rise in [\(\alpha\)/Fe]. They fixed the inflow rate constant and adopted a low SFE as [\(\alpha\)/Fe] naturally declined due to the onset of SNe Ia to avoid forming too many metal-poor stars. As [\(\alpha\)/Fe] reached the lowest point, they increased the SFE in the model by twenty-five times. Many massive stars form and evolve as a result and CCSNe dominate the nucleosynthesis process causing [\(\alpha\)/Fe] to rise. However, adjusting the SFE is not the only way to increase short-term star formation and a twenty-five-fold increase is rare in isolated galaxies and requires specific galaxy interactions and mergers (Di Matteo et al., 2008). Another feasible scenario is that the [\(\alpha\)/Fe]-rise was a symptom of fluctuations in the inflow history. The gas reservoir was kept small
Figure 1: Flowchart illustrating scenarios of early Milky Way chemical evolution. The scenario capable of producing an [\(\alpha\)/Fe]-rise is highlighted in green and the rest in red. In summary, the additional star formation required to raise [\(\alpha\)/Fe] can be achieved by increasing the SFE, inflow rate, or both. However, increasing the SFE is ineffective if no gas sustains star formation. If the gas already exists as a massive gas reservoir before the parameter change, it is difficult to change the abundance in the model. It is preferable for the inflow to join the model after the parameter change.
as [Fe/H] rose and [\(\alpha\)/Fe] declined and a large amount of fresh gas was brought in through inflow, which achieved a similar effect as changing the SFE. There are additional benefits to this scenario. The SFE could be high throughout the entire proto-Galaxy phase, facilitating the rapid rise in [Fe/H]. Fuelling star formation with additional fresh gas also reduces the risk of running out of gas, unlike increasing the SFE. The reasoning process is summarized in Figure 1, which we will revisit after presenting our results.
This work aims to investigate the cause behind the [\(\alpha\)/Fe]-rise comprehensively with a galactic chemical evolution (GCE) model. GCE models are a computationally efficient approach to studying the evolution of galaxies, particularly their elemental abundances. They use parametric empirical laws to trace the evolution of abundances without directly modelling star formation and gas accretion history as performed in cosmological simulations. They have managed to replicate the age-metallicity and age-[\(\alpha\)/Fe] relationship, the stellar density variation in the [Fe/H]-[\(\alpha\)/Fe]-plane as a function of positions in the Milky Way (Minchev et al., 2018; Haywood et al., 2019; Sharma et al., 2021; Johnson et al., 2021; Chen et al., 2023). The remainder of this paper is organized as follows: Section 2 gives an introduction to GCE models and briefly describes the ingredients in our model. Section 3 shows the parameter distribution for the models that satisfy part of all of the descriptions of the observed [\(\alpha\)/Fe] behavior. Section 4 discusses the implications of our results in light of recent work on the early Milky Way. Section 5 provides a summary of our results.
## 2 Model
Galactic Chemical Evolution (GCE) models utilize a set of parameters guided by empirical physical laws to simulate the chemical evolutionary trajectory of galaxies. The synthesis of new elements within stars and the subsequent release and recycling of gas consisting of these newly produced elements into star formation are critical components of these models. Further mechanisms such as accretion/inflow (introducing fresh gas into the model) and outflow (removing existing ISM) can directly or indirectly shape the chemical evolution depicted by these models. The computational time required to run these models is a fraction of what it takes to trace chemistry in cosmological simulations. Therefore, they allow us to quickly sample an extensive range of parameters to examine the impact of various mechanisms or events on the chemistry of galaxies.
The model utilized in this work is originally developed by Andrews et al. (2017) named _flexCE_. We kept most of the original design but updated a few ingredients. It has many features that make it ideal for exploring the physical conditions of galaxies through elemental abundances. First, it has a multi-phase ISM composed of a cold and warm component, thus relaxing the assumption of instantaneous recycling in most GCE models. The newly synthesized nucleosynthesis yields are not immediately returned to the cold ISM for the next round of star formation. Instead, they are stored in the warm ISM which cooled gradually over time. Second, it has a physical implementation of star formation and evolution. The amount of star formation activity in any given step is determined by the amount of cold ISM at the time and the stars are represented in stellar mass bins with lifetimes. The SFH in the model is regulated by the mechanisms and thus self-consistent. The original stellar lifetimes only depended on the progenitor mass through an analytic function. Instead, we sourced stellar lifetimes from PARSEC-1.2S isochrones of various progenitor masses and metallicities (Bressan et al., 2012). The tracking of stellar lifetimes is important for studying the nucleosynthesis inside low-mass stars, including the production of white dwarfs and in turn SNe Ia.
Third, it uses a complete suite of nucleosynthesis tables. We have updated the model with the most up-to-date tables from Kobayashi et al. (2020) and included magnetorotational supernovae (MRSNe). This allows us to trace up to 83 elements in the model. Lastly, the model has a large selection of free parameters that allow us to fine-tune the strengths of various mechanisms. We can prescribe a function that controls the inflow of fresh gas over time and adjust the mass-loading factor regulating the outflow from star formation and supernovae. The original model assigned the inflowing gas to the cold ISM when it joins the model, but we switched it to the warm ISM so the fresh gas mixes with the existing enriched warm. The cooling of the inflowing gas is governed by the same cooling timescale for the existing warm ISM. This change improves the smoothness of chemical evolutionary tracks when a large amount of gas with a different composition joins the model. Section 2.1 offers a brief description of the model for readers not familiar with the original version. More details can be found in Chen et al. (2023) where a multi-zone version of this model replicated the variation of the [Fe/H]-[\(\alpha\)/Fe] density distribution in various locations of the cross-section plane of the Milky Way.
### Setting up our GCE model
In order to investigate the [\(\alpha\)/Fe] behavior at the outset of the Milky Way's evolution, our model is set to run for 1.8 Gyr. This timing is designed to accommodate a subsequent high-[\(\alpha\)/Fe] population that could potentially be as ancient as twelve billion years old (Xiang and Rix, 2022). We divided the time frame into two phases, the first one lasting one Gyr allowing [\(\alpha\)/Fe] to decline and the second one lasting 800 Myr as [\(\alpha\)/Fe] rises. Each time step corresponds to \(dt=30\) Myr, reflecting the lifespan of the longest-living progenitors of CCSNe. Given that the proto-galaxy was likely considerably smaller than the current Milky Way, our circular box is assigned a radius of \(R=3\) kpc. The size of the box is only used for calculating the gas density in our model and does not reflect the physical size of the proto-Galaxy or the distribution of materials within it. The yield tables from Kobayashi et al. (2020) linearly interpolate across the mass bins and then along 1000 grid points in metallicity, ranging from \(Z=0\) to \(Z=0.06\). The interpolated yields at the nearest metallicity grid point for a stellar mass bin are retrieved when a stellar bin reaches its lifespan, or when a white dwarf explodes.
We now initialize the components in our GCE model that will house the chemical elements: stars, cold ISM, and warm ISM. The stars are represented by stellar bins ranging from 0.1 to 50 M\({}_{\odot}\), following the initial mass function (IMF) defined below:
\[\xi(m)\propto\begin{cases}m^{-1.3}&0.1\leq\text{m}<0.5\text{M}_{\odot}\\ m^{-2.3}&\text{m}\geq 0.5\text{M}_{\odot}\end{cases} \tag{1}\]
For stellar bins with masses less than 9 M\({}_{\odot}\), the bin width is 0.1 M\({}_{\odot}\). For those exceeding 9 M\({}_{\odot}\), the bin width expands to 1 M\({}_{\odot}\). These bin sizes correspond to the original progenitor mass grid points in the nucleosynthesis tables. Stars with a mass greater than 9 M\({}_{\odot}\) only survive for less than 30 Myr and will end their lifetime within a single time step. New stellar bins are created to contain the mass during star formation at each step, and we monitor the remaining mass over time. In addition to mass, we also log the gas composition encapsulated within the stars at the moment of formation. This chemical composition is updated when a stellar bin expires, according to the interpolated yield tables. When a star
with a mass between 3.2 and 8.5 M\({}_{\odot}\) dies, we reference its remnant mass in the yield tables and add it to a white dwarf reservoir, which is subsequently used to calculate the number of type Ia supernovae (SNe Ia). Both the cold and warm ISM components include 83 entries that correspond to the 83 elements present in our nucleosynthesis tables.
Unlike AGBs and CCSNe which release the yields at the end of their progenitors' lifetime, SNe Ia experience an additional delay time after the formation of white dwarfs (WDs). The total mass of WDs in the galaxy, originating from progenitors with masses between 3.2 and 8.5 M\({}_{\odot}\), divided by the Chandrasekhar limit, determines the maximum number of potential SNe Ia in the model. This number is multiplied by a fraction \(f_{\rm SNIa}\) and an exponential delay term \(dt/t_{\rm scale,SNeIa}\), where \(t_{\rm scale,SNeIa}\) is the delay timescale for SNe Ia.
\[N_{\rm SNIa,i}=f_{\rm SNIa}\frac{m_{\rm WD,i}}{1.44{\rm M}_{\odot}}\frac{dt}{t_ {\rm scale,SNeIa}} \tag{2}\]
The SNe Ia yields utilized in our model are metallicity-dependent and are thus interpolated and applied in a similar fashion to the AGB and CCSNe yields. In summary of the timescales of the three major nucleosynthesis channels, CCSNe explode within thirty Myr (one time step), followed by AGBs from intermediate-mass stars over a few Myr years to a few Gyrs, and lastly succeeded by their WD remnants initiating a SNe Ia.
The SFR in our model is computed using the Kennicutt-Schmidt (KS) law (Schmidt, 1959; Kennicutt, 1998; Kennicutt & Evans, 2012), as represented below:
\[\frac{d\Sigma_{*}}{dt}=\epsilon_{\rm SF}\Sigma_{g}^{1.4}\sim\epsilon_{\rm SF} \ \left(\frac{m_{i,\rm cold}}{\pi R^{2}}\right)^{1.4} \tag{3}\]
Here, \(m_{i,\rm cold}\) represents the quantity of cold ISM available in the box at time step \(i\). The radius of our GCE box is used for calculating the gas density and in turn the SFR in our model. The materials in the model are assumed to be uniformly distributed. The size of the proto-Galaxy could be rapidly changing but constraining it would require some understanding of the rate of chemical evolution. Here we only aim to capture the average effect of mechanisms. We have set the SFE, \(\epsilon_{\rm SF}\), a free parameter for the star formation mechanism so we decided to fix \(R\).
The newly formed stellar mass is distributed to stellar mass bins following the IMF mentioned previously, and the corresponding amount of cold ISM is locked inside these stars. Upon reaching their stellar lifetimes and releasing the gas enriched by nucleosynthesis, 1% is allocated to the cold ISM, 79% to the warm ISM, and the remaining is assumed to be lost from the model. The warm ISM cools exponentially over a timescale of \(t_{\rm cool}\), during which a fraction equal to \(dt/t_{\rm cool}\) is transferred to the cold ISM. The cooling of the warm ISM is performed at the beginning of each time step. The process of star formation and evolutionary events will cause a portion of the cold ISM to transition into warm ISM through feedback. We set the mass loading factor \(\eta\) at three, which implies that a quantity of cold ISM, equivalent to three times the mass of gas involved in star formation and newly produced yields, will be instantaneously heated into warm ISM that will eventually be recycled for star formation. Finally, during each time step, an influx of fresh infalling gas will replenish the warm ISM.
### Exhaustive parameter exploration
The conventional approach to creating a GCE model that reproduces specific chemical evolutionary tracks involves manually choosing parameter values and coming up with a standard matching model. However, this method demands strong observational constraints to restrict the resulting GCE scenario, such as the age-abundance relation and the stellar distribution of specific abundances. In the case of the Milky Way disk, we have a large amount of observational data to constrain our model. As for the pre-disk Milky Way, our observational data focus on a small number of metal-poor stars. We have little knowledge about the star formation history or the overarching properties of the Galaxy during the first two Gyr after its birth before the formation of the disk. As a result, we identified five free parameters and ran our model with randomly generated parameter values until we are able to map the distribution of feasible parameter combinations. The goal is to explore the physical conditions that may have led to the observed [\(\alpha\)/Fe]-rise after [Fe/H] \(\sim-1.3\) and [Mg/Fe] \(\sim 0.2\). The feasible ranges for these parameters are chosen to encompass the values typically utilized for Milky Way studies, as well as those identified during our initial exploration.
The free parameters are:
* the initial mass of cold interstellar medium ISM, \(m_{\rm 0,cold}\);
* the fraction of white dwarfs arising from progenitor stars with initial masses within the range of (3.2, 8.5)M\({}_{\odot}\), eligible for SNe Ia, \(f_{\rm SNIa}\);
* the cooling timescale of warm ISM, \(t_{\rm cool}\);
\begin{table}
\begin{tabular}{l l l} \hline \hline Parameter & Meaning & Value \\ \hline \(R\) & Radius of the box & 3 kpc \\ \(N\) & Power in star formation law & 1.4 \\ \(m_{\rm 0,warm}\) & Initial warm gas mass & 0 M\({}_{\odot}\) \\ \(r_{\rm min,SNeIa}\) & Minimum time delay before first SNe Ia & 150 Myr \\ \(t_{\rm scale,SNeIa}\) & Timescale for decay of SNe Ia & 1.5 Gyr \\ \(f_{\rm direct}\) & Fraction of supernovae ejecta directly into cold gas & 0.01 \\ \(f_{\rm eject}\) & Fraction of supernovae ejecta lost & 0.2 \\ \(\eta_{\rm SF}\) & Mass-loading factor for gas heated by star formation & 3.0 \\ \(\eta_{\rm SN}\) & Mass-loading factor for gas heated by supernovae & 3.0 \\ \(Z_{\odot}\) & Metallicity of the Sun & 0.0156 \\ \hline \(\epsilon_{\rm SF}\) & Star formation efficiency constant & \(10^{-11\cdots 9}\) \\ \(m_{\rm 0,cold}\) & Initial cold gas mass & \(10^{7.5\cdots 9.5}\) M\({}_{\odot}\) \\ \(\dot{m}_{\rm inflow}\) & Inflow rate of fresh gas & \(0-5\) M\({}_{\odot}\) per year \\ \(t_{\rm cool}\) & Cooling timescale of warm gas & \(10^{8-10}\) yr \\ \(f_{\rm SNIa}\) & Fraction of white dwarfs from stars within (3.2, 8.5) M\({}_{\odot}\) that turn into SNe Ia & 0.05-0.2 \\ \hline \end{tabular}
\end{table}
Table 1: The values of fixed and free parameters in our GCE model
* the SFE, \(\epsilon_{\rm SF}\); and
* the inflow rate, \(\dot{m}_{\rm inflow}\).
Each of the free parameters plays a crucial role in our GCE model.
* The initial mass of the cold ISM, \(m_{\rm 0,cold}\), sets the recorded value of [Fe/H] after the first round of star formation in the model. We permit it to be between \(10^{7.5}\) and \(10^{9.5}\) M\({}_{\odot}\).
* The fraction of Type Ia supernovae, \(f_{\rm SNIa}\), is estimated to be around 5% (Maoz et al., 2012), but the actual proportion remains uncertain at high redshift. We permit it to be between 5% and 20%. This key parameter influences the evolution of [Fe/H] and [\(\alpha\)/Fe] after several hundred million years when a considerable amount of SNe Ia start producing iron.
* \(t_{\rm cool}\) controls the rate at which newly synthesized metal is returned to the cold ISM for star formation and ranges between \(10^{8}\) to \(10^{10}\) years in our model. Determining a cooling timescale for our warm ISM is challenging because it realistically depends on factors such as temperature and metallicity (Krumholz, 2012). However, it should typically range from a few hundred million years to a few billion years.
* \(\epsilon_{\rm SF}\) controls the efficiency of the process through which cold ISM is converted into stars. The SFE constant can be as low as \(10^{-11}\) (approximately 1% per billion years) to cover the possibility of a low SFE scenario. It can also reach \(10^{-9}\), comparable to the values estimated by Bigiel et al. (2008) and Leroy et al. (2008) in nearby galaxies.
* The inflow rate introduces fresh gas into the model and ranges between 0 and 5 M\({}_{\odot}\) per year. The continuous inflow of pristine or metal-poor matter hinders the increase of [Fe/H] but fuels long-term star formation.
Figure 2: The aggregate effect of the five free parameters on the chemical evolutionary tracks. Each panel represents one free parameter in the GCE model. The SFE (\(\epsilon_{\rm SF}\)) and inflow rate (\(\dot{m}_{\rm inflow}\)) are allowed to change after one Gyr when we expect [\(\alpha\)/Fe] to reverse and thus represented in two panels respectively before and after the turning point. Each panel contains four tracks averaged across [Mg/Fe] within the four quartiles of the parameter range. The median value of each parameter is shown in the legend with the corresponding colour and line style. As the rest of the parameters are drawn randomly in each run, the effect of the other parameters is expected to even out, allowing us to observe the effect of a single parameter.
The values of the fixed and free parameters are summarized in Table 1. Parameters \(m_{\rm 0,cold}\), \(t_{\rm cool}\), and \(\epsilon_{\rm SF}\) are chosen from a log-uniform distribution, allowing their effects on log-scale elemental abundances to be better observed. The remaining two parameters are selected from a uniform distribution. The parameter values are drawn independently randomly each time a new run begins without prior memory. However, replicating the [\(\alpha\)/Fe]-rise is not possible if the parameter values remain constant. Once Type Ia supernovae commence, [\(\alpha\)/Fe] decreases monotonically, necessitating additional \(\alpha\)-elements through infall or enhanced star formation to reverse the trend. Consequently, our model allows for two channels for the production of additional \(\alpha\)-elements through enhanced star formation activity. The first channel is through the increase in the SFE. Provided that there is sufficient cold ISM in the model to sustain star formation, the sudden rise in the SFE can lead to a significant amount of CCSNe over a short period of time. Otherwise, the cold ISM is depleted and no further star formation activity is present in the model. The second channel is through gas accretion into the model. However, if we had kept the inflow gas pristine, the increased inflow would reduce the metallicity of the ISM, which was not observed. We changed its [Fe/H] ratio to -1.35 after [\(\alpha\)/Fe] dropped to the lowest point, which is lower than our target [Fe/H] = 1.3 after one Gyr. We assumed the inflow gas at this epoch to be \(\alpha\)-enhanced and assigned its [Mg/Fe] ratio based on pure CCSNe yields (\(\sim 0.41\)). The additional fresh gas can also boost the short-term SFR and give rise to CCSNe, even though the SFE does not necessarily rise to facilitate active star formation. Although \(\alpha\)-enhanced gas could bring in some \(\alpha\)-elements, the inflow gas is very metal-poor and contain little metal. Thus, the rise in [\(\alpha\)/Fe] is primarily driven by star formation.
## 3 Results
Before we dig into the [\(\alpha\)/Fe]-rise, we will look at each parameter to understand its effect on the chemical tracks. Figure 2 shows the median tracks for the four quartiles of each parameter value within its range. Since \(\epsilon_{\rm SF}\) and \(\dot{m}_{\rm inflow}\) are allowed to change after [\(\alpha\)/Fe] reaches the lowest value, there are two panels for each of these two parameters to showcase the tracks before and after the change. For each parameter in question, we divide its parameter values into four quartiles and obtain a median track over [\(\alpha\)/Fe]. We refer to the point where [\(\alpha\)/Fe] reaches the lowest value in the [Fe/H]-[\(\alpha\)/Fe]-plane as the [Fe/H]-[\(\alpha\)/Fe]-knee, which divides the chemical evolutionary history we study into two phases. Here are the main observations and the rationales behind them from Figure 2:
* Increasing the initial mass of cold ISM results in a [Fe/H]-[\(\alpha\)/Fe]-knee that is higher in [Fe/H] and lower in [\(\alpha\)/Fe]. Models with more massive initial cold ISM experience have a stronger initial burst of star formation and thus reach a higher metallicity. More active initial star formation also translates to more early SNe Ia, causing [\(\alpha\)/Fe] to decline sooner
* Increasing \(f_{\rm SNIa}\) results in a [Fe/H]-[\(\alpha\)/Fe]-knee that is lower in [\(\alpha\)/Fe] and slightly higher in [Fe/H]. A higher \(f_{\rm SNIa}\) leads to more SNe Ia and more active iron production. The additional iron translates to higher [Fe/H] and lower [\(\alpha\)/Fe].
* Increasing \(\epsilon_{\rm SF}\) during the first phase results in a [Fe/H]-[\(\alpha\)/Fe]-knee that is higher in [Fe/H] and lower in [\(\alpha\)/Fe]. Models with a higher SFE transform cold ISM into stars and in turn metals more efficiently and thus are more capable of reaching high [Fe/H]. The more efficient star formation also leads to more early SNe Ia and causes more iron to be produced sooner when given the same gas accretion history.
* \(\epsilon_{\rm SF}\) during the second phase has no aggregate effect on the [Fe/H]-[\(\alpha\)/Fe] tracks. The values of \(\epsilon_{\rm SF}\) before and after the [Fe/H]-[\(\alpha\)/Fe]-knee are independent as both are randomly chosen. \(\epsilon_{\rm SF,after}\) is strongly coupled with other parameters in controlling the star formation mechanism. The SFR does not increase simply because the SFE is turned up. It also depends on whether there is a sufficient amount
Figure 4: The density distribution of tracks after each selection criterion is incrementally applied to our GCE runs. Tracks in the top panel reach [Fe/H] of \(-1.25\pm 0.02\) after one Gyr. Tracks in the middle panel reach [Mg/Fe] of \(0.22\pm 0.02\) after one Gyr in addition to reaching [Fe/H] of \(-1.25\pm 0.02\) after one Gyr. Tracks in the bottom panel reach [Mg/Fe] of \(0.31\pm 0.02\) at the last step, besides satisfying the other two criteria. The lines correspond to three key abundance ratios identified from the median [Mg/Fe]-trend in Figure 3 used in the selection criteria. In each panel, the applied criterion is shown in dashed lines and the rest in dotted lines.
Figure 3: Selected in situ metal-poor stars from the H3 survey in the [Fe/H]-[Mg/Fe]-plane. The moving median of [Mg/Fe] is calculated along [Fe/H] with a window size of forty and shown in red. The three black dashed lines correspond to three key abundance ratios identified from the median track, [Fe/H] = -1.3, [Mg/Fe] = 0.25, [Mg/Fe] = 0.31.
of gas in the GCE model to sustain the additional star formation activity. The mass of the cold gas reservoir depends on the inflow history before the parameter change and the newly adopted inflow rate after.
* Inflow rate before the [Fe/H]-[\(\alpha\)/Fe]-knee has no significant effect on the tracks in the [Fe/H]-[\(\alpha\)/Fe]-plane, except when it is very small. Similar to \(\epsilon_{\rm SF,after}\), \(\dot{m}_{\rm inflow,before}\) is heavily coupled with other parameters. The elemental abundances reflect the balances of nucleosynthesis channels. Inflow only indirectly affects these channels by influencing the SFH. When the inflow rate is reasonably high, the SFE or \(f_{\rm SNIa}\) could be low so the star formation activity is suppressed. However, when \(\dot{m}_{\rm inflow,before}\) is very small, the GCE model can be treated as a closed box and thus its chemical enrichment is more effective, evidenced by the higher [Fe/H] and lower [Mg/Fe] of the blue track.
* Inflow rate after the [Fe/H]-[\(\alpha\)/Fe]-knee affects how high [\(\alpha\)/Fe] can rise during the second phase. Regardless of the SFH during the first phase, the sudden arrival of fresh gas inevitably causes a large number of massive stars to form and evolve over a short period of time, reversing the declining trend of [\(\alpha\)/Fe].
Although all of the free parameters influence chemical evolution collectively, some parameters can become redundant or crucial, depending on the circumstances. We identify three key elemental abundances from the H3 in situ sample to characterize the [Fe/H]-[\(\alpha\)/Fe]-track we are going to study. Figure 3 shows the [Fe/H]-[Mg/Fe]-plane. A moving median-[Mg/Fe] track is fitted to the sample along [Fe/H] with a window size of forty to reveal the trend in [Mg/Fe]. A similar trend can be seen in APOGEE data. The three dashed lines correspond to the three key elemental abundances from the [Mg/Fe]-trend, [Fe/H] = -1.3, [Mg/Fe] = 0.25, [Mg/Fe] = 0.31. We create three selection criteria from these three abundance ratios to isolate models that replicate the rise in [Mg/Fe]:
* [Fe/H] should reach \(-1.3\pm 0.02\) dex after one Gyr;
* [Mg/Fe] should reach \(0.25\pm 0.02\) dex after one Gyr;
* [Mg/Fe] should reach \(0.31\pm 0.02\) dex at the last time step.
These values are arbitrarily chosen by us based on visual checks. The abundances have a large spread in this part of the [Fe/H]-[\(\alpha\)/Fe]-plane so it is difficult to pinpoint the key abundance ratios. We leave a small margin for each key ratio because otherwise it could introduce large variations in the distribution of feasible parameters and prevent us from drawing physical insights. Nevertheless, the takeaway from this work is the qualitative trends of the parameter values that are immune to small adjustments of these ratios. Figure 4 shows the density distribution of tracks in [Fe/H]-[Mg/Fe] when each criterion is incrementally applied to all of the tracks generated from within our parameter space. In the top panel, only condition 1) is applied and 6,279 tracks (2.5% of the total number of runs) remain after the selection. The tracks significantly diverge after [Fe/H]=-2.25 when iron from SNe Ia starts to influence [Mg/Fe]. The middle and bottom panels of Figure 4 show the distribution of tracks in [Fe/H]-[Mg/Fe] when the first two criteria and all three are applied respectively. There are 1,456 tracks (0.58%) in the middle panel and only 110 (0.044%) in the bottom. We will walk through the main results from each criterion that is subsequently applied in the following sections in order.
### The target [Fe/H] sets the basic conditions
Figure 5 displays the distribution of free parameter values (\(m_{\rm 0,cold}\), \(f_{\rm SNIa}\), \(t_{\rm cool}\), \(\epsilon_{\rm SF}\), \(\dot{m}_{\rm inflow}\)) that satisfy the first criterion ([Fe/H] = \(-1.3\pm 0.02\) after one Gyr). The diagonal panels showcase one-dimensional histograms of the free parameters, while off-diagonal panels feature joint distribution smoothed via kernel density estimations. Examining the diagonal histograms, no preference for \(f_{\rm SNIa}\) and \(\epsilon_{\rm SF}\) emerges as their distribution remains uniform, while a preference for \(m_{\rm 0,cold}\sim 10^{9}\) M\({}_{\odot}\) and \(t_{\rm cool}\sim 10^{9}\) yr is apparent. As we have seen in Figure 2, models with a low SFE are unlikely to produce enough iron to meet the desired [Fe/H] at the [Fe/H]-[\(\alpha\)/Fe]-knee as we have very few models with a SFE less than \(10^{-10.5}\).
Correlations among the remaining free parameter values become evident upon inspecting the joint distributions. \(m_{\rm 0,cold}\), represented in the first column, determines the amount of star formation and consequently [Fe/H] after the first step in the model. We do not see any significant relationship between \(m_{\rm 0,cold}\) and \(f_{\rm SNIa}\), but we can see a positive correlation between \(m_{\rm 0,cold}\) and \(t_{\rm cool}\) when \(m_{\rm 0,cold}\) exceeds \(10^{8.7}\)M\({}_{\odot}\). Given that we set the power in the Kennicutt-Schmidt law to 1.4, a larger volume of cold ISM results in a proportionally larger increase in SFR, thereby affecting the amount of metal produced in the first step and thus the first recorded [Fe/H] in the model. The cooling of the warm ISM needs to be extended accordingly to prevent [Fe/H] from surpassing our target. When \(m_{\rm 0,cold}\) falls below \(10^{8.7}\)M\({}_{\odot}\), the slope is less steep because the metal yield from a low-mass initial cold ISM reservoir is not large enough to require additional cooling; in this case, the [Fe/H] progression rate can be modulated by other parameters.
Further down the first column, we can see a negative correlation between \(m_{\rm 0,cold}\), and \(\epsilon_{\rm SF}\). Contrary to the cooling timescale which delays the return of metals into the cold ISM, a high SFE accelerates the conversion of cold ISM into stars and in turn metals within the model. When a massive reservoir of cold ISM (high \(m_{\rm 0,cold}\)) is initially present in the model, the metallicity after one step of star formation is higher and thus the SFE should be restrained to prevent [Fe/H] from surpassing our specified value. Conversely, with an insignificant initial mass of cold ISM (low \(m_{\rm 0,cold}\)) and a dependency on infall to accumulate cold ISM, a high SFE is necessary to facilitate the increase in [Fe/H] over the first one Gyr to reach our [Fe/H] target in time. Nevertheless, there is a possible low-SFE sequence below \(\epsilon_{\rm SF}=10^{-10}\) that exhibits a weaker correlation. We will revisit this sequence in the next subsection.
The bottom panel on the first column shows a significant positive correlation between \(m_{\rm 0,cold}\) and \(\dot{m}_{\rm inflow}\). Star formation only converts a few percentages of cold ISM into stars per Gyr, resulting in only a minuscule amount of metal production relative to the amount of inflow gas. Quantitatively, based on the nucleosynthesis tables and the IMF we utilized, every solar mass of core-collapse supernova (CCSN), the primary production site of iron before the onset of SNe Ia, produces about \(6.3\times 10^{-4}\) M\({}_{\odot}\) of iron. The inflow gas during the first one Gyr is assumed to be pristine. As the cold ISM reservoir is much less massive at this time, even a few M\({}_{\odot}\) of pristine gas per year can significantly dilute the metal present in the model. Hence, infalling gas primarily inhibits the increase of [Fe/H] at this time. When \(m_{\rm 0,cold}\) is high and the early star formation burst launches [Fe/H] at a higher value, the inflow rate escalates correspondingly to decelerated the evolution of [Fe/H] subsequently.
There is a tight positive correlation between \(t_{\rm cool}\) and \(\epsilon_{\rm SF}\). In order to reach the same [Fe/H], the higher the SFE we adopt for the model, the longer we need to store the nucleosynthesis yields so that the same amount of metals is recycled into the cold ISM. This relationship only extends to \(t_{\rm cool}\) as high as around one Gyr. When the cooling timescale is longer than one Gyr, the model is forced to adopt a high SFE and regulate the chemical evolution with
other parameters. The joint distributions in the remaining panels do not show any significant relationship. Although SNe Ia are typically analogous to iron production when we are studying the chemical evolution in the Milky Way disk, they have a substantial delay time and do not have any significant impact on [Fe/H] during this phase. However, as we will see soon, [\(\alpha\)/Fe] is heavily influenced by SNe Ia.
In conclusion, our requirement that [Fe/H] should reach \(-1.3\pm 0.02\) dex in one Gyr selectively favours models featuring a substantial initial cold ISM reservoir (\(m_{\rm 0,cold}\approx 10^{9}\)M\({}_{\odot}\)), a moderate cooling timescale for the warm ISM (\(t_{\rm cool}\approx 1\)Gyr), and a relatively elevated SFE (\(t_{\rm SF}>10^{-10}\)). There is a strong negative correlation between \(m_{\rm 0,cold}\) and \(e_{\rm SF}\) when \(m_{\rm 0,cold}\) is higher than \(10^{8.5}\) M\({}_{\odot}\). In addition, \(m_{\rm 0,cold}\) is positively correlated with \(t_{\rm cool}\) and negatively correlated with \(e_{\rm SF}\). \(e_{\rm SF}\) is positively correlated with \(t_{\rm cool}\). The "simmering" phase proposed by Conroy et al. (2022) is characterized by a low SFE and a large inflow rate. Their scenario is unlikely based on what we observe in Figure 5 at this stage. Next, we will see constraining [\(\alpha\)/Fe] affects the parameter distributions.
### The frequency of SNe Ia controls the fall of [\(\alpha\)/Fe]
We now explore the parameter space when an additional criterion is imposed on [Mg/Fe]. Models whose parameter distributions are
Figure 5: The distribution of parameter values of the models that reach [Fe/H] = \(-1.25\pm 0.02\) after one Gyr. The distribution of tracks generated with these parameter values in [Fe/H]:[Mg/Fe] is shown in the top panel of Figure 4. The columns and rows correspond to five free parameters in the bottom panel of Table 1, i.e. the initial mass of cold ISM (\(m_{\rm 0,cold}\)), the fraction of white dwarfs arising from progenitor stars with initial masses within the range of (3.2, 8.5)M\({}_{\odot}\) eligible for SNe Ia (\(f\)SN\({}_{\rm Ia}\)), the cooling timescale of warm ISM (\(t_{\rm cool}\)), the SFE constant (\(e_{\rm SF}\)), the inflow rate (\(u_{\rm inflow}\)). We only show the values of the last two parameters before the [\(\alpha\)/Fe]-rise here. The diagonal panels are one-dimensional histograms for each free parameter and the off-diagonal terms are two-dimensional joint distributions between the parameters smoothed by kernel density estimations. The constant inflow rate and the initial SFE of the “simmering” phase scenario is marked for reference.
displayed in green in Figure 6 are required to hit both [Fe/H] = \(-1.3\pm 0.02\) dex and [Mg/Fe] = \(0.25\pm 0.02\) dex at the one Gyr mark. As expected, characteristics identified in Figure 5 reappear in Figure 6. However, some novel features emerge in the new parameter space, notably in relation \(f_{\rm SNIa}\), the parameter controlling the frequency of SNe Ia. We observe a strong negative correlation between \(f_{\rm SNIa}\) and \(m_{\rm 0,cold}\), \(t_{\rm cool}\), and \(\epsilon_{\rm SF}\). Unlike the rest of the free parameters that affect all nucleosynthesis channels, \(f_{\rm SNIa}\) only targets SNe Ia. When \(f_{\rm SNIa}\) is low and the synthesized iron is barely sufficient to depress [Mg/Fe], a high \(m_{\rm 0,cold}\)/\(\epsilon_{\rm SF}\) is required to form as many stars as possible early on so that more SNe Ia explode within our time frame to bring [Mg/Fe] to our target. Similarly, a short \(t_{\rm cool}\) ensures that the little iron from SNe Ia is recycled into the cold ISM sooner. Conversely, when \(f_{\rm SNIa}\) is high, the three parameters must work in the opposite direction to prevent too much iron from SNe from lowering [Mg/Fe] too much. The model by Conroy et al. (2022) only has a cold ISM component and effectively has \(t_{\rm cool}=0\), which is one of the reasons why they consider the "simmering" phase feasible.
The histogram for \(f_{\rm SNIa}\) indicates that \(f_{\rm SNIa}\) is likely higher than 10%, but there is too much of a spread to pin down the exact value, unlike \(t_{\rm cool}\) which shows a strong preference for one Gyr. Meanwhile, we observe shifts in the relationships among the parameters identified earlier. The correlation between \(m_{\rm 0,cold}\) and \(t_{\rm cool}\) has become less significant. The high-SFE sequence in the \(m_{\rm 0,cold}\)-\(\epsilon_{\rm SF}\) plane has been eliminated, leaving behind the low-SFE sequence. A high SFE would form too many stars early on and result in more front-loaded SNe Ia and a faster decline in [\(\alpha\)/Fe]. We want to remain in the high-[\(\alpha\)/Fe] regime after one Gyr so a moderate SFE is preferred. Regarding the relationship between SFE and \(t_{\rm cool}\), the peak has also gravitational towards a more moderate SFE. During the Milky Way's first Gyr, although the number of SNe Ia is not sufficient to influence [Fe/H], it has a substantial effect on [\(\alpha\)/Fe].
Figure 6: The distribution of parameter values of the models that reach [Fe/H] = \(-1.25\pm 0.02\) and [Mg/Fe] = \(-0.25\pm 0.02\) after one Gyr in the same style as Figure 5. The distribution of tracks generated with these parameter values in [Fe/H]:[Mg/Fe] is shown in the middle panel of Figure 4.
### The rise of [\(\alpha\)/Fe] requires a small existing gas reservoir
Finally, we will examine the parameter distributions of the models that not only reached the [Fe/H]-[\(\alpha\)/Fe]-knee in the first phase but also managed to raise [\(\alpha\)/Fe] during the second phase. Models whose parameter distributions are displayed in red in Figure 7 are required to satisfy all three criteria to complete the [\(\alpha\)/Fe]-reversal. The most distinguishing feature among models achieving a higher [\(\alpha\)/Fe] value during the second phase is their inflow rate restricted to less than one M\({}_{\odot}\) per year during the first phase. As low-[\(\alpha\)/Fe] gas in a model accumulates through inflow during the first phase, the amount of \(\alpha\)-elements needed to raise [\(\alpha\)/Fe] increases. There are two channels to boost short-term star formation for additional \(\alpha\)-elements. The first is to accumulate a large gas reservoir and increase the SFE (the "simmering" phase). The second is to introduce a massive inflow episode relative to the existing gas reservoir to the model. The contours and histogram in the bottom row of 6 prefer the second scenario. This is not surprising as Figure 5 has indicated from the start that a relatively high SFE is required to meet the [Fe/H] ratio of the [Fe/H]-[\(\alpha\)/Fe]-knee. A large amount of low-[\(\alpha\)/Fe] gas present from the first channel makes it extremely difficult to raise [\(\alpha\)/Fe]. The relations among the parameters identified in Figure 6 have also been updated. \(m_{\rm 0,cold}\) and \(t_{\rm cool}\) become more significantly and strongly correlated. The low-SFE sequence in the \(m_{\rm 0,cold}\)-\(\epsilon_{\rm SF}\) plane has been replaced by a sequence exhibiting a strong correlation across the entire range. A low \(f_{\rm SNIa}\) (\(\approx 0.1\)) becomes the preferred value. We can observe some additional substructures, highlighting the possibility of sub-scenarios.
Examining the change in the SFE and inflow rate reveals that inflow is the primary driver behind the [\(\alpha\)/Fe]-rise. Figure 8 illustrates the ratios of the SFEs and inflow rates before and after the [Fe/H]-[\(\alpha\)/Fe]
Figure 7: The distribution of parameter values of the models that reach [Fe/H] = \(-1.25\pm 0.02\) and [Mg/Fe] = \(0.25\pm 0.02\) after one Gyr as well as reaching [Mg/Fe] = \(0.31\pm 0.02\) after 1.8 Gyr in the same style as Figure 5 and Figure 6. The distribution of tracks generated with these parameter values in [Fe/H]-[Mg/Fe] is shown in the bottom panel of Figure 4.
knee for models meeting all three criteria. The ratios are shown in logarithmic scale so zero denotes where the parameter values remain constant. Surprisingly, the SFE declined in about 60% of the models, as far as 90 % in some models, even though a higher SFE would facilitate the additional star formation to raise [\(\alpha\)/Fe]. This makes inflow much more important in raising the SFR, as all of the models experienced an increase in the inflow rate, as much as forty times in some models. We now have a complete picture of the conditions that caused the [\(\alpha\)/Fe]-rise. The proto-Galaxy started with an initial gas reservoir of moderate mass (\(10^{8.5}-10^{9}\) M\({}_{\odot}\)). It maintained a small gas reservoir with little gas inflow (\(<1\) M\({}_{\odot}\) per year) as [Fe/H] climbed and [\(\alpha\)/Fe] declined naturally. However, a massive amount of inflow joined the proto-Galaxy around [Fe/H] \(=-1.3\), causing [\(\alpha\)/Fe] to rise as a result of the enhanced star formation activity. Through the entire proto-galaxy epoch, the SFE is high (\(>10^{-10}\)) and the frequency of SNe Ia could be higher than that measured from the local group which is \(3-10\) % (Maoz et al., 2012; Maoz and Mannucci, 2012). The recycling of metal is relatively efficient (\(t_{\rm cool}\approx\) one Gyr).
## 4 Discussion
The purpose of this work is to identify the parameter combinations that will cause [\(\alpha\)/Fe] to rise in the [Fe/H]-[\(\alpha\)/Fe]-plane from a galactic chemical evolution model and infer the physical condition of the Milky Way before the formation of the disk. We are primarily dealing with two major nucleosynthesis channels in the [Fe/H]-[\(\alpha\)/Fe]-plane: CCSNe and SNe Ia. Since only CCSNe synthesize \(\alpha\)-elements, the most logical explanation behind the rise in [\(\alpha\)/Fe] is a boost in the SFR. The question then is what causes the increase in the SFR.
There are two channels to boost star formation in the model, which correspond to two scenarios of what happened when the disk formed in the Milky Way. The first channel is to increase the SFE. However, a higher SFE would not translate to a high SFR unless there is a substantial amount of ISM to sustain star formation. Thus, the first scenario entails a high inflow rate of fresh gas to build up a large gas reservoir before disk formation and an elevated SFE as soon as the disk forms. The second channel is to introduce additional inflow, which is metal-poor and \(\alpha\)-enhanced at this time, to expand the gas reservoir and make available more gas for star formation. However, this requires the existing reservoir to be limited in mass, or the new inflow rate becomes unrealistically large. The ingredients for these two channels are organized in the flowchart in Figure 1. We performed an experiment by running our model 250,000 times with randomly generated parameters, two of which, the SFE and inflow rate, were allowed to change after one Gyr of runtime was reached. The results preferred the second scenario under which the inflow was initially suppressed.
We can now clarify the outcomes of our flowchart in Figure 1 and identify the most likely scenario for the [\(\alpha\)/Fe]-rise. As we mentioned, the rise in [\(\alpha\)/Fe] is achieved by a corresponding rise in the SFR, which requires at least one of two parameters, the SFE and inflow rate, to rise. It is impossible to reverse the declining trend of [\(\alpha\)/Fe] if neither parameter increases. According to the narrative of the "simmering" phase, the inflow rate remains high during the entire proto-Galaxy phase while the enhanced SFE boosts the SFR. This scenario was deemed feasible due to the design of the model by Johnson and Weinberg (2020). Their model uses a return function to determine the amount of evolved stellar mass instead of tracking stellar lifetime and does not have a multi-phase ISM. These ingredients significantly expedite the production and recycling of metals. Conroy et al. (2022) doubled the yields of SNe Ia, ran the model for 3.7 Gyr (twice as long as ours), and boosted the SFE by twenty-five times to replicate the [\(\alpha\)/Fe]-rise. Our results show that a large gas reservoir makes it difficult for [Fe/H] to reach \(-1.3\) in one Gyr and difficult for [\(\alpha\)/Fe] to rise, even though the SFE rose as much as ten times within our predefined range in some runs of our model. Thus, the amount of gas in the gas reservoir must have remained low while [\(\alpha\)/Fe] was declining. Since there is no existing gas to sustain enhanced star formation, the inflow of fresh gas becomes a necessary condition, while the SFE can rise or fall, as long as it is above a certain threshold.
### Further constraining parameters of proto Milky Way
We adopted a limited time frame (1.8 Gyr) and stringent criteria to replicate the [\(\alpha\)/Fe]-rise in our GCE model in this work, i.e. [Fe/H] \(=-1.3\pm 0.02\), [Mg/Fe] \(=0.25\pm 0.02\) after one Gyr and [Mg/Fe] \(=0.31\pm 0.02\) after 1.8 Gyr. The time frame was chosen to accommodate the possibility of a twelve-Gyr-old thick disk. The three elemental ratios were chosen to constrain parameter confounding in order to more clearly identify the parameter combinations that would produce the [\(\alpha\)/Fe] trend we approximated from observations. Changing the key abundance ratios in the criteria or allowing larger margins could shift the parameter distributions, but it will not invalidate the qualitative trends we observed in Figure 5 and 6. However, as the precision of the abundance measurements improves or the age estimates for the stars in question become available, we expect to come up with a quantitative approximation of the track rather than three abundance ratios. We can then further fine-tune our parameter choices by controlling the accumulation rate of metal in the model. Nevertheless, there are a few ways to further refine our parameters with tools other than GCE models. The contours in Figure 6 reveal
Figure 8: The change in the parameter values of the SFE and inflow rate before vs. after [\(\alpha\)/Fe] reaches the lowest value for models that exhibited a desired rise in [\(\alpha\)/Fe]. The dashed line marks where the parameter remains constant. About sixty per cent of the models had a decline in the SFE while all of them had an increase in the inflow rate.
strong correlations among the free parameters. If we can pin down the cooling timescale of the warm ISM in the early universe, we could retrieve the rest of the "free" parameters from the correlations. Except for the inflow rate, the remaining four parameters are well constrained once one of them is defined to replicate the [\(\alpha\)/Fe]-rise.
There are other properties we can examine, besides the chemical evolutionary track. Although models with different SFEs can achieve the same key abundance ratios, their stellar chemical distribution and global properties are vastly different. Figure 9 shows the Metallicity Distribution Function (MDF) and the [\(\alpha\)/Fe] Distribution Function (ADF) of the models satisfying all three criteria for the [\(\alpha\)/Fe]-rise, with a colour coding corresponding to their SFE. We chose SFE for the colour coding because it is measurable from observations or simulations. Models with lower SFEs conspicuously contain fewer metal-poor high-[\(\alpha\)/Fe] stars, as the SFR is much lower in the initial stage immediately after the birth of the galaxy. Conversely, models boasting exceptionally high SFEs display a dual-peaked distribution. The cold ISM reservoirs in these models are exhausted by the initial star formation burst and require the replenishment of inflow before star formation can resume. The exact ratio between the two peaks and the width of the gap between them are determined by the parameters we assign to the model. Figure 10 shows the evolution of the global properties of the same models in Figure 9 over time, colour-coded again by SFE. Specifically, we show the mass of cold ISM, warm ISM, and stars as well as the mass ratio between stars and cold gas over running time, and the star formation history. Except for the stellar mass panel, the rest of the panels show strong correlations of the properties with the SFEs. As observation evidence becomes more abundant in the future or procedures are developed to reduce the effect of the selection function, it is possible for us to constrain the SFE more precisely based on the chemical distribution or the global property of the in-situ population in the early Milky Way. We will subsequently discuss the complexities of our findings in the context of prior relevant studies.
### The inflow that fueled the Milky Way
There are two physically distinct regimes of gas accretion onto galaxies: a "cold" filamentary mode in which warm gas (\(<10^{5}\) K) accretes along filaments that can penetrate to the inner regions of halos and a "hot" flow mode wherein hot diffuse gas in extended halos cools over time (Keres et al., 2005, 2009). The cold accretion mode is more common in smaller galaxies and the hot mode is more common in massive galaxies. The important role of gas accretion has been established by cosmological simulations but the properties of accreted gas have been difficult to quantify because the baryon cycle is a complex process that involves the continuous interplay of gas inflow, outflow, and star formation (Dekel et al., 2009; van de Voort et al., 2011; Lagos et al., 2014; Nelson et al., 2015; Correa et al., 2018; Mitchell et al., 2020; Wright et al., 2020). The prototypes of Milky Way analogs in cosmological simulations at high redshift experience rapid gas accretion through cold streams with large angular momentum which is subsequently transferred to the existing halo, contributing to the increased spin of these galaxies (Stewart et al., 2011; Sales et al., 2012; Danovich et al., 2015). However, before the filaments with different directions become aligned and settle into disks, Milky Way prototypes take the shape of spheroids characterized by an extended profile and violent kinematics (Rosdahl & Blaizot, 2012; Stewart et al., 2013; Obreja et al., 2013, 2019; Bird et al., 2013, 2021; Meng et al., 2019). As these galaxies accrete more gas, they become massive enough to support a hot halo which subsequently triggers the transition from "cold" mode accretion to "hot" mode or cooling mode accretion, accompanied by the formation of a gaseous disk (Dekel et al., 2020; Stern et al., 2021; Hafen et al., 2022; Gurvich et al., 2023).
Our work is closely connected to a series of recent works on the pre-disk Milky Way. Belokurov & Kravtsov (2022) first identified a large number of stars before the coherent disk in the Galaxy formed and named this population _Aurora_. The features of this population are consistent with the scenario under which stars form in cold filaments that are rapidly accreted onto the Galaxy. The stars have a large scatter in elemental abundances which could be caused by the diverse conditions under which nucleosynthesis took place. Its spheroidal spatial distribution and isotropic velocity ellipsoid are expected from simulated galaxies that went through the chaotic phase of evolution. Additionally, these stars showed a strong positive correlation between metallicity and tangential velocity, which is a signature of the filaments transferring their high angular momentum to our Galaxy. (Conroy et al., 2022) extended the metallicity coverage of this population, especially towards the metal-poor end, and presented as a more complete picture of the [\(\alpha\)/Fe]-rise. The dynamical aspect of these works is corroborated by Yu et al. (2023) that showed that the orbits of in situ stars are closely related to their respective formation epoch from simulated Milky Way-mass galaxies in FIRE. Our accretion scenario suggests that the cold accretion should take time to ramp up in the proto-Galaxy.
Our results offer additional evidence from the perspective of chemical evolution that the brief [\(\alpha\)/Fe]-rise in the [Fe/H]-[\(\alpha\)/Fe]-plane is a signature of massive inflow that ended the prototype phase of the Milky Way and initiated the physical process through which the disk later formed. Although our GCE model does not simulate the dynamical features of our Galaxy during its earliest epoch, it is flexible with a wide selection of parameters governing the chemical evolution to explore the conditions behind the [\(\alpha\)/Fe]-rise. Unlike traditional GCE studies that present one model that demonstrates the most likely scenario, we ran our model 250,000 times to generate the distribution of parameter values. Although it is reasonable to expect the SFE to rise to facilitate enhanced star formation, our model showed that inflow must be suppressed until the rise of [\(\alpha\)/Fe], while the SFE can rise or fall. This scenario agrees with the cold mode accretion that fueled the formation of the early Milky Way suggested by Belokurov & Kravtsov (2022). Rix et al. (2022) estimated their in situ metal-poor sample to have a stellar mass M\({}_{*}>10^{8}\) M\({}_{\odot}\) which also agrees with the median stellar mass of the proto-Milky Way from our model (see Figure 10).
### Contamination from accreted stellar structures
It is possible that the stellar sample exhibiting the observed [\(\alpha\)/Fe]-rise is contaminated by accreted stellar structures. Distinguishing in situ and accreted stars in the halo of the Milky Way is primarily done through the analysis of their kinematic and chemical properties, but it is unclear to what extent the properties are related to the birth origin of stars. The kinematic properties of stars, such as their orbits and angular momentum characteristics, can provide insights into their origin and formation history. Accreted stars in the halo are typically associated with tidal debris from disrupted satellite galaxies, such as Sagittarius and Gaia-Sausage-Enceladus, and have highly inclined or eccentric orbits (Belokurov et al., 2018; Helmi et al., 2018). The presence of disrupted star clusters in the stellar halo is also indicative of an accreted component (Malhan et al., 2018; Shipp et al., 2018; Bonaca et al., 2021). Nevertheless, even among in situ stars in simulated Milky Way analogs, there is a wide range of dynamical features (Yu et al., 2023). Elemental abundances can also be used to distinguish in situ and accreted stars in the halo. The
distinct abundance patterns of accreted stars in the [Fe/H]-[\(\alpha\)/Fe] or [Mg/Mn]-[Al/Fe] plane suggest an accreted origin, indicating that their birth material was enriched in a lower mass potential well, such as a satellite galaxy (Hawkins et al., 2015; Lee et al., 2015; Belokurov et al., 2019; Feuillet et al., 2021)). The question of distinguishing the birth origin of stars is probably moot during the proto-Galaxy phase. El-Badry et al. (2018) studied the distribution of ancient stars in simulated Milky Way analogs in detail and found that most of the oldest stars are accreted through hierarchical assembly. At \(z=5\) (\(\sim 1.1\) Gyr after the Big Bang), the main progenitors of the Milky Way analogs contained only half of the old stars in stellar mass or less. If the observed [\(\alpha\)/Fe]-rise were manifested in one or multiple accreted structures, they likely represented a significant portion of the stellar mass of the proto-Galaxy.
### Implications on future GCE studies of the Milky Way disk
One of the challenges to studying the Milky Way disk with GCE models is setting the initial conditions when the disk first formed. The standard approach consists of initializing a GCE model with a reservoir of pristine gas and growing the reservoir over time with inflow which is also pristine or very metal-poor ([Fe/H] \(<\) -1). Two issues arise from this approach. The first is that the time it takes for [Fe/H] to reach a relatively high value, around \(-0.5\) for the thick disk, is often longer than the age of the thick disk stars, around ten to twelve Gyr. Only 2.5% of our runs reached [Fe/H] \(\approx-1.3\) in one Gyr and fewer reached [Fe/H] \(\approx-1.3\) at 1.8 Gyr. The SFE and other parameters can be adjusted so that [Fe/H] rises faster. However, a substantial amount of gas needs to be added to the reservoir via inflow early to achieve a high SFR during the formation of the thick disk, which is a common ingredient needed to replicate the [\(\alpha\)/Fe]-bimodality in the disk (Kubryk et al., 2015; Minchev et al., 2013; Johnson et al., 2021; Chen et al., 2023). The combination of a high SFE and a massive gas reservoir leads to too many metal-poor stars in the model. The second issue is how [\(\alpha\)/Fe] managed to remain high for an extended period of time until the thin disk started forming. The [\(\alpha\)/Fe] value of the thick disk ranges between 0.2 and 0.4. We chose [Mg/Fe] in this work which has an [\(\alpha\)/Fe]-ceiling of around 0.41 according to the yields of CCSNe. The [\(\alpha\)/Fe] value in a large fraction of runs in the top panel of Figure 4 have gone down by at least 0.1 dex after one Gyr when the thick disk had started to form according to Xiang & Rix (2022). As more SNe Ia explode, [\(\alpha\)/Fe] would decrease even further and deviate from the ratio associated with the high-[\(\alpha\)/Fe] population.
The scenario for the proto-Galaxy outlined in our work solves the above issues and allows future GCE studies of the Milky Way disk to set initial conditions in a self-consistent approach. The inflow is initially suppressed in the model, helping metals accumulate without being diluted by metal-poor gas. We had a fixed time stamp for [Fe/H] to reach around -1.3 at one Gyr. However, the model is capable of reaching a higher metallicity within the same amount of time in Figure 2. In the middle panel of Figure 10, most of our models formed \(10^{8}\) M\({}_{\odot}\) of stars, less than 1 % of the total stellar mass of our Galaxy. The small gas reservoir facilitates the rapid rise of
Figure 10: The masses of cold gas, warm gas, stellar mass, the mass ratios between stars and cold gas over the course of the GCE runs, and the star formation histories for the models exhibiting an early [\(\alpha\)/Fe]-rise. The panels from top to bottom show the evolution of the total cold gas mass, warm gas mass, stellar mass, and the ratio between cold gas and stellar mass.
Figure 9: Stellar density distribution of [Fe/H] and [Mg/Fe] in models with an [\(\alpha\)/Fe]-rise at the last time step (1.8 Gyr of runtime) colour-coded by the SFE. Models with different SFEs have distinct chemical distributions.
[Fe/H] while keeping the SFR low. As for the second issue, [\(\alpha\)/Fe] inevitably drops below the desired high value, but we rejuvenate the gas reservoir with a higher inflow rate. Since the initial gas reservoir is limited in mass, the large amount of fresh gas leads to a new episode of star formation, raising [\(\alpha\)/Fe] and further elevating [Fe/H]. This scenario allows us to achieve a high [\(\alpha\)/Fe] value in our model even after two to three Gyr to correctly reproduce the elemental abundances of the thick disk.
## 5 Conclusion
In this study, we deepened our understanding of the [\(\alpha\)/Fe]-rise observed in H3 and APOGEE data through a comprehensive investigation using our GCE Model. The main findings are as follows:
* The [\(\alpha\)/Fe]-rise is principally driven by gas inflow, thus adding a chemical evolution perspective to theories surrounding the Milky Way disk's spin-up phase. Specifically, the ISM of the proto-Galaxy initially appeared isolated to facilitate a quick rise in [Fe/H] and later underwent rapid gas accretion.
* Contrary to prior studies, our results show that the SFE does not play a deterministic role in the [\(\alpha\)/Fe]-rise, even though in theory the rise in the SFE should facilitate the [\(\alpha\)/Fe]-rise. Interestingly, SFE could either increase or decrease, yet still result in the observed [\(\alpha\)/Fe] rise under certain conditions.
* The models suggest that the earliest proto-Galaxy had an initial gas reservoir ranging from \(10^{8.5}\) to \(10^{9}\)M\({}_{\odot}\), along with efficient cooling of the warm ISM on a timescale of one Gyr.
* 10% of white dwarfs originating from stars within [3, 8] M\({}_{\odot}\), implying a higher binary fraction.
Our model provides a coherent framework that addresses several key questions in the chemical evolution of the Milky Way
* Our model addresses the lack of metal-poor stars by initially suppressing gas inflow, allowing for metal accumulation without dilution from metal-poor gas. The small initial gas reservoir and low SFR (not necessarily low SFE) also mean that fewer metal-poor stars are formed, which aligns well with observations.
* Our model accommodates the observed plateau in [\(\alpha\)/Fe] values by introducing a higher inflow rate after the initial phase. This reinvigorates star formation and allows for a sustained high [\(\alpha\)/Fe] level, yet lower than the original plateau. This explains why the thick disk's [\(\alpha\)/Fe] values plateau at around 0.3 dex, a level lower than those predicted by CCSNe (\(\sim\) 0.4 dex)
* By allowing for a rejuvenated gas reservoir with higher inflow rates, our model captures the startup of the disk formation from the perspective of chemical evolution and collaborates proposed scenarios for the formation of the thick disk.
Finally, we note that the [\(\alpha\)/Fe]-rise alone leaves considerable ambiguity in model parameters, leading to some degree of degeneracy. Our findings suggest that future observations--whether focused on gas mass or the distribution function in [Fe/H] and [\(\alpha\)/Fe]--could significantly improve our understanding of the ancient history of our Galaxy.
In a broader context, the methodology and insights gleaned from this study extend beyond the Milky Way. Similar approaches could be applied to other intriguing galaxies, such as M31 and the Large Magellanic Cloud, the latter of which have also shown fascinating chemical evolution patterns (Nidever et al., 2020). This opens up a wide field for future research. Our work emphasizes the ongoing importance of studying the chemical evolution of galaxies as a vital tool for understanding not only the Milky Way but also the Local Group at large. This significance is poised to grow as new generations of spectroscopic surveys come online and as forthcoming 30-m class telescopes continue to expand our observational horizons.
## Acknowledgements
YST acknowledges financial support from the Australian Research Council through DECRA Fellowship DE220101520. MRH acknowledges financial support from the Australian Research Council through the Laureate Fellowship awarded to Prof. Joss Bland-Hawthorn.
## Data Availability
The inclusion of a Data Availability Statement is a requirement for articles published in MNRAS. Data Availability Statements provide a standardised format for readers to understand the availability of data underlying the research results described in the article. The statement may refer to original data generated in the course of the study or to third-party data analysed in the article. The statement should describe and provide means of access, where possible, by linking to the data or providing the required accession numbers for the relevant databases or DOIs.
|
2304.12314 | Distilling from Similar Tasks for Transfer Learning on a Budget | We address the challenge of getting efficient yet accurate recognition
systems with limited labels. While recognition models improve with model size
and amount of data, many specialized applications of computer vision have
severe resource constraints both during training and inference. Transfer
learning is an effective solution for training with few labels, however often
at the expense of a computationally costly fine-tuning of large base models. We
propose to mitigate this unpleasant trade-off between compute and accuracy via
semi-supervised cross-domain distillation from a set of diverse source models.
Initially, we show how to use task similarity metrics to select a single
suitable source model to distill from, and that a good selection process is
imperative for good downstream performance of a target model. We dub this
approach DistillNearest. Though effective, DistillNearest assumes a single
source model matches the target task, which is not always the case. To
alleviate this, we propose a weighted multi-source distillation method to
distill multiple source models trained on different domains weighted by their
relevance for the target task into a single efficient model (named
DistillWeighted). Our methods need no access to source data, and merely need
features and pseudo-labels of the source models. When the goal is accurate
recognition under computational constraints, both DistillNearest and
DistillWeighted approaches outperform both transfer learning from strong
ImageNet initializations as well as state-of-the-art semi-supervised techniques
such as FixMatch. Averaged over 8 diverse target tasks our multi-source method
outperforms the baselines by 5.6%-points and 4.5%-points, respectively. | Kenneth Borup, Cheng Perng Phoo, Bharath Hariharan | 2023-04-24T17:59:01Z | http://arxiv.org/abs/2304.12314v1 | # Distilling from Similar Tasks for Transfer Learning on a Budget
###### Abstract
We address the challenge of getting efficient yet accurate recognition systems with limited labels. While recognition models improve with model size and amount of data, many specialized applications of computer vision have severe resource constraints both during training and inference. Transfer learning is an effective solution for training with few labels, however often at the expense of a computationally costly fine-tuning of large base models. We propose to mitigate this unpleasant trade-off between compute and accuracy via semi-supervised cross-domain distillation from a set of diverse source models. Initially, we show how to use task similarity metrics to select a single suitable source model to distill from, and that a good selection process is imperative for good downstream performance of a target model. We dub this approach DistillNearest. Though effective, DistillNearest assumes a single source model matches the target task, which is not always the case. To alleviate this, we propose a weighted multi-source distillation method to distill multiple source models trained on different domains weighted by their relevance for the target task into a single efficient model (named DistillWeighted). Our methods need no access to source data, and merely need features and pseudo-labels of the source models. When the goal is accurate recognition under computational constraints, both DistillNearest and DistillWeighted approaches outperform both transfer learning from strong ImageNet initializations as well as state-of-the-art semi-supervised techniques such as FixMatch. Averaged over 8 diverse target tasks our multi-source method outperforms the baselines by 5.6%-points and 4.5%-points, respectively.
## 1 Introduction
Recognition models get more accurate the larger they are and the more data they are trained on [22; 37; 47]. This is a problem for many applications of interest in medicine (e.g. X-ray analysis) or science (e.g. satellite-image analysis) where both labeled training data, as well as computational resources needed to train such large models, are lacking.
The challenge of limited labeled data can potentially be alleviated by fine-tuning large-scale "foundation models" [13; 22; 47]. However, fine-tuning is computationally expensive, especially when one looks at foundation models with billions of parameters [13]. Unfortunately, all evidence suggests that larger foundation models perform better at fine-tuning [22; 47]. This leaves downstream applications the unpleasant trade-off of expensive computational hardware for fine-tuning large models, or inaccu
rate results from smaller models. Motivated by this challenge, we ask _can we train accurate models on tight data and compute budgets without fine-tuning large foundation models?_
To set the scene, we assume the existence of a diverse set (both in architecture and task) of pre-trained source models (or foundation models). We do not have the resources to fine-tune these models, but we assume we can perform inference on these models and extract features, _e.g._ through APIs on cloud services [8; 35]. For the target task, we assume that labeled data is very limited, but unlabeled data is available. We then propose a simple and effective strategy for building an accurate model for the target task: DistillNearest. Concretely, we first compute a measure of "task similarity" between our target task and each source model and rank the source models accordingly. Then we pseudo-label the unlabeled data using the most similar source model. These pseudo-labels may not even be in the same label space as the target task, but we conjecture that due to the similarity between the source and target tasks, the pseudo-labels will still _group_ the target data points in a task-relevant manner. Finally, we train the target model using the pseudo-labels and the available ground truth labeled data. This allows us to bypass the large computations required to fine-tune source models and directly work on the target model. At the same time, we get to effectively use the knowledge of the large source model even if it is trained on a different task.
DistillNearest assumes that a _single_ best source model exists. But for some target tasks, we might need to combine multiple source models to achieve a sufficiently diverse representation to distill. We, therefore, propose an extension of our approach that distills _multiple (diverse) source models_ trained on different domains, weighted by their relevance for the target task. This extension obtains even further improvements on our target performance (see Figure 2). We dub this method DistillWeighted.
**We summarize our contributions as follows:**
* We train more than 200 models across a diverse set of source and target tasks using single-source distillation, and extensively show that the choice of source model is imperative for the predictive performance of the target model. To the best of our knowledge, no previous work has addressed how to efficiently select a teacher model for (cross-domain) distillation.
* We find that _task similarity metrics_ correlate well with predictive performance and can be used to efficiently select and weight source models for single- and multi-source distillation without access to any source data.
* We show that our approaches yield the best accuracy on multiple target tasks under compute and data constraints. We compare our DistillNearest and DistillWeighted methods to two baselines (transfer learning and FixMatch), as well as the naive case of DistillWeighted with _equal_ weighting (called DistillEqual), among others. Averaged over 8 diverse datasets, our DistillWeighted outperforms the baselines with at least 4.5% and in particular 17.5% on CUB200.
## 2 Related Work
**Knowledge Distillation** One key aspect of our problem is to figure out how to compress single or multiple large foundation models into an efficient target model. A common approach is knowledge distillation [5; 18] where an efficient student model is trained to mimic the output of a larger teacher model. However, most single-teacher [3; 10; 11; 28; 30] or multi-teacher knowledge distillation [16; 27; 38; 45] research focuses on the closed set setup, where the teacher(s) and the student both attempts to tackle the same task. To the best of our knowledge, compressing models specializing in various tasks different from the target task has rarely been explored in the literature. Our paper explores this setup and illustrates that carefully distilling source models trained on different tasks can bring forth efficient yet accurate models.
**Semi-Supervised Learning and Transfer** Given our target tasks are specified in a semi-supervised setting, it is customary to review methods for semi-supervised learning (SSL). The key to SSL approaches is how to effectively propagate label information from a small labeled dataset to a large unlabeled dataset. Along this vein, methods such as pseudo-labeling/self-training [25; 43] or consistency regularization [7; 36; 39] have shown remarkable results in reducing deep networks dependencies on large labeled datasets via unlabeled data. However, most SSL approaches focus on training models from scratch without considering the availability of pre-trained models. Given the increasing availability of large pre-trained models [31; 42], recent work has started exploring the intersection between transfer learning and SSL [1; 20; 34]. However, most of these works focus on how to transfer from a single pre-trained model to the target task. Our paper, however, explores an even more practical setup: how to transfer from multiple pre-trained models to a downstream task where in-domain unlabeled data are available. In principle, we could combine our approach with a lot of previous work on SSL to (potentially) gain even larger improvements, but to keep our method simple we leave such exploration to future work and focus on how to better utilize an available set of pre-trained models.
**Multi-Source Domain Adaptation** Our setup also bears a resemblance with multi-source domain adaptation (MSDA) [32] in which the goal is to create a target model by leveraging multiple source models. However, MSDA methods often assume the source and target models share the same label space to perform domain alignment. We do not make such an assumption and in fact, focus on the case where the label space of source and target tasks have minimal to no overlap. Besides, a lot of the MSDA approaches [32; 44; 48; 49] rely on the availability of source data or the fact that the source and target tasks share the same model architecture to build domain invariant features. Given the discrepancy in assumptions between MSDA and our setup, we do not consider any methods from this line of work as baselines.
Figure 3: We propose to weigh a set of \(S\) source models, \(\mathcal{M}_{s}=h_{s}\circ\phi_{s}\), by using task similarity metrics to estimate the alignment of each source model with the particular target task using a small probe set of labeled data, \(\overline{\mathcal{D}}_{r}^{p}\). Since the task similarity metrics are independent of feature dimension, we can utilize source models of any architecture and from any source task. We show that by choosing the weighting, \(\alpha_{1},\ldots,\alpha_{S}\), this way we are able to improve performance over transfer from ImageNet and training with FixMatch amongst others (see _e.g._ Table 1 and Figure 4).
**Transfer Learning From Multiple Sources** Transfer learning from multiple different pre-trained models has been explored in different setups. Bolya et al. [9] focuses on how to select a single good pre-trained model to use as a model initialization whereas we explore how to distill an efficient model from the pre-trained models (i.e. our target architecture could be different from those of the source models). Agostinelli et al. [4] focuses on how to select a subset of pre-trained models to construct an (fine-tuned) ensemble, whereas we focus on creating a single model. Li et al. [26] focuses on creating a generalist representation by equally distilling multiple pre-trained models using proxy/source data (which often requires high-capacity models) whereas our goal is to construct an efficient specialist model using the target data. All these works have indicated the importance of exploring how to best leverage a large collection of pre-trained models but due to differences in setup and assumptions, we do not (and could not) compare to them.
**Task Similarity / Transferability Metrics** A key insight of our approach is to leverage the similarity between the target and source tasks to compare and weigh different pre-trained source models during distillation. Characterizing tasks (or similarities between tasks) is an open research question with various successes. A common approach is to embed tasks into a common vector space and characterize similarities in said space. Representative research along this line of work include Achille et al. [2], Peng et al. [33], Wallace et al. [41]. Another related line of work investigates transferability metrics [6; 9; 14; 15; 29; 40]. After all, one of the biggest use cases of task similarities is to predict how well a model transfers to new tasks. Since it is not our intention to define new task similarity/transferability metrics for distillation, we use already established metrics that capture the similarity between source representations and one-hot labels to weigh the source models. Under this purview, metrics that characterize similarities between features such as CKA [12; 23] and transferability metrics based on features [9; 14] suffice.
## 3 Problem Setting
The aim of this paper is to train an accurate model for a given target task, subject to limited labeled data and computational constraints (_e.g._ limited compute resources). Formally, we assume that our target task is specified via a small labeled training set \(D^{l}_{\tau}\). Furthermore, we assume (a) the availability of a set of unlabeled data, \(D^{\underline{\tau}}_{\tau}\), associated with the target task, and (b) the ability to perform inference on a set \(\mathcal{S}=\{\mathcal{M}_{s}\}_{s=1}^{S}\) of \(S\) different _source_ models, \(\mathcal{M}_{s}\), trained on various source tasks different from the target task, We emphasize that we have no access to any source data which could be practical due to storage, privacy, and computational constraints. Neither do we need full access to the source models provided we can perform inference on the models anywise (_e.g._ through an API).
We assume that the architecture of the target model, \(\mathcal{M}_{\tau}\), must be chosen to meet any applicable computational constraints. This can imply that no suitable target architecture is available in the set of
Figure 4: Test accuracy for distillation with each dot representing single-source distillation from different source models. The colors represent the task similarity for the source models (from small to large; ). We include the performance from fine-tuning ImageNet (), DistillNearest; i.e. distillation of the highest ranked source model () as well as DistillEqual (), and DistillNeight\((p)\) where weights are proportional to task similarity with power \(p=1\) (), and \(p=12\) (), respectively. The numbers in parentheses at the bottom are Spearman correlations between the task similarity and test accuracy for single-source distillation.
source models \(\mathcal{S}\), making classical transfer learning impossible. For simplicity, we restrict our models (regardless of source or target) to classification models that can be parameterized as \(\mathcal{M}=h\circ\phi\); the feature extractor \(\phi\) embeds input \(\mathbf{x}\) into a feature representation, and the classifier head, \(h\), maps the feature \(\phi(\mathbf{x})\) into predicted conditional class probabilities, \(P(\mathbf{y}\mid\mathbf{x})\).
## 4 Cross-Task Distillation for Constructing Efficient Models from
To construct an efficient model, we propose to distill large foundation models. Along this vein, we propose two variants: (a) DistillNearest that distills the single nearest source model (Section 4.1) and (b) DistillWeighted that distills a weighted collection of source models (Section 4.2).
### DistillNearest
To construct a single efficient target model, DistillNearest undergoes two steps sequentially: (a) selecting an appropriate source model and (b) distilling the knowledge from the selected source model into the target model. For ease of exposition, we start by explaining the distillation process and then discuss how to select an appropriate source model.
Distilling a selected source model.Given a selected source model \(\mathcal{M}_{s}\), the target model \(\mathcal{M}_{\tau}=h_{\tau}\circ\phi_{\tau}\) is trained by minimizing a weighted sum of two loss functions,
\[\mathcal{L}_{\text{single}}\stackrel{{\text{def}}}{{=}}\lambda \mathcal{L}^{\text{labeled}}+(1-\lambda)\mathcal{L}_{s}^{\text{distill}}, \tag{1}\]
where \(\lambda\in[0,1]\). The first loss function is the standard supervised objective over the labeled data,
\[\mathcal{L}^{\text{labeled}}\stackrel{{\text{def}}}{{=}}\frac{1} {|\mathcal{D}_{\tau}^{l}|}\sum_{(\mathbf{x}_{i},\mathbf{y}_{i})\in\mathcal{D}_ {\tau}^{l}}\ell_{CE}\left(h_{\tau}(\phi_{\tau}(\mathbf{x}_{i})),\mathbf{y}_{i }\right), \tag{2}\]
where \(\ell_{CE}(\cdot,\cdot)\) is the cross-entropy loss. The second loss function is a distillation loss over the unlabeled data,
\[\mathcal{L}_{s}^{\text{distill}}\stackrel{{\text{def}}}{{=}} \frac{1}{|\mathcal{D}_{\tau}^{u}|}\sum_{\mathbf{x}_{i}\in\mathcal{D}_{\tau}^{u }}\ell_{CE}\left(h_{\tau}^{s}(\phi_{\tau}(\mathbf{x}_{i})),\mathcal{M}_{s}( \mathbf{x}_{i}))\right). \tag{3}\]
Note, the source and target tasks do not share the same label space so we introduce an additional classifier head, \(h_{\tau}^{s}\), which maps the features from the target task feature extractor, \(\phi_{\tau}\), to the label space of the source task. This additional classifier head, \(h_{\tau}^{s}\), is discarded after training and only the target classifier head, \(h_{\tau}\), is used for inference.
In principle, we could add additional semi-supervised losses, such as the FixMatch loss [36] to propagate label information from the labeled set to the unlabeled set for better performance, but this would add additional hyperparameters and entangle the effect of our methods. We leave such explorations to future work.
Selecting the nearest source model for distillation.Selecting a source model for distillation is an under-explored problem. Given the recent success of using task similarity metrics [9] for selecting foundation models for fine-tuning, we conjecture that high similarities between a source model and the target task could indicate better performance of the distilled model (we verify this in Section 5.2). However, quantifying similarities between tasks/models is an open research question with various successes [2; 29]. For simplicity, we pick our similarity based on one simple intuition: target examples with identical labels should have similar source representations and vice versa. Along this vein, the recently introduced metric, PARC [9] fits the bill.
For convenience, we briefly review PARC. Given a small labeled probe set \(\mathcal{D}_{\tau}^{p}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{n}\subseteq \mathcal{D}_{\tau}^{l}\) and a source representation of interest \(\phi_{s}\), PARC first constructs two distance matrices \(D_{\phi_{s}}\), \(D_{Y}\) based on the Pearson correlations between every pair of examples in the probe set;
\[D_{\phi_{s}} =1-\mathrm{pearson}(\{\phi_{s}(\mathbf{x}_{i})\}_{i=1}^{n}),\] \[D_{Y} =1-\mathrm{pearson}(\{\mathbf{y}_{i}\}_{i=1}^{n}).\]
PARC is computed as the Spearman correlation between the lower triangles of the distance matrices;
\[\mathrm{PARC}(\phi_{s},Y)=\mathrm{spear}\left(\{D_{\phi_{s}}[i,j]\}_{i<j},\{D_{Y }[i,j]\}_{i<j}\right).\]
Intuitively, PARC quantifies the similarity of representations by comparing the (dis)similarity structures of examples within different feature spaces: if two representations are similar, then (dis)similar examples in one feature space should stay (dis)similar in the other feature space. In Figure 4 and 5 we show that ranking source models by PARC correlates well with test accuracy and that selecting an appropriate source model can yield significant improvements.
### DistillWeighted
Above, DistillNearest assumes a single optimal source model exists for the target task, but what if no single source model aligns well with our target task? To alleviate this issue, we propose to distill multiple source models, weighted according to their similarities with the target tasks. In the following, we explain our weighted distillation objective and how the weights are constructed. Figure 3 is a schematic depiction of the approach DistillWeighted.
Weighted objective for distilling multiple sources.Given a set of source models \(\mathcal{S}=\{M_{s}\}_{s=1}^{S}\), we modify the distillation loss of (1) with a weighted sum of multiple distillation losses (one for each source model):
\[\mathcal{L}_{\mathrm{multi}}\overset{\text{def}}{=}\lambda\mathcal{L}^{ \text{labeled}}+(1-\lambda)\sum_{s=1}^{S}\alpha_{s}\mathcal{L}_{s}^{\text{ distill}}, \tag{4}\]
where \(\lambda,\alpha_{1},\dots,\alpha_{S}\in[0,1]\) (\(\mathcal{L}^{\text{labeled}}\) and \(\mathcal{L}_{s}^{\text{distill}}\) are as defined in (2) and (3), respectively). Here \(\alpha_{s}\) is the relative weight assigned to each source model such that \(\sum_{s=1}^{S}\alpha_{s}=1\). Once again, we could add additional semi-supervised losses, such as the FixMatch loss, but to ensure simplicity, we leave such explorations for future research.
Task similarity weighting of source modelsSimply assigning equal weight to all source models is sub-optimal (e.g. weighing source models trained on ImageNet and Chest X-ray equally might not be optimal for recognizing birds). As such, we propose to compute the source weight \(\alpha_{s}\) from a task
\begin{table}
\begin{tabular}{l|l|c c|c c c c c c c c} \hline \hline & & & & & & & & & & & & \\ & \multicolumn{3}{c|}{Target Data} & & & & & & & & & & \\ & Labeled & Unlabeled & & & & & & & & & & \\ \hline \multirow{6}{*}{\begin{tabular}{} \end{tabular} } & IN+Transfer & ✓ & - & 92.4 & 42.8 & 47.3 & 97.4 & 81.6 & 37.3 & 75.9 & 62.6 & 67.2 \\ & IN+FixMatch & ✓ & ✓ & **93.5** & 41.9 & 38.5 & **98.1** & **82.6** & _42.8_ & 83.4 & _65.8_ & 68.3 \\ \cline{2-13} & DistillRandomSelection & ✓ & ✓ & 89.6 & 46.5 & 46.6 & 97.4 & _81.8_ & 39.0 & 79.4 & 61.9 & 67.8 \\ & \multicolumn{3}{c|}{(0.24 G/FLO)} & \multicolumn{3}{c|}{(**Ours**) DistillNearest} & ✓ & \multicolumn{1}{c}{92.0} & 59.6 & 46.8 & 97.4 & 81.0 & 47.4 & 81.9 & **71.3** & 72.2 \\ \cline{2-13} & DistillEqual & ✓ & ✓ & 90.8 & 53.5 & 45.7 & 97.5 & 81.5 & 41.4 & _82.1_ & 62.1 & _69.3_ \\ & DistillRandomWeights & ✓ & ✓ & 87.9 & 44.9 & 46.9 & 97.8 & 81.6 & 39.6 & 80.2 & 59.2 & 67.3 \\ & \multicolumn{3}{c|}{(**Ours**) DistillWeighted} & ✓ & ✓ & _92.0_ & **60.0** & **47.7** & _97.6_ & 82.2 & **48.3** & **84.4** & 69.9 & **72.8** \\ \hline \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & IN+Transfer & ✓ & - & 85.0 & 18.4 & 46.2 & 91.9 & 67.8 & 13.0 & 50.9 & 29.1 & 50.3 \\ & Fine-tune Selected Source & ✓ & - & 88.0 & 30.4 & 42.9 & 89.8 & 74.5 & 17.9 & 66.8 & 41.3 & 56.5 \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & IN+Transfer & ✓ & - & 91.8 & 42.8 & 41.4 & 96.8 & 80.5 & 36.5 & 84.8 & 65.9 & 67.6 \\ & Fine-tune Selected Source & ✓ & - & 91.6 & 61.2 & 48.6 & 96.9 & 78.3 & 33.0 & 87.8 & 71.8 & 71.2 \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & IN+Transfer & ✓ & - & 92.2 & 37.8 & 45.2 & 96.6 & 80.2 & 34.0 & 80.2 & 58.2 & 65.6 \\ & Fine-tune Selected Source & ✓ & - & 91.3 & 58.2 & 46.4 & 97.0 & 75.8 & 35.4 & 80.7 & 69.3 & 69.3 \\ \hline \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } & IN+Transfer & ✓ & - & 92.9 & 42.0 & 43.4 & 96.8 & 79.9 & 39.9 & 83.3 & 65.9 & 68.0 \\ & Fine-tune Selected Source & ✓ & - & 93.0 & 70.8 & 43.9 & 97.2 & 81.3 & 47.4 & 84.8 & 79.3 & 74.7 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Cross-task distillation compared to baselines. MobileNetV3 models (target architecture) trained with our methods are highly competitive with baseline methods on MobileNetV3 as well as baseline methods for more demanding model architectures (source architectures: Alexnet, GoogLeNet, ResNet-18, ResNet-50). We highlight the top 3 methods, which comply with compute requirements (i.e. MobileNetV3) for each target task by **bold**, underline, and _italic_, respectively. We also indicate the target data used by different methods.
similarity metric between the \(s\)-th source model and the target task. In particular, let \(e_{s}\) be such a similarity metric, then we compute the source weights \(\{\alpha_{i}\}_{i\in[S]}\) as
\[\alpha_{i}=\frac{\underline{e}_{i}^{p}}{\sum_{s=1}^{S}\underline{e}_{s}^{p}}, \quad\text{where }\underline{e}_{j}=\max(0,e_{j}) \tag{5}\]
for \(j=1,\dots,S\). Here \(p\) is a hyperparameter to re-scale the distribution of the weights. Larger \(p\) assigns more weight to the most similar source models, while \(p=0\) corresponds to equal weights for all models (denoted DistillEqual), and \(p\to\infty\) assigns all weight to the most similar source model (i.e. DistillNearest). When relevant, we use the notation DistillWeight\((p)\) to indicate the choice of \(p\).
ScalabilityFor DistillWeighted to be feasible, compared to DistillNearest, we need to ensure that the training procedure scales well with the size of \(\mathcal{S}\). Since the computation of the weights \(\{\alpha_{s}\}_{s=1}^{S}\) is based on the small probe set and is almost identical to the selection procedure for DistillNearest this is a negligible step. When training the target model, we merely require one forward pass on the unlabeled target dataset with each source model (to obtain pseudo-labels) as well as training of a one-layer classifier head per source model, both of which are cheap compared to the full training procedure of the target model. Nonetheless, one could employ a pre-selection of the top-\(k\) source models with the largest task similarity, thereby reducing the number of classifier heads and forward passes required. However, doing so introduces another hyperparameter, \(k\), (i.e. how many models to use) complicating the analysis. Moreover, since large \(p\) induces such pre-selection in a _soft_ manner, we leave it to future research to determine how to select the appropriate \(k\).
## 5 Experiments and Results
### Experimental Setup
**Benchmark.** Despite our methods being designed with the interest of using large vision models (that are potentially only available for inference), such a setting is intractable for our research. Thus, to allow for controlled experimentation we restrict our source models to a more tractable scale. In particular, we modify an existing transfer learning benchmark: Scalable Diverse Model Selection by [9], and use the publicly available models to construct a set of source models for each target task. Thus, we consider a set consisting of 28 models: 4 architectures (AlexNet, GoogLeNet, ResNet-18, and ResNet-50 [17; 24]) trained on 7 different source tasks (CIFAR-10, Caltech101, CUB200, NABird, Oxford Pets, Stanford Dogs, and VOC2007). For the target tasks, we consider 8 different tasks covering various image domains (Natural images: CIFAR-10, CUB200, NABird, Oxford Pets, Stanford Dogs; X-ray: ChestX; Skin Lesion Images: ISIC; Satellite Images: EuroSAT). We carefully remove any source models associated with a particular target task, if such exists, in order to avoid information leakage between source and target tasks (see also supplementary materials for further considerations). For the target architecture, we use MobileNetV3 [19] due to its low computational requirements compared to any of the source models. We refer the reader to the supplementary material for further details on implementation.
**Baselines.** We consider a set of different baselines: based on ImageNet initializations we consider IN+Transfer (fine-tunes ImageNet representations using only the labeled data), and IN+FixMatch[36] (fine-tunes the ImageNet representation using labeled and unlabeled data), and based on source model initializations we fine-tune the highest-ranked source model of each source architecture. To show the importance of using the right source model(s) to distill, we also compare DistillNearest to DistillRandomSelection which is the average of distilling from a randomly selected source, and for comparison to DistillWeighted we also construct distilled models using the multi-source objective (4) with a random weight (DistillRandomWeights) and equal weights (DistillEqual). For ease of exposition, we present results for DistillNearest (Section 5.2) and DistillWeighted (Section 5.3) in separate sections.
### Results for DistillNearest
We compare DistillNearest with the baselines in Table 1 and Figure 4. Our observations are as follows.
**Distillation with the right source model is better than fine-tuning from ImageNet.** We observe that within the same target architecture (MobileNetv3), simply fine-tuning ImageNet representations (IN+Transfer) is less optimal than distilling from the most similar single model (DistillNearest). In fact, for fine-grained datasets such as CUB200, NABird, Oxford Pets, and Stanford Dogs, we observe that distilling from an appropriate source model (DistillNearest) could yield much better performance than fine-tuning from a generalist ImageNet representation. More surprisingly, even with the aid of unlabeled data, models fine-tuned from ImageNet representations using a label propagation style approach (IN+FixMatch) still underperform distillation-based methods by at least 3.9% on average. These observations indicate the importance of selecting the right source model for transfer/distillation.
**Distilling to efficient architecture could be better than fine-tuning larger models.** In Table 1, we include the performance when fine-tuning larger architectures trained on ImageNet (IN+Transfer) and the source model (of the same architecture) most similar to each target task selected using PARC (Fine-tune Selected Source). A few observations are immediate: (a) our choice of task similarity metric is effective for transfer; across all 4 architectures, we observe at least 4% improvement over simple fine-tuning from ImageNet, which validates the results by Bolya et al. [9], and (b) with the aid of unlabeled data and distillation, the computationally efficient architecture MobileNetV3 can outperform larger architectures fine-tuned on labeled data from the target task (i.e. AlexNet, GoogLeNet, ResNet-18). Although underperforming fine-tuning a ResNet-50 initialized with the most similar ResNet-50 source model by a mere average of 2.5%-points (Fine-tune Selected Source), using a ResNet-50 would require \(17.5\times\) more computations during inference to achieve such improvements.
#### 5.2.1 Task Similarity Metrics for DistillNearest
One key component of DistillNearest is to select the source model to perform cross-task distillation on using task similarity metrics. Despite many many existing metrics for quantifying task similarities, their effectiveness for distillation remains unclear. Given the myriads of metrics, we restrict our focus to metrics that can capture similarities between a source representation of a target example and its one-hot label representation. Along this vein, two questions arise: which metric to use for comparing representations, and which representations from a source model should be used to represent a target example?
For the first question, we look into multiple metrics in the literature that compares various representations: CKA [12], RSA [14], and PARC [9]. For the second question, we look into the common representations from a source model: the features \(\phi\) and the probabilistic outputs \(h\circ\phi\).
To establish the effectiveness of our choice of similarity metric, we report the Spearman correlation between the task similarities and the test accuracy of the distilled models in Table 2. We see that features from the source models can better capture the correlation between the source models and the test accuracy of the distilled models, than the probabilistic pseudo-labels. In addition, we also see a much higher correlation among natural tasks (compared to specialized tasks such as ChestX, EuroSAT, and ISIC) which suggests that our choice of task similarity is effective at selecting similar tasks. Besides, we also observe a higher correlation when using PARC compared to the other metrics, thus validating our choice of using PARC as the default metric.
To further establish the effectiveness of our metrics to rank various source models, we compute the relative test accuracy between the top-3 models most similar to the target task and the top-3 best models after distillation (see Table 3). Again, we observe that all three metrics are capable of ranking affinity between source models, but ranking the models with PARC outperforms the other two metrics.
### Results for DistillWeighted
From Table 1, we observe that DistillWeighted compares favorably to DistillNearest, thus the conclusions for DistillNearest translates to DistillWeighted. Yet, one particular task, Oxford Pets, is worth more attention. On Oxford Pets (classification of different breeds of cats and dogs), we observe that distilling from multiple weighted sources (DistillWeighted) is much better than distilling from the single most similar source (DistillNearest), which is a ResNet-18 trained on Caltech101 (that can recognize concepts such as Dalmatian dog, spotted cats, etc.). Although the
most similar source model contains relevant information for recognizing different breeds of dogs and cats, it might not contain all relevant knowledge from the set of source models that could be conducive to recognizing all visual concepts in Oxford Pets. In fact, we observe that the second most similar model is a GoogLeNet model trained on Stanford Dogs to recognize more dog breeds than the most similar source model (but incapable of recognizing cats). In this case, DistillWeighted allows aggregation of knowledge from multiple sources and can effectively combine knowledge from most similar source model contains relevant information for recognizing different breeds of dogs and cats, it might not contain all relevant knowledge from the set of source models that could be conducive to recognizing all visual concepts in Oxford Pets. In fact, we observe that the second most similar model is a GoogLeNet model trained on Stanford Dogs to recognize more dog breeds than the most similar source model (but incapable of recognizing cats). In this case, DistillWeighted allows aggregation of knowledge from multiple sources and can effectively combine knowledge from
\begin{table}
\begin{tabular}{l l|c c c c c c c c c} \hline \hline & & & & & & & & & & & \\ \hline \multirow{4}{*}{\(\bullet\)} & CKA & 0.72 & 0.62 & 0.23 & 0.39 & -0.04 & 0.31 & 0.69 & 0.11 & 0.38 \\ & PARC & 0.79 & 0.79 & 0.02 & 0.17 & 0.06 & 0.48 & 0.72 & 0.54 & 0.45 \\ & RSA & 0.82 & 0.31 & -0.11 & 0.30 & **0.10** & -0.03 & 0.65 & 0.38 & 0.30 \\ \hline \multirow{4}{*}{\(\bullet\)} & CKA & 0.82 & 0.39 & **0.36** & 0.21 & -0.04 & 0.47 & 0.69 & 0.55 & 0.43 \\ & PARC & 0.84 & **0.84** & 0.18 & **0.42** & -0.14 & **0.81** & 0.81 & 0.84 & **0.58** \\ \cline{1-1} & RSA & **0.86** & 0.81 & 0.03 & 0.38 & 0.03 & 0.28 & **0.89** & **0.85** & 0.52 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Spearman correlation between test accuracy after all possible single-source distillations and task similarities associated with the source models. Generally feature representations correlate better with distillation performance compared to pseudo-label representations.
Figure 5: Test accuracy of single-source distillation and raw task similarity score using PARC on the feature representations. The scores are on different scales for different tasks, but almost all tasks have a positive correlation between test accuracy and task similarity.
Figure 6: Improvement over IN+Transfer. Here \(\bullet\) is the average improvement over all eight target tasks and \(\circ\) represents the performance on a target task. Note, \(p=0\) corresponds to DistillEqual, and \(p=\infty\) corresponds to DistillNearest.
\begin{table}
\begin{tabular}{l l|l c c c c c c c c} \hline \hline & & & & & & & & & & \\ \hline \multirow{4}{*}{\(\bullet\)} & CKA & 99.1 & 95.6 & 97.4 & 99.6 & 98.8 & 89.4 & **100.0** & 97.6 & 97.2 \\ & PARC & 99.5 & **100.0** & 95.5 & 99.6 & 98.5 & 99.7 & 98.8 & **99.7** & 98.9 \\ & RSA & **100.0** & 77.7 & 96.5 & 99.7 & 98.5 & 87.2 & 98.6 & 97.6 & 94.5 \\ \hline \multirow{4}{*}{\(\bullet\)} & CKA & **100.0** & 95.6 & 97.0 & **99.8** & **99.0** & 93.3 & **100.0** & 96.4 & 97.6 \\ & PARC & **100.0** & **100.0** & **97.8** & 99.7 & 98.3 & **100.0** & 97.1 & 98.5 & **98.9** \\ \cline{1-1} & RSA & **100.0** & **100.0** & 96.7 & 99.8 & 98.9 & 94.9 & 98.9 & 98.8 & 98.5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Relative accuracy of top-3 single-source distilled models selected by task similarity over the average of the 3 actual best models. We compute the average test accuracy of the top-3 highest ranked target models and divide it by the average of the 3 actually best-performing target models.
different source models for a more accurate target model than distillation from a single source. This suggests that _under certain conditions such as high heterogeneity in data, distilling from multiple source models can outperform distilling a single best source model._
#### 5.3.1 Task Similarity Metrics for Weighing Sources
We have established that our task similarity metric can capture the correlation between the source model representations and the test accuracy of the distilled models. However, it is not a priori clear that weighing source models based on the ranking of their affinity to the target task would yield better performance for multi-source distillation. As such, we investigate alternative choices of weighing schemes for a subset of 5 target tasks (CUB200, EuroSAT, ISIC, Oxford Pets, Stanford Dogs): Inverse (weights are inversely proportional to task similarity), DistillRandomWeights (weights are sampled uniformly on a 4-simplex), DistillRandomSelection (randomly selecting a single source model), and DistillEqual (equal weights for all models).
Through Figure 1, we find that distilling from a single or set of source models ranked using the similarity metric is much more effective than distilling from source models that are weighted randomly or equally (DistillRandomWeights or DistillEqual). In addition, the fact that Inverse underperforms IN+Transfer on average suggests that it is crucial to follow the ranking induced by the similarity metrics when distilling the sources and that the metric ranks both the most similar source models and the least similar source models appropriately.
#### 5.3.2 Effect of \(p\)
Our task similarity metrics give a good ranking of which source models to select for distillation but it is unclear whether the similarity score could be used directly without any post-processing. To investigate, we visualize the relationship between the test accuracy of the models distilled from a single source and our task similarity. From Figure 5, it is clear that the distribution of task similarities depends on the target task, which motivates our normalization scheme.
In addition, it is not apriori clear that the weights should scale linearly with the similarity scores. Thus, we investigate the effect of the rescaling factor, \(p\), for constructing the weights. In Figure 6, we see that although no rescaling (\(p=1\)) outperforms equal weighting, it is less optimal than _e.g._\(p=12\) (our default). This suggests that task similarity and good weights have a monotonic, but non-linear relationship.
### Additional Ablations and Analyses
Due to space constraints, we include additional ablations and analyses in the supplementary materials. We summarize the main findings as follows.
ResNet-50 as target model.Averaged over 8 tasks, DistillWeighted outperforms both IN+Transfer and DistillEqual by 5.6% and 3.8%, respectively. Also, compared to ImageNet initialization, using DistillWeighted with the most similar ResNet-50 source model as target model initialization improves accuracy by 1.0%.
Improvements on VTAB.DistillWeighted outperforms IN+Transfer averaged over the \(\bullet\)_Natural_ and \(\bullet\)_Specialized_ tasks of VTAB, by 5.1% and 0.8%, respectively. DistillNearest outperform by 4.8% and 0.6%, respectively.
Fewer labels.DistillWeighted and DistillNearest outperform IN+Transfer (by 6.8% and 4.4%, respectively) under a setup with even fewer labeled samples.
Additional analysis of task similarity metrics.We consider additional correlation metrics and top-\(k\) relative accuracies of the selected models -- all supporting the usefulness of task similarity to weigh and select source models.
## 6 Conclusion
We investigate the use of diverse source models to obtain efficient and accurate models for visual recognition with limited labeled data. In particular, we propose to distill multiple diverse source
models from different domains weighted by their relevance to the target task without access to any source data. We show that under computational constraints and averaged over a diverse set of target tasks, our methods outperform both transfer learning from ImageNet initializations and state-of-the-art semi-supervised techniques.
|
2306.13673 | Taming the Exponential Action Set: Sublinear Regret and Fast Convergence
to Nash Equilibrium in Online Congestion Games | The congestion game is a powerful model that encompasses a range of
engineering systems such as traffic networks and resource allocation. It
describes the behavior of a group of agents who share a common set of $F$
facilities and take actions as subsets with $k$ facilities. In this work, we
study the online formulation of congestion games, where agents participate in
the game repeatedly and observe feedback with randomness. We propose
CongestEXP, a decentralized algorithm that applies the classic exponential
weights method. By maintaining weights on the facility level, the regret bound
of CongestEXP avoids the exponential dependence on the size of possible
facility sets, i.e., $\binom{F}{k} \approx F^k$, and scales only linearly with
$F$. Specifically, we show that CongestEXP attains a regret upper bound of
$O(kF\sqrt{T})$ for every individual player, where $T$ is the time horizon. On
the other hand, exploiting the exponential growth of weights enables CongestEXP
to achieve a fast convergence rate. If a strict Nash equilibrium exists, we
show that CongestEXP can converge to the strict Nash policy almost
exponentially fast in $O(F\exp(-t^{1-\alpha}))$, where $t$ is the number of
iterations and $\alpha \in (1/2, 1)$. | Jing Dong, Jingyu Wu, Siwei Wang, Baoxiang Wang, Wei Chen | 2023-06-19T03:03:44Z | http://arxiv.org/abs/2306.13673v1 | Taming the Exponential Action Set: Sublinear Regret and Fast Convergence to Nash Equilibrium in Online Congestion Games
###### Abstract
The congestion game is a powerful model that encompasses a range of engineering systems such as traffic networks and resource allocation. It describes the behavior of a group of agents who share a common set of \(F\) facilities and take actions as subsets with \(k\) facilities. In this work, we study the online formulation of congestion games, where agents participate in the game repeatedly and observe feedback with randomness. We propose CongestEXP, a decentralized algorithm that applies the classic exponential weights method. By maintaining weights on the facility level, the regret bound of CongestEXP avoids the exponential dependence on the size of possible facility sets, i.e., \(\binom{F}{k}\approx F^{k}\), and scales only linearly with \(F\). Specifically, we show that CongestEXP attains a regret upper bound of \(O(kF\sqrt{T})\) for every individual player, where \(T\) is the time horizon. On the other hand, exploiting the exponential growth of weights enables CongestEXP to achieve a fast convergence rate. If a strict Nash equilibrium exists, we show that CongestEXP can converge to the strict Nash policy almost exponentially fast in \(O(F\exp(-t^{1-\alpha}))\), where \(t\) is the number of iterations and \(\alpha\in(1/2,1)\).
## 1 Introduction
Congestion games are a class of general-sum games that can be used to describe the behavior of agents who share a common set of facilities (resources) (Brown, 1949). In these games, each player chooses a combination of facilities, and popular facilities will become congested, yielding a lower utility for the players who choose them. Thus, players are incentivized to avoid congestion by choosing combinations that are less popular among the other players. A range of real-world scenarios can be captured by the congestion game model, such as traffic flow, data routing, and wireless communication networks (Tekin et al., 2012; Cheung et al., 2014; Zhang and Wang, 2020).
In the model of the congestion game, the Nash equilibrium is a popular concept to describe the behavior of selfish players and the dynamics induced by decentralized algorithms. It describes a stable state of the game where no player can improve their utility by unilaterally changing their choice of actions. When a unique Nash equilibrium exists in the congestion game, it can be a reference point for players to coordinate to avoid suboptimal outcomes. Beyond the Nash equilibrium, social welfare is a significant metric, capturing the overall utility or well-being of all players in
volved. It serves as a crucial benchmark, enabling the evaluation of the efficiency loss incurred when transitioning from centralized to decentralized algorithms.
In the classic one-shot congestion game setting, the Nash equilibrium and the loss of efficiency due to decentralized dynamics have been well studied (Roughgarden and Tardos, 2002; Roughgarden, 2007). However, these results do not provide insights into how players arrive at the equilibrium. This motivates the study of congestion games in an online learning framework, where players participate in the game repeatedly at every time step. This framework better models many realistic scenarios, such as the traffic congestion problem in urban areas. In this repeated congestion game setting, players such as drivers in a congested city must choose between different routes to reach their destinations every day. As more drivers use a particular route, the congestion on that route increases, leading to higher travel times and lower utility. In this scenario, players can update their desired route every day to optimize their utility, but the observed utility by each player may be subject to randomness due to uncertainty in the actual congestion situation (e.g., the influence of the weather). All these make it suitable to model the congestion game in an online learning framework.
While there have been various decentralized algorithms that can attain the Nash equilibria efficiently for the general online games, they can suffer from a linear dependency on the number of actions when directly applied to the congestion game (Daskalakis et al., 2011; Syrgkanis et al., 2015; Chen and Peng, 2020; Hsieh et al., 2021; Daskalakis et al., 2021; Giannou et al., 2021), which is exponential with \(k,F\). On the other hand, algorithms designed specifically for congestion games either only converge to Nash equilibria asymptotically (Kleinberg et al., 2009; Krichene et al., 2015; Palaiopanos et al., 2017), on average (Cui et al., 2022), or require additional assumptions on the structure of the game (Chen and Lu, 2015, 2016). Moreover, to the best of our knowledge, there is no algorithm that can simultaneously guarantee both low regret and fast convergence to Nash equilibrium for each player. While some online learning algorithms, such as exponential weights, have been shown to converge faster than others due to the specific choice of regularization (Giannou et al., 2021), previous regret results indicate that their guarantees still rely on the exponentially large number of actions, due to their specific form of updates (exponential weighting) (Cohen et al., 2016).
In this paper, we study the online congestion game with semi-bandit and full information feedback. We propose a decentralized algorithm that modifies the celebrated exponential weights algorithm, which can be utilized by each player without additional information about other players' utility. From the individual player's perspective, we show that the algorithm guarantees sublinear individual regret, with respect to the best action in hindsight when holding the other player's strategy fixed. We remark that the regret is also only linear with respect to the number of facilities. As a result of this, we show that the optimal social welfare can be efficiently approximated, up to an error that is only linear with respect to the number of facilities. When a strict Nash equilibrium exists for the congestion game, we also prove that our algorithm is capable of converging to the strict Nash equilibrium fast, with an almost exponentially fast rate that is only linear with respect to the number of facilities.
## 2 Related works
Learning in gamesOnline learning has a long history that is closely tied to the development of game theory. The earliest literature can be traced back to Brown's proposal on using fictitious play to solve two-player zero-sum games (Brown, 1949). It is now understood that fictitious play can converge very slowly to Nash equilibrium (Daskalakis and Pan, 2014). On the other side, it
has been shown that if each player of a general-sum, multi-player game experiences regret that is at most \(f(T)\), the empirical distribution of the joint policy converges to a coarse correlated equilibrium of the game with a rate of \(O(f(T)/T)\)(Cesa-Bianchi and Lugosi, 2006). This implies that a variety of online learning algorithms such as Hedge and Follow-The-Regularized-Leader algorithms can converge to coarse correlated equilibria at a rate of \(O(1/\sqrt{T})\).
While the standard no-regret learning dynamic can guarantee convergence to equilibria, it has been shown that more specialized no-regret learning protocols can do better (Daskalakis et al., 2011; Syrgkanis et al., 2015; Chen and Peng, 2020; Hsieh et al., 2021; Daskalakis et al., 2021). It has also been shown that when strict pure Nash equilibria are present, algorithms that are based on entropic regularization (e.g. exponential weights) can converge fast to the equilibria (Cohen et al., 2016; Giannou et al., 2021). Moreover, such convergence rate holds for a variety of different feedback models, from full information to bandit feedback.
Though all of the above-mentioned methods are applicable to congestion games, the results usually involve a linear dependency on the number of actions. As each action is a combination of the different facilities (resources) in the congestion games, the results lead to the undesirable exponential dependency on the number of facilities.
Learning in online congestion gamesCongestion games were first introduced in the seminal work Rosenthal (1973) as a class of games with pure-strategy Nash equilibria. It has then been extensively studied, where its Nash equilibria have been characterized in Roughgarden and Tardos (2002) and a comprehensive introduction has been given in Roughgarden (2007).
In the online setting, many works use no-regret learning to develop learning dynamics in this class of games for efficient convergence. Kleinberg et al. (2009) are the first to study no-regret learning for congestion games. They showed that the well-known multiplicative weights learning algorithm results in convergence to pure equilibria. Furthermore, they identified a set of mixed Nash equilibria that are weakly stable and showed that the distribution of play converges to this set. Followup works Krichene et al. (2015) showed that multiplicative weights algorithms converge to the set of Nash equilibria in the sense of Cesaro means, and Palaiopanos et al. (2017) investigated the effect of learning rate on convergence to Nash equilibria.
With an additional assumption of convex potential functions, Chen and Lu (2015, 2016) established a non-asymptotic convergence rate. However, their rate has an exponential dependency on the number of facilities. Cui et al. (2022) gave the first non-asymptotic convergence rate under semi-bandit feedback and without an exponential dependency on the number of facilities. However, the convergence is with respect to the averaged-over-time policy and suffers from a dependency of \(F^{9}\), where \(F\) is the number of facilities. This result is later improved by concurrent work Panageas et al. (2023), who proposed an online stochastic gradient descent algorithm that convergences to an \(\epsilon\)-approximate Nash equilibrium in \(O(\epsilon^{-5})\) time while each individual player enjoys a regret of \(O(T^{4/5})\).
Combinatorial bandits and shortest pathCombinatorial bandits offer an extension of the classic multi-armed bandit problem where the player must select an action that involves a combination of various resources (Cesa-Bianchi and Lugosi, 2012; Chen et al., 2013; Lattimore and Szepesvari, 2020). In a special case, the shortest path problem can be viewed as a combinatorial bandit problem where the resources are edges on a graph and the action is a path (Gyorgy et al., 2007). Efficient algorithms have been proposed for these problems, and it has been shown that the sublinear regret only linearly depends on the number of resources. However, it is important to note these algorithms
are designed for a single player, and as a result, they may not converge to a Nash equilibrium when applied directly to congestion games by allowing each player to execute the algorithm.
## 3 Problem Formulation
### Congestion game
A congestion game with \(n\) players is defined by \(\mathcal{G}=\left(\mathcal{F},\{\mathcal{A}_{i}\}_{i=1}^{n},\{r^{f}\}_{f\in \mathcal{F}}\right)\), where i) \(\mathcal{F}\) is the facility set that contains \(F\) facilities; ii) \(\mathcal{A}_{i}\) is the action space for player \(i\) and contains \(A\) actions (we assume that the action space for each player is the same), where each action \(a_{i}\in\mathcal{A}_{i}\) is a combination of \(k\) facilities in \(\mathcal{F}\); and iii) \(r^{f}:(\mathcal{A}_{1}\times\cdots\times\mathcal{A}_{n})\rightarrow[0,1]\) is the reward function for facility \(f\in\mathcal{F}\), which only depends on the number of players choosing this facility, i.e., \(\sum_{i=1}^{n}\mathbb{I}\{f\in a_{i}\}\).We denote \(a=(a_{i},a_{-i})\) as a joint action, where \(a_{-i}\) is the actions of all other players except player \(i\). The total reward collected by player \(i\) with joint action \(a=(a_{i},a_{-i})\) is \(r_{i}(a_{i},a_{-i})=\sum_{f\in a_{i}}r^{f}(a_{i},a_{-i})\). Without loss of generality, we assume that \(r^{f}(a)\in[0,1]\).
Deterministically playing actions \(a=(a_{i},a_{-i})\) is referred to as a pure strategy. The player can also play a mixture of pure strategy, \(\omega_{i}\in\Delta(\mathcal{A}_{i})\), where \(\Delta(\mathcal{A}_{i})\) denotes the probability simplex of action space \(\mathcal{A}_{i}\). Similarly, we use \(\omega=(\omega_{i},\omega_{-i})\) to denote a joint randomized policy.
### Nash equilibrium
One of the commonly used solution concepts in congestion games is Nash equilibrium (NE), and the policies that lead to a Nash equilibrium are referred to as Nash policies. The players are said to be in a Nash equilibrium when no player has an incentive from deviating from its current policy (as described in the definition below).
**Definition 3.1** (Nash equilibrium).: _A policy \(\omega^{*}=(\omega_{1}^{*},\ldots,\omega_{n}^{*})\) is called a **Nash equilibrium** if for all \(i\in[n]\), \(r_{i}(\omega_{i}^{*},\omega_{-i}^{*})\geq r_{i}(\omega_{i},\omega_{-i}^{*}) \,,\forall\omega_{i}\in\Delta(\mathcal{A}_{i})\). When \(\omega^{*}\) is pure, the equilibrium is called a **pure Nash equilibrium**. In addition, when the strategy is pure and the inequality is a strict inequality, the equilibrium is called a **strict Nash equilibrium**._
**Fact 3.1** ((Rosenthal, 1973)).: _There exists a pure Nash equilibrium in any congestion game._
### Social welfare and price of anarchy
Except for Nash equilibrium, another commonly used metric to measure the efficiency of the dynamics between the players is through social welfare. For a given joint action \(a=\{a_{i}\}_{i=1}^{n}\), the social welfare is defined to be the sum of players' rewards, i.e., \(W(a)=\sum_{i=1}^{n}r_{i}(a)\), and the optimal social welfare of the game is defined as
\[\mathrm{OPT}=\max_{a\in\mathcal{A}_{1}\times\cdots\times\mathcal{A}_{n}}W(a)\,.\]
This optimality is under the case where a central coordinator could dictate each player's strategy, and each player's individual incentives are not considered.
Based on the definition of \(\mathrm{OPT}\), We can define smooth games as follows.
**Definition 3.2** (Smooth game (Roughgarden, 2009)).: _A game is \((\lambda,\mu)\)-smooth if there exists a joint action \(a^{*}\) such that for any joint action \(a\), \(\sum_{i\in n}r_{i}\left(a_{i}^{*},a_{-i}\right)\geq\lambda\mathrm{OPT}-\mu W(a)\)._
Nisan et al. (2007) show that congestion games are smooth when the reward function are affine, that is, when \(r^{f}(a)\) is an affine function on the scalar variable \(\sum_{i=1}^{n}\mathbb{I}\{f\in a_{i}\}\). This property enables certain decentralized no-regret learning dynamics to efficiently approximate the optimal welfare (Syrgkanis et al., 2015).
### Online congestion game
In this paper, we study the congestion game in an online setting with a finite time horizon \(T\), where the underlying reward function is unknown. At each time step \(t\in[T]\), each player chooses (randomized) policy \(\omega_{i}^{t}\), from which it forms a joint policy \(\omega^{t}=(\omega_{1}^{t},\ldots,\omega_{n}^{t})\). Then each player \(i\) draws a random action \(a_{i}^{t}\sim\omega_{i}^{t}\), plays this action (denote \(a^{t}\) the joint action), and receives overall reward of \(\sum_{f\in a_{i}^{t}}R^{f}(a^{t})\), where \(R^{f}(a^{t})\)'s are random variables that satisfy the following assumption.
**Assumption 3.1**.: _For any facility \(f\in\mathcal{F}\), any joint action \(a^{t}\) and any player \(i\in[n]\), let \(\mathcal{H}_{t}\) be the history up to time step \(t-1\). Then, \(R^{f}(a)\in[0,1]\), and \(\mathbb{E}\left[R^{f}(a^{t})\mid\mathcal{H}_{t}\right]=r^{f}(a^{t})\)._
The assumption implies that the mean of \(R^{f}(a^{t})\) is always \(r^{f}(a^{t})\). Hence the Nash equilibrium and expected social welfare of the online congestion game is the same as those of the offline congestion game.
We consider two types of feedback rules in this paper: _semi-bandit feedback_, and _full information feedback_. In the semi-bandit feedback, player \(i\) observes all the \(R^{f}(a^{t})\)'s for any \(f\in a_{i}\) (only the facilities he played); and in full information feedback, player \(i\) observes all possible information \(R^{f}(a_{i},a_{-i})\), for every \(a\in\mathcal{A}_{i}\), \(\forall f\in a\).
The efficiency of a sequence of policy \(\{\omega_{i}^{t}\}_{t=1}^{T}\) can be measured by the individual regret of all the players (which is defined as follows).
**Definition 3.3** (Individual regret).: _The individual regret of player \(i\) playing policy \(\{\omega_{i}^{t}\}_{t=1}^{T}\) is defined as the cumulative difference between the received rewards and the rewards incurred by a best-inhalsight policy, that is_
\[\mathrm{Regret}_{i}(T)=\max_{\omega_{i}\in\mathcal{A}_{i},\{\omega_{-i}^{t}\} _{t=1}^{T}}\sum_{t=1}^{T}r_{i}(\omega_{i},\omega_{-i}^{t})-r_{i}(\omega_{i}^{ t},\omega_{-i}^{t})\,.\]
## 4 Algorithm
In this section, we introduce CongestEXP, a decentralized algorithm for online congestion games (Algorithm 1).
The algorithm uses the combinatorial nature of the action space. Each player maintains a sampling distribution \(\omega_{i}^{t}\) and a facility-level reward estimator \(\tilde{y}_{i}^{t}\). At each time step, they first draw a random action \(a_{i}^{t}\sim\omega_{i}^{t}\) and play this action. Then they use their received information to update \(\tilde{y}_{i}^{t}(f)\)'s (for all \(f\in\mathcal{F}\)) as follows
\[\tilde{y}_{i}^{t}(f)=1-\frac{\mathbb{I}\{f\in a_{i}^{t}\}(1-R^{f}(a^{t}))}{q_{ i}^{t}(f)}\,,\quad q_{i}^{t}(f)=\sum_{a_{i}\in\mathcal{A}_{i},f\in a_{i}}\omega_{i}^ {t}(a_{i})\,, \tag{1}\]
where \(q_{i}^{t}(f)\) is the probability that player \(i\) selects facility \(f\) at time \(t\) based on its current policy \(\omega_{i}^{t}\). One can easily check that if \(f\in a_{i}^{t}\), \(\tilde{y}_{i}^{t}(f)\) is an unbiased estimator for \(r^{f}(a^{t})\), and with these facility-level reward estimators, the players then update \(\omega_{i}^{t+1}\) as follows (exponential weighting), and then proceed to the next time step.
\[\omega_{i}^{t+1}(a)=\frac{\prod_{f\in a}\tilde{\omega}_{i}^{t}(f)}{\sum_{a_{i} \in\mathcal{A}_{i}}\prod_{f^{\prime}\in a_{i}}\tilde{\omega}_{i}^{t}(f^{\prime })}\,,\forall a\in\mathcal{A}_{i}\,,\quad\text{where}\;\;\tilde{\omega}_{i}^{t }(f)=\exp\left(\eta\sum_{j=1}^{t}\tilde{y}_{i}^{j}(f)\right)\,. \tag{2}\]
On the one hand, in the semi-bandit setting, our algorithm leverages this kind of feedback and estimates rewards at the facility level. We note that this idea has also been previously utilized to tackle online shortest path problems and combinatorial bandit problems, as documented in literature (Gyorgy et al., 2007; Cesa-Bianchi and Lugosi, 2012; Chen et al., 2013; Combes et al., 2015). This enables us to achieve a low individual regret (Theorem 5.1) and guarantee a lower bound for the overall social welfare (Corollary 5.1). On the other hand, our algorithm constructs exponential weights based on the reward estimation at the action level. This makes sure that the joint policy \(\omega^{t}\) can converge to a Nash equilibrium fast when it is nearby (Theorem 5.2 and 5.3).
In summary, our results indicate that adopting Algorithm 1 in a congestion game leads to favorable outcomes. Each player enjoys favorable cumulative individual rewards, without compromising the overall social welfare. Moreover, when the joint policy is close to the Nash equilibrium, players can quickly converge to a stable equilibrium state, avoiding inefficient and chaotic dynamics.
## 5 Theoretical Results
In this section, we present our main theoretical results.
### Sublinear individual regret with linear dependency on \(F\)
Our first theorem shows that each individual player enjoys a sublinear individual regret.
**Theorem 5.1**.: _Under semi-bandit feedback, Algorithm 1 with \(\eta=\frac{1}{\sqrt{T}}\) satisfies that for all \(i\in[n]\),_
\[\text{Regret}_{i}(T)=O\left(kF\sqrt{T}\right).\]
Compared with naively applying exponential weights on the congestion game (with a regret of \(\tilde{O}\left(\sqrt{A_{i}T}\right)\)(Auer et al., 2002)), we can see that Theorem 5.1 reduces the factor \(\sqrt{A_{i}}\) to \(kF\). This is a significant improvement since \(A\approx F^{k}\) is exponentially larger than \(kF\).
Though there exist some works to achieve a similar regret upper bound (Daskalakis et al., 2021), we emphasize that these algorithms only work in the full-information setting, but not the semi-bandit setting. Besides, our algorithm can converge to a strict Nash equilibrium fast, while existing ones can only guarantee to converge to a coarse correlated equilibrium (please see details in Section 5.2).
Tight approximation to optimal welfareOne immediate consequence of Theorem 5.1 is that our proposed algorithm can achieve a tight approximation to the optimal social welfare.
**Corollary 5.1**.: _Under semi-bandit feedback, if the congestion game is \((\lambda,\mu)\)-smooth, then Algorithm 1 with \(\eta=\frac{1}{\sqrt{T}}\) satisfies_
\[\frac{1}{T}\sum_{t=1}^{T}W(\omega^{t})\geq\frac{\lambda}{1+\mu}\mathrm{OPT}-O \left(\frac{nkF}{\sqrt{T}(1+\mu)}\right)\,.\]
We remark that \(\frac{\lambda}{1+\mu}\mathrm{OPT}\) is shown to be a tight approximation of optimal social welfare possible by offline algorithms that attain Nash equilibrium in congestion games (Roughgarden, 2009). Therefore, the above result shows that our algorithm is as efficient as any offline Nash policy asymptotically.
Technical highlight of Theorem 5.1In classical proofs of exponential weights algorithms, the regret is closely linked to the quadratic term of the reward estimator, i.e., \(\mathbb{E}_{t}[\sum_{a_{i}\in\mathcal{A}_{i}}\omega_{i}^{t}(a_{i})\left( \tilde{y}_{i}^{t}(a_{i})\right)^{2}]\)(Auer et al., 2002; Lattimore and Szepesvari, 2020), where \(\tilde{y}_{i}^{t}(a_{i})\) is the estimated reward of action \(a_{i}\) at time step \(t\), and \(\mathbb{E}_{t}[\cdot]\) denotes the conditional expectation over all history up to time \(t\). If we can upper bound this term by a constant polynomial with \(k\) and \(F\), then we can remove the exponential factor in the individual regret upper bound.
With our facility level estimator (Eq. (1)), \(\tilde{y}_{i}^{t}(a_{i})=\sum_{f\in a_{i}}\tilde{y}_{i}^{t}(f)\), the above term could be upper bounded as:
\[\sum_{a_{i}\in\mathcal{A}_{i}}\omega_{i}^{t}(a_{i})\bigg{(}\sum_{ f\in a_{i}}\tilde{y}_{i}^{t}(f)\bigg{)}^{2}\] \[\leq k\sum_{a_{i}\in\mathcal{A}_{i}}\omega_{i}^{t}(a_{i})\sum_{f \in a_{i}}\left(1-\frac{\mathbb{I}\{f\in a_{i}^{t}\}(1-R^{f}(a_{i}^{t},a_{-i}^ {t}))}{q_{i}^{t}(f)}\right)^{2} \tag{3}\] \[=k+k\sum_{a_{i}\in\mathcal{A}_{i}}\omega_{i}^{t}(a_{i})\sum_{f \in a_{i}}\left(\left(\frac{\mathbb{I}\left\{f\in a_{i}^{t}\right\}(1-R^{f}(a _{i}^{t},a_{-i}^{t}))}{q_{i}^{t}(f)}\right)^{2}-\frac{2\mathbb{I}\left\{f\in a _{i}^{t}\right\}(1-R^{f}(a_{i}^{t},a_{-i}^{t}))}{q_{i}^{t}(f)}\right)\] (4) \[\leq k+k\sum_{f\in\mathcal{F}}\left(\frac{\mathbb{I}\left\{f\in a _{i}^{t}\right\}(1-R^{f}(a_{i}^{t},a_{-i}^{t}))}{q_{i}^{t}(f)}\right)^{2}\sum_ {a_{i}\in\mathcal{A},f\in a_{i}}\omega_{i}^{t}(a_{i})\] \[\leq k+k\sum_{f\in\mathcal{F}}\left(\frac{\mathbb{I}\left\{f\in a _{i}^{t}\right\}(1-R^{f}(a_{i}^{t},a_{-i}^{t}))}{q_{i}^{t}(f)}\right)^{2}q_{i}^ {t}(f)\,, \tag{5}\]
where Eq. (3) is by the Cauchy-Shwarz inequality, Eq. (4) is by noting \(\sum_{a_{i}\in\mathcal{A}_{i}}\omega_{i}^{t}(a_{i})=1\), and Eq. (5) is by the definition of \(q_{i}^{t}(f)\). Notice that \((1-R_{i}^{f}(a^{t}))^{2}\) is upper bounded by \(1\) and taking a conditional expectation over all history up to time \(t\) yields (denote \(\mathbb{E}_{t}\) as the conditional expectation operator, and the \(q_{i}^{t}(f)\) cancels out the \(\mathbb{I}\{f\in a_{i}^{t}\}^{2}\)).
\[\mathbb{E}_{t}\bigg{[}\sum_{a_{i}\in\mathcal{A}_{i}}\omega_{i}^{t}(a_{i})\bigg{(} \sum_{f\in a_{i}}\tilde{y}_{i}^{t}(f)\bigg{)}^{2}\bigg{]}\leq\ k+kF\,, \tag{6}\]
and this is an upper bound polynomial with \(k\) and \(F\).
From the above explanation, one can see the necessity of estimating the rewards at the facility level. If the reward estimator is constructed at the action level, that is, an estimator of the form
\[\tilde{y}_{i}^{t}(a_{i})=k-\frac{\mathbb{I}\left\{a_{i}=a_{i}^{t}\right\}\left(k -\sum_{f\in a_{i}^{t}}R_{i}^{f}\left(a_{i}^{t},a_{-i}^{t}\right)\right)}{ \omega_{i}^{t}(a_{i})},\]
Consider the case that \(R_{i}^{f}\left(a_{i}^{t},a_{-i}^{t}\right)\) is always 0 and at the beginning \(\omega_{i}^{t}(a_{i})=1/|\mathcal{A}_{i}|\), then this quadratic term is approximately
\[\mathbb{E}_{t}\Bigg{[}\sum_{a_{i}\in\mathcal{A}_{i}}\omega_{i}^{t}(a_{i}) \left(\tilde{y}_{i}^{t}(a)\right)^{2}\Bigg{]}\approx\sum_{a_{i}\in\mathcal{A }_{i}}\omega_{i}^{t}(a_{i})\cdot\left(\omega_{i}^{t}(a_{i})\left(\frac{k}{ \omega_{i}^{t}(a_{i})}\right)^{2}\right)=k^{2}|\mathcal{A}_{i}|\,,\]
which scales with the number of actions and is thus always exponentially large.
### Fast convergence to strict Nash equilibrium
Beyond the low individual regret, we also show that our algorithm can produce a set of policies \(\{\omega_{i}^{t}\}_{t=1}^{T}\) that converges fast to a strict Nash equilibrium \(\omega_{i}^{*}\) in the full-information setting.
We first consider a simple case, where each player observes the expected rewards directly (which also take expectation on the randomness of \(a_{-i}^{t}\sim\omega_{-i}^{t}\), i.e. \(\mathbb{E}_{a_{-i}^{t}\sim\omega_{-i}^{t}}[f^{f}(a_{i},a_{-i}^{t})]\) for any \(a_{i}\)). In addition, we assume that any \(k\) facilities form an action.
**Theorem 5.2**.: _Consider the case where each player receives \(\mathbb{E}_{a_{-i}^{t}\sim\omega_{-i}^{t}}[r^{f}(a_{i}^{t},a_{-i}^{t})],\forall a _{i}\in\mathcal{A}_{i},\forall f\in a_{i}\) in a game that permits a strict Nash equilibrium \(\omega^{*}=(\omega_{1}^{*},\cdots,\omega_{n}^{*})\), and let \(\tilde{y}_{i}^{t}(f)=\mathbb{E}_{a_{-i}^{t}\sim\omega_{-i}^{t}}[r^{f}(a_{i}^{ t},a_{-i}^{t})]\) in Line 6 of Algorithm 1. Suppose \(\tilde{y}_{i}^{0}(f),\forall i\in[n]\) is initialized such that \(\omega^{0}\in U_{M}\subseteq U_{\epsilon}\), then for any \(i\in[n]\) and any \(t\), we have_
\[\left\|\omega_{i}^{t}-\omega_{i}^{*}\right\|_{1}\leq 2(kF\exp(-M-\eta\epsilon t))\,,\]
_where \(M\geq\left|\log\left(\frac{\epsilon}{2kF}\right)\right|\), and \(\epsilon\) is a constant that is game-dependent only._
**Remark 5.1**.: _We note that the convergence rate of the algorithm can be improved by increasing the step size \(\eta\). This is because when each player receives expected rewards, the player can take greedy steps toward the equilibrium strategy. This agrees with greedy strategies that are previously employed to reach strict Nash equilibrium (Cohen et al., 2016). However, such greedy policies would not work in the presence of reward uncertainty, as we will discuss in Theorem 5.3._
It is worth mentioning that the convergence rate of our algorithm does not rely on the number of actions \(A_{i}\), but rather solely on the number of facilities \(F\). This is an improvement over the previous findings for exponential weights algorithms with non-combinatorial action spaces in the context of a congestion game, where the rate is linearly dependent on the number of actions (Cohen et al., 2016). The reason for this is the utilization of our facility-level reward estimation technique once again.
Previous studies on the convergence rate of congestion games (Chen and Lu, 2015, 2016) have established a linear rate of convergence when the game possesses a smooth potential function and
the algorithm is given an appropriate initial starting point. The potential function provides a means to capture the incentives of all players to modify their actions and can be used to characterize the dynamics in policy updates. Assuming the smoothness of the potential function implies optimization on a simpler policy optimization landscape. In contrast, our algorithm achieves a much faster rate of convergence, and this convergence rate still holds even in the absence of a smooth potential function. We adopt a different approach where we directly argue through the algorithm update rule that the updated policy will always fall within a neighborhood around the Nash equilibrium. This bypasses the need for a smoothness potential function and demonstrates the effectiveness of our approach.
In addition, we remark that though some variants of Mirror Descent (MD) or Follow-the-Regularized-Leader (FTRL) algorithms are also proven to enjoy sublinear regret with logarithmic dependency on action space in the full information setting (Daskalakis et al., 2021), these results only imply convergence to an approximate coarse correlated equilibrium and do not enjoy convergence to Nash equilibrium. In comparison, Nash equilibrium is much more stable, as the dynamic will remain there unless external factors change, while coarse correlated equilibrium may be more sensitive to small changes in the correlation method, which can lead to deviation from the equilibrium (Nisan et al., 2007).
To prove Theorem 5.2, we first identify that there exists a neighborhood around the strict Nash equilibrium, such that for any player \(i\), his action in the strict Nash equilibrium is the only optimal choice.
**Lemma 5.1**.: _If there exists a strict Nash equilibrium \(a^{*}=(a_{1}^{*},\ldots,a_{n}^{*})\), then there exists \(\epsilon>0\) and a neighborhood \(U_{\epsilon}\) of \(a^{*}\), such that for all \(\tilde{\omega}=(\tilde{\omega}_{i},\tilde{\omega}_{-i})\in U_{\epsilon}\),_
\[r_{i}(a_{i}^{*},\tilde{\omega}_{-i})-r_{i}(a_{i},\tilde{\omega}_{-i})\geq \epsilon\,,\quad\forall i\in[n]\,,a_{i}\in\mathcal{A}_{i}\,,a_{i}\neq a_{i}^{ *}\,,\]
_where \(r_{i}(a_{i},\omega_{-i})\) is defined as \(\mathbb{E}_{a_{-i}\sim\omega_{-i}}[r_{i}(a_{i},a_{-i})]\)._
Proof.: By the existence of the strict Nash equilibrium, there exists \(\epsilon>0\) such that \(\forall a_{i}\in\mathcal{A}_{i}\), \(r_{i}\left(a_{i},a_{-i}^{*}\right)\leq r_{i}(a_{i}^{*},a_{-i}^{*})-2\epsilon\). Then by continuity, we know that for \((\tilde{\omega}_{i},\tilde{\omega}_{-i})\in U_{\epsilon}\), \(r_{i}(a_{i},\tilde{\omega}_{-i})\leq r_{i}(a_{i}^{*},\tilde{\omega}_{-i})-\epsilon\).
Moreover, if the difference in reward estimator \(\tilde{z}_{i}^{t}(a_{i})=\sum_{j=0}^{t}\left(\sum_{f\in a_{i}}\tilde{y}_{i}^{ j}(f)-\sum_{f^{\prime}\in a_{i}^{*}}\tilde{y}_{i}^{j}(f^{\prime})\right)\) is upper bounded by some small enough constant, then the induced policy of Algorithm 1 falls into the neighborhood set \(U_{\epsilon}\).
**Lemma 5.2**.: _Let \(\tilde{z}_{i}^{t}(a_{i})=\sum_{j=0}^{t}\left(\sum_{f\in a_{i}}\tilde{y}_{i}^{ j}(f)-\sum_{f^{\prime}\in a_{i}^{*}}\tilde{y}_{i}^{j}(f^{\prime})\right)\), and define_
\[U_{M}=\left\{\omega^{t}\text{ computed by Algorithm 1 }|\;\tilde{z}_{i}^{t}(a_{i})\leq-M\,, \forall a_{i}\neq a_{i}^{*},\forall i\in[n]\right\}\,.\]
_For sufficiently large \(M\), \(U_{M}\subseteq U_{\epsilon}\). Moreover, following the updates of Algorithm 1, if \(\omega^{t}\in U_{M}\), then \(\omega^{t+1}\in U_{M}\)._
Thus, if \(\omega^{0}\) is in the neighborhood \(U_{M}\subseteq U_{\epsilon}\), then the reward estimator \(\tilde{z}_{i}^{t}(a_{i})\) can only decrease (by Lemma 5.1), and hence the algorithm will give an updated policy \(\omega^{t}\) that is also within the neighborhood set \(U_{\epsilon}\).
Also, note that \(\omega_{i}^{*}\) is a strict Nash equilibrium, which implies that \(\left|\omega_{i}^{t}-\omega_{i}^{*}\right|_{1}=2(1-\omega_{i}^{t}(a_{i}^{*}))\). Hence, to establish the convergence rate, we need to lower bound
\[\omega_{i}^{t}(a_{i}^{*})=\frac{\prod_{f\in a_{i}^{*}}\tilde{\omega}_{i}^{t}(f)} {\sum_{a^{\prime}\in\mathcal{A}}\prod_{f^{\prime}\in a^{\prime}}\tilde{\omega} _{i}^{t}(f^{\prime})}=\frac{\prod_{f\in a_{i}^{*}}\exp\left(\eta\sum_{j=0}^{t} \tilde{y}_{i}^{j}(f)\right)}{\sum_{a^{\prime}\in\mathcal{A}}\prod_{f^{\prime} \in a^{\prime}}\exp\left(\eta\sum_{j=0}^{t}\tilde{y}_{i}^{j}(f^{\prime})\right) }\,.\]
Technical challengeWe remark that if we directly apply Lemma 5.2, we can get
\[\omega_{i}^{t}(a_{i}^{*}) =\frac{\prod_{f\in a_{i}^{*}}\tilde{\omega}_{i}^{t}(f)}{\sum_{a^ {\prime}\in\mathcal{A}}\prod_{f^{\prime}\in a^{\prime}}\tilde{\omega}_{i}^{t} (f^{\prime})}\geq\ \frac{1}{1+\sum_{a_{i}\in\mathcal{A}_{i},a_{i}\neq a_{i}^{*}}\left(\prod_{f \in a_{i}}\tilde{\omega}_{i}^{t}(f)-\prod_{f^{\prime}\in a_{i}^{*}}\tilde{ \omega}_{i}^{t}(f^{\prime})\right)}\] \[\geq 1-\sum_{a_{i}\in\mathcal{A}_{i},a_{i}\neq a_{i}^{*}}\left( \prod_{f\in a_{i}}\tilde{\omega}_{i}^{t}(f)-\prod_{f^{\prime}\in a_{i}^{*}} \tilde{\omega}_{i}^{t}(f^{\prime})\right).\]
Suppose one can upper bound \(\sum_{a_{i}\in\mathcal{A}_{i},a_{i}\neq a_{i}^{*}}\left(\prod_{f\in a_{i}} \tilde{\omega}_{i}^{t}(f)-\prod_{f^{\prime}\in a_{i}^{*}}\tilde{\omega}_{i}^{t }(f^{\prime})\right)\leq\exp(-t)\), then this gives \(1-\sum_{a_{i}\in\mathcal{A}_{i},a_{i}\neq a_{i}^{*}}\exp(-t)\), which yields a convergence rate of \((|\mathcal{A}_{i}|-1)\exp(-t)\) as \(\left\|\omega_{i}^{t}-\omega_{i}^{*}\right\|_{1}=2(1-\omega_{i}^{t}(a_{i}^{*}))\). This approach implies that the convergence rate scales linearly with the number of actions (thus exponentially with the number of facilities), and is what we wanted to avoid in the analysis.
To overcome this exponential dependency, we utilize the fact that any \(k\)-facility combination is an action, which means that we can order the facility from \(f_{1},\ldots,f_{F}\) in decreasing order of \(\tilde{\omega}_{i}^{t}(f)\) and \(f_{1},\ldots,f_{k}\) form the optimal pure Nash action \(a_{i}^{*}\).
Proof of Theorem 5.2.: Using the above-mentioned observation, we have,
\[\frac{\prod_{f\in a_{i}^{*}}\exp\left(\eta\sum_{j=0}^{t}\tilde{y }_{i}^{j}(f)\right)}{\sum_{a^{\prime}\in\mathcal{A}}\prod_{f^{\prime}\in a^{ \prime}}\exp\left(\eta\sum_{j=0}^{t}\tilde{y}_{i}^{j}(f)\right)} \geq\ \prod_{m\in[k]}\frac{\exp\left(\eta\sum_{j=0}^{t}\tilde{y}_{i}^{j} (f_{m})\right)}{\exp\left(\eta\sum_{j=0}^{t}\tilde{y}_{i}^{j}(f_{m})\right)+ \sum_{\ell>k}\exp\left(\eta\sum_{j=0}^{t}\tilde{y}_{i}^{j}(f_{\ell})\right)}\] \[=\ \prod_{m\in[k]}\frac{1}{1+\sum_{\ell>k}\exp\left(\eta\sum_{j=0}^{ t}\left(\tilde{y}_{i}^{j}(f_{\ell})-\tilde{y}_{i}^{j}(f_{m})\right)\right)}\] \[\geq\ \prod_{m\in[k]}\left(1-\sum_{\ell>k}\exp\left(\eta\sum_{j=0}^{ t}\left(\tilde{y}_{i}^{j}(f_{\ell})-\tilde{y}_{i}^{j}(f_{m})\right)\,\right) \right), \tag{7}\]
where the first inequality is by noting that any \(\prod_{f^{\prime}\in a^{\prime}}\exp\left(\eta\sum_{j=0}^{t}\tilde{y}_{i}^{t}(f^ {\prime})\right)\) on the denominator of the left-hand side can be found on the product of the denominators of the right-hand side.
Consider an action \(a_{i}\) formed from \(a_{i}^{*}\) by replacing \(f_{m}\) (\(m\leq k\)) with \(f_{\ell}\) (\(\ell\geq k+1\)), then
\[\tilde{z}_{i}^{t}(a_{i})=\sum_{j=0}^{t}\left(\sum_{f\in a_{i}}\tilde{y}_{i}^{j} (f)-\sum_{f^{\prime}\in a_{i}^{*}}\tilde{y}_{i}^{j}\left(f^{\prime}\right) \right)=\sum_{j=0}^{t}\left(\tilde{y}_{i}^{j}(f_{\ell})-\tilde{y}_{i}^{j}(f_{m })\right)\,, \tag{8}\]
\[\tilde{z}_{i}^{t}(a_{i})=\tilde{z}_{i}^{t-1}(a_{i})+\eta\bigg{(}\sum_{f\in a_{i}} \tilde{y}_{i}^{t}(f)-\sum_{f^{\prime}\in a_{i}^{*}}\tilde{y}_{i}^{t}\left(f^{ \prime}\right)\bigg{)}\leq-M-\eta\epsilon t\,, \tag{9}\]
where the inequality is by the condition of starting with \(\omega^{0}\in U_{M}\) such that \(M\) is large enough for \(U_{M}\subseteq U_{\epsilon}\) and Lemma 5.2.
Combining Equations (7), (8) and (9), we have \(\omega_{i}^{t}(a_{i}^{*})\geq 1-kF\exp(-M-\eta\epsilon t)\). Therefore, we obtain the result in Theorem 5.2, i.e., \(\big{\|}\omega_{i}^{t}-\omega_{i}^{*}\big{\|}_{1}=2(1-\omega_{i}^{t}(a_{i}^{*}) )\leq 2(kF\exp(-M-\eta\epsilon t))\).
Then we consider the case that each player observes only \(R^{f}(a_{i}^{t},a_{-i}^{t})\), instead of the expected rewards.
**Theorem 5.3**.: _Consider the case where each player receives a stochastic reward under the full information setting. Assume the game permits a strict Nash equilibrium \(\omega^{*}=(\omega_{1}^{*},\cdots,\omega_{n}^{*})\). Let \(\tilde{y}_{i}^{t}(f)=R^{f}(a_{i}^{t},a_{-i}^{t})\) in Line 6 of Algorithm 1, and set the learning rate to be time-dependent such that \(\sum_{t=0}^{\infty}\eta_{t}^{2}\leq\frac{\delta\cdot M^{2}}{8kn(F-1)}\leq \sum_{t=0}^{\infty}\eta_{t}=\infty\). Suppose \(\tilde{y}_{i}^{0}(f),\forall i\in[n]\) is initalized such that \(\omega^{0}\in U_{2M}\subseteq U_{\epsilon}\), then for any \(i\in[n]\) and any \(t\), we have_
\[\|\omega_{i}^{t}-\omega_{i}^{*}\|_{1}\leq 2kF\exp\left(-M-\epsilon\sum_{j=0}^{ t}\eta_{j}\right)\,,\]
_with probability at least \(1-\delta\), where \(M\geq\big{|}\log\big{(}\frac{\epsilon}{2kF}\big{)}\big{|}\), and \(\epsilon\) is a constant that is game-dependent only._
We remark that in the case of stochastic rewards, the convergence rate of our algorithm cannot be arbitrarily large, as the learning rate \(\eta\) cannot be taken to be arbitrarily large. If we take \(\eta_{t}=\beta t^{-\alpha}\), with \(\beta\) being a small positive constant and \(\alpha\in(1/2,1)\). Then our convergence rate is \(\|\omega_{i}^{t}-\omega_{i}^{*}\|_{1}\leq O\left(\exp\left(-\frac{\beta}{1- \alpha}t^{1-\alpha}\right)\right)\), which is close to exponentially fast convergence. When the reward function is smooth, we remark that this can imply each player only experience constant regret.
## 6 Conclusion
This paper studied the congestion game under semi-bandit feedback and presented a modified version of the well-known exponential weights algorithm. The algorithm ensures sublinear regret for every player, with the regret being linearly dependent on the number of facilities. Additionally, the proposed algorithm can learn a policy that rapidly converges to the pure Nash policy, with the convergence rate also being linearly dependent on the number of facilities. To our best knowledge, these are the first results on congestion games for sublinear individual regret and geometric Nash convergence rate, without an exponential dependency on the number of facilities.
There are several possible directions to further study the online congestion game. First, as our work only considers the semi-bandit feedback model for individual regret, the regret and convergence rate under the full-bandit feedback model remains unclear. For the Nash convergence result, our algorithm only enjoys theoretical guarantee in the full-information setting. It remains as future work to extend this result to semi-bandits and full-bandits feedback model. Moreover, it also remains in question whether the results of this work can be extended to the online Markov congestion game proposed by Cui et al. (2022). |
2301.09854 | Effective Baselines for Multiple Object Rearrangement Planning in
Partially Observable Mapped Environments | Many real-world tasks, from house-cleaning to cooking, can be formulated as
multi-object rearrangement problems -- where an agent needs to get specific
objects into appropriate goal states. For such problems, we focus on the
setting that assumes a pre-specified goal state, availability of perfect
manipulation and object recognition capabilities, and a static map of the
environment but unknown initial location of objects to be rearranged. Our goal
is to enable home-assistive intelligent agents to efficiently plan for
rearrangement under such partial observability. This requires efficient
trade-offs between exploration of the environment and planning for
rearrangement, which is challenging because of long-horizon nature of the
problem. To make progress on this problem, we first analyze the effects of
various factors such as number of objects and receptacles, agent carrying
capacity, environment layouts etc. on exploration and planning for
rearrangement using classical methods. We then investigate both monolithic and
modular deep reinforcement learning (DRL) methods for planning in our setting.
We find that monolithic DRL methods do not succeed at long-horizon planning
needed for multi-object rearrangement. Instead, modular greedy approaches
surprisingly perform reasonably well and emerge as competitive baselines for
planning with partial observability in multi-object rearrangement problems. We
also show that our greedy modular agents are empirically optimal when the
objects that need to be rearranged are uniformly distributed in the environment
-- thereby contributing baselines with strong performance for future work on
multi-object rearrangement planning in partially observable settings. | Engin Tekin, Elaheh Barati, Nitin Kamra, Ruta Desai | 2023-01-24T08:03:34Z | http://arxiv.org/abs/2301.09854v1 | # Effective Baselines for Multiple Object Rearrangement Planning in
###### Abstract
Many real-world tasks, from house-cleaning to cooking, can be formulated as multi-object rearrangement problems - where an agent needs to get specific objects into appropriate goal states. For such problems, we focus on the setting that assumes a pre-specified goal state, availability of perfect manipulation and object recognition capabilities, and a static map of the environment but unknown initial location of objects to be rearranged. Our goal is to enable home-assistive intelligent agents to efficiently plan for rearrangement under such partial observability. This requires efficient trade-offs between exploration of the environment and planning for rearrangement, which is challenging because of long-horizon nature of the problem. To make progress on this problem, we first analyze the effects of various factors such as number of objects and receptacles, agent carrying capacity, environment layouts etc. on exploration and planning for rearrangement using classical methods. We then investigate both monolithic and modular deep reinforcement learning (DRL) methods for planning in our setting. We find that monolithic DRL methods do not succeed at long-horizon planning needed for multi-object rearrangement. Instead, modular greedy approaches surprisingly perform reasonably well and emerge as competitive baselines for planning with partial observability in multi-object rearrangement problems. We also show that our greedy modular agents are empirically optimal when the objects that need to be rearranged are uniformly distributed in the environment - thereby contributing baselines with strong performance for future work on multi-object rearrangement planning in partially observable settings.
1
## 1 Introduction
Rearrangement problems, where the goal is to get a physical environment in a specific state has been proposed as the next frontier for embodied AI research [1]. Many tasks in everyday life from house cleaning [13, 14] to preparing groceries [15] can be formulated as rearrangement problems. Therefore, developing embodied agents to solve these problems would allow to us to make progress towards the next generation of home assistant agents.
In an embodied rearrangement task, an agent must rearrange an unknown environment using a combination of sensor observations and prior knowledge to reach a goal state, which is specified either geometrically, or through image, language, or predicates [1]. Solving such generic rearrangement tasks requires an agent to solve a plethora of sub-problems, such as, reasoning about the goal state through semantic and commonsense understanding; building a map of the environment in order to navigate, search, and explore; effectively planning to figure out which objects to pick-drop and in what order; and finally manipulating objects. These problems span the spectrum of perception, planning, navigation, and manipulation making rearrangement an extremely challenging problem [1].
Because of the complexity of rearrangement problems, previous research has focused on different slices of the problem. Some researchers focus on understanding the goal state by leveraging human preferences and commonsense reasoning [11, 12]; or through reasoning about changes in the environment configuration [16, 15]. Another body of work focuses on perception, planning, and manipulation for rearrangement, albeit for small number (upto 5) of objects [15, 16]. We focus on a specific slice of the rearrangement problem - _planning under partial observability_. In particular, to decouple our investigation from the manipulation, navigation and perception challenges, we assume perfect object recognition and interaction capabilities in the agent. We further assume the availability of a static map to focus on high-level task planning for rearrangement instead of integrated task and motion planning (TAMP) [1]. Lastly, instead of dealing with uncertain information ranging from incomplete maps, mis-classified objects etc., which makes it challenging to study the planning problem, we directly consider uncertainty over the object locations - decoupled from perception - which allows us to study the implications of uncertainty on the rearrangement planning problem in a more systematic and controlled manner. Specifically, the problem requires efficient exploration of the environment in combination with well-balanced planning for rearrangement and this forms the core of our work.
Overall, given a pre-specified goal state, perfect manipulation and object recognition capabilities, and a static map of the environment; our goal is to enable efficient planning
for rearrangement. Such a laser focus consequently enables us to analyze and present the effects of various factors such as number of objects, agent carrying capacity, environment layout complexity etc. on the complexity of planning. We find that higher agent capacity and larger environment layouts make the rearrangement planning problem challenging. On the other hand, counter-intuitively, higher number of objects reduces the problem complexity, since in this case rearrangement for seen objects implicitly shoulders the burden of exploration for unseen objects, thereby reducing the need for additional explicit exploration, and ultimately reducing the problem complexity.
Classical planners often fail to optimally solve task planning problems in real-time [1] and need a world model to be known. Motivated by these limitations of the classical approaches, we next present end-to-end monolithic deep reinforcement learning (DRL) based approaches. We find that akin to prior works, monolithic DRL methods do not succeed at long-horizon planning problems. We then propose modular methods as competitive baselines for planning under uncertainty in rearrangement problems. Specifically, our methods investigate ways to achieve better trade-off between exploration and planning. We empirically demonstrate that approaches which plan greedily and only explore conservatively achieve this trade-off optimally when objects that need to be rearranged have uniform distribution in the environments. We hope that our analysis and baselines will provide a good starting point and benchmark for future work on rearrangement planning.
## 2 Related Work
### Object rearrangement
Object rearrangement has been studied in robotics [1], for a variety of problems ranging from table-top rearrangement [1, 2, 10] to house-scale rearrangement [1]. Majority of planning research for rearrangement in robotics further focuses on integrated task and motion planning [11, 12, 13]. Instead, our focus is on high-level task planning for house-scale rearrangement. Recent work from Newman et al. [1] and Agia et al. [2] have also focused on task planning for house-scale object rearrangement problems, however they assume full observability while we are interested in task planning for rearrangement under partial observability.
Embodied AI community, which is interested in virtual agents in addition to physical agents, has also started pushing on house-scale rearrangement problems [1, 12, 13, 14, 15]. Because of the use of perceptual sensors, they also have to deal with partial observability while doing rearrangement planning. These works push on solving rearrangement planning in combination with perception, manipulation, and navigation for rearrangement. Consequently, they only show rearrangement for scenarios with small number of objects (upto 5) and single agent capacity. Instead, we are interested in understanding various factors that affect rearrangement planning problem under partial observability. In particular, our focus is on the trade-off between two main components of rearrangement planning: object discovery via exploration and planning for rearrangement. We next review related work for exploration and planning.
### Exploration in 3D environments
Exploring unknown environments is a well-studied problem in robotics and state-of-the-art results are generally achieved with frontier-based exploration (FBE) [16, 17]. The overarching idea in frontier-based approaches is to define a set of frontier locations and decide the next frontier location to explore [16]. Recent studies have employed utility functions to choose between frontier locations in combination with DRL techniques for exploration [1, 18]. Ramakrishnan et al. [1] further benchmarked various DRL approaches with several reward functions as well as FBE method for embodied exploration. As an alternative approach to enable exploration, Fukazawa et al. [12] uses the reaction-diffusion equation on graphs and optimize a custom potential function in order to create observation points; and then visit them in an optimal order. However, most of these studies focus solely on exploration of a 3D environment with the goal of maximizing area coverage or object/landmark detection. Instead, we are interested in exploration for object discovery to enable downstream rearrangement. We take inspiration from these studies and investigate both learning-based and frontier-based exploration approaches to enable rearrangement.
### Planning for rearrangement
End-to-end DRL approaches as well as classical approaches based on Traveling Salesman Problem (TSP) formulation have both been used for rearrangement planning [16, 14, 15, 17]. Since multiple object rearrangement is a long-horizon problem, monolithic DRL approaches for the same have only been shown to be successful on small grid maps [14, 15]. Researchers have therefore used modular approaches [16, 14] or classical approaches based on TSP [16] for household rearrangement planning. Based on these findings, we also model planning for rearrangement as Capacitated Vehicle Routing Problem (CVRP), a variant of TSP, and use OR-Tools [13] to solve the same.
## 3 Problem Characterization
In this section, we concretely define the Multiple Object Rearrangement Planning (MORP) problem in mapped environments with partial observability over object locations. We also detail our procedure for generating the required datasets and evaluation metrics.
### Task Definition
**Notation:** We will use a Partially Observable Markov Decision Processes (POMDP) framework to formally define MORP. Let \(s\in\mathcal{S}\) denote a state in state space \(\mathcal{S}\), \(o\in\mathcal{O}\) denote an observation in observation space \(\mathcal{O}\), \(a\in\mathcal{A}\) denote action in action space \(\mathcal{A}\), \(g\in\mathcal{G}\) denote a goal specification in goal space \(\mathcal{G}\), and finally \(\pi(a_{t}|o_{t-1}...o_{0},g)=Pr(a_{t}|o_{t-1}...o_{0},g)\) agent's goal-conditioned policy. Note that we will omit time \(t\) subscript unless indicated otherwise.
**Goal Specification:** We consider type-level geometric goal for rearrangement as described in Batra et al. (2020) i.e., we describe the goal state for rearrangement geometrically with target location for all objects. Note that in this paper we will only use integers for location variables. Let \(\mathbb{Z}\) denote the set of integers and \(\mathbb{N}\) denote the set of natural numbers. We define \(p_{i}^{0}\in\mathcal{P}:\mathbb{Z}^{n_{o}\times 2}\) to be initial 2D object locations and \(r_{i}\in\mathcal{R}:\mathbb{Z}^{n_{r}\times 2}\) to be 2D receptacle locations where \(n_{o}\in\mathbb{N}\) is number of the objects and \(n_{r}\in\mathbb{N}\) number of object types/receptacles. Each object type has a unique receptacle but multiple objects can be rearranged to the same receptacle. Let \(f\) be the correspondence function to map object index to an index corresponding to its receptacle type, so that we can elicit the object-receptacle pairing. A goal is then specified as \(g=\bigwedge_{i=1}^{n_{o}}(p_{i}==r_{f(i)})\).
**Rearrangement Task:** Thus, we can formally define a rearrangement task as, given a goal specification \(g\), the agent must transform the initial state \(s_{0}\) to a goal state \(s^{*}\), by solely acting based on observations \(o\in\mathcal{O}\), where \(s^{*}\in\mathcal{S}\) is a state that satisfies goal specifications. Hence, an agent spawned randomly in a home environment must find objects scattered randomly around the house and rearrange them to their desired receptacles (see Figure 1).
### Dataset
**Scenes:** We choose 215 scenes from the Gibson dataset Xia et al. (2018), with low clutter Ramakrishnan et al. (2021) and no or limited furniture. Gibson scenes are loaded with AI Habitat simulator Szot et al. (2021) and converted to top-down occupancy maps. The top-down occupancy maps are discrete 2D grids \(\mathcal{M}\in\mathbb{Z}^{200\times 150}\) with elements \(m_{ij}\in\{-1,0,1\}\rightarrow\{\text{Innavigable},\text{Unexplored Navigable},\text{ Explored Navigable}\}\) with a discretization of 0.1 meter per cell. The map size of \(200\times 150\) was empirically chosen. Scene metrics are presented in Table 1.
**Objects and Recptacles:** We randomly place objects and receptacles on the occupancy map. Object states contain only type and location information, therefore only distinction between objects is type. Agent interaction can only modify object locations \(p_{i}\) and there is no other parameter in the environment that changes during the episode. Objects and receptacles are not considered to occupy space on \(\mathcal{M}\) to simplify collision computation. Multiple objects can be located in the same grid cell \(m_{ij}\).
**Episodes:** Let \(x^{t}:\mathbb{Z}^{2}\) and \(\phi^{t}\in\{0,..,7\}\rightarrow\{N,NE,E,SE,SW,W,NW\}\) be location and orientation of agent at time step \(t\) where {N:North, E:East, S:South, W:West}. At start of an episode, the agent is spawned at a random location \(x^{0}\) and a random heading \(\phi^{0}\) (encoded as a 8-dimensional one-hot vector) as shown in Figure 1. The episode is terminated: (a) successfully, if all objects are in their receptacles before the time-step \(max_{t}\), or (b) unsuccessfully, when the maximum episode time-step \(max_{t}\) is reached. Some objects may already be in their receptacles when the episode starts, but the agent must still verify this by at least seeing them in their receptacles for successful completion of goal \(g\). Figure 2 shows a successful episode completion. We split the dataset into small/medium/large categories based on the area enclosed by the map. Each category contains train/validation/test splits. Episode statistics are presented in Table 4 in the Appendix.
### Evaluation
For MORP, we are interested in measuring the agent's success rate in completing the task and the agent's efficiency, i.e., distance traveled/time taken for the task. In addition, to evaluate exploration methods, we use object discovery and
\begin{table}
\begin{tabular}{|c|c c c|} & \multicolumn{1}{c}{nav-area (\(m^{2}\))} & \multicolumn{1}{c}{nav-complexity} & \multicolumn{1}{c}{\#scenes} \\ \hline large & 66.09 & 19.36 & 88 \\ \hline medium & 30.53 & 12.07 & 120 \\ \hline small & 1.38 & 1.39 & 7 \\ \end{tabular}
\end{table}
Table 1: Scene statistics: Navigable area1(nav-area) and navigation complexity (nav-complexity), which is defined as the maximum ratio of geodesic and euclidean distance between any two navigable points in a scene, are shown for scenes in our large/medium/small dataset splits.
Figure 1: Multiple Object Rearrangement Planning (MORP) problem. Top down map of the house with the objects to be rearranged shown as circles and receptacles as squares. Same color signifies object-receptacle pairing and multiple objects of the same type can be placed into a receptacle (e.g., books into bookshelf). With perfect object recognition capabilities, unique instance IDs track the objects.
map coverage metrics. We describe these metrics below:
**Rearrangement**:
* **Episode Success (\(ES\))**: If all objects are seen and rearranged, \(ES\) is 1 otherwise 0 [10].
* **Rearranged Object Ratio (\(ROR\))**: Ratio of the number of rearranged objects to the total number of objects that need to be rearranged.
* **Episodic Success Weighted by Path Length (\(ESPL\))**: In order to measure agent's efficiency, we compare path length and \(ES\) with oracle agent's path length. \(ESPL\) is defined as \[ESPL=ES\frac{z}{max(z,l)}\] (1) where \(z\in\mathbb{R}\) is the oracle agent's path length, and \(l\in\mathbb{R}\) is the path length [1]. An oracle agent is essentially an agent that has full observability of the environment and uses CVRP planner to optimally perform rearrangement, similar to [11]. We will describe this agent in more detail in Sec. 4.
**Exploration**:
* **Seen Object Ratio (\(SOR\))**: Ratio of the number of seen objects to the total number of objects.
* **Map Coverage (\(MC\))**: Ratio of the explored area to the total navigable area.
### Agent Definition
Next, we define an agent, which is capable of sensing the environment and objects, navigating around, and rearranging objects.
**Sensor Suite:**
* Top-down Occupancy Map \(\mathcal{M}\): We assume that a static top-down occupancy map of the environment \(\mathcal{M}\) is available to the agent. Such a map could be obtained by a one-time scan of a house and thus may be a reasonable assumption for future home-assistant agents of any form.
* Receptacle Locations \(\mathcal{R}\): Receptacle locations on the map \(\mathcal{M}\) are available to agent.
* Agent Location \(x\) and Orientation \(\phi\): Agent can detect its location, \(x=\)(x,y) coordinates on \(\mathcal{M}\) and orientation \(\phi\) in 8 discrete directions.
* Field of View \(FOV\): Agent can explore the map area and detect objects within its field-of-view determined by the conical sector \((\theta,r_{s})\) where \(\theta\) is angle around the orientation direction \(\phi\) and \(r_{s}\) is the cut-off distance from \(x\).
* Gripper \(\mathcal{H}\in\mathbb{R}^{c\times n_{a}}\): This proprioceptive sensor contains flags indicating whether object \(j\) is being held by the agent's gripper slot \(i\), where \(c\in\mathbb{N}\) is number of the slots equivalent to agent's carrying capacity. All objects can be held by any slot and a slot can only hold one object at a time.
**Action Space:** The agent has three discrete actions: _forward_, _left_ turn, and _right_ turn for navigation and a single _grab/drop_ action for object manipulation. The forward action moves the agent to one of 8 immediate grid cells in the map depending on its orientation \(\phi\). The left/right turn action changes \(\phi\) by \(45^{\circ}\) in counter-clockwise/clockwise direction respectively. The grab/drop action is similar to "discrete object grasping" [1]. Such action space \(\mathcal{A}\) enables abstraction from downstream embodiment including parameters for continuous control of motors and allows us to focus on high-level task planning and exploration.
## 4 Complexity Analysis and Benchmarking
In this section, we present an analysis of factors that affect the complexity of rearrangement planning. Inspired by related work, we investigate various factors for our analysis, such as agent carrying capacity \(c\)[1], total number of objects \(n_{o}\)[1], number of receptacles \(n_{r}\), and navigable area of the scene. To that end, we introduce heuristic and oracle agents and benchmark their performance using metrics introduced in Sec. 3 on the MORP as a way to analyze the complexity of MORP.
### Oracle Agent
In order to find an upper bound on performance for MORP, we consider the _oracle_ agent with access to privileged information. Specifically, the oracle agent has full state information and knows all object locations. Such full observability enables the agent to use a Capacitated Vehicle Routing Problem (CVRP) based approach to calculate shortest path length for rearranging objects, similar to [11]. We describe the CVRP formulation in more detail in Appendix A.2. We solve the formulated planning problem using OR-Tools [12].
Figure 2: A sample episode for MORP – Agent finds misplaced objects, carries them to known receptacles locations and places them. Object ids for seen and rearranged objects as well as object held by the agent are shown.
The oracle agent also leverages the full state information to ignore objects that are already arranged at the start of an episode. Such CVRP-based optimal path computation for only misplaced objects truly makes the oracle agent's performance an upper bound for multiple object rearrangement planning _without_ any uncertainty. We next extend this oracle agent with exploration capability in a heuristic manner to deal with the uncertainty of object location in our setup.
### Heuristic Agents
To benchmark the performance for multiple object rearrangement planning in presence of uncertainty (MORP), we consider _heuristic_ agents. We leverage the intuition that MORP requires solving two sub-problems - (a) efficient exploration and search of the indoor environment to deal with location uncertainty of misplaced objects, (b) optimal path planning balanced with such object search to rearrange the misplaced objects. Based on this intuition, our heuristic agents use a modular approach that greedily combines classical optimal approaches for both of these sub-problems.
In particular, we use CVRP-based approach similar to the oracle agent for rearrangement planning of discovered objects and frontier-based exploration (FBE) approaches and its variants for efficient exploration and object discovery, based on the evidence of FBE's performance in classical robotics applications [14]. The agent greedily chooses _planning_ if there are any seen objects that need to be rearranged, and _exploration_ otherwise. Our approach of such greedy combination of these exploration and planning methods is also similar to recent work on house-scale rearrangement in EAI [11, 12].
One can also think of _planning_ and _exploration_ as high-level actions. Given one of these high-level actions, the agent executes a sequence of low-level navigation actions described in Sec. 3 until certain conditions are satisfied: (1) _planning_ reaches the planned location, or (2) _exploration_ reaches the target location, or (3) maximum high-level action distance \(max_{dist}\) is reached. These navigation actions are computed to follow a shortest path 2 between the agent's current location and the target _planned_ or _exploration_ location. The agent executes _grab/drop_ action when it reaches a planned location and replans when new object(s) are discovered. Likewise, the agent chooses to explore if no objects have been discovered or if all the discovered objects are in their goal state and Rearranged Object Ratio (ROR) as described in Sec. 3 is less than 1. Such a modular approach of executing a low-level policy toward a goal given by a high-level policy is also inspired by recent work in embodied AI [1].
Footnote 2: The shortest path is computed using the A\({}^{*}\) algorithm [11].
We now describe the variants of the FBE method and a random exploration approach, which we investigated for efficient exploration in our heuristic agents. Each time the agent engages the high-level action of _exploration_, a target location to be explored is obtained based on one of these approaches, described below.
Weighted Frontier Based ExplorationFBE computes unexplored frontiers, which are the borderline grid-cells between explored and unexplored navigable area on a map. It then chooses the closest frontier location to the agent's location as the next location to visit for exploration [12]. Recent variants of FBE referred to as weighted FBE (WFBE) use a utility function to determine next the frontier location to visit [14]. The utility function balances the potential information gain for exploration achieved by visiting a frontier location with the distance that needs to be traveled to reach the frontier location from the agent's current location. The information gain for a given frontier is defined as the sum of newly seen grid-cells along the shortest path between the agent's current location and the frontier3. Inspired by the performance of WFBE method, we investigate two variants of WFBE, which leverage two different utility functions:
Footnote 3: We find no difference in WFBE performance when the information gain is computed only at the frontier location instead.
* \(\mathbf{WFBE}_{\mathbf{r}}\): uses the ratio between frontier information gain and frontier distance as the utility function. It chooses a frontier that maximizes this function.
* \(\mathbf{WFBE}_{\mathbf{w}}\): uses weighted sum of normalized distance and normalized gain as the utility function, where the weight \(w\) is weight on distance and \((1-w)\) is the weight on gain. It then chooses the frontier that minimizes this function. We experiment with different values of the \(w\): \(0,0.5,1\) respectively.
Please refer to the Appendix A.3 for more details on the utility functions.
Random Exploration (RND)This approach picks a random navigable location from the unexplored area as the target location for exploration.
### MORP Complexity Analysis
In order to analyze the effect of various factors such as agent carrying capacity \(c\), total number of objects \(n_{o}\), number of receptacles \(n_{r}\), and navigable area of the scene etc., we consider different configurations of MORP and show the performance of the heuristic and oracle agents on these configurations. Specifically, we vary these factors over a range to create different configurations of MORP: \(c\in\{1,3\}\), \(n_{o}\in\{1,3,5,10\}\), \(n_{r}\in\{1,2,3\}\). We present the empirical results in Table 2 with these different configurations. The "medium" and "large" dataset splits as described in Table 1 were used for these experiments. Note that we investigate both WFBE\({}_{\mathbf{r}}\) and WFBE\({}_{\mathbf{w}}\) with \(w=\{0,0.5,1\}\), but we found WFBE\({}_{\mathbf{r}}\) to work the best. We therefore use WFBE\({}_{\mathbf{r}}\) for most of our conclusions. Next, we summarize our main findings on the effect of the various factors on MORP.
**Higher agent capacity \(c\) makes MORP more challenging.** In Table 2, we observe that all agents' performance decreases with the increase in agent capacity \(c\). For instance, when the agent capacity increases from 1 to 3, on average ESPL drops by 10%. Such increase in MORP's complexity can be attributed to planning. Specifically, fig. 3 shows increase in planning time taken by the CVRP solver for oracle
agents with higher carrying capacity. Although we perform experiments with static agent capacity, practical instantiations of MORP such as house-cleaning might require dynamic capacity through use of containers etc., which may further increase the complexity of MORP.
**Higher number of objects reduces exploration complexity.** When there are more objects in the environment, higher percentage of the object discovery happens while planning. For instance, average percentage of objects discovered during the planning \(\{26\%,43\%,59\%\}\) increases with the total number of objects \(\{3,5,10\}\) respectively. This reduces the burden on exploration for object search. Inspite of increase in planning complexity with higher number of objects (see fig. 3), such reduction in exploration complexity also reduces the overall problem complexity thereby leading to improved ESPL (Tab. 2). Specifically, when the number of objects increases from 1 to 10, on average ESPL for agent with \(c=1\) increases by 20%. Such performance improvement in ESPL however disappears with increase in agent capacity e.g., no significant correlation between \(n_{o}\) and ESPL is observed for \(c=3\). This highlights complex interplay between exploration and planning complexity for MORP.
**Higher number of receptacles \(n_{r}>1\) reduces MORP's complexity only when the agent has higher capacity \(c>1\).** In Table 2, we see that increase in \(n_{r}\) has different effects on performance of agents with different \(c\). When \(n_{r}\) is increased from 1 to 5, we observe that on average ESPL for \(c=1\) decreases by 2%, whereas ESPL for \(c=3\) increases by 6%. It suggests that for episodes with multiple receptacles (\(n_{r}>1\)), there are scenarios where an object's receptacle is within the close proximity of another object. The agents can exploit such scenarios to reduce navigated distance and thereby ESPL for MORP.
**Greater navigable area worsens the performance.** Figure 4 shows how WFBE\({}_{\text{r}}\) exploration policy performance is negatively correlated with navigable area. Similar trends were observed with other exploration strategies. Overall, exploration strategy matters more for larger areas and thus affects performance. We didn't find any correlation between navigation complexity of the scenes (as described in Sec. 3) and performance.
### Agent Benchmarking
We next elaborate on the analysis and failure modes of the classical approaches that we used for exploration and planning in our heuristic and oracle agents. Specifically, we investigate the scaling of the CVRP solver, its applicability to real-time applications, and the performance of different exploration policies.
**Planning** In Figure 3, we see that the time spent by oracle agent to solve the CVRP problem increases exponentially w.r.t. \(n_{o}\). Figure 3 also shows that increasing \(c\) makes the planning more complex. Since our focus is not on solvers, we choose a CVRP solver configuration4 from ORTools (Perron and Furnon 2022) that can find optimal solution under real-time constraints for our experiments. Based on the exponential increase in compute time for CVRP problems, we recommend using satisficing or learning-based planners e.g., (Agia et al. 2022) to tackle larger-scale planning problems in the future.
Footnote 4: CVRP solver: RoutingModel, FirstSolutionStrategy: PARALLEL-CHEPEST-INSERTION, solution-limit:50.
**Exploration** We find that agents with WFBE\({}_{r}\) and WFBE\({}_{50}\) exploration policy outperform all other agents, which suggests that one could potentially learn to find an optimal balance between gain and distance. All other exploration approaches exhibit varied failure modes. For instance,
\begin{table}
\begin{tabular}{c|c|c|c|c c|c||c} \hline \multirow{2}{*}{_Explorer_} & \multirow{2}{*}{\(c\)} & \multirow{2}{*}{\(n_{r}\)} & \multicolumn{6}{c||}{\(ESPL\)} \\ \cline{3-8} & & & & 1 & 3 & 5 & 10 & _avg_ \\ \hline \multirow{2}{*}{RND} & 1 & 3 &.62 &.69 &.74 &.82 &.72 \\ & 3 & 3 &.62 &.63 &.60 &.62 &.62 \\ \hline \multirow{2}{*}{WFBE\({}_{1}\)} & 1 & 3 &.56 &.67 &.75 &.83 &.71 \\ & 3 & 3 &.56 &.62 &.61 &.64 &.61 \\ \hline \multirow{2}{*}{WFBE\({}_{0.5}\)} & 1 & 3 & **.68** &.75 & **.81** & **.87** & **.78** \\ & 3 & 3 & **.68** &.69 & **.65** &.67 &.67 \\ \hline \multirow{2}{*}{WFBE\({}_{0}\)} & 1 & 3 &.65 &.71 &.76 &.84 &.74 \\ & 3 & 3 &.65 &.65 &.61 &.65 &.64 \\ \hline \hline \multirow{2}{*}{WFBE\({}_{\text{r}}\)} & 1 & 1 &.67 &.79 &.83 &.91 &.80 \\ & 3 & 1 &.67 &.60 &.63 &.65 &.64 \\ \hline \multirow{2}{*}{WFBE\({}_{\text{r}}\)} & 1 & 3 &.67 & **.76** & **.81** & **.87** & **.78** \\ & 3 & 3 &.67 & **.70** & **.65** & **.68** & **.68** \\ \hline \multirow{2}{*}{WFBE\({}_{\text{r}}\)} & 1 & 5 &.67 &.76 &.81 &.88 &.78 \\ & 3 & 5 &.67 &.70 &.70 &.71 &.70 \\ \hline \end{tabular}
\end{table}
Table 2: MORP’s complexity analysis as measured through the ESPL performance metric of heuristic agents: We measure the effect of number of objects \(n_{o}\), number of receptacles \(n_{r}\) and agent capacity \(c\) on MORP. For all agents, other metrics were \(MC=0.83\), \(ES=1\) and \(SOR=1\) on average (detailed metrics are shown in Appendix A.4). Bold numbers indicate best performance in a particular MORP configuration.
Figure 4: Effect of navigable area on performance as measured by ESPL for WFBE\({}_{\text{r}}\) exploration policy.
Figure 3: Scalability of oracle agent: time spent by CVRP solver for different values of \(c\) and \(n_{o}\).
the agents with RND and WFBE\({}_{0}\) exploration policies start picking locations at the opposite sides of the map consecutively and keep going back and forth after exploring for a while. This prevents these agents from exploring unexplored areas. Consequently, the agents fail to discover objects and complete the task within maximum episode time \(max_{t}\). Although intuitively choosing the closest frontier location aka WFBE\({}_{100}\) or frontier with maximum gain i.e., WFBE\({}_{0}\) perform better than agents with RND exploration policy, they perform worse than the agents that consider both gain and distance aka agents with WFBE\({}_{r}\) and WFBE\({}_{50}\). Based on these observations, we also propose WFBE\({}_{r}\) and its performance on MORP as a baseline for benchmarking future research on MORP. Please see the Appendix for detailed comparison on various exploration approaches using all the metrics defined in Sec. 3.
## 5 Learning-based Agents
Inspired by the limitations of our heuristic agents and their classical exploration and planning approaches, we explore learning-based agents to obtain policies better than that of the heuristic agents for MORP. In particular, we investigate end-to-end approaches aka monolithic RL for MORP. In addition, we also experiment with ways to improve our heuristic agents. We describe these learning-based agents in detail below.
**Monolithic agents:** These agents leverage monolithic deep reinforcement learning approach such as [14] to learn direct mapping from observations to actions.
* _forward_, _left_, _right_, _grab/drop_. E2E-P agent thus learns to navigate, explore, and then to accomplish rearrangement from scratch.
* **Where-to-Go Planner (W2G-P)** learns where to go on the occupancy map \(\mathcal{M}\). Instead of low-level navigation actions of _forward_, _left_, _right_, W2G-P agent uses navigable cells from \(\mathcal{M}\) as actions in combination with the _grab/drop_ action. Once target cell from \(\mathcal{M}\) is chosen as the action, we obtain the low-level navigation actions to follow the shortest path between the agent's current location and the chosen cell as described in Sec. 4.2. Unlike E2E-P that has to learn low-level navigation in combination with exploration and planning for MORP, the W2G-P focuses on only learning exploration and planning.
**Modular agents:** To improve our modular heuristic agents that greedily combine WFBE-based exploration and CVRP-based planning (Sec. 4), we investigate ways to a) improve exploration performance, b) improve the trade-off between exploration and planning beyond greedy, and c) jointly improve both.
* **Learnt Explorer (LE)** is focused on improving exploration performance of WFBE methods in our heuristic agents, inspired by the performance difference between these agents in Table 2. Specifically, we learn a utility function approximation for frontier selection in WFBE, which trades-off information gain and frontier distance. We train two types of LE agents: (1) LF\({}_{disc}\) that chooses directly among candidate frontiers, in particular, these candidates correspond to the agent's discrete actions and (2) LE\({}_{w}\)[15] that outputs a continues variable \(w\in\mathbb{R}:[0,1]\) to be used as a weight on normalized distance and normalized information gain in WFBE\({}_{w}\) exploration policy. See Appendix A.3 for more details on WFBE\({}_{w}\) and candidate frontier computation. Contrary to WFBE\({}_{w}\), LE\({}_{w}\) dynamically changes the \(w\) at each step rather than using a fixed \(w\). We train LE agents explicitly on the exploration task of finding all objects but not rearranging them for the MORP dataset. We then evaluate the LE agents on MORP by combining them with CVRP planner, akin to the heuristic agents.
* **Optimal Balancer (OB)** agent learns to combine WFBE-based exploration and CVRP-based planning optimally, instead of greedily choosing between them as in the heuristic agents. OB agent thus has two corresponding high-level actions: _explore_ and _plan_ that it learns to choose from.
* **Balanced Explorer (BE)** learns exploration policy and optimal balance between _exploration_ and _planning_ jointly. BE agent's discrete actions thus consists of _plan_ action, which calls the CVRP planner; and actions that map to candidate frontier locations.
For all of the above agents, the input consists of occupancy map, receptacle locations, agent location, total number of objects to be rearranged, and agent's current gripper state as described in agent's sensor suite (see Sec. 3). For modular agents, we additionally use frontier locations as input. More details on the representations of these inputs for individual agents are described in Appendix A.5. For the policy architecture, we use ConvLSTM-like architecture for E2E-P and W2G-P agents and ConvMLP-like architecture5 for OB, LE, and BE agents. The reward function for all agents is the weighted sum of navigated distance, the number of newly seen objects, grab/drop reward, the newly discovered map area at time step \(t\), and the episode success reward. Please see the Appendix A.5 for more details.
Footnote 5: Since our inputs for OB, LE, and BE contain all the information pertaining to the sufficient state for MORP, recurrent policies are not needed. We verified this empirically by swapping the MLP layer with the recurrent LSTM layer and found no difference in ESPL of these agents.
### Training Process
We use Rllib [10] implementation of DD-PPO [15] in order to train our agents with train splits of the dataset. E2E-P and W2G-P agents are trained with a naive curriculum, where we first train with single object rearrangement episodes before training with MORP episodes. The "small" dataset split was used to train E2E-P and W2G-P. All other agents were trained using the "medium" and "large" dataset splits (see Table 1). Details on task configuration including hyperparameters for training and benchmarking can be found in Appendix A.5.
### Benchmarking Learning-based Agents
We evaluate the performance of learning-based agents and compare them against the best performing heuristic agent WFBE\({}_{\text{r}}\) in Table 3. We also present our conclusions on leveraging learning to obtain policies for MORP.
**Learning rearrangement planning from scratch is hard.** We find that both E2E-P and W2G-P fail in completing even single object rearrangement episodes. They learn to discover the object and can move towards the object, yet they fail to grab the object or to rearrange it into the receptacle. DRL approaches are known to struggle in long-horizon and sparse reward settings [13] as is the case with E2E-P. Likewise, DRL approaches struggle with large discrete actions spaces [12] such as that of W2G-P. Prior work on object rearrangement therefore uses modular approaches over monolithic ones [13, 14]. We do not include E2E-P and W2G-P metrics in Table 3 since these agents do not succeed in any MORP episodes.
**Learning exploration explicitly or jointly with rearrangement planning does not help MORP.** We first compare agents that explicitly learn to explore (LE\({}_{\text{w}}\) and LE\({}_{\text{disc}}\)) with WFBE-based exploration on purely exploration task of finding all objects in MORP scenes (see Table 8 in Appendix A.5 for detailed comparison of the same). We find that learnt exploration agents do not perform better than the WFBE policies when evaluated on this exploration task. Agents with short-sighted frontier-based exploration policies e.g., WFBE\({}_{\text{r}}\) and WFBE\({}_{w}\) agents, which try to find objects by exploring most area while navigating minimum distance, perform better than the learnt exploration policies that consider the long-term and minimize total path length in order to discover all the objects in LE\({}_{\text{w}}\) and LE\({}_{\text{disc}}\) agents. Consequently, combining them with optimal planning using the CVRP solver for rearrangement (LE\({}_{\text{w}}\), LE\({}_{\text{disc}}\) in Table 3) does not lead to improvement in MORP. This suggests that on average, finding the first object fast and counting on the object discovery during rearrangement planning is a good strategy for MORP. Since LE agents explicitly learn to explore without accounting for rearrangement planning, we also train the BE agent which learns to combines exploration with CVRP planning, while jointly learning to explore. However, we find that this doesn't improve performance on MORP either.
**The _greedy combination_ of frontier-based exploration and CVRP-based planning is _empirically_ optimal.** During the training of OB agent, we intermittently evaluate the agent on the test dataset in order to understand the evolution of agent behavior over the training process. Our evaluations indicate that in the early stages of the training, OB acts non-greedily as opposed to the heuristic WFBE\({}_{\text{r}}\) agent. Yet, despite hyper-parameter tuning such as that of the entropy loss coefficient in DD-PPO and reward shaping, OB eventually converges to a greedy behaviour. This is evident in \(\Delta ESPL\) between OB's performance and WFBE\({}_{\text{r}}\) agent's performance in Table 3, which is similar to a zero-centered normal distribution with 5% standard error. Post convergence, OB's high entropy action policies have occasional attempts (10% of the test data) to explore non-greedily i.e., explore even when there are discovered objects that need to be rearranged. Such non-greedy behavior enables OB to discover objects earlier than the WFBE\({}_{\text{r}}\) agent in certain episodes. However, on average OB's behavior is greedy. This empirically demonstrates that the greedy combination of frontier-based exploration and CVRP-based planning (as in WFBE agents) is optimal for MORP.
In summary, E2E-P and W2G-P demonstrate that conventional, monolithic deep RL does not succeed at MORP since the agent needs to learn navigation, exploration, and planning from scratch. Modular agents that leverage the inductive biases of conventional frontier-based exploration and CVRP-based planning in combination with learning to improve exploration and/or the trade-off between exploration and planning still do not outperform their greedy, heuristic counterparts.
## 6 Conclusion
We propose Multiple Object Rearrangement Planning (MORP) in mapped environments with partial observability over object locations as a benchmark task. We conduct a comprehensive complexity analysis of MORP, where we investigate various factors such as number of objects and receptacles, agent carrying capacity, environment layouts etc. that make MORP challenging. We further introduce competitive heuristic baselines for MORP that greedily combine classical frontier-based exploration and optimization-based planning. We also train reinforcement learning policies for MORP in order to outperform the heuristic baselines. However, we find that monolithic RL policies struggle at MORP while modular RL policies converge to behaviors similar to heuristic policies. This empirically demonstrates that greedy combination of exploration and planning for MORP is optimal when objects to be rearranged are uniformly distributed in the 3D environments. However, developing agents that outperform heuristic agents at MORP remains an open problem for future research.
\begin{table}
\begin{tabular}{c|c|c c c|c c||c c} & & \multicolumn{4}{c||}{\(ESPL\)} & \multicolumn{2}{c}{\(\Delta ESPL\)} \\ \hline \hline \multirow{2}{*}{Methods} & \(\begin{bmatrix}\text{h}_{o}\\ c\end{bmatrix}\) & \(1\) & \(3\) & \(5\) & \(10\) & \(\mu\) & \(\sigma\) \\ \hline WFBE\({}_{\text{r}}\) & \(1\) &.67 & **.76** & **.81** & **.87** & - & - \\ \cline{2-8} & \(3\) &.67 & **.70** & **.65** & **.68** & - & - \\ \hline LE\({}_{\text{w}}\) & \(1\) & **.68** &.72 &.75 &.85 &.029 &.15 \\ \cline{2-8} & \(3\) &.68 & **.70** & **.65** & **.68** &.002 &.15 \\ \hline LE\({}_{\text{disc}}\) & \(1\) &.64 &.69 &.72 &.82 &.062 &.19 \\ \cline{2-8} & \(3\) &.64 &.67 &.62 &.65 &.033 &.20 \\ \hline OB & \(1\) & **.68** & **.76** & **.81** & **.87** &.007 &.05 \\ \cline{2-8} & \(3\) & **.68** & **.70** & **.65** & **.68** &.006 &.05 \\ \hline BE & \(1\) &.58 &.65 &.72 &.77 &.097 &.19 \\ \cline{2-8} & \(3\) &.58 &.60 &.58 &.58 &.090 &.18 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of heuristic and learning-based agents. \(\Delta ESPL\) shows mean and variance for \(ESPL\) difference between best performing heuristic agent (WFBE\({}_{\text{r}}\)) and other agents. For all agents, other metrics on average are: \(MC=0.83\), \(ES=1\) and \(SOR=1\). Bold numbers indicate best performance in a particular MORP configuration.
## Acknowledgements
We'd like to thank Kevin Carlberg and Roberto Calandra for many helpful discussions on this problem.
|
2304.05668 | Evidence of experimental three-wave resonant interactions between two
dispersion branches | We report the observation of nonlinear three-wave resonant interactions
between two different branches of the dispersion relation of hydrodynamic
waves, namely the gravity-capillary and sloshing modes. These atypical
interactions are investigated within a torus of fluid for which the sloshing
mode can be easily excited. A triadic resonance instability is then observed
due to this three-wave two-branch interaction mechanism. An exponential growth
of the instability and phase locking are evidenced. The efficiency of this
interaction is found to be maximal when the gravity-capillary phase velocity
matches the group velocity of the sloshing mode. For a stronger forcing,
additional waves are generated by a cascade of three-wave interactions
populating the wave spectrum. Such a three-wave two-branch interaction
mechanism is probably not restricted to hydrodynamics and could be of interest
in other systems involving several propagation modes. | Filip Novkoski, Chi-Tuong Pham, Eric Falcon | 2023-04-12T07:45:09Z | http://arxiv.org/abs/2304.05668v1 | # Evidence of experimental three-wave resonant interactions
###### Abstract
We report the observation of nonlinear three-wave resonant interactions between two different branches of the dispersion relation of hydrodynamic waves, namely the gravity-capillary and sloshing modes. These atypical interactions are investigated within a torus of fluid for which the sloshing mode can be easily excited. A triadic resonance instability is then observed due to this three-wave two-branch interaction mechanism. An exponential growth of the instability and phase locking are evidenced. The efficiency of this interaction is found to be maximal when the gravity-capillary phase velocity matches the group velocity of the sloshing mode. For a stronger forcing, additional waves are generated by a cascade of three-wave interactions populating the wave spectrum. Such a three-wave two-branch interaction mechanism is probably not restricted to hydrodynamics and could be of interest in other systems involving several propagation modes.
## I Introduction
Nonlinear wave interactions occur in a variety of systems, where waves of different wavenumbers and frequencies can exchange energy through nonlinear couplings. Such interactions also form the basis for wave-turbulent regimes, where a whole ensemble of waves with different wavenumbers interact among each other and can exhibit a cascade of energy from large to small scales [1].
The case of three-wave interactions is prevalent in many areas, such as plasma physics [2; 3], nonlinear optics [4], Rossby waves [5], and even mechanical systems such as suspended cables [6] or thin rings [7]. For hydrodynamic surface waves, three-wave interactions have been extensively studied in the case of gravity-capillary waves [8; 9] and hydroelastic waves [10]. In the case of gravity waves, four-wave interactions dominate [11; 12]. However, considering one-dimensional (1D) deep-water propagation leads at the leading order to five-wave resonant interactions for either pure capillary waves [13] or pure gravity waves [14; 15].
Three-wave systems can also be the source of instability [16]. Depending on the nonlinear coupling between the three waves, a single "mother" wave can give rise to two "daughter" waves, that then grow exponentially in amplitude. The waves of this triadic resonant instability (TRI) satisfy both resonances in wavenumber and frequency and has been widely observed in internal waves in stratified flows [17; 18; 19], as well as in inertial waves [20; 21] providing a potential route to wave turbulence [22; 23; 24; 25]. A special case of TRI is the parametric subharmonic instability, involving daughter waves with frequencies close to the first subharmonic of the mother wave and has been well investigated in areas such as plasma physics [26; 27] and oceanic systems [28].
Gravity-capillary waves on the two-dimensional surface of a fluid are well-known in both the linear regime [29] as well as the nonlinear wave-turbulent case [30; 31]. When one of the dimensions of the system is much smaller than the other, for example in canals, sloshing waves become apparent [29; 32], and lead to longitudinal waves with associated discrete transverse modes. This results in a countably infinite number of branches in the dispersion relation, each corresponding to one of the transverse modes, similar to modes of waveguides [33].
The sloshing modes can theoretically trigger nonlinear interactions between waves belonging to different branches of the dispersion relation [34]. However, the study of the interaction between nonlinear waves of different types (i.e., belonging to separate branches) has not been investigated experimentally so far. Such an unexplored interaction mechanism could potentially be applicable in various domains such as two-component systems [35; 36], atomic lattices [37], or plasma physics [38]. Such multiple branch dispersion relations are also characteristic of waves in waveguides, for example in solid and soft plates [33]. Besides, it provides a way to test wave turbulence in media where multiple wave species are present [39] and bears a similarity to interactions between interfacial and free-surface waves [40; 41].
Here, we will study the interaction between gravity-capillary waves and the first sloshing mode. To the best of our knowledge, such three-wave interactions have not been considered experimentally which is possibly due to the difficulty of cleanly exciting sloshing modes in typical experiments. The system under study is a torus of fluid, which has been shown to contain multiple modes of propagation, including sloshing [42], but also easily demonstrates nonlinear behavior, as shown in the case of Korteweg-de Vries (KdV) solitons [43]. Because of its relatively small size (\(R\approx 8\,\mathrm{cm}\)), the torus is easy to manipulate and excite.
The paper is organized as follows. First, we describe in Sec. II the experimental setup and the different branches of the dispersion relation. Section III will then present
the experimental results related to the three-wave two-branch interaction. In particular, the sloshing branch can trigger a triadic resonant instability, generating two gravity-capillary waves, for which the wave growth rate and phase locking are characterized. The efficiency of this mechanism is shown to be mediated by a velocity matching between the two types of propagation modes. Finally, Sec. IV draws the conclusions.
## II The torus of fluid
### Experimental setup
The experimental setup used is the same as the one described in Ref. [42]. The torus of fluid is formed by depositing distilled water on top of a circular plate that has been coated with a commercial superhydrophobic treatment. The plate has a triangular groove running along its perimeter, as is shown in Fig. 1a. The angle of the groove is \(\alpha=4.5^{\circ}\). It prevents the closing of the central hole of the torus due to capillarity.
The waves are created by a Teflon plate connected to an electromagnetic shaker with adjustable sinusoidal amplitude and frequency typically in the range 7-9 Hz [see Fig. 1b]. The waves propagate azimuthally on both the inner and outer borders of the torus. The waves also experience dissipation, which is primarily due to friction of the triple contact line, and not necessarily viscosity [43]. The motion of the border of the torus is captured by a camera located directly above the plate. By using a contour extraction algorithm we obtain the displacement \(\eta(\theta,t)\) of the borders. We will be focusing on the motion of the outer border unless otherwise mentioned. The second method of detecting the border displacement is through the use of a custom-made local capacitive wire probe, giving the position of the outer border over time at a fixed azimuthal point \(\theta_{0}\) with a high temporal resolution (2 kHz) [see Fig. 1a]. The central radius of the groove of the plate is \(R=7\) cm, while the torus size is fixed at the outer border radius \(R_{o}=7.85\) cm. The torus width is then fixed to \(W=R_{o}-R_{i}=2(R_{o}-R)=1.7\) cm.
### Dispersion relation and resonant interaction
The torus admits several modes of wave propagation such as gravity-capillary azimuthal waves and sloshing modes [42]. Using a sweep forcing, the experimental Fourier spectrum \(\tilde{\eta}(k_{\theta},f)\) of the outer border displacement \(\eta(\theta,t)\) is shown in Fig. 2 highlighting the gravity-capillary and sloshing branches.
The dispersion relation of gravity-capillary waves is found to be empirically well described by [42]
\[\omega_{\rm gc}^{2}=\left(g_{\rm eff}\frac{k_{\theta}}{R_{o}}+\frac{\sigma_{ \rm eff}}{\rho}\frac{k_{\theta}^{3}}{R_{o}^{3}}\right)\tanh\left(\frac{k_{ \theta}}{R_{o}}\chi^{2}\widetilde{W}\right), \tag{1}\]
with \(\widetilde{W}=W/2\) the half-width, \(\chi=R_{o}/R\) a measure of curvature, \(\rho=1000\) kg/m\({}^{3}\) the density of the fluid, and \(k_{\theta}\) the angular integer wavenumber, i.e., the discrete mode number which is given as \(k_{\theta}=kR_{o}\), with \(k\) the dimensional wavenumber of a wave traveling along the torus border. Since the waves are moving on a slope,
Figure 1: (a) Cross-section of the experimental setup with relevant quantities. (b) Side-view of the torus (\(R_{0}=7.85\) cm) under monochromatic excitation, \(f_{0}=7.4\) Hz.
Figure 2: Space-time Fourier spectrum \(\tilde{\eta}(k_{\theta},f)\) of the outer border displacement \(\eta(\theta,t)\) on a torus of fluid, \(R_{o}=7.85\) cm. Forcing: frequency sweep between 0 and 20 Hz on the outer border. Dashed lines: fit of the gravity-capillary dispersion relation (gc) of Eq. (1), and of the sloshing branch, \(\Sigma\), given by Eq. (2).
they experience an effective gravity which is given by \(g_{\rm eff}=g\sin\alpha\approx 0.77\,{\rm ms}^{-2}\). The effective surface tension is inferred from fitting the dispersion relation as \(\sigma_{\rm eff}=55\,{\rm mN/m}\). This low value is due to the channel geometry and renormalization effects [44].
Alongside the gravity-capillary branch, we consider the first sloshing mode, also given empirically as [42]
\[\omega_{\Sigma}^{2}=\omega_{0}^{2}+g_{\rm eff}\frac{k_{\theta}^{2}}{R}\,, \tag{2}\]
for values of \(k_{\theta}\lesssim 40\), where \(\omega_{0}\) is the cutoff frequency at \(k_{\theta}=0\). This relationship includes only gravity, and for higher frequencies surface tension needs to be taken into account. In the present work, we will assume that the relationship is a good approximation at the low wavenumbers we consider here.
We now turn to the nonlinear interactions between waves. Waves are capable of exchanging energy through nonlinear resonant interactions if they satisfy the conservation of both frequency and wavenumber, which in the case of three waves are
\[\begin{split} k_{1}&=k_{2}+k_{3}\,,\\ \omega_{1}&=\omega_{2}+\omega_{3}\,,\end{split} \tag{3}\]
with \(\omega_{i}=\omega(|k_{i}|)\), \(\omega(k)\) being the dispersion relation of the considered system. The above equations can be solved once the dispersion relation of the waves is provided. In addition, the involved waves do not need to be of the same type and may belong to different dispersion branches. Depending on the studied system, solutions of Eq. (3) can also give only trivial solutions which do not lead to an exchange of energy. We will be interested in studying the interaction of the two different modes mentioned above i.e., between the first two branches in Fig. 2 (namely the gravity-capillary and first sloshing branch) and whether they satisfy the conditions given by Eq. (3).
## III Experimental observations
### Triadic instability
A monochromatic signal is sent to the shaker at a frequency of \(f_{1}=\omega_{1}/2\pi\). We measure the displacement of the outer border \(\eta(t)\) at a fixed point for three different amplitudes of forcing as shown in Fig. 3
At low forcing, \(\eta(t)\) very closely resembles a sine wave. By increasing the amplitude, \(\eta(t)\) changes significantly, indicating the existence of a critical forcing amplitude and seems to become a superposition of multiple different frequencies, while still preserving some quasi-periodicity.
We compute the time-frequency spectrum of \(\eta(t)\) (also called spectrograms) to distinguish the frequency components contained in the signal, as shown in Fig. 4 for the three forcing amplitudes. For a low forcing, a single frequency is found, corresponding precisely to the forcing one, \(f_{1}\). As the forcing is increased, two additional subharmonic frequencies appear, neither of which is located at \(f_{1}/2\). This behavior is characteristic of the triadic resonant instability, where, by forcing the system at a given _pump frequency_\(f_{1}\), two subharmonic waves, \(f_{2}\) and \(f_{3}\) are pumped up from zero amplitude and thus begin to deplete the pump. It is worth noting that the sum of these two new frequencies, \(f_{2}\) and \(f_{3}\) equals \(f_{1}\). We also see that a typical time (of the order of 10 s) is necessary for these waves to be established in the spectrum. As the amplitude of forcing is increased further, additional frequencies besides the first pair become visible but take more time to appear, and they do so after the original pair is established.
Figure 3: Displacement \(\eta(t)\) at a fixed point for three different amplitudes of forcing (\(f_{1}=7.4\) Hz) increasing from top to bottom with the corresponding values of wave steepness \(\epsilon=0.003\), \(0.02\), and \(0.03\). The signal goes from a sine wave into a superposition of various subharmonics.
Figure 4: Time-frequency spectrum of the wave amplitudes of Fig. 3. For low forcing, a single frequency is present, \(f_{1}=7.4\) Hz (top), but for a high enough forcing, two frequencies appear at \(f_{2}=4.2\) Hz and \(f_{3}=3.2\) Hz (middle). Further increase of the forcing generates an ensemble of different modes (bottom).
### Resonance conditions
To verify that the signals we observe are due to a resonant three-wave interaction, we first consider the spatiotemporal signal of the torus outer border \(\eta(\theta,t)\). We then compute the corresponding space and time Fourier transform \(\tilde{\eta}(k_{\theta},\omega)\) as shown in Fig. 5. It gives us not only the frequency but also wavenumber information of the waves present in the system. Figure 5 shows that the pumping frequency \(f_{1}\) excited the sloshing branch, and the corresponding part on the gravity-capillary branch, but also two lower frequency points, \(f_{2}\) and \(f_{3}\) on the gravity-capillary branch. Note that the discreteness in \(k_{\theta}\) is due to the torus finite size, whereas the one in \(f\) corresponds to the inverse of the total measurement time.
Thus, one has to consider the following resonant conditions
\[\begin{split}\omega_{1}^{\Sigma}&=\omega_{2}^{ \text{gc}}+\omega_{3}^{\text{gc}}\,,\\ k_{\theta}\left(\omega_{1}^{\Sigma}\right)&=k_{ \theta}\left(\omega_{2}^{\text{gc}}\right)+k_{\theta}\left(\omega_{3}^{\text {gc}}\right)\,,\end{split} \tag{4}\]
which can be solved graphically in the \((k_{\theta},\omega)\) plane as demonstrated in Fig. 6. Experimentally we find the resonance conditions in both wavenumber and frequency to be verified by the points 1, 2, and 3 displayed in Fig. 5. Since the forcing frequency is known at all times, i.e., \(\omega_{1}=\omega_{1}^{\Sigma}\), we solve exactly the above Eq. (4), using the two branches of the dispersion relation of Eqs. (1) and (2), leading to a system of four equations and four unknowns. Since no analytic solution exists, we look for the two unknown daughter frequencies \(f_{2}\) and \(f_{3}\) numerically. It is also important to note that one of the daughter waves will always have a negative wavenumber, i.e., it will be counterpropagating with respect to the other two waves. If exclusively three interacting gravity-capillary waves are taken into account (i.e., no sloshing), no nontrivial solution to the above equations exists far from the capillary-gravity transition [45].
Due to the periodicity of the system and its finite size, the dispersion relation of the torus is necessarily discrete [42]. If we denote by \(\Delta_{\omega}\equiv\omega(k_{\theta}+1)-\omega(k_{\theta})\) the frequency gap between two adjacent discrete wavenumber \(k_{\theta}\) and by \(\Gamma_{\omega}\) the frequency nonlinear broadening of the dispersion relation, we can experimentally estimate whether discrete effects are to be taken into account (\(\Gamma_{\omega}/\Delta_{\omega}\ll 1\)) or if the system is in a kinetic regime (\(\Gamma_{\omega}/\Delta_{\omega}\gg 1\)) [46]. Indeed, we find approximately that
Figure 5: Fourier spectrum \(\tilde{\eta}(k_{\theta},f)\) of the torus outer border displacement \(\eta(\theta,t)\). Monochromatic forcing at \(f_{1}=7.9\) Hz. Points \((k_{2},f_{2})\) and \((k_{3},f_{3})\) lie on the gravity-capillary branch whereas, point \((k_{1},f_{1})\) lies on the sloshing branch where it is forced. All three points verify the resonance conditions in both frequency and wavenumber (see arrows). \(f_{0}\) is the cutoff frequency of the sloshing branch.
Figure 7: Measured values (dots) of the daughter frequencies \(f_{2}\) (red) and \(f_{3}\) (blue) for different mother frequencies \(f_{1}\). The dashed lines are theoretical values obtained numerically through the resonance conditions of Eq. (4). Solid line of slope 1 corresponds to \(f_{1}\) and green dots indicate the sum \(f_{2}+f_{3}\).
Figure 6: Graphical solution of the resonance conditions of Eq. (4) involving a sloshing wave decomposing into two gravity-capillary waves using the corresponding dispersion relations. One of the waves has to be counter-propagating to verify Eq. (4).
\(\Gamma_{\omega}/\Delta_{\omega}\in[3,6]\) in our experiment, hence we can consider the dispersion relation continuous.
The two daughter frequencies are now measured for different values of the mother frequency \(f_{1}\) to confirm that the resonance conditions are well satisfied. Both the numerical solution of Eq. (4) (dashed lines), and the experimentally found values (dots) of \(f_{1}\) and \(f_{2}\) are in very good agreement as shown in Fig. 7. As we can see the frequencies satisfy the conditions extremely well, and not only confirm the frequency matching condition but also wavenumber conservation since this is implicitly included when solving the resonance conditions in Eq. (4). This confirms that the system is experiencing a resonant three-wave (two-branch) interaction. We note that below the cutoff frequency \(\omega_{0}\) of the sloshing branch (\(f_{0}<7.1\) Hz - see Figs. 2 and 5), no solution occurs in Fig. 7. Indeed, forcing below \(f_{0}\) does not lead experimentally to the appearance of nonlinear resonant interactions.
### Wave amplitude growth
Three-wave interactions are usually described using amplitude equations. We now consider the amplitudes of each wave at frequency \(f_{i}\), which are experimentally accessible through the use of the Hilbert transform of \(\eta(t)\)[11]. This is done by first using a bandpass filter around the frequency of interest, onto which the Hilbert transform is then applied. This procedure then yields both the wave amplitude at frequency \(f_{i}\) but also its phase \(\varphi_{i}\).
We focus first on the displacement \(\eta(t)\) forced at \(f_{1}=7.4\) Hz from which we extract the three amplitudes. As we saw in Fig. 4 (middle), some typical time is needed for the transfer of energy from the mother wave \(f_{1}\) into the daughters \(f_{2}\) and \(f_{3}\). The temporal evolutions of the amplitudes of all three waves are shown in Fig. 8. Once the mother wave is established, (\(t<3\) s) it then increases rapidly up to a stationary out-of-equilibrium state (\(t>10\) s). The growth is indeed balanced by dissipation when it begins pumping the daughter waves which grow exponentially, (\(12<t<25\) s). Eventually, all three reach a stationary state (\(t>35\) s). The exponential growth of the two daughter waves indicates that they undergo an instability. In addition, the mother wave reaches an initially higher amplitude which then decreases to the steady one, since it transfers energy to the two daughter waves through the instability. It is important to note that the three amplitudes are of the same order of magnitude, and the mother wave cannot be considered to be much stronger than the daughter waves as it is usual in three-wave resonant interactions [8, 11, 26].
### Phase locking
The phase of each wave reads \(\varphi_{j}(\theta,t)=k_{\theta}\theta-\omega t+\phi_{j}\) and in general, it depends on time, \(\phi_{j}\) being an initial arbitrary constant. Conversely, when Eq. (4) is satisfied, the interaction phase defined as \(\Phi=\varphi_{1}-\varphi_{2}-\varphi_{3}\) remains constant (i.e., \(\phi_{1}-\phi_{2}-\phi_{3}=\text{const.}\)), thus making the three waves phase-locked. Experimentally, \(\varphi_{i}\) is measured by the argument of the Hilbert transform of \(\eta(t)\) in the stationary regime.
To avoid possible phase jumps, we plot the sine of the total phase \(\Phi\) in Fig. 9. Once the stationary regime is reached, the total phase remains constant over time. The
Figure 8: Semilog plot of amplitudes \(A_{i}\) as a function of time of the mother wave (green, \(f_{1}=7.4\) Hz), and the two daughter waves (in red and blue) measured using the Hilbert transform. We can observe that at around 12 s the two daughters start growing exponentially as \(e^{t/\tau}\) with \(\tau=7\) s (dashed line). Inset: same in linear scale for the whole duration of the experiment, \(T=240\) s. We can see how initially, as the daughter waves grow, the mother wave has to lose energy.
Figure 9: Temporal evolution of the sine of the total phase \(\varphi=\varphi_{1}-\varphi_{2}-\varphi_{3}\) of the three waves obtained using the argument of the Hilbert transform. The total phase \(\varphi\) is found to be locked to a value close to \(-\pi/2\). Same forcing as in Fig. 8.
three waves are thus phase-locked around \(\Phi\simeq-\pi/2\) as expected theoretically for a three-wave resonant mechanism (see Sec. III.5).
### Amplitude equations
We now consider the three-wave amplitude equations in the case of resonant interaction [47, 34] for a physical description of this instability
\[\dot{A}_{1} =iI_{23}A_{2}A_{3}\,, \tag{5}\] \[\dot{A}_{2} =iI_{13}A_{1}A_{3}^{*}\,,\] (6) \[\dot{A}_{3} =iI_{12}A_{1}A_{2}^{*}\,, \tag{7}\]
with \(A_{i}\) the complex wave amplitude and \(I_{i,i+1}\) are the unknown positive interaction coefficients. Note that the latter are known for gravity-capillary wave interaction involving no sloshing [48]. We will not approach the full problem of the above equations, which constitute an integrable system [49]. We instead focus only on the case where the pump-wave has a fixed amplitude \(A_{1}\) (the so-called pump-wave approximation). Indeed, we saw experimentally that the stationary regime of the mother wave is established before the one of two daughter waves. For completeness, we include damping as well, leading to
\[\dot{A}_{2} =iI_{13}A_{1}A_{3}^{*}-\alpha_{2}A_{2}\,, \tag{8}\] \[\dot{A}_{3} =iI_{12}A_{1}A_{2}^{*}-\alpha_{3}A_{3}\,, \tag{9}\]
with \(\alpha_{j}\) the temporal damping rate of wave \(j\). Inserting Eq. (8) into Eq. (9) leads to
\[\ddot{A}_{3}=I_{13}I_{12}A_{3}|A_{1}|^{2}-(\alpha_{2}+\alpha_{3})\dot{A}_{3}- \alpha_{2}\alpha_{3}A_{3}\,, \tag{10}\]
which has a solution of the form
\[A_{3}=a_{+}e^{\sigma_{+}t}+a_{-}e^{\sigma_{-}t}\,, \tag{11}\]
with \(a_{\pm}\) depending on the initial conditions and the growth rate obeying
\[\sigma_{\pm}=-\frac{\alpha_{2}+\alpha_{3}}{2}\pm\sqrt{I_{12}I_{13}|A_{1}|^{2}+ \frac{(\alpha_{2}-\alpha_{3})^{2}}{4}}\,. \tag{12}\]
Thus an instability (i.e., \(\sigma_{+}>0\)) can be observed provided the mother amplitude overcomes a threshold due to dissipation. The daughter waves then grow exponentially, as observed experimentally. This means that if at time \(t=0\), only wave 1 has a finite amplitude, the other two waves, which are infinitesimal in magnitude, will be pumped up exponentially, and eventually be bounded by damping. The exact values of the interaction coefficients would follow from a weak nonlinear expansion of equations of motion, which for the case of the torus in this experimental geometry are so far unknown.
As for the phases of the waves, denoting \(A_{j}=a_{j}e^{i\varphi_{j}}\) yields the equation for the temporal evolution of the total phase [47]
\[\dot{\Phi}=a_{1}a_{2}a_{3}\left(\frac{I_{23}}{a_{1}^{2}}-\frac{I_{13}}{a_{2}^ {2}}-\frac{I_{12}}{a_{3}^{2}}\right)\cos\Phi=\beta\cos\Phi\,. \tag{13}\]
If we consider that in the final stationary state all three amplitudes are constant, one finds a solution of the form
\[\Phi=2\arctan\left[\tanh\left(\frac{\beta(t_{0}+t)}{2}\right)\right]\,, \tag{14}\]
which at large \(t\) leads to \(\varphi=\text{sgn}(\beta)\pi/2\). Depending on the sign of \(\beta\), i.e., the values of the interaction coefficients and amplitudes, the sign of the total phase will be differently determined, which we find experimentally to be \(-\pi/2\). We find that the interaction phase \(\Phi\) does not change with the frequency of the mother wave in the experiment.
### Maximal energy transfer by velocity matching
We now turn to the dependence of the daughter amplitudes on the frequency \(f_{1}\) of the mother wave. The daughter wave amplitudes normalized by the mother wave \(A_{2,3}/A_{1}\) are shown as a function of \(f_{1}\) in Fig. 10. The two daughter waves appear to follow the same relation and experience a maximal relative amplitude, making their amplitudes significantly larger than the mother wave. The plot is strongly reminiscent of the resonance curve of a driven harmonic oscillator. The peak of this curve, experimentally found to be at \(f_{M}=7.7\) Hz, has to
Figure 10: Normalized amplitudes of the two daughter waves for different frequencies of the mother wave \(f_{1}\). We clearly observe a peak at around \(f_{M}\approx 7.7\) Hz, where the daughter waves are three times larger than the mother wave in amplitude. Inset: Predicted group velocity of the sloshing mode (blue) and phase velocity of gravity-capillary mode (red), intersecting at \(k_{c}=6\) corresponding to the peak frequency \(f_{M}=7.7\) Hz.
be located at a frequency that depends only on the system properties. We find it to be close to the frequency \(\omega_{\Sigma}(k_{c})\) where \(k_{c}=6\) is the wavenumber at which the group velocity, \(\Omega_{q}^{\Sigma}=\mathrm{d}\omega_{\Sigma}/\mathrm{d}k_{\theta}\), of the sloshing branch and the phase velocity of the gravity-capillary, \(\Omega_{p}^{\mathrm{sc}}=\omega_{\mathrm{gc}}/k_{\theta}\), intersect, numerically found to be \(f_{c}\approx 7.7\) Hz, shown in the inset of Fig. 10.
The energy transfer is thus most efficient when a velocity matching occurs between the group velocity of the sloshing mode and the phase velocity of the gravity-capillary mode. Such an atypical velocity matching involving group and phase velocities differs from the usual phase-phase velocity matching [50], but has been considered theoretically for surface and internal waves [51, 52]. The energy transfer is thus found to be maximal when the carrier of a sloshing wavepacket has the same velocity as a gravity-capillary monochromatic wave for an identical wavenumber. Note that the efficiency of the wave interaction is thus related to the velocity matching, whereas the triadic interaction is the transfer mechanism.
Sloshing branches have previously been modeled, in the linear case, using systems of oscillators [53], while their nonlinear interaction remains more complicated. A model of the branch interaction would have to resemble the driven harmonic oscillator whose amplitude depends on \(\omega_{c}^{2}-\omega_{3}^{2}\), similar to that found in [9], where \(\omega_{c}=\omega_{\Sigma}(k_{c})\) is the frequency at which \(\Omega_{p}^{\mathrm{sc}}=\Omega_{g}^{\Sigma}\).
### Subfrequency wave generation
As already noted at the bottom of Fig. 4, more than the expected three frequencies appear in the spectrum for high enough forcings. In order to better understand this, we apply a stronger monochromatic forcing leading to the power spectrum in Fig. 11. As shown above, the forcing frequency \(f_{1}\), through the TRI, creates two daughters at \(f_{2}\) and \(f_{3}\). Under sufficiently strong forcing, these two daughters, through three-wave interactions create a second pair of frequencies which satisfies
\[\begin{split} f_{2}-f_{3}&=f_{-}^{(1)}\,,\\ f_{-}^{(1)}+f_{+}^{(1)}&=f_{1}\,,\end{split} \tag{15}\]
where \(f_{\pm}^{(1)}\) is the first generation of secondary waves, the superscript indicating the generation order and the subscript sign indicates the relative value. The notation \(+\) indicates the largest of the pair \(f_{\pm}\) and vice-versa. The relationship between the frequencies, governed by Eq. (15), is well verified experimentally in Fig. 11. These grand-daughters can go on generating another generation \(f_{\pm}^{(2)}\) in exactly the same way, which can then be repeated again and so on, thus populating the region with a high number of discrete peaks (see Fig. 11). This mechanism (analogous to that described for internal waves [19]) leads to a discrete type of energy cascade, where energy is transmitted into all the different possible daughter-wave generations. More generally, we have for the \(n\)th wave generation
\[\begin{split} f_{2}-f_{3}&=f_{-}^{(n)}\,,\\ f_{-}^{(n)}+f_{+}^{(n)}&=f_{1}\,,\end{split} \tag{16}\]
as also well observed in Fig. 11.
Figure 11: Power spectrum of a signal forced at \(f_{1}=7.4\) Hz. The forcing is strong enough to excite additional couples besides the primary three-wave pairs, \(f_{1}^{(1)}\) and \(f_{2}^{(1)}\). The first two daughters, continue to generate firstly the subharmonic secondary waves of Eq. (15), which then go on to create tertiary waves through interactions with the mother wave at \(f_{1}\).
Figure 12: Fourier spectrum \(\tilde{\eta}(k_{\theta},f)\) of the torus outer border displacement \(\eta(\theta,t)\). Most of the energy is concentrated along the gravity-capillary branch. Quasiresonant interaction is observed following Eq. (17) and Eq. (18) with \(\delta k_{\theta}=1.6\). Monochromatic forcing at \(f_{1}=7.9\) Hz.
### Upper-frequency wave generation
Let us now focus on the high-frequency part of the spectrum (\(f_{1}<f<2f_{1}\)) in Fig. 11. The corresponding discrete set of peaks is formed in a way similar to that of in Sec. III.7, but involving interaction with \(f_{1}\). We find that these tertiary waves arise from the interaction of the daughter waves (\(f_{2,3}\)) or of the secondary waves (\(f_{\pm}^{(n)}\)) with the mother wave \(f_{1}\). We find that they satisfy the following conditions:
\[\begin{split} f_{2,3}+f_{1}&=g_{\pm}^{(0)}\,,\\ f_{\pm}^{(n)}+f_{1}&=g_{\pm}^{(n)}\,.\end{split} \tag{17}\]
Note that the zeroth generation (\(g_{\pm}^{0}\)) is determined by the daughter waves \(f_{2}\) and \(f_{3}\), whereas the \(n\)th generation (\(n>0\)) involves the secondary waves. This relationship is verified in Fig. 11. Such interaction thus provides a way to populate the high-frequency content of the spectrum with discretely excited modes. The generation mechanism of Eq. (17) is further iterated, e.g., \(g_{\pm}^{(0)}+f_{1}=h_{\pm}^{(0)}\), \(g_{\pm}^{(n)}+f_{1}=h_{\pm}^{(n)}\), as observed experimentally (not shown in Fig. 11).
To determine whether tertiary waves lie on the dispersion relation and are also resonant in wavenumber we compute the experimental space-time Fourier spectrum \(\tilde{\eta}(k_{\theta},f)\) of the torus outer border displacement as shown in Fig. 12. First, we can indeed observe that the excited tertiary waves lie on the dispersion relation. Interestingly, for a given tertiary wave, all branches present at that frequency are excited. But, looking more carefully and taking \(g_{-}^{0}\) as an example, we find that
\[k_{\Sigma}\left(g_{-}^{0}\right) =k_{\mathrm{gc}}(f_{1})+k_{\mathrm{gc}}(f_{3})+\delta k_{\theta}\,, \tag{18}\] \[g_{-}^{0} =f_{1}+f_{3}\,, \tag{19}\]
where \(\delta k_{\theta}\) corresponds to the widening of the gravity-capillary dispersion branch due to nonlinearity. \(\delta k_{\theta}\) is inferred from the standard deviation of a Gaussian fit around the peak of the Fourier spectrum at a fixed \(f\). Equation (18) implies that a quasiresonant interaction occurs in wavenumber. We observed this in Fig. 12, since the frequencies have a perfect match, whereas the wavenumbers require broadening to fall on the dispersion relation.
### Bicoherence
Finally, we experimentally quantify the three-wave interactions (i.e., \(\nu_{1}+\nu_{2}=\nu_{3}\)) by computing the normalized third-order correlation in frequency of the wave elevation called bicoherence [54]
\[B(\nu_{1},\nu_{2})=\frac{|\langle\tilde{\eta}^{*}(\nu_{1})\tilde{\eta}^{*}( \nu_{2})\tilde{\eta}(\nu_{1}+\nu_{2})\rangle|}{\sqrt{\langle|\tilde{\eta}( \nu_{1})\tilde{\eta}(\nu_{2})|^{2}\rangle}\langle|\tilde{\eta}(\nu_{1}+\nu_{2} )|^{2}\rangle}\,, \tag{20}\]
where \(*\) denotes the complex conjugate. \(\langle\cdot\rangle\) corresponds to an ensemble average over 101 temporal windows of the signal. The normalization is such that \(B\in[0,1]\) where 0 represents no correlation and 1 a perfect correlation.
The bicoherence for a monochromatic forcing at 7.5 Hz is depicted in Fig. 13. We observe the primary mother wave at \((1,1)\). Note that \(B(\nu_{1},\nu_{2})\) is symmetric about the \(\nu_{2}=\nu_{1}\) diagonal. Moreover, the green dashed line shows all of the frequency pairs whose sum is \(f_{1}\), but not all of them form resonant triads. The resonant triad \(f_{1}=f_{2}+f_{3}\) is only found to occur at the points \(A\) and \(B\), which are the intersection points between the green dashed line and the red dashed line coming from solving the resonance conditions of Eq. (4). We can see that the daughters are located on the intersection of the resonant manifold with the frequency-sum line.
We can see that the plane is populated by other points, some of which are trivial (e.g., \(f_{1},f_{1}\)), as well as secondary and tertiary waves. According to Eq. (15)b, secondary waves will be located on the green dashed line since their sum yields \(f_{1}\). The first generation \(f_{\pm}^{(1)}\) is found at points \(C\), and by symmetry, \(D\).
Tertiary waves, however, can be seen to satisfy \(g_{\pm}^{(n)}\) + \(f_{\mp}^{(n)}=2f_{1}\) according to Eq. (17)b and using Eq. (16)b. This is evidenced in Fig. 13 by points E and F, lying on the cyan dashed line which contains all points whose sum is equal to \(2f_{1}\).
Figure 13: Bicoherence \(B(\nu_{1},\nu_{2})\) of a wave elevation signal, recorded during 40 min, at a given point. Forcing: sine wave with \(f_{1}=7.5\) Hz. The mother wave is located at \((1,1)\), while all possible pairs whose sum is 1 are on the green oblique dashed line of slope \(-1\). The two daughter waves are located at the intersection of this line with the resonant manifold (dashed red line), solutions of Eq. (4), at points \(A(f_{2},f_{3})\) and \(B(f_{3},f_{2})\). Points \(C\) and \(D\) satisfy Eq. (15)b whereas points \(E\) and \(F\) follow Eq. (17)b.
Conclusion
We have demonstrated the existence of nonlinear three-wave resonant interactions occurring between two different branches of the hydrodynamic wave dispersion relation, namely the gravity-capillary and sloshing modes. To the best of our knowledge, this three-wave two-branch interaction mechanism has never been reported experimentally in any wave system.
The system used is a torus of fluid for which the sloshing mode can easily be excited. When subjected to a weak monochromatic forcing, a triadic resonance instability is first observed with an exponential growth of the daughter waves and a phase locking of the three waves. The efficiency of this interaction is found to be maximum when the gravity-capillary phase velocity matches the group velocity of the sloshing mode. The interaction between waves belonging to these two branches can be considered as an analog of a forced harmonic oscillator. For stronger forcing, additional waves are generated by a cascade of three-wave interactions populating the high-frequency part of the wave spectrum. Since this mechanism authorizes three-wave interactions in a 1D system far from the gravity-capillary transition, it thus paves the way to reach a wave turbulence regime triggered by this atypical mechanism.
In the future, we plan to explore the role of the system periodicity on the wave interactions and on a possible wave turbulence regime, as previously shown for solitons [43]. Finally, such a three-wave two-branch interaction mechanism is probably not restricted to hydrodynamics and could be of primary interest in other fields involving several propagation modes, such as elastic plates [33], or optical waveguides [55].
###### Acknowledgements.
We thank A. Di Palma, and Y. Le Goas for technical help. This work is supported by the French National Research Agency (ANR SOGOOD project No. ANR-21-CE30-0061-04), and by the Simons Foundation MPS No. 651463-Wave Turbulence (USA).
|
2307.04026 | Dowker-type theorems for disk-polygons in normed planes | A classical result of Dowker (Bull. Amer. Math. Soc. 50: 120-122, 1944)
states that for any plane convex body $K$ in the Euclidean plane, the areas of
the maximum (resp. minimum) area convex $n$-gons inscribed (resp.
circumscribed) in $K$ is a concave (resp. convex) sequence. It is known that
this theorem remains true if we replace area by perimeter, the Euclidean plane
by an arbitrary normed plane, or convex $n$-gons by disk-$n$-gons, obtained as
the intersection of $n$ closed Euclidean unit disks. The aim of our paper is to
investigate these problems for $C$-$n$-gons, defined as intersections of $n$
translates of the unit disk $C$ of a normed plane. In particular, we show that
Dowker's theorem remains true for the areas and the perimeters of circumscribed
$C$-$n$-gons, and the perimeters of inscribed $C$-$n$-gons. We also show that
in the family of origin-symmetric plane convex bodies, for a typical element
$C$ with respect to Hausdorff distance, Dowker's theorem for the areas of
inscribed $C$-$n$-gons fails. | Bushra Basit, Zsolt Lángi | 2023-07-08T18:20:39Z | http://arxiv.org/abs/2307.04026v5 | # Dowker-type theorems for disk-polygons in normed planes
###### Abstract.
A classical result of Dowker (Bull. Amer. Math. Soc. 50: 120-122, 1944) states that for any plane convex body \(K\) in the Euclidean plane, the areas of the maximum (resp. minimum) area convex \(n\)-gons inscribed (resp. circumscribed) in \(K\) is a concave (resp. convex) sequence. It is known that this theorem remains true if we replace area by perimeter, the Euclidean plane by an arbitrary normed plane, or convex \(n\)-gons by disk-\(n\)-gons, obtained as the intersection of \(n\) closed Euclidean unit disks. The aim of our paper is to investigate these problems for \(C\)-\(n\)-gons, defined as intersections of \(n\) translates of the unit disk \(C\) of a normed plane. In particular, we show that Dowker's theorem remains true for the areas and the perimeters of circumscribed \(C\)-\(n\)-gons, and the perimeters of inscribed \(C\)-\(n\)-gons. We also show that in the family of origin-symmetric plane convex bodies, for a typical element \(C\) with respect to Hausdorff distance, Dowker's theorem for the areas of inscribed \(C\)-\(n\)-gons fails.
Key words and phrases:Dowker's theorem, circumscribed polygon, inscribed polygon, normed plane, spindle convexity, \(C\)-convexity 2020 Mathematics Subject Classification: 52A40, 52A21, 52A30 Partially supported by the National Research, Development and Innovation Office, NKFI, K-147544 grant.
terminology in [4, 16], and call a set satisfying the property in Mayer's paper \(C\)_-spindle convex_, or shortly \(C\)_-convex_, and if \(C\) is a closed Euclidean unit ball, we call it spindle convex (see Definition 2).
One of the results related to spindle convex sets is due to G. Fejes Toth and Fodor [10] who extended Dowker's theorems, together with their variants for perimeter, for spindle convex sets; in these theorems the role of inscribed or circumscribed convex \(n\)-gons is played by the so-called _disk-\(n\)-gons_, obtained as the intersections of \(n\) closed Euclidean unit disks. They also proved similar theorems in hyperbolic or spherical plane.
Our main goal is to investigate a normed version of the problem in [10]. To state our results, recall that the unit ball of any finite dimensional normed space is a convex body symmetric to the origin \(o\), and any such body is the unit ball of a finite dimensional normed space. Thus, in the paper we choose an arbitrary \(o\)-symmetric convex disk \(C\) in the real normed space \(\mathbb{R}^{2}\), and work in the normed plane with unit disk \(C\), which we regard as \(\mathbb{R}^{2}\) equipped with the norm \(\|\cdot\|_{C}\) of \(C\). In the paper, by a convex disk we mean a compact, convex planar set with nonempty interior. We denote the family of convex disks by \(\mathcal{K}\), and the family of \(o\)-symmetric convex disks by \(\mathcal{K}_{o}\). In the paper we regard \(\mathcal{K}\) and \(\mathcal{K}_{o}\) as topological spaces with the topology induced by Hausdorff distance.
Before presenting our results, recall the well-known fact that any finite dimensional real normed space can be equipped with a Haar measure, and that this measure is unique up to multiplication of the standard Lebesgue measure by a scalar (cf. e.g. [22]). This scalar does not play a role in our investigation and in the paper area(\(\cdot\)) denotes \(2\)-dimensional Lebesgue measure.
**Definition 1**.: _For any \(C\in\mathcal{K}_{o}\) and convex polygon \(Q\), we define the \(C\)-perimeter of \(Q\) as the sum of the lengths of the sides of \(Q\), measured in the norm generated by \(C\). The \(C\)-perimeter of a convex disk \(K\subset\mathbb{R}^{2}\), denoted by \(\operatorname{perim}_{C}(K)\), is the supremum of the \(C\)-perimeters of all convex polygons inscribed in \(K\)._
We note that, moving its vertices one by one to the boundary of \(K\) in a suitable direction, for any convex polygon \(Q\) contained in \(K\) one can find a convex polygon \(Q^{\prime}\) inscribed in \(K\) with \(\operatorname{perim}_{C}(Q)\leq\operatorname{perim}_{C}(Q^{\prime})\). This shows, in particular, that for any two plane convex bodies \(K\subseteq L\subset\mathbb{R}^{2}\), we have \(\operatorname{perim}_{C}(K)\leq\operatorname{perim}_{C}(L)\), with equality if and only if \(K=L\) (see also [19]). Furthermore, it is worth observing that a straightforward modification of Definition 1 can be used to define the \(C\)_-length_ of a rectifiable curve \(\Gamma\subset\mathbb{R}^{2}\), denoted by \(\operatorname{arclength}_{C}(\Gamma)\).
Our next definition can be found in [16] and its origin goes back to [18].
**Definition 2**.: _Let \(C\in\mathcal{K}_{o}\) and consider two (not necessarily distinct) points \(p,q\in\mathbb{R}^{2}\) such that a translate of \(C\) contains both \(p\) and \(q\). Then the \(C\)-spindle (denoted as \([p,q]_{C}\)) of \(p\) and \(q\) is the intersection of all translates of \(C\) that contain \(p\) and \(q\). If no translate of \(C\) contains \(p\) and \(q\), we set \([p,q]_{C}=\mathbb{R}^{2}\). We call a set \(K\subset\mathbb{R}^{2}\)\(C\)-spindle convex (or shortly \(C\)-convex), if for any \(p,q\in K\), we have \([p,q]_{C}\subseteq K\)._
We recall from [16, Corollary 3.13] that a closed set in \(\mathbb{R}^{2}\) different from \(\mathbb{R}^{2}\) is \(C\)-convex if and only if it is the intersection of some translates of \(C\).
**Definition 3**.: _The intersection of \(n\) translates of \(C\) is called a C-\(n\)-gon for \(n\geq 3\)._
In our next definition and throughout the paper, area(\(\cdot\)) denotes standard Lebesgue measure.
**Definition 4**.: _Let \(n\geq 3\) and let \(K\) be a \(C\)-convex disk in \(\mathbb{R}^{2}\), where \(C\in\mathcal{K}_{o}\). We set_
\[\begin{split}\hat{A}_{n}^{C}(K)=&\inf\{\mathrm{area}( Q):Q\text{ is a $C-n-\text{gon circumscribed about }K$}\};\\ \hat{a}_{n}^{C}(K)=&\sup\{\mathrm{area}(Q):Q\text{ is a $C-n-\text{gon inscribed in }K$}\};\\ \hat{P}_{n}^{C}(K)=&\inf\{\mathrm{perim}_{C}(Q):Q \text{ is a $C-n-\text{gon circumscribed about }K$}\};\\ \hat{p}_{n}^{C}(K)=&\sup\{\mathrm{perim}_{C}(Q):Q \text{ is a $C-n-\text{gon inscribed in }K$}\}.\end{split} \tag{1}\]
**Theorem 1**.: _For any \(C\in\mathcal{K}_{o}\) and \(C\)-convex disk \(K\), the sequences \(\{\hat{A}_{n}^{C}(K)\}\), \(\{\hat{P}_{n}^{C}(K)\}\) are convex, and the sequence \(\{\hat{p}_{n}^{C}(K)\}\) is concave. That is, for any \(n\geq 4\), we have_
\[\hat{A}_{n-1}^{C}(K)+\hat{A}_{n+1}^{C}(K)\geq 2\hat{A}_{n}^{C}(K), \hat{P}_{n-1}^{C}(K)+\hat{P}_{n+1}^{C}(K)\geq 2\hat{P}_{n}^{C}(K),\text{ and }\] \[\hat{p}_{n-1}^{C}(K)+\hat{p}_{n+1}^{C}(K)\leq 2\hat{p}_{n}^{C}(K).\]
As a consequence of Theorem 1, we prove Theorem 2, and recall that similar statements have been derived in [9] for the Euclidean areas of inscribed and circumscribed polygons from the classical results of Dowker in [6] (for their spindle convex variants, see [10]).
**Theorem 2**.: _Let \(n\geq 3\) and \(k\geq 2\). Assume that \(k\) is a divisor of \(n\) and both \(K\) and \(C\) have \(k\)-fold rotational symmetry. Then there are \(C\)-\(n\)-gons \(Q^{A}\), \(Q^{P}\) circumscribed about \(K\) which have \(k\)-fold rotational symmetry, and \(\mathrm{area}(Q^{A})=\hat{A}_{n}^{C}(K)\) and \(\mathrm{perim}_{C}(Q^{P})=\hat{P}_{n}^{C}(K)\). Similarly, there is a \(C\)-\(n\)-gon \(Q^{p}\) inscribed in \(K\) which has \(k\)-fold rotational symmetry, and \(\mathrm{perim}_{C}(Q^{p})=\hat{p}_{n}^{C}(K)\)._
Before our next theorem, we remark that in a topological space \(\mathcal{F}\), a subset is called _residual_ if it is a countable intersection of sets each of which has dense interior in \(\mathcal{F}\). The elements of a residual subset of \(\mathcal{F}\) are called _typical_. Our next result shows that Dowker's theorem for the sequence \(\{A_{n}^{C}(K)\}\) fails in a strong sense.
**Theorem 3**.: _A typical element \(C\) of \(\mathcal{K}_{o}\) satisfies the property that for every \(n\geq 4\), there is a \(C\)-convex disk \(K\) with_
\[\hat{a}_{n-1}^{C}(K)+\hat{a}_{n+1}^{C}(K)>2\hat{a}_{n}^{C}(K).\]
The structure of the paper is as follows. In Section 2, we present the necessary notation and prove some lemmas. Then in Sections 3 and 4 we prove Theorems 1 and 2, and Theorem 3, respectively. Finally, in Section 5, we collect our additional remarks and propose some open problems.
## 2. Preliminaries
In the paper, for simplicity, for any \(x,y\in\mathbb{R}^{2}\), we denote by \([x,y]\) the closed segment with endpoints \(x,y\). We equip \(\mathbb{R}^{2}\) also with a Euclidean norm, which we denote by \(\|\cdot\|\), and use the notation \(B^{2}\) for the Euclidean closed unit disk centered at \(o\). Recall that the _Euclidean diameter_ of a compact set \(X\subset\mathbb{R}^{2}\) is the Euclidean distance of a farthest pair of points in \(X\). If we replace Euclidean distance by distance measured in the norm of \(C\), we obtain the _\(C\)-diameter_ of \(X\).
Recall that for any set \(X\subseteq\mathbb{R}^{2}\), the _\(C\)-convex hull_, or shortly _\(C\)-hull_ is the intersection of all \(C\)-convex sets that contain \(C\). We denote it by \(\mathrm{conv}_{C}(X)\), and note that it is \(C\)-convex, and if \(X\) is closed, then it coincides with the intersection of all translates of \(C\) containing \(X\)[16].
In the following list we collect some elementary properties of \(C\)-spindles and \(C\)-\(n\)-gons that we are going to use frequently in the paper.
**Remark 1**.: _We have the following._
1. _For any_ \(x,y\in\mathbb{R}^{2}\) _with_ \(\|x-y\|_{C}\leq 2\)_,_ \([x,y]_{C}\) _is the intersection of at most two translates of_ \(C\)_, and if_ \([x,y]_{C}\) _is a translate of_ \(C\)_, then_ \(\|x-y\|_{C}=2\)_._
2. _Conversely, a nonempty intersection of at most two translates of_ \(C\) _is the_ \(C\)_-spindle of two (not necessarily distinct) points._
3. _For any_ \(x,y\in\mathbb{R}^{2}\)_,_ \([x,y]_{C}=[x,y]\) _if and only if a translate of_ \(C\) _contains_ \([x,y]\) _in its boundary._
4. _If_ \([x,y]_{C}\neq[x,y]\)_, then_ \([x,y]_{C}\) _is a centrally symmetric convex disk whose boundary consists of two arcs, connecting_ \(x\) _and_ \(y\)_, that are contained in the boundary of some translates of_ \(C\)_._
5. _Any_ \(C\)_-_\(n\)_-gon is the_ \(C\)_-hull of at most_ \(n\) _points contained in a translate of_ \(C\)_, and vice versa._
**Remark 2**.: _Let \(x,y\in C\in\mathcal{K}_{o}\), with \(\|x-y\|_{C}<2\). Then, for any sequences \(x_{m}\to x\), \(y_{m}\to y\), \(C_{m}\to C\) with \(x_{m},y_{m}\in\mathbb{R}^{2}\) and \(C_{m}\in\mathcal{K}_{o}\), we have \([x_{m},y_{m}]_{C_{m}}\to[x,y]_{C}\)._
We observe that the statement in Remark 2 does not necessarily hold if \(\|x-y\|_{C}=2\). As an example, we can choose \(C\) as a parallelogram, \(x_{m}=x\) and \(y_{m}=y\) as the midpoints of two opposite sides \(S_{1},S_{2}\) of \(C\), and \(\{C_{m}\}\) as a sequence of \(o\)-symmetric hexagons inscribed in \(C\) whose elements intersect \(S_{1}\) and \(S_{2}\) only in \(x\) and \(y\), respectively.
For any \(n\geq 4\), let \(\mathcal{K}_{a}^{n}\) denote the subfamily of the elements \(C\) of \(\mathcal{K}_{0}\) satisfying the Dowker-type inequality \(\hat{a}_{n-1}^{C}(K)+\hat{a}_{n+1}^{C}(K)\leq 2\hat{a}_{n}^{C}(K)\) for any \(C\)-convex disk \(K\). We define \(\mathcal{K}_{A}^{n}\), \(\mathcal{K}_{p}^{n}\) and \(\mathcal{K}_{P}^{n}\) similarly. Our first lemma describes the topological properties of these families.
**Lemma 1**.: _For any \(n\geq 4\), \(\mathcal{K}_{a}^{n},\mathcal{K}_{A}^{n},\mathcal{K}_{p}^{n}\) and \(\mathcal{K}_{P}^{n}\) are closed._
Proof.: We prove the assertion only for \(\mathcal{K}_{a}^{n}\), as for the other quantities the proof is analogous. Let \(C\notin\mathcal{K}_{a}^{n}\), and suppose for contradiction that there is a sequence \(C_{m}\in\mathcal{K}_{a}^{n}\) with \(C_{m}\to C\). Since \(C\notin\mathcal{K}_{a}^{n}\), there is a \(C\)-convex disk \(K\) satisfying \(\hat{a}_{n-1}^{C}(K)+\hat{a}_{n+1}^{C}(K)>2\hat{a}_{n}^{C}(K)\). By Remark 1, if \(K\) contains points at \(C\)-distance equal to \(2\), then \(K\) is a \(C\)-spindle, which yields that \(\hat{a}_{j}(K)=\operatorname{area}(K)\) for any \(j\geq 3\). Thus, according to our assumptions, \(K\) does not contain points at \(C\)-distance equal to \(2\), i.e its \(C\)-diameter is strictly less than \(2\). On the other hand, since \(K\) is \(C\)-convex, \(K\) is the intersection of the translates of \(C\) that contain it. Thus, there is a set \(X\subset\mathbb{R}^{2}\) such that \(K=\bigcap_{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatornameoperatorname}}}}}}}}}} }}X(x+C)\).
Let \(K_{m}=\bigcap_{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatornameoperatornameoperatorname{\operatorname }}}}}}}}}}} }}XX(x+C_{m})\). Then, clearly, \(K_{m}\) is \(C_{m}\)-convex, and \(K_{m}\to K\). For \(j=n-1,n+1\), let \(Q_{j}\) be a \(C\)-\(j\)-gon inscribed in \(K\) such that \(\operatorname{area}(Q_{j})=\hat{a}_{j}^{C}(K)\). Then, as \(K_{m}\to K\) and \(C_{m}\to C\), there are sequences \(\{Q_{n-1}^{m}\}\) and \(\{Q_{n+1}^{m}\}\) such that for \(j=n-1,n+1\), \(Q_{j}^{m}\) is a \(C_{m}\)-\(j\)-gon inscribed in \(K_{m}\), and \(Q_{j}^{m}\to Q_{j}\). By the properties of Hausdorff distance, the \(C_{m}\)-diameter of \(K_{m}\) is strictly less than \(2\) if \(m\) is sufficiently large. Then we can apply Remark 2, and obtain that \(\operatorname{area}(Q_{j}^{m})\to\operatorname{area}(Q_{j})\) for \(j=n-1,n+1\). From this, we have \(\operatorname{area}(Q_{n-1}^{m})+\operatorname{area}(Q_{n+1}^{m})\to\hat{a}_{ n-1}^{C}(K)+\hat{a}_{n+1}^{C}(K)\). On the other hand, since \(C_{m}\in\mathcal{K}_{a}^{n}\), there is a sequence \(\{Q_{n}^{m}\}\) such that \(Q_{n}^{m}\) is a \(C_{m}\)-\(n\)-gon inscribed in \(K_{m}\), and \(2\operatorname{area}(Q_{n}^{m})\geq\operatorname{area}(Q_{n-1}^{m})+ \operatorname{area}(Q_{n+1}^{m})\). By compactness, we may assume that \(\{Q_{n}^{m}\}\) converges to a \(C\)-\(n\)-gon \(Q_{n}\). Clearly, \(Q_{n}\) is contained in \(K\), and by Remark 2, \(\operatorname{area}(Q_{n}^{m})\to\operatorname{area}(Q_{n})\). Thus, \(\hat{a}_{n-1}^{C}(K)+\hat{a}_{n+1}^{C}(K)\leq 2\operatorname{area}(Q_{n}) \leq 2\hat{a}_{n}^{C}(K)\); a contradiction.
Lemma 1 readily yields Corollary 1, since the intersection of arbitrarily many closed sets is closed.
**Corollary 1**.: _The family \(\bigcup_{n=4}^{\infty}\mathcal{K}_{a}^{n}\) of the elements \(C\) of \(\mathcal{K}_{0}\) satisfying \(\hat{a}_{n-1}^{C}(K)+\hat{a}_{n+1}^{C}(K)\leq 2\hat{a}_{n}^{C}(K)\) for all \(n\geq 4\) and all \(C\)-convex disks \(K\) is closed in \(\mathcal{K}_{0}\). Similar statements hold for the families \(\bigcup_{n=4}^{\infty}\mathcal{K}_{p}^{n}\), \(\bigcup_{n=4}^{\infty}\mathcal{K}_{A}^{n}\) and \(\bigcup_{n=4}^{\infty}\mathcal{K}_{p}^{n}\)._
**Definition 5**.: _Let \(C\in\mathcal{K}_{o}\), and let \(x,y\) be points with \(\|x-y\|_{C}\leq 2\). Then the arc-distance \(\rho_{C}(x,y)\) of \(x,y\) with respect to \(C\) (or shortly, \(C\)-arc-distance of \(x\) and \(y\)) is the minimum of the \(C\)-length of the arcs, with endpoints \(x,y\), that are contained in \(z+\operatorname{bd}(C)\) for some \(y\in\mathbb{R}^{2}\)._
**Remark 3**.: _For any \(x,y\in\mathbb{R}^{2}\) with \(\|x-y\|_{C}\leq 2\), if \([x,y]_{C}\neq[x,y]\), then \(\rho_{C}(x,y)=\frac{1}{2}\operatorname{perim}_{C}([p,q]_{C})\). Furthermore, if \([x,y]_{C}=[x,y]\), then \(\rho_{C}(x,y)=\|x-y\|_{C}\)._
We recall the following version of the triangle inequality from [16, Theorem 6].
**Lemma 2** (Langi, Naszodi, Talata).: _Let \(C\in\mathcal{K}_{0}\), and let \(x,y,z\) be points such that each pair has a \(C\)-arc-distance._
1. _If_ \(y\in\operatorname{int}[x,z]_{C}\)_, then_ \(\rho_{C}(x,y)+\rho_{C}(y,z)\leq\rho_{C}(x,z)\)_._
2. _If_ \(y\in\operatorname{bd}[x,z]_{C}\)_, then_ \(\rho_{C}(x,y)+\rho_{C}(y,z)=\rho_{C}(x,z)\)_._
3. _If_ \(y\notin[x,z]_{C}\) _and_ \(C\) _is smooth, then_ \(\rho_{C}(x,y)+\rho_{C}(y,z)\geq\rho_{C}(x,z)\)_._
We start with a consequence of this inequality.
**Lemma 3**.: _Let \(p,q,r,s\in\mathbb{R}^{2}\) be distinct points contained in a translate of the smooth o-symmetric convex disk \(C\), and assume that \(\operatorname{bd}\operatorname{conv}_{C}\{p,q,r,s\}\) contains all of them and in this counterclockwise order. Then_
\[\rho_{C}(p,q)+\rho_{C}(r,s)\leq\rho_{C}(p,r)+\rho_{C}(q,s).\]
Proof.: Note that according to our conditions, the two \(C\)-arcs in the boundary of \([p,r]_{C}\) intersect both \(C\)-arcs consisting of the boundary of \([q,s]_{C}\). Let \(s^{\prime}\) denote the intersection point of one of the \(C\)-arcs in \(\operatorname{bd}[p,r]_{C}\) and one of the \(C\)-arcs in \(\operatorname{bd}[q,s]\), where the arcs are chosen to satisfy \(s^{\prime}\in\operatorname{bd}\operatorname{conv}_{C}\{p,q,r\}\) and \(s^{\prime}\in\operatorname{conv}_{C}\{p,r,s\}\). Then \(s^{\prime}\notin[p,q]_{C}\) and \(s^{\prime}\notin[r,s]_{C}\). Since \([s,s^{\prime}]_{C},[q,s^{\prime}]_{C}\subset[q,s]_{C}\), it is easy to see that \(p,q,s^{\prime}\), and also \(r,s,s^{\prime}\) are in \(C\)-convex position. Thus, by Lemma 2, we have \(\rho_{C}(p,q)\leq\rho_{C}(p,s^{\prime})+\rho_{C}(q,s^{\prime})\) and \(\rho_{C}(r,s)\leq\rho_{C}(r,s^{\prime})+\rho_{C}(s,s^{\prime})\), implying the assertion.
In the following lemma, let \(\mathbb{S}^{1}\) denote the Euclidean unit circle centered at the origin. For simplicity, if \(x,y\in\mathbb{S}^{1}\), we denote by \(\overrightarrow{xy}\) the Euclidean closed circle arc obtained as the orbit of \(x\) when it is rotated around \(o\) in counterclockwise direction until it reaches \(y\). Let \(\mathcal{S}\) denote the family of closed circle arcs \(\overrightarrow{xy}\) of \(S\). Furthermore, we say that a function \(f:\mathcal{S}\to\mathbb{R}\) has a \(k\)-fold rotational symmetry for some positive integer \(k\), if for any \(S,S^{\prime}\in\mathcal{S}\), where \(S^{\prime}\) is a rotated copy of \(S\) in counterclockwise direction with angle \(\frac{2\pi}{k}\), we have \(f(S)=f(S^{\prime})\). Lemma 4 can be regarded as a functional form of Dowker's theorems.
**Lemma 4**.: _Let \(f:\mathcal{S}\to\mathbb{R}\) be a bounded function with \(f(\overrightarrow{xx})=0\) for all \(x\in\mathbb{S}^{1}\). For any integer \(n\geq 3\), let_
\[M_{n}=\sup\{\sum_{S\in\mathcal{K}}f(S):X\subset\mathcal{S}\text{ is a tiling of }\mathbb{S}^{1}\text{ with }|X|=n\}.\]
_If for any \(\overrightarrow{x_{2}x_{3}}\subset\overrightarrow{x_{1}x_{4}}\), we have_
\[f(\overrightarrow{x_{1}x_{3}})+f(\overrightarrow{x_{2}x_{4}})\geq f( \overrightarrow{x_{1}x_{4}})+f(\overrightarrow{x_{2}x_{3}}),\]
_then the sequence \(\{M_{n}\}\) is concave. Furthermore, if in addition, there is some positive integer \(k\) such that \(k|n\) and \(f\) has \(k\)-fold rotational symmetry, and there is an \(n\)-element tiling \(X\) of \(\mathbb{S}^{1}\) such that \(M_{n}=\sum_{S\in\mathcal{K}}f(S)\) then there is an \(n\)-element tiling \(X^{\prime}\) of \(\mathbb{S}^{1}\) with \(k\)-fold rotational symmetry such that \(M_{n}=\sum_{S\in\mathcal{K}^{\prime}}f(S)\)._
Before the proof, we remark that \(X\subset\mathcal{S}\) is called an \(m\)-tiling of \(\mathbb{S}^{1}\) for some positive integer \(m\) if every point of \(\mathbb{S}^{1}\) belongs to at least \(m\) members of \(X\), and to the interiors of at most \(m\) members of \(X\).
Proof.: To prove the assertion for \(\{M_{n}\}\), we need to show that \(M_{n-1}+M_{n+1}\leq 2M_{n}\) is satisfied for any \(n\geq 4\). In other words, we need to show that for any tilings \(X=\{\overline{x_{0}x_{1}},\ldots x_{n-2}\overline{x_{n-1}}\}\), \(Y=\{\overline{y_{0}y_{1}},\ldots\overline{y_{n}y_{n+1}}\}\) of \(\mathbb{S}^{1}\), there are tilings \(Z=\{\overline{z_{0}}\overline{z_{1}},\ldots\overline{z_{n-1}}\overline{z_{n}}\}\) and \(W=\{\overline{w_{0}w_{1}},\ldots w_{\overline{n-1}}\overline{w}_{n}\}\) of \(\mathbb{S}^{1}\) such that
\[\sum_{i=1}^{n-1}f(\overline{x_{i-1}x_{i}})+\sum_{i=1}^{n+1}f(\overline{y_{i-1 }y_{i}})\leq\sum_{i=1}^{n}f(\overline{z_{i-1}z_{i}})+\sum_{i=1}^{n}f(\overline {w_{i-1}w_{i}}).\]
Note that the union \(A_{0}\) of the two tilings is a \(2\)-tiling of \(\mathbb{S}^{1}\). Assume that \(x_{1},x_{2},\ldots,x_{n-1}\), and \(y_{1},y_{2},\ldots,y_{n+1}\) are in this counterclockwise order in \(\mathbb{S}^{1}\), and that \(y_{1}\in\overline{x_{1}x_{2}}\). Due to the possible existence of coinciding points in the above two sequences, we unite these sequences as a single sequence \(v_{1},v_{2},\ldots,v_{2n}\) in such a way that the points are in this counterclockwise order in \(\mathbb{S}^{1}\), \(v_{1}=x_{1}\), and removing the \(x_{i}\) (resp. \(y_{j}\)) from this sequence we obtain the sequence \(y_{1},\ldots,y_{n+1}\) (resp. \(x_{1},\ldots,x_{n-1}\)). In the proof we regard this sequence as a cyclic sequence, where the indices are determined mod \(2n\), and, with a little abuse of notation, we say that \(\overline{v_{i}v_{j}}\)_covers_\(\overline{v_{k}v_{l}}\) only if \(\overline{v_{k}v_{l}}\subseteq\overline{v_{i}v_{j}}\) and \(i<k<l<j<i+2n\). Our main goal will be to modify the \(2\)-tiling \(A_{0}\) in such a way that the value of \(f\) does not decrease but the number of covering pairs strictly decreases.
Note that since \(A_{0}\) is the union of two tilings consisting of \((n-1)\) and \((n-1)\) arcs, respectively, \(A_{0}\) contains covering pairs. Assume that \(\overline{v_{i}v_{j}}\) covers \(\overline{v_{k}v_{l}}\). Then let \(A_{1}\) denote the \(2\)-tiling of \(\mathbb{S}^{1}\) in which \(\overline{v_{i}v_{j}}\) and \(\overline{v_{k}v_{l}}\) are replaced by \(\overline{v_{i}v_{l}}\) and \(\overline{v_{k}v_{j}}\). According to our conditions, \(\sum_{S\in A_{0}}f(S)\leq\sum_{S\in A_{1}}f(S)\), and the number of covering pairs in \(A_{1}\) is strictly less than in \(A_{0}\). Repeating this procedure we obtain a \(2\)-tiling \(A_{t}\) of \(\mathbb{S}^{1}\) for which \(\sum_{S\in A_{0}}f(S)\leq\sum_{S\in A_{t}}f(S)\) and which does not contain covering pairs. Then, \(A_{t}\) decomposes into the two tilings \(\{\overline{v_{1}},\overline{v_{3}},\overline{v_{3}v_{5}},\ldots,\overline{ v_{2n-1}v_{1}}\}\) and \(\{\overline{v_{2}},\overline{v_{4}},\overline{v_{4}v_{6}},\ldots,\overline{v_{2 n}v_{2}}\}\), each of which contains exactly \(n\) arcs. This proves the assertion for \(\{M_{n}\}\).
Now we prove the second part. Let \(X\) be an \(n\)-element tiling of \(\mathbb{S}^{1}\) such that \(M_{n}=\sum_{S\in X}f(S)\). Assume that \(X\) does not have \(k\)-fold rotational symmetries. For \(i=1,2,\ldots,k\), let \(X_{i}\) denote the rotated copy of \(X\) by \(\frac{2i\pi}{k}\) in counterclockwise direction. Then \(Y=\bigcup_{i=1}^{k}X_{i}\) is a \(k\)-fold tiling of \(\mathbb{S}^{1}\) with \(k\)-fold rotational symmetry, and \(\sum_{S\in Y}f(S)=k\sum_{S\in X}f(S)\). Since \(X\) has no \(k\)-fold rotational symmetry, \(Y\) contains covering pairs, and we may apply the argument in the previous paragraph.
We remark that an analogous proof yields Lemma 5, the proof of which we leave to the reader.
**Lemma 5**.: _Let \(f:\mathcal{S}\to\mathbb{R}\) be a bounded function with \(f(\widehat{p}\widehat{p})=0\) for all \(p\in\mathbb{S}^{1}\). For any integer \(n\geq 3\), let_
\[m_{n}=\inf\{\sum_{S\in X}f(S):X\subset\mathcal{S}\text{ is a tiling of }\mathbb{S}^{1}\text{ with }|X|=n\}.\]
_If for any \(\overline{x_{2}x_{3}}\subset\overline{x_{1}x_{4}}\), we have_
\[f(\overline{x_{1}x_{3}})+f(\overline{x_{2}x_{4}})\leq f(\overline{x_{1}x_{4}}) +f(\overline{x_{2}x_{3}}),\]
_then the sequence \(\{m_{n}\}\) is convex. Furthermore, if in addition, there is some positive integer \(k\) such that \(k|n\), and \(f\) has \(k\)-fold rotational symmetry, and there is an \(n\)-element tiling \(X\) of \(\mathbb{S}^{1}\) such that \(m_{n}=\sum_{S\in X}f(S)\) then there is a tiling \(X^{\prime}\) of \(\mathbb{S}^{1}\) with \(k\)-fold rotational symmetry such that \(m_{n}=\sum_{S\in X^{\prime}}f(S)\)._
In the next lemma, by the partial derivatives \((\partial_{p}f)(\widehat{p_{0}q_{0}})\) (resp. \((\partial_{q}f)(\widehat{p_{0}q_{0}})\)) of the function \(f(\widehat{p_{0}q_{0}})\) at \(\widehat{p_{0}q_{0}}\), we mean the derivative of the function \(f(\widehat{p(t)q_{0}})\) (resp. \(f(\widehat{q(t)p_{0}})\)) at \(t=0\), where \(p(t)\) (resp. \(q(t)\)) is the rotated copy of \(p_{0}\) (resp. \(q_{0}\)) around \(o\) by angle \(t\) in counterclockwise direction.
**Lemma 6**.: _Let \(f:\mathcal{S}\to\mathbb{R}\) be a bounded function with \(f(\widehat{p}p)=0\) for all \(p\in\mathbb{S}^{1}\). Assume that for any \(\widehat{p_{0}q_{0}}\in\mathbb{S}^{1}\), where \(p_{0}\neq q_{0}\), \((\partial_{p}\partial_{q}f)(\widehat{p_{0}q_{0}})\) is a continuous function of \(\widehat{p_{0}q_{0}}\) in both variables. Then, for any \(x_{1},x_{2},x_{3},x_{4}\in\mathbb{S}^{1}\) in this counterclockwise order, we have_
\[f(\overline{x_{1}x_{3}})+f(\overline{x_{2}x_{4}})\geq f(\overline{x_{1}x_{4}} )+f(\overline{x_{2}x_{3}})\]
_if and only if \((\partial_{p}\partial_{q}f)(\widehat{p_{0}q_{0}})\geq 0\) for all \(p_{0}\neq q_{0}\). Similarly, for any \(x_{1},x_{2},x_{3},x_{4}\in\mathbb{S}^{1}\) in this counterclockwise order, we have_
\[f(\overline{x_{1}x_{3}})+f(\overline{x_{2}x_{4}})\leq f(\overline{x_{1}x_{4}} )+f(\overline{x_{2}x_{3}})\]
_if and only if \((\partial_{p}\partial_{q}f)(\widehat{p_{0}q_{0}})\leq 0\) for all \(p_{0}\neq q_{0}\)._
Proof.: We prove only the first part. Assume that \((\partial_{p}\partial_{q}f)(\widehat{p_{0}q_{0}})\geq 0\) for all \(p_{0}\neq q_{0}\). Let \(\overline{x_{2}x_{3}}\subset\overline{x_{1}x_{4}}\). Then, by the Newton-Leibniz Theorem we have
\[0\leq\int_{x_{3}}^{x_{4}}\int_{x_{1}}^{x_{2}}(\partial_{p}\partial_{q}f)( \widehat{p_{0}q_{0}})\,dp_{0}\,dq_{0}=f(\overline{x_{2}x_{4}})-f(\overline{x_ {2}x_{3}})-f(\overline{x_{1}x_{4}})+f(\overline{x_{1}x_{3}}).\]
Furthermore, if we have \((\partial_{p}\partial_{q}f)(\widehat{p_{0}q_{0}})<0\) for some \(p_{0}\neq q_{0}\), then, by continuity and the same argument, there are some points \(x_{1},x_{2}\) and \(x_{3},x_{4}\) sufficiently close to \(p_{0}\) and \(q_{0}\), respectively, such that \(\overline{x_{2}x_{3}}\subset\overline{x_{1}x_{4}}\), and \(0>f\big{(}\overline{x_{2}x_{4}}\big{)}-f\big{(}\overline{x_{2}x_{3}}\big{)}-f (\overline{x_{1}x_{4}})+f\big{(}\overline{x_{1}x_{3}}\big{)}\).
## 3. Proof of Theorems 1 and 2
Note that by Lemma 1 and Corollary 1, it is sufficient to prove Theorem 1 for any everywhere dense subset of \(\mathcal{K}_{o}\), and applying a similar consideration, we have the same for Theorem 2. Thus, we may assume that \(C\) has \(C^{\infty}\)-class boundary and strictly positive curvature. Under this condition, the quantities defined in Definition 4 are continuous functions of \(K\) for any fixed value of \(n\), and thus, we may assume that \(K\) has \(C^{\infty}\)-class boundary, and the curvature of \(\operatorname{bd}(K)\) at any point \(p\) is strictly greater than the curvature of \(\operatorname{bd}(C)\) at the point \(q\) with the same outer unit normal as \(p\).
**Remark 4**.: _Under the above conditions, for any points \(p,q\in\operatorname{bd}(K)\), \([p,q]_{C}\setminus\{p,q\}\subset\operatorname{int}(K)\)._
In the proof we identify \(\mathbb{S}^{1}\) with the set \(\mathbb{R}/\{2k\pi:k\in\mathbb{Z}\}\). Let us parametrize \(\operatorname{bd}(K)\) as the curve \(\Gamma:\mathbb{S}^{1}\to\mathbb{R}^{2}\), where the outer unit normal vector at \(\Gamma(\varphi)\) is \((\cos\varphi,\sin\varphi)\). Then, for any two points \(\Gamma(\varphi_{1}),\Gamma(\varphi_{2})\) with \(\varphi_{1}<\varphi_{2}<\varphi_{1}+2\pi\), let us denote the arc of \(\Gamma\) connecting them in counterclockwise direction by \(\Gamma|_{[\varphi_{1},\varphi_{2}]}\). Furthermore, recall [16, Corollary 3.13], stating that \(K\) is the intersection of the translates of \(C\) containing it. Thus, for any \(\varphi\in[0,2\pi]\), there is a unique translate \(x+C\) of \(C\) containing \(K\) with \(\Gamma(\varphi)\in\operatorname{bd}(x+C)\). We denote this translate by \(C(\varphi)=x(\varphi)+C\), and call it the _supporting \(C\)-disk_ of \(K\) at \(\Gamma(\varphi)\) (see Figure 1).
We define the following regions:
1. \(r(\varphi_{1},\varphi_{2})\) is the closure of the connected component of \(K\setminus[\Gamma(\varphi_{1}),\Gamma(\varphi_{2})]_{C}\) containing \(\Gamma|_{[\varphi_{1},\varphi_{2}]}\);
2. \(R(\varphi_{1},\varphi_{2})\) is the closure of the connected component of \((C(\varphi_{1})\cap C(\varphi_{2})\setminus K)\) containing \(\Gamma|_{[\varphi_{1},\varphi_{2}]}\);
3. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
4. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
5. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
6. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
7. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
8. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
9. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
10. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
11. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
12. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
13. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
14. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
15. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
16. \(p(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(r(\varphi_{1},\varphi_{2})- \operatorname{aclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\);
17. \(p(\varphi_{1}
2. \(A(\varphi_{1},\varphi_{2})=\operatorname{area}(R(\varphi_{1},\varphi_{2}))\);
3. \(P(\varphi_{1},\varphi_{2})=\operatorname{perim}_{C}(R(\varphi_{1},\varphi_{2})- \operatorname{arclength}_{C}(\Gamma|_{[\varphi_{1},\varphi_{2}]})\).
### The proof of Theorems 1 and 2 for \(\hat{A}_{n}^{C}(K)\)
Let \(I[X]:\mathbb{R}^{2}\to\mathbb{R}\) denote the indicator function of \(X\subset\mathbb{R}^{2}\). Then it can be seen directly that for any \(\varphi_{1}<\varphi_{2}<\varphi_{3}<\varphi_{4}<\varphi_{1}+2\pi\), the function
\[I[R(\varphi_{1},\varphi_{4})]+I[R(\varphi_{2},\varphi_{3})]-I[R(\varphi_{1}, \varphi_{3})]-I[R(\varphi_{2},\varphi_{4})]\]
has nonnegative values at every point. Thus, the conditions of Lemma 5 are satisfied, implying the statement.
### The proof of Theorems 1 and 2 for \(\hat{p}_{n}^{C}(K)\)
Let \(\varphi_{1}<\varphi_{2}<\varphi_{3}<\varphi_{4}<\varphi_{1}+2\pi\). Then, by Lemma 3,
\[\rho_{C}(\Gamma(\varphi_{1}),\Gamma(\varphi_{4}))+\rho_{C}(\Gamma(\varphi_{2} ),\Gamma(\varphi_{3}))\leq\rho_{C}(\Gamma(\varphi_{1}),\Gamma(\varphi_{3}))+ \rho_{C}(\Gamma(\varphi_{2}),\Gamma(\varphi_{4})).\]
Thus, the conditions of Lemma 4 are satisfied, implying our statement.
### The proof of Theorems 1 and 2 for \(\hat{P}_{n}^{C}(K)\)
By Lemmas 5 and 6, it is sufficient to prove that for any \(\varphi_{1}<\varphi_{2}<\varphi_{1}+\pi\), the function \(\partial_{\varphi_{1}}\partial_{\varphi_{2}}P\) is a continuous nonpositive function. In the remaining part of the subsection we prove this property.
For brevity, for any \(\alpha<\beta<\alpha+2\pi\), we define \(z(\alpha,\beta)\) as the intersection point of \(\operatorname{bd}(C(\alpha))\) and \(\operatorname{bd}(C(\beta))\) contained in the boundary of \(R(\alpha,\beta)\). First, observe that \(P(\varphi_{1},\varphi_{2})=\rho_{C}(\Gamma(\varphi_{1}),z(\varphi_{1},\varphi _{2}))+\rho_{C}(z(\varphi_{1},\varphi_{2}),\Gamma(\varphi_{2}))\). Clearly, since \(C\) has \(C^{\infty}\)-class boundary, \(\rho_{C}(\cdot,\cdot)\) is a \(C^{\infty}\)-class function, implying that \(P(\varphi_{1},\varphi_{2})\) is \(C^{\infty}\)-class, and \(\partial_{\varphi_{1}}\partial_{\varphi_{2}}P\) is continuous.
Now, let \(0<|\Delta_{1}|,|\Delta_{2}|\leq\varepsilon\) for some sufficiently small \(\varepsilon>0\), and set \(p=z(\varphi_{1},\varphi_{2})\), \(q_{1}=z(\varphi_{1},\varphi_{2}+\Delta_{2})\), \(q_{2}=z(\varphi_{1}+\Delta_{1},\varphi_{2})\) and \(q=z(\varphi_{1}+\Delta_{1},\varphi_{2}+\Delta_{2})\).
Figure 1. An illustration for the notation in the proof of Theorems 1 and 2, showing the support disks \(C(\varphi_{1})\), \(C(\varphi_{2})\), and the \(C\)-spindle \([\Gamma(\varphi_{1}),\Gamma(\varphi_{2})]_{C}\). The parts of the boundaries of these objects belonging to \(r(\varphi_{1},\varphi_{2})\) and \(R(\varphi_{1},\varphi_{2})\) are denoted by dashed lines.
To prove the assertion, it is sufficient to prove that
\[0\geq\frac{1}{\Delta_{1}}\left(\frac{P(\varphi_{1}+\Delta_{1}, \varphi_{2}+\Delta_{2})-P(\varphi_{1}+\Delta_{1},\varphi_{2})}{\Delta_{2}}-\frac {P(\varphi_{1},\varphi_{2}+\Delta_{2})-P(\varphi_{1},\varphi_{2})}{\Delta_{2}} \right)=\] \[=\frac{1}{\Delta_{1}\Delta_{2}}\left(P(\varphi_{1}+\Delta_{1}, \varphi_{2}+\Delta_{2})-P(\varphi_{1}+\Delta_{1},\varphi_{2})-P(\varphi_{1}, \varphi_{2}+\Delta_{2})+P(\varphi_{1},\varphi_{2})\right).\]
We do it in the case that \(\Delta_{1}<0\) and \(\Delta_{2}>0\), in the other cases a straightforward modification yields the assertion. Note that in this case it is sufficient to show that
\[\rho_{C}(p,q_{1})+\rho_{C}(p,q_{2})\leq\rho_{C}(q,q_{1})+\rho_{C}(q,q_{2}).\]
For \(i=1,2\), let \(v_{i}\) denote the tangent vector of \(C(\varphi_{i})\) at \(p\) pointing 'towards' \(q_{i}\) in its boundary, and let \(w_{i}\) denote the tangent vector of \(\operatorname{bd}K\) at \(\Gamma(\varphi_{i})\) pointing towards \(p\) in \(\operatorname{bd}(C(\varphi_{i}))\).
**Lemma 7**.: _Let \(C(\varphi)=x(\varphi)+C\). Then \(\lim_{\Delta\to 0\to 0}\frac{x(\varphi+\Delta)-x(\varphi)}{x(\varphi+\Delta)-x( \varphi)}=\pm v\) for any value of \(\varphi\), where \(v\) is the unit tangent vector of \(\operatorname{bd}(K)\) at \(\Gamma(\varphi)\) pointing in the positive direction._
Proof.: Let \(\Theta(\varphi)\) denote the point of \(\operatorname{bd}(C)\) with outer unit normal vector \(\left(\cos\varphi,\sin\varphi\right)\). Then \(x(\varphi)=\Gamma(\varphi)-\Theta(\varphi)\) and more generally,
\[x(\varphi+\Delta)-x(\varphi)=\left(\Gamma(\varphi+\Delta)-\Gamma(\varphi) \right)-\left(\Theta(\varphi+\Delta)-\Theta(\varphi)\right).\]
Note that \(\lim_{\Delta\to 0\to 0}\frac{\Gamma(\varphi+\Delta)-\Gamma(\varphi)}{|\Gamma( \varphi+\Delta)-\Gamma(\varphi)|}=\lim_{\Delta\to 0}\frac{\Theta(\varphi+\Delta)- \Theta(\varphi)}{|\Theta(\varphi+\Delta)-\Theta(\varphi)|}=\pm v\), and, by the choice of the parametrization of \(\Gamma\) and \(\Theta\), \(\lim_{\Delta\to 0}\frac{|\Theta(\varphi+\Delta)-\Theta(\varphi)|}{|\Gamma( \varphi+\Delta)-\Gamma(\varphi)|}=\frac{\kappa_{\Gamma}(\varphi)}{\kappa_{ \Theta}(\varphi)}\), where \(\kappa_{\Gamma}(\varphi)\) and \(\kappa_{\Theta}(\varphi)\) denote, the curvature of \(\Gamma\) and \(\Theta\) at \(\Gamma(\varphi)\) and \(\Theta(\varphi)\),respectively. Thus, the assertion follows from our assumption that \(\kappa_{\Theta}(\varphi)\neq\kappa_{\Gamma}(\varphi)\).
By Remark 1, \(C(\varphi_{1})\cap C(\varphi_{2})\) is the \(C\)-spindle of \(p\) and another point, which we denote by \(p^{\prime}\). By convexity, the tangent vectors of \(\operatorname{bd}(C(\varphi_{1}))\) pointing in counterclockwise direction, turn in counterclockwise direction from \(p\) to \(p^{\prime}\). Thus, the directions of the vectors \(v_{2},w_{1},v_{1}\) are in this order in counterclockwise orientation, and the same holds for the vectors \(v_{2},w_{2},v_{1}\).
Figure 2. Notation for the proof of Theorems 1 and 2 for \(\hat{P}^{C}_{n}(K)\).
For \(i=1,2\), let \(C(\varphi_{i}+\Delta_{i})=y_{i}+C(\varphi_{i})\). Then, by Lemma 7, if \(\Delta_{i}\) is sufficiently small, we have that the vectors \(y_{1},y_{2}\) are between \(v_{1}\) and \(v_{2}\) according to counterclockwise orientation.
Consider the translate \(C_{i}^{\prime}\) of \(C(\varphi_{i})\) by \(q_{i}-p\). The boundary of this translate contains \(q_{i}\), and \(v_{i}\) is a tangent vector of \(C_{i}^{\prime}\) at \(q_{i}\). Thus, if \(q^{\prime}=q_{1}+q_{2}-p\) (i.e. \(q^{\prime}\) is the unique point for which \(p,q_{1},q^{\prime},q_{2}\) are the vertices of a parallelogram in this counterclockwise order), then \(q^{\prime}\) lies in the boundary of both \(C_{1}^{\prime}\) and \(C_{2}^{\prime}\). On the other hand, by our observation about the tangent lines, if \(\Delta_{i}\) are sufficiently small, then \(q^{\prime}\) is contain in \(Q\). By symmetry, \(\rho_{C}(p,q_{1})=\rho_{C}(q^{\prime},q_{1})\) and \(\rho_{C}(p,q_{2})=\rho_{C}(q^{\prime},q_{2})\), and thus, the required inequality follows from the remark after Definition 1.
## 4. Proof of Theorem 3
We prove the statement in several steps. For brevity, for any points \(z_{1},z_{2},\ldots,z_{k}\in\mathbb{R}^{2}\), we set \([z_{1},z_{2},\ldots,z_{k}]=\operatorname{conv}\{z_{1},z_{2},\ldots,z_{k}\}\) and \([z_{1},z_{2},\ldots,z_{k}]_{C}=\operatorname{conv}_{C}\{z_{1},z_{2},\ldots,z_{ k}\}\).
**Step 1**.
Let us fix a Cartesian coordinate system, and consider the points \(p_{1}=(0,-1-t)\), \(p_{2}=(2.1,-0.9-t)\), \(p_{3}=(t+2,-1)\), \(p_{4}=(t+2,1)\), \(p_{5}=(2.1,0.9+t)\), \(p_{6}=(0,1+t)\), \(q_{1}=(t,-1)\), \(q_{2}=(t,1)\), \(q_{3}=(-t,1)\) and \(q_{4}=(-t,-1)\) (see Figure 3). In the construction we assume that \(t\) is a sufficiently large positive value. We define the hexagon \(H=[p_{1},q_{1},q_{2},p_{6},q_{3},q_{4}]\) and the octagon \(K_{1}=[p_{1},p_{2},\ldots,p_{6},q_{3},q_{4}]\). Note that \(H\subset K_{1}\), and set \(G=\operatorname{bd}(K_{1})\setminus\operatorname{bd}(H)\), and \(G^{\prime}=\operatorname{bd}(K_{1})\cap\operatorname{bd}(H)\). In the following, \(D_{1}\) denotes the Euclidean diameter of \(K_{1}\).
We define \(C_{1}\) as an \(o\)-symmetric convex \(14\)-gon with vertices \(x_{1},x_{2},\ldots,x_{14}\) in counterclockwise order such that
1. \(x_{1}\) and \(x_{8}\) are on the negative and the positive half of the \(y\)-axis, respectively;
2. \(C_{1}\) is symmetric to both coordinate axes;
3. the sides \([x_{1},x_{2}]\), \([x_{2},x_{3}]\), \([x_{3},x_{4}]\), \([x_{4},x_{5}]\) are parallel to \([p_{1},p_{2}]\), \([p_{1},p_{3}]\), \([p_{2},p_{3}]\) and \([p_{3},p_{4}]\), respectively;
4. we have \(\|x_{2}-x_{1}\|,\|x_{3}-x_{2}\|,\|x_{4}-x_{3}\|>D_{1}\), and \(\|x_{5}-x_{4}\|=2\), i.e. \([x_{4},x_{5}]\) is a translate of \([p_{3},p_{4}]\).
Note that by our conditions, for any two point \(u,v\in G\), each of the two \(C_{1}\)-arcs in the boundary of \([u,v]_{C_{1}}\) consists of translates of subsets of at most two consecutive sides of
Figure 3. The hexagon \(H\) and the octagon \(K_{1}\). In the illustration, \(t=10\).
\(C_{1}\), or they contain translates of \([x_{4},x_{5}]\) and possibly translates of subsets of the sides \([x_{3},x_{4}]\) and \([x_{5},x_{6}]\). In particular, \([p_{1},p_{6}]_{C_{1}}=H\).
We estimate \(\operatorname{area}([p_{1},q,p_{6}]_{C_{1}})\) for any \(q\in G\) with nonnegative \(y\)-coordinate. In the following \(\bar{p}=(0,t+2)\) denotes the midpoint of \([p_{3},p_{4}]\).
_Case 1_: \(q\in[\bar{p},p_{4}]\). Then \(\operatorname{bd}([p_{1},q,p_{6}]_{C_{1}})\) consists of \(G^{\prime}\), parts of the segments \([p_{1},p_{3}]\) and \([p_{4},p_{6}]\), and two segments with \(q\) as an endpoint, parallel to \([p_{2},p_{3}]\) and \([p_{4},p_{5}]\), respectively. Thus, \(\operatorname{area}([p_{1},q,p_{6}]_{C_{1}})\) is maximal if \(q=\bar{p}\), implying that
\[\operatorname{area}([p_{1},q,p_{6}]_{C_{1}})\leq\operatorname{area}([p_{1}, \bar{p},p_{6}]_{C_{1}})=\operatorname{area}(H)+\frac{3}{2}t+3\]
_Case 2_: \(q\in[p_{4},p_{5}]\). Assume that the \(x\)-coordinate of \(q\) is at least \(t+1\). Then the curve \(\operatorname{bd}([p_{1},q,p_{6}]_{C_{1}})\) consists of \(G^{\prime}\), a segment containing \([p_{1},q_{1}]\), a segment parallel to \([p_{3},p_{4}]\) and ending at \(q\), and segment parallel to \([p_{4},p_{6}]\) and ending at \(q\), and a subset of \([p_{5},p_{6}]\). Observe that if \(t\) is sufficiently large, in this case \(\operatorname{area}([p_{1},q,p_{6}]_{C_{1}})\) is maximal if the \(x\)-coordinate of \(q\) is equal to \(t+1\). A similar consideration shows that if the \(x\)-coordinate of \(q\) is at most \(t+1\), then \(\operatorname{area}([p_{1},q,p_{6}]_{C_{1}})\) is maximal if \(q=p_{5}\). Thus, in Case 2 we have
\[\operatorname{area}([p_{1},q,p_{6}]_{C_{1}})\leq\operatorname{area}([p_{1},p_ {5},p_{6}]_{C_{1}})=\]
\[=\operatorname{area}(H)+\operatorname{area}([q_{2},p_{4},p_{5},p_{6})=\frac{1 }{2}\left(\operatorname{area}(H)+\operatorname{area}(K_{1})\right)-2\]
_Case 3_: \(q\in[p_{5},p_{6}]\). Then \(\operatorname{bd}([p_{1},q,p_{6}]_{C_{1}})\) consists of \(G^{\prime}\), a segment parallel to \([q_{2},p_{6}]\) and ending at \(q\), a segment containing \([p_{1},q_{1}]\) as a subset, and a translate of \([p_{3},p_{4}]\). Thus, in this case \(\operatorname{area}([p_{1},q,p_{6}]_{C_{1}})\) is maximal if \(q=p_{5}\), and we have
\[\operatorname{area}([p_{1},q,p_{6}]_{C_{1}})\leq\operatorname{area}([p_{1},p_ {5},p_{6}]_{C_{1}})=\frac{1}{2}\left(\operatorname{area}(H)+\operatorname{area }(K_{1})\right)-2.\]
Combining our results, if \(t\) is sufficiently large, for any \(q,q^{\prime}\in G\)
\[\operatorname{area}([p_{1},q,p_{6}]_{C_{1}})+\operatorname{area} ([p_{1},q^{\prime},p_{6}]_{C_{1}})\leq\operatorname{area}(H)+\operatorname{ area}(K_{1})-4<\\ <\operatorname{area}(H)+\operatorname{area}(K_{1})=\operatorname {area}([p_{1},p_{6}]_{C_{1}})+\operatorname{area}([p_{1},p_{2},p_{5},p_{6}] _{C_{1}}), \tag{2}\]
where we used the observation that \([p_{1},p_{2},p_{5},p_{6}]_{C_{1}}=K_{1}\).
In the remaining part of the construction, we fix \(t\) in such a way that (2) is satisfied.
**Step 2**.
In the next step, based on Step 1, we construct some \(C_{2}\in\mathcal{K}_{o}\) and a \(C_{2}\)-convex disk \(K_{2}\) such that
\[\hat{a}_{3}^{C_{2}}(K_{2})+\hat{a}_{5}^{C_{2}}(K_{2})>2\hat{a}_{4}^{C_{2}}(K_{2}). \tag{3}\]
Let \(p_{7}=(-s,0)\), where \(s\) is sufficiently large, and set \(K_{2}=\operatorname{conv}(K_{1}\cup\{p_{7}\})\) (see Figure 4). Let \(D_{2}\) denote the Euclidean diameter of \(K_{2}\), and let \(C_{1}^{+}\) (resp. \(C_{1}^{-}\)) denotes the set of the points of \(\operatorname{bd}(C_{1})\) with nonnegative (resp. nonpositive) \(x\)-coordinates. We define \(C_{2}\) as follows:
1. \(C_{2}\) is symmetric to both coordinate axes.
2. \(\operatorname{bd}(C_{2})\) contains some translates \(u+C_{1}^{+}\) and \(-u+C_{1}^{-}\), where \(u\) points in the direction of the positive half of the \(x\)-axis. We set \(w_{3}=u+x_{1}\).
3. In addition to the above two translates, \(\operatorname{bd}(C_{2})\) consists of segments \([w_{1},w_{2}]\), \([w_{2},w_{3}]\) and their reflections about one or both of the coordinate axes, such that \([w_{1},w_{2}]\), \([w_{2},w_{3}]\) are parallel to \([p_{6},p_{7}]\) and \([p_{5},p_{7}]\), respectively, and \(|w_{1}-w_{2}|,|w_{2}-w_{3}|>D_{2}\).
We remark that if \(s\) is sufficiently large, then there is some \(C_{2}\in\mathcal{K}_{o}\) satisfying the above conditions, and \(K_{2}\) is \(C_{2}\)-convex.
In the following, let \(Q_{4}=[z_{1},z_{2},z_{3},z_{4}]_{C_{2}}\) denote a maximal area \(C\)-\(4\)-gon inscribed in \(K_{2}\). Let \(H^{\prime}=\operatorname{conv}(H\cup\{p_{7}\})=[p_{1},p_{6},p_{6}]_{C_{2}}\) and observe that \(K_{2}=[p_{1},p_{2},p_{5},p_{6},p_{7}]_{C_{2}}\). Then, to show the inequality in (3), it is sufficient to show that \(\operatorname{area}(H^{\prime})+\operatorname{area}(K_{2})>2\operatorname{ area}(Q_{4})\). Let \(Q=[p_{1},p_{5},p_{6},p_{7}]_{C_{2}}\). By the consideration in Step 1, we have that \(\operatorname{area}(Q)=\frac{1}{2}(\operatorname{area}(H^{\prime})+ \operatorname{area}(K_{2}))-2\). Thus, we have \(\operatorname{area}(Q_{4})\geq\frac{1}{2}(\operatorname{area}(H^{\prime})+ \operatorname{area}(K_{2}))-2\).
Let us define the points \(v_{1}\) and \(v_{6}\) as the images of \(p_{1}\) and \(p_{6}\), respectively, under the homothety with center \(p_{7}\) and homothety ratio \(\frac{1}{\sqrt{s}}\). An elementary computation shows that then \(v_{1}=\left(-\left(1-\frac{1}{\sqrt{s}}\right)s,-\frac{1+t}{\sqrt{s}}\right) \in[p_{1},p_{7}]\) and \(v_{6}=\left(-\left(1-\frac{1}{\sqrt{s}}\right)s,\frac{1+t}{\sqrt{s}}\right) \in[p_{6},p_{7}]\). Note that since \(|v_{2}-v_{1}|=\frac{2(1+t)}{\sqrt{s}}<2\) if \(s\) is sufficiently large, and \(\operatorname{bd}(C_{2})\) contains two vertical segments of length \(2\), we may assume that \([v_{1},v_{6}]_{C_{2}}=[v_{1},v_{6}]\). In other words, we may assume that there is a translate of \(C\) that contains \(K_{2}\setminus[v_{1},p_{7},v_{6}]\) and does not overlap \([v_{1},p_{7},v_{6}]\). Thus, if \(z_{i}\notin[v_{1},p_{7},v_{6}]\) for any \(1\leq i\leq 4\), then \(Q_{4}\subseteq K_{2}\setminus[v_{1},p_{7},v_{6}]\), implying that in this case \(\operatorname{area}(Q_{4})\leq\operatorname{area}(K_{2})-\operatorname{ area}([v_{1},p_{7},v_{6}])=\operatorname{area}(K_{2})-2\sqrt{s}(1+t)<\frac{1}{2}( \operatorname{area}(H^{\prime})+\operatorname{area}(K_{2}))-2\); a contradiction. Consequently, in the following we may assume that \(z_{4}\in[v_{1},p_{7},v_{6}]\).
Let \(v_{5}^{\prime}\) and \(v_{7}^{\prime}\) be the images of \(p_{5}\) and \(p_{7}\), respectively, under the homothety with center \(p_{6}\) and ratio \(\frac{1}{\sqrt{s}}\). Note that since there is a side of \(C\) parallel to \([v_{5}^{\prime},v_{7}^{\prime}]\), we have \([v_{5}^{\prime},v_{7}^{\prime}]_{C_{2}}=[v_{5}^{\prime},v_{7}^{\prime}]\), and, as in the previous paragraph, if \(z_{i}\notin[v_{1},p_{7},v_{6}]\) for any \(1\leq i\leq 4\), then \(\operatorname{area}(P_{4})\leq\operatorname{area}(K_{2})-\operatorname{ area}([v_{5}^{\prime},v_{7}^{\prime},p_{6}])\). On the other hand, we have \(|p_{6}-p_{7}|>s\) and that the length of the corresponding height of \([p_{5},p_{6},p_{7}]\) is greater than \(0.1\) by the definition of \(p_{5}\). Thus, \(\operatorname{area}([v_{5}^{\prime},v_{7}^{\prime},p_{6}])=\frac{\operatorname {area}([p_{5},p_{6},p_{7}])}{\sqrt{s}^{2}}>0.1\sqrt[3]{s}\), implying that since \(\operatorname{area}(Q_{4})\geq\operatorname{area}(Q)\), which otherwise by our inequalities does not hold if \(s\) is sufficiently large, we may assume that some \(z_{i}\), say \(z_{3}\), is an element of \([v_{1},p_{7},v_{6}]\).
We obtain similarly that if \(s\) is sufficiently large, some \(z_{i}\), say \(z_{1}\), is contained in the triangle \([v_{7}^{\prime\prime},p_{1},v_{2}^{\prime\prime}]\), where \(v_{7}^{\prime\prime}\) and \(v_{2}^{\prime\prime}\) are the images of \(p_{7}\) and \(p_{2}\), respectively, under the homothety with center \(p_{1}\) and ratio \(\frac{1}{\sqrt{s}}\). These observations, the consideration in Step
\(1\), and the inequality \(\operatorname{area}(Q_{4})\geq\operatorname{area}(Q)\) yield that as \(s\to\infty\), we have \(z_{1}\to p_{1}\), \(z_{3}\to p_{6}\) and \(z_{4}\in[v_{1},p_{7},v_{6}]\), and \(\min\{|z_{2}-p_{2}|,|z_{2}-p_{5}|\}\to 0\), implying that in this case \(\operatorname{area}(Q_{4})\to\operatorname{area}(Q)\). This shows that if \(s\) is sufficiently large, then \(\operatorname{area}(H^{\prime})+\operatorname{area}(K_{2})>2\operatorname{ area}(Q_{4})\).
Before proceeding to the final step, we make two important observations that we are going to use. Here, by \(C_{2}^{+}\) and \(C_{2}^{-}\), we denote the parts of \(\operatorname{bd}(C_{2})\) contained in the closed half planes \(\{x\geq 0\}\) and \(\{x\leq 0\}\), respectively.
1. A straightforward modification of the construction in Step 2 yields, for any \(n\geq 4\), the existence of some \(C_{n}\in\mathcal{K}_{0}\) and a \(C_{n}\)-convex disk \(K_{n}\) such that \(\hat{a}_{n-1}^{C_{n}}(K_{n})+\hat{a}_{n+1}^{C_{n}}(K_{n})>2\hat{a}_{n}^{C_{n}} (K_{n})\).
2. To guarantee the required inequalities in Steps 1 and 2, we used the properties of the arcs of \(C_{2}\) entirely contained in \(C_{2}^{+}\) or \(C_{2}^{-}\). Thus, if \(C_{2}^{\prime}\) is an \(o\)-symmetric plane convex body containing \(C_{2}^{+}\) and \(C_{2}^{-}\) in its boundary, then we have \(\hat{a}_{3}^{C_{2}^{\prime}}(K_{2})+\hat{a}_{5}^{C_{2}^{\prime}}(K_{2})>2\hat{ a}_{4}^{C_{2}^{\prime}}(K_{2})\).
We combine these two observations in the following remark.
**Remark 5**.: _For any \(n\geq 4\), there is some \(C_{n}\in\mathcal{K}_{o}\) and a \(C_{n}\)-convex disk \(K_{n}\) such that if any \(C_{n}^{\prime}\in\mathcal{K}_{o}\) contains \(C_{n}^{+}\) and \(C_{n}^{-}\) in its boundary, where by \(C_{n}^{+}\) and \(C_{n}^{-}\), we denote the parts of \(\operatorname{bd}(C_{n})\) contained in the closed half planes \(\{x\geq 0\}\) and \(\{x\leq 0\}\), respectively, then \(K_{n}\) is \(C_{n}^{\prime}\)-convex, and_
\[\hat{a}_{n-1}^{C_{n}^{\prime}}(K_{n})+\hat{a}_{n+1}^{C_{n}^{\prime}}(K_{n})>2 \hat{a}_{n}^{C_{n}^{\prime}}(K_{n}).\]
**Step 3**.
Now we prove Theorem 3. Let \(n\geq 4\). Recall that \(\mathcal{K}_{a}^{n}\) denotes the elements \(C\) of \(\mathcal{K}_{o}\) such that for any \(C\)-convex disk \(K\), we have \(\hat{a}_{n-1}^{C}(K)+\hat{a}_{n+1}^{C}(K)\leq 2\hat{a}_{n}^{C}(K)\), and set \(\overline{\mathcal{K}}_{a}^{n}=\mathcal{K}_{o}\smallsetminus\mathcal{K}_{a}^{n}\). Observe that by Lemma 1, \(\overline{\mathcal{K}}_{a}^{n}\) is open. We show that it is everywhere dense in \(\mathcal{K}_{o}\).
Let \(C\) be an arbitrary element of \(\mathcal{K}_{o}\) and let \(\varepsilon>0\). Note that for any nondegenerate linear transformation \(h:\mathbb{R}^{2}\to\mathbb{R}^{2}\), \(K\) is \(C\)-convex if and only if \(h(K)\) is \(h(C)\)-convex, and for any \(n\geq 4\), if \(K\) is \(C\)-convex, then \(\hat{a}_{n}^{C}(K)=\hat{a}_{n}^{h(C)}(h(K))\). Thus, without loss of generality, we may assume that there are vertical supporting lines of \(C\) meeting \(\operatorname{bd}(C)\) at some points \(\pm p\) of the \(x\)-axis. We choose our notation such that \(p\) is on the positive half of the axis.
Consider the convex disk \(C_{n}\in\mathcal{K}_{0}\) in Remark 5. Let us define the nondegenerate linear transformation \(h_{\lambda,\mu}:\mathbb{R}^{2}\to\mathbb{R}^{2}\) by \(h_{\lambda,\mu}(x,y)=(\lambda x,\mu y)\). Then, if we choose suitable sufficiently small values \(\mu,\lambda>0\), then there is a translate \(C^{+}\) of \(h_{\lambda,\mu}(C_{n}^{+})\), and an \(o\)-symmetric convex disk \(C^{\prime}\) containing \(C^{+}\) in its boundary such that \(C^{+}\subset(C+\varepsilon B^{2})\setminus C\), and \(C\subset C^{\prime}\). Then \(C^{\prime}\cap\bigl{(}C+\varepsilon B^{2}\bigr{)}\in\mathcal{K}_{o}\) contains translates of \(h_{\lambda,\mu}(C_{n}^{+})\) and \(h_{\lambda,\mu}(C_{n}^{-})\) in its boundary, the Hausdorff distance of \(C\) and \(C^{\prime}\) is at most \(\varepsilon\), and, if we set \(K^{\prime}=h_{\lambda,\mu}(K_{n})\), by Remark 5 we have
\[\hat{a}_{n-1}^{C^{\prime}}(K^{\prime})+\hat{a}_{n+1}^{C^{\prime}}(K^{\prime})>2 \hat{a}_{n}^{C^{\prime}}(K^{\prime}).\]
Thus, \(\overline{\mathcal{K}}_{a}^{n}\) is everywhere dense, which immediately yields that \(\bigcap_{n=4}^{\infty}\overline{\mathcal{K}}_{a}^{n}\) is residual, implying Theorem 3.
## 5. Remarks and questions
**Remark 6**.: _For \(C\in\mathcal{K}_{o}\), \(K\in\mathcal{K}\) and positive integer \(n\geq 3\), let_
\[\bar{P}_{n}^{C}(K)=\inf\{\operatorname{perim}_{C}(Q):Q\text{ is a convex $n-gon$ circumscribed about $K$}\}; \tag{4}\]
\[\bar{p}_{n}^{C}(K)=\sup\{\operatorname{perim}_{C}(Q):Q\text{ is a convex $n-gon$ inscribed in $K$}\}. \tag{5}\]
_As we have observed in the introduction, it is known [19] that for any \(C\in\mathcal{K}_{o}\) and \(K\in\mathcal{K}\), the sequences \(\{\bar{P}_{n}^{C}(K)\}\) and \(\{\bar{p}_{n}^{C}(K)\}\) are convex and concave, respectively. Our approach yields a new proof of these statements by applying Theorem 1 for \(\lambda C\), where \(\lambda\to\infty\)._
Applying Theorem 2 for \(\lambda C\) with \(\lambda\to\infty\), we obtain the following.
**Remark 7**.: _Let \(C\in\mathcal{K}_{o}\), \(K\in\mathcal{K}\) and \(n\geq 3\). If, for some positive integer \(k\), Let \(C\in\mathcal{K}_{o}\), \(K\in\mathcal{K}\), \(n\geq 3\) and \(k\geq 2\). Assume that \(k\) is a divisor of \(n\) and both \(K\) and \(C\) have \(k\)-fold rotational symmetry. Then there is a convex \(n\)-gon \(Q^{p}\) circumscribed about \(K\) with \(\operatorname{perim}_{C}(Q^{p})=\bar{P}_{n}^{C}(K)\) such that \(Q^{p}\) has \(k\)-fold rotational symmetry. Similarly, there is a convex \(n\)-gon polygon \(Q^{p}\) inscribed in \(K\) which has \(k\)-fold rotational symmetry, and \(\operatorname{perim}_{C}(Q^{p})=\bar{p}_{n}^{C}(K)\)._
In the remaining part of the paper, we denote the set \((1,\infty)\cup\{\infty\}\) by \([1,\infty]\).
Let \(p,q\in[1,\infty]\) satisfy the equation \(\frac{1}{p}+\frac{1}{q}=1\). For any \(K,L\in\mathcal{K}\), G. Fejes Toth [8] introduced the _weighted area deviation_ of \(K,L\) with weights \(p,q\) as the quantity \(\operatorname{area}^{p,q}(K,L)=p\operatorname{area}(K\setminus L)+q \operatorname{area}(L\setminus K)\). He proved that if for any \(K\in\mathcal{K}\), \(\bar{a}_{K}^{C}(n,p,q)\) denotes the minimal weighted area deviation of \(K\) and an arbitrary convex \(n\)-gon, then the sequence \(\{\bar{a}_{K}^{C}(n,p,q)\}\) is convex. Based on this idea, we introduce the following quantity.
Let \(p,q\in[1,\infty]\) satisfy the equation \(\frac{1}{p}+\frac{1}{q}=1\), and let \(C\in\mathcal{K}_{0}\), \(K\in\mathcal{K}_{0}\). We call the quantity
\[\operatorname{perim}_{C}^{p,q}(K,L) =p\left(\operatorname{arclength}_{C}(\operatorname{bd}(K) \setminus\operatorname{int}(L))-\operatorname{arclength}_{C}(\operatorname{ bd}(L)\cap K)\right)+\] \[\quad+q\left(\operatorname{arclength}_{C}(\operatorname{bd}(L) \setminus\operatorname{int}(K))-\operatorname{arclength}_{C}(\operatorname{ bd}(K)\cap L)\right)\]
the _weighted \(C\)-perimeter deviation_ of \(K,L\) with weights \(p,q\). Here we note that by convexity, \(\operatorname{arclength}_{C}(\operatorname{bd}(K)\setminus\operatorname{ int}(L))\geq\operatorname{arclength}_{C}(\operatorname{bd}(L)\cap K)\) and \(\operatorname{arclength}_{C}(\operatorname{bd}(L)\setminus\operatorname{ int}(K))\geq\operatorname{arclength}_{C}(\operatorname{bd}(K)\cap L)\), with equality if and only if \(K\subseteq L\) and \(L\subseteq K\), respectively. Let \(\bar{p}_{K}^{C}(n,p,q)\) denote the minimal \(C\)-perimeter deviation of \(K\) and an arbitrary convex \(n\)-gon. We remark that if \(K\) is \(C\)-convex, by replacing the convex \(n\)-gons in the definitions of \(\bar{a}_{K}^{C}(n,p,q)\) and \(\bar{p}_{K}^{C}(n,p,q)\) with \(C\)-\(n\)-gons, we may analogously define the quantities \(\hat{a}_{K}^{C}(n,p,q)\) and \(\hat{p}_{K}^{C}(n,p,q)\), respectively. This leads to the following problems.
**Problem 1**.: _Prove or disprove that for any \(p,q\in[1,\infty]\) with \(\frac{1}{p}+\frac{1}{q}=1\), \(C\in\mathcal{K}_{o}\) and \(K\in\mathcal{K}\), the sequence \(\{\bar{p}_{K}^{C}(n,p,q)\}\) is convex._
**Problem 2**.: _Prove or disprove that for any \(p,q\in[1,\infty]\) with \(\frac{1}{p}+\frac{1}{q}=1\), \(C\in\mathcal{K}_{o}\) and \(C\)-convex disk \(K\in\mathcal{K}\), the sequence \(\{\hat{p}_{K}^{C}(n,p,q)\}\) is convex. Does the same hold for \(\{\hat{a}_{K}^{C}(n,p,q)\}\) if \(C\) is the Euclidean unit disk?_
Before our last problem, we remark that \(\hat{a}_{K}^{C}(n,1,\infty)=\operatorname{area}(K)-\hat{a}_{K}^{C}(n)\) and \(\hat{a}_{K}^{C}(n,\infty,1)=\hat{A}_{K}^{C}(n)-\operatorname{area}(K)\).
**Problem 3**.: _Is there a value \(p_{0}\in(1,\infty)\) such that for any \(p\) with \(p_{0}<p\leq\infty\) and \(q\) satisfying \(\frac{1}{p}+\frac{1}{q}=1\), for any \(C\in\mathcal{K}_{o}\) and \(C\)-convex disk \(K\in\mathcal{K}\), the sequence \(\{\hat{a}_{K}^{C}(n,p,q)\}\) is convex?_
|
2301.07879 | Unposed: Unsupervised Pose Estimation based Product Image
Recommendations | Product images are the most impressing medium of customer interaction on the
product detail pages of e-commerce websites. Millions of products are onboarded
on to webstore catalogues daily and maintaining a high quality bar for a
product's set of images is a problem at scale. Grouping products by categories,
clothing is a very high volume and high velocity category and thus deserves its
own attention. Given the scale it is challenging to monitor the completeness of
image set, which adequately details the product for the consumers, which in
turn often leads to a poor customer experience and thus customer drop off.
To supervise the quality and completeness of the images in the product pages
for these product types and suggest improvements, we propose a Human Pose
Detection based unsupervised method to scan the image set of a product for the
missing ones. The unsupervised approach suggests a fair approach to sellers
based on product and category irrespective of any biases. We first create a
reference image set of popular products with wholesome imageset. Then we create
clusters of images to label most desirable poses to form the classes for the
reference set from these ideal products set. Further, for all test products we
scan the images for all desired pose classes w.r.t. reference set poses,
determine the missing ones and sort them in the order of potential impact.
These missing poses can further be used by the sellers to add enriched product
listing image. We gathered data from popular online webstore and surveyed ~200
products manually, a large fraction of which had at least 1 repeated image or
missing variant, and sampled 3K products(~20K images) of which a significant
proportion had scope for adding many image variants as compared to high rated
products which had more than double image variants, indicating that our model
can potentially be used on a large scale. | Saurabh Sharma, Faizan Ahemad | 2023-01-19T05:02:55Z | http://arxiv.org/abs/2301.07879v1 | # Unposed: Unsupervised Pose Estimation based Product Image Recommendations
###### Abstract.
Product images are the most impressing medium of customer interaction on the product detail pages of e-commerce websites. Millions of products are onboarded on to webstore catalogues daily and maintaining a high quality bar for a product's set of images is a problem at scale. Grouping products by categories, clothing is a very high volume and high velocity category and thus deserves its own attention. Given the scale it is challenging to monitor the completeness of image set, which adequately details the product for the consumers, which in turn often leads to a poor customer experience and thus customer drop off.
To supervise the quality and completeness of the images in the product pages for these product types and suggest improvements, we propose a Human Pose Detection based unsupervised method to scan the image set of a product for the missing ones. The unsupervised approach suggests a fair approach to sellers based on product and category irrespective of any biases. We first create a reference image set of popular products with wholesome imageset. Then we create clusters of images to label most desirable poses to form the classes for the reference set from these ideal products set. Further, for all test products we scan the images for all desired pose classes w.r.t. reference set poses, determine the missing ones and sort them in the order of potential impact. These missing poses can further be used by the sellers to add enriched product listing image. We gathered data from popular online webstore and surveyed -200 products manually, a large fraction of which had at least 1 repeated image or missing variant, and sampled 3K products(-20K images) of which a significant proportion had scope for adding many image variants as compared to high rated products which had more than double image variants, indicating that our model can potentially be used on a large scale.
pose detection, interpretable, image tagging +
Footnote †: journal: Computer Vision and Pattern Recognition |
2303.11862 | An MDP approach for radio resource allocation in urban Future Railway
Mobile Communication System (FRMCS) scenarios | In the context of railway systems, the application performance can be very
critical and the radio conditions not advantageous. Hence, the communication
problem parameters include both a survival time stemming from the application
layer and a channel error probability stemming from the PHY layer. This paper
proposes to consider the framework of Markov Decision Process (MDP) to design a
strategy for scheduling radio resources based on both application and PHY layer
parameters. The MDP approach enables to obtain the optimal strategy via the
value iteration algorithm. The performance of this algorithm can thus serve as
a benchmark to assess lower complexity schedulers. We show numerical
evaluations where we compare the value iteration algorithm with other
schedulers, including one based on deep Q learning. | Vincent Corlay, Jean-Christophe Sibel | 2023-03-21T14:05:55Z | http://arxiv.org/abs/2303.11862v1 | An MDP approach for radio resource allocation in urban Future Railway Mobile Communication System (FRMCS) scenarios
###### Abstract
In the context of railway systems, the application performance can be very critical and the radio conditions not advantageous. Hence, the communication problem parameters include both a survival time stemming from the application layer and a channel error probability stemming from the PHY layer. This paper proposes to consider the framework of Markov Decision Process (MDP) to design a strategy for scheduling radio resources based on both application and PHY layer parameters. The MDP approach enables to obtain the optimal strategy via the value iteration algorithm. The performance of this algorithm can thus serve as a benchmark to assess lower complexity schedulers. We show numerical evaluations where we compare the value iteration algorithm with other schedulers, including one based on deep Q learning.
Scheduling, application-oriented systems, cross-layer, neural networks, Markov decision process.
## I Introduction
On the one hand, the automated train control is a crucial railway service use case and induces a change of communication paradigm compared to the current railway system: It might require at the same time a large throughput, a very high reliability, and a sufficient availability. On the other hand, the success of the 3GPP 5G NR standard for communication systems makes the underlying technology relevant for specific scenarios such as that of the railway systems. As a result, the Future Railway Mobile Communication Systems (FRMCS) propose mechanisms to take advantage of 5G-related aspects to offer specific railway services such as the automated train control [1].
Within this context, the scheduling of radio resources plays an important role to efficiently share the said resources between several users. The Round-Robin [2] and the Priority-Queue [3] are well-known schedulers whose computational complexity is very low but whose performance is not optimal with respect to application-level metrics. As a matter of fact, they do not take into account all the parameters impacting the application performance. For the current purpose, the objective is to consider both application-level parameters and lower layer parameters, such as the radio conditions, to adapt the scheduling strategy.
This approach is in line with the Release 19 of 3GPP [4] which specifies the service requirements for the 5G system. In this latter reference, it is explained that "the communication service is considered unavailable if it does not meet the pertinent Quality of Service (QoS) requirements. For example, the communication service is unavailable if a message is not correctly received within a specified time, which is the sum of maximum allowed end-to-end latency and survival time".
Recently, we introduced a new paradigm [5] for the design of a radio resource scheduler in a multi-agent setting. It takes into account in the meantime an application layer parameter, the survival time, and a PHY layer parameter, the channel error probability. To enhance the scheduler of [5], a low-complexity heuristic to approximately solve the scheduling optimization problem, we formalize in this paper the scheduling problem as a Markov Decision Process (MDP) [6, 7]. One advantage is that MDP provide an optimal solution for the optimization problem. Another advantage is that MDP have been widely experienced and give also access to many sub-optimal existing algorithms. More specifically, we shall consider the value iteration algorithm as well as the deep Q learning algorithm. The first algorithm optimally solves the scheduling problem but with a prohibitive complexity as the problem size grows. The second algorithms is sub-optimal but scales with the problem size, similarly to the heuristic proposed in [5]. Consequently, we compare these algorithms and discuss the performance complexity trade-off.
This paper is organized as follows. Section II presents the scheduling problem in the scope of FRMCS, Section III describes the system, Section IV introduces MDP within the scheduling framework, and Section V exhibits numerical results to challenge MDP with other schedulers.
## II FRMCS context
Within the various scenarios under the umbrella of the train control [1], we focus on the remote driving for train shutting yards. This implies a remote driver driving the train in order to bring it back to the train station. The data provided to the said driver is mainly made of images and videos to allow the driver to be aware, in real-time, of the train surrounding environment. In other words, the data to be transmitted involves a large quantity of payloads. Moreover, as this environment is shared with other trains, moving or not, remote driving raises a safety problem.
Combining a high throughput with a high reliability and a high availability is one of the 5G proposals, e.g., for the V2X scenarios [8] or the factory automation scenarios [9]. The application requirements provided by the 3GPP specifications embody the performance target for the access layer design through the QoS. The access layer is understood in this paper to comprise the PHY and MAC layers. For example, the QoS comprises a guaranteed bit-rate, a latency, etc. However, the metrics that are used as inputs and outputs for the mechanisms of the access layer are low-level metrics, e.g., the channel error probability, the frame error rate, the channel busy/occupancy ratio. Even though these metrics are helpful to underline the behaviour of a single layer, namely the access layer, they do not well reflect the expected synergy with the application layer.
From another perspective, some companies provide in [10] results of radio performance for NR railway systems considering a system-level framework. Within this framework, the scheduler plays an important role to efficiently share the radio resources between the various users. The said companies only consider the Round-Robin scheduler [2], a scheduler that assigns equal amount of resources to each user, regardless of the channel quality or performance requirements for the applications. Among other consequences, a user with a high channel quality achieves a much higher throughput than what it needs while, at the same time, a user with a low channel quality cannot achieve the required throughput. Furthermore, a proportional-fair strategy would bring only limited benefits as it does not consider the application parameters. Consequently, following the scope in [5], we orient the current paper on the scheduling problem taking into account application aspects as well as access layer aspects.
## III Description of the system
This section presents the scheduling problem by defining the application traffic model, the radio resources used by the scheduler and the application behaviour, similarly to [5, Sec. II.A]. The difference with [5] is the inclusion of the payload size, and the division of the payload in several packets.
### _Traffic model_
We consider a discrete-time system divided in time slots whose length is \(dt\), e.g., \(dt=1\) msec. Let \(N\) be the number of agents in the system. Two main traffic models for the application layer are commonly proposed in the literature [11]:
* _Full buffer traffic model_: The buffers of an agent have always an unlimited amount of data to transmit.
* _Finite buffer traffic model_: An agent is assigned a finite payload to transmit when it arrives. The agent leaves the system when the payload reception is completed.
In this study, we consider a _full-finite buffer traffic model_. An agent \(A_{k}\) is assigned a finite payload of size \(P_{k}\) to transmit. As soon as the payload has been fully received, the application buffer of \(A_{k}\) is immediately refilled with a payload of the same size. In this model, all the agents are always active as they have always a payload in their buffer.
### _Scheduling resource & Payload aspect_
We consider a single radio resource per time slot \(t\). All agents simultaneously compete for the resource at \(t\) and only one agent finally obtains it. The channel error probability \(p_{k}\) is the probability for the agent \(A_{k}\) that a transmission at any time \(t\) fails because of the channel. It is assumed to be constant over time.
We assume that only a packet of size \(\Gamma_{k}\) can be transmitted in one time slot for \(A_{k}\). The payload is thus necessarily split into \(C_{k}\) packets at the scheduler level, i.e., \(P_{k}=C_{k}\Gamma_{k}\). The payload is transmitted once the associated packets have all been successfully transmitted.
### _Application behaviour and performance metric_
The application is monitored according to three events for any agent. We assume that the agents use the same application, i.e., they have the same survival time \(\tau\):
* The event _survival time failure_ for an agent (E1): "No payload is successfully transmitted during the last \(\tau\) time slots" where \(\tau\) is a strictly positive integer. For an agent \(A_{k}\) at time \(t\), we accordingly introduce \(\tau_{k}(t)\) as the remaining time before (E1). This means that \(0\leq\tau_{k}(t)\leq\tau\) and \(\tau_{k}(t)\) is a decreasing function of \(t\).
* The micro-event _successful transmission of a packet at time \(t\)_ (e0).
* The macro-event _successful transmission of a payload at time \(t\)_ (E0): This event is the result of \(C_{k}\) events (e0) required to to transfer a whole message.
With both the events (E0) and (E1) for any agent \(A_{k}\) at time \(t\), \(\tau_{k}(t)\) is immediately set to its maximum value \(\tau\).
For a single agent \(A_{k}\), at any time \(t\), the quantity \(V_{k}(t)\) is the number of failures (E1) met by \(A_{k}\) until time \(t\). The performance metric is chosen as the failure rate \(F(t)\):
\[F(t)=\frac{\sum_{k=1}^{N}V_{k}(t)}{t}. \tag{1}\]
Given that several agents might fail at a single time slot, the value of the failure rate can be greater than one. This is to expect for very small values of the survival time \(\tau\).
## IV Framework of the Markov Decision Process
We consider the infinite-horizon MDP with discounted sum-reward criterion, as presented in [6, Chap. 6], to model the scheduling problem.
### _Model_
As for a standard MDP, we use four variables to model the scheduling problem:
* _State_\(\mathbf{S}_{t}\in\mathbb{N}^{N\times 2}\): The element \(\mathbf{S}_{t}(k,1)\) is the number \(\tau_{k}(t)\) of remaining time slots at time \(t\) before meeting a failure for agent \(A_{k}\). The element \(\mathbf{S}_{t}(k,2)\) is the number \(c_{k}(t)\) of remaining packets at time \(t\) before the application message is fully transmitted. As \(0\leq\tau_{k}(t)\leq\tau\) and \(1\leq c_{k}(t)\leq C_{k}\), the state \(\mathbf{S}_{t}\) belongs to a state space \(\mathcal{S}\) whose size is \(|\mathcal{S}|=(\tau+1)^{N}\prod_{k=1}^{N}C_{k}\).
* _Action_\(a_{t}\in\{0,1,...,N\}\): The index of the agent who gets the resource at time \(t\). The action \(a_{t}\) belongs to an action space \(\mathcal{A}\) whose size is the number of agent \(N\).
* _Short-term reward_\(r_{t}\in\{-N,...,-1,0\}\): Minus the number of events (E1) at time \(t\).
* _Transition probability_: If \(a_{t}=k\), the packet for \(A_{k}\) is well received with probability \(1-p_{k}\). Hence, \(a_{t}\) leads to only two states \(\mathbf{S}_{t+1}\) with the non-zero transition probability \(p(\mathbf{S}_{t+1}|\mathbf{S}_{t},a_{t})\).
The MDP introduces the _long-term reward_ or _gain_\(G_{t}\) at time \(t\) defined as:
\[G_{t}=\sum_{t^{\prime}=t}^{\infty}\lambda^{t^{\prime}-t}r_{t^{\prime}}, \tag{2}\]
where \(\lambda<1\) is the _discount factor_. If \(\lambda\) is close to 1, this quantity is almost the same as the (unnormalized) failure rate (1). The goal for the scheduling problem is thus to find a _policy_\(\pi\), being the time sequence of allocation decisions, that maximizes the expected gain \(\mathbb{E}[G_{t}]\).
The MDP then defines \(v^{\pi}(\mathbf{S})\) the _value of the state_\(\mathbf{S}\) under the policy \(\pi\) as the expected gain given that \(\mathbf{S}_{t}=\mathbf{S}\):
\[v^{\pi}(\mathbf{S})=\mathbb{E}[G_{t}|\mathbf{S}_{t}=\mathbf{S}]. \tag{3}\]
Then, a policy \(\pi^{*}\) is optimal if:
\[v^{\pi^{*}}(\mathbf{S})\geq v^{\pi}(\mathbf{S}),\ \forall\mathbf{S}\in \mathcal{S}\ \text{and}\ \forall\pi\neq\pi^{*}. \tag{4}\]
Under the optimal policy, the value of the state \(\mathbf{S}\) can be expressed via the Bellman's equation as:
\[v^{\pi^{*}}(\mathbf{S})=\max_{a\in\mathcal{A}}Q(\mathbf{S},a), \tag{5}\]
where \(Q(\mathbf{S},a)\) is the _state-action value_ computed as:
\[Q(\mathbf{S},a)=\sum_{\mathbf{S}^{\prime}\in\mathcal{S}}p(\mathbf{S}^{\prime }|\mathbf{S},a)\Big{(}r(\mathbf{S}^{\prime},\mathbf{S})+\lambda v^{\pi^{*}}( \mathbf{S}^{\prime})\Big{)}, \tag{6}\]
where \(r(\mathbf{S}^{\prime},\mathbf{S})\) is the reward obtained when going from \(\mathbf{S}\) to \(\mathbf{S}^{\prime}\). Given an optimal policy \(\pi^{*}\), the optimal action to take when in a state \(\mathbf{S}\) is obtained as:
\[a^{*}=\operatorname*{arg\,max}_{a\in\mathcal{A}}Q(\mathbf{S},a). \tag{7}\]
### _Value iteration_
One standard approach to obtain (3) under \(\pi^{*}\) is to operate the _Value Iteration_ algorithm (VI) that consists in computing the state values in an iterative manner. Let us define \(v^{(i)}(\mathbf{S})\) as the approximate of \(v^{\pi^{*}}(\mathbf{S})\) at iteration \(i\). We accordingly define the approximate of \(Q(\mathbf{S},a)\) at iteration \(i\) as:
\[Q^{(i)}(\mathbf{S},a)=\sum_{\mathbf{S}^{\prime}\in\mathcal{S}}p(\mathbf{S}^{ \prime}|\mathbf{S},a)\Big{(}r(\mathbf{S}^{\prime},\mathbf{S})+\lambda v^{(i)} (\mathbf{S}^{\prime})\Big{)}, \tag{8}\]
which leads to:
\[v^{(i+1)}(\mathbf{S})=\max_{a\in\mathcal{A}}\ Q^{(i)}(\mathbf{S},a), \tag{9}\]
where \(v^{(i)}(\mathbf{S})\) converges to \(v^{\pi^{*}}(\mathbf{S})\) for all \(\mathbf{S}\)[6]. The optimal action is then chosen as \(a^{(I)}(\mathbf{S})=\operatorname*{arg\,max}_{a\in\mathcal{A}}\ Q^{(I)}( \mathbf{S},a)\), where \(I\) denotes the index of the last iteration.
As already mentioned, the VI finds the optimal policy with respect to \(\mathbb{E}[G_{t}]\). Hence, modelling the scheduling problem in the MDP framework enables to get the optimal performance in the cases where the VI is tractable. As a direct consequence, the VI can be used as a benchmark to assess the performance of less complex heuristics.
### _Deep Q learning_
Even though the VI provides the optimal estimate of the states value, it suffers from significant computational complexity and memory consumption as it needs to cover all the states in \(\mathcal{S}\) of size \(|\mathcal{S}|=(\tau+1)^{N}\prod_{k=1}^{N}C_{k}\). A practical alternative to the VI is the temporal-difference learning [7, Chap. 6] on which \(Q\)-learning and deep \(Q\) learning rely.
Instead of looping over all the states to estimate \(v^{\pi^{*}}(\mathbf{S})\), these algorithms walk across a few states within \(\mathcal{S}\). In the standard approach, the system is in state \(\mathbf{S}_{t}\) at \(t\) and an action \(a_{t}\) is done. One then gets a reward \(r(\mathbf{S},\mathbf{S}^{\prime})\) and the new state \(\mathbf{S}^{\prime}\) of the system at time \(t+1\). It is therefore a Monte Carlo approach. The model to estimate \(Q^{\pi^{*}}(\mathbf{S},a)\) is updated via an error signal \(\Delta_{t}\), obtained from two estimates of \(Q^{\pi^{*}}(\mathbf{S},a)\):
\[\Delta_{t}=\hat{Q}(\mathbf{S},a)-\hat{Q}^{\prime}(\mathbf{S},a) \tag{10}\]
where:
* \(\hat{Q}^{\prime}(\mathbf{S},a)=r(\mathbf{S},\mathbf{S}^{\prime})+\lambda \max_{a^{\prime}}\hat{Q}(\mathbf{S}^{\prime},a^{\prime})\) is a first estimate of \(Q^{\pi^{*}}(\mathbf{S},a)\) based on the observed reward and the subsequent state \(\mathbf{S}^{\prime}\) obtained when taking the action \(a\) in state \(\mathbf{S}\), and where \(\hat{Q}(\mathbf{S}^{\prime},a^{\prime})\) is obtained via the current model.
* \(\hat{Q}(\mathbf{S},a)\) is a second estimate of \(Q^{\pi^{*}}(\mathbf{S},a)\) obtained via the current model.
With deep Q learning the model is a neural network which is trained via the gradient of the error signal with respect to the parameters of the neural network [12].
## V Numerical evaluations
This section presents some performance results of the VI and deep Q learning algorithms, and compare them with other schedulers.
### _Challengers_
We compare the MDP-based algorithms with three other schedulers:
* _Round-Robin_ (RR). It consists in allocating the resource to the \(N\) agents following a buffer of the agents index. The said buffer is a random permutation of \([0,\ldots,N-1]\). After \(N\) allocations, the RR replaces its buffer with a new random permutation of \([0,\ldots,N-1]\). This randomization prevents an agent from being always out at each period of the RR. The RR is a low complexity algorithm.
* _Priority-queue_ (PQ). It consists in allocating the resource to the agent whose number of remaining time slots before meeting a failure is the lowest, i.e.,: \(a_{t}=\text{arg min}_{k}\tau_{k}(t)\). This scheduler is very easy to implement with no concern neither on the computational complexity nor on the memory consumption. Nevertheless, it does not use
the channel error probabilities \(p_{k}\) and the number of remaining packets \(c_{k}(t)\). The PQ is a low complexity algorithm.
* _On-line_ (OL) [5]. It uses a heuristic \(f_{r}(t,k)\) to estimate a probability of survival time failure for every agent \(A_{k}\). The agent to allocate is then selected to minimize a subsequent global probability of failure. In [5], there is no notion of division of the payload in \(C_{k}\) packets. Therefore, we need to slightly modify the heuristic \(f_{r}(t,k)=p_{k}^{\tau_{k}(t)}\) with: \[f_{r}(t,k)=p_{k}^{\frac{\tau_{k}(t)}{c_{k}(t)}}.\] (11) The OL is a moderately low complexity algorithm.
### _Scenario and assumptions_
We consider a small-sized scenario with \(N=3\) agents and with \(\tau\) spanning from \(N\) to 12. We assume the channel error probabilities to be \(p_{1}=10^{-3},p_{2}=10^{-2},p_{3}=10^{-1}\), i.e., such that \(A_{1}\) has a very good channel, \(A_{3}\) suffers from a difficult channel, and \(A_{2}\) is in between. Several situations are studied regarding the number of packets \(C_{1},C_{2},C_{3}\) per payload for the agents denoted by the set \(\{C_{1},C_{2},C_{3}\}\): \(\{1,1,2\}\), \(\{1,1,3\}\), \(\{2,2,2\}\) and \(\{2,2,3\}\). Consequently, the size of the state space varies from 128 to 26364 elements. In these configurations, the system has the greatest channel error probability on the agent with the greatest number of packets per payload. This is a way to stress the scheduler by setting difficult situations. The purpose of the evaluations is to observe the performance dependence on the values of \(\{C_{k}\}_{k}\) given \(\{p_{k}\}_{k}\). We let all the schedulers run for a time duration of \(10^{6}\) time slots to be able to observe failure rates up to around \(10^{-5}\). We recall that several failures might occur at a given time slot according to (1). Hence, the performance metric is \(F=F(10^{6})\) (see (1)).
The challengers are then compared taking the value of \(F\) for each of them, namely \(F_{\text{RR}}\) for the Round-Robin, \(F_{\text{PQ}}\) for the Priority-Queue, \(F_{\text{OL}}\) for the On-Line, \(F_{\text{VI}}\) for the VI, \(F_{\text{PQ}}\) for the deep Q learning. To simplify the wording here after, we denote by \(F_{X}(\{a,b,c\})\) the value of the failure rate for the scheduler \(X\in\{\text{RR},\text{PQ},\text{OL},\text{VI}\}\) with the configuration \(C_{1}=a,C_{2}=b,C_{3}=c\).
### _Details for Deep Q learning_
We consider deep Q learning for the case \(\{C_{1},C_{2},C_{3}\}=\{2,2,3\}\). We consider two neural networks whose input is the state \(\mathbf{S}_{t}\). The first network NN\({}^{(1)}\) is trained with the true situation \(p_{1}=10^{-3},p_{2}=10^{-2},p_{3}=10^{-1}\) considering \(\tau=10\). With such parameters, \(A_{1}\) and \(A_{2}\) encounter errors at a very low frequency, which may be an issue as deep Q learning is a Monte Carlo algorithm (analysis validated by the experiments, see the next section). To mitigate this issue, we train a second network NN\({}^{(2)}\) in a situation with \(p_{1}=10^{-2},p_{2}=10^{-1},p_{3}=0.5\) such that \(A_{1}\) and \(A_{2}\) have more channel errors, while keeping the hierarchy \(p_{1}<p_{2}<p_{3}\). In other words, we introduce a model mismatch for the training. Deep Q learning with NN\({}^{(1)}\) is called DQ\({}^{(1)}\) and DQ\({}^{(2)}\) with NN\({}^{(2)}\).
The neural network used in the simulation is a residual neural network of 22 layers comprising a total of 26200 parameters. Note however that a smaller neural network with only 3000 parameters yields similar results. Such a large network is considered such that its size is not the performance bottleneck.
### _Simulation results_
The results are displayed in Fig. 1. First of all, we observe that all schedulers have similar performance with large values of \(F\) for low values of \(\tau\). Unsurprisingly, a larger \(\tau\) is required to reduce the failure rate. Moreover, the challenging zone for discriminating between the schedulers is then for middle or large values of \(\tau\), e.g., for \(\tau\geq 4\) with {1,1,2}, for \(\tau\geq 5\) with {1,1,3}, for \(\tau\geq 7\) with {2,2,2}, and for \(\tau\geq 8\) with {2,2,3}.
Then, we observe that for such challenging zones, \(F_{\text{RR}}\) is far beyond \(F_{\text{PQ}},F_{\text{OL}}\) and \(F_{\text{VI}}\). This is expected as RR is the only scheduler which does not consider any agent parameter. Another common observation in every figure is that:
\[F_{\text{VI}}\leq F_{\text{OL}}\leq F_{\text{PQ}}\leq F_{\text{RR}}, \tag{12}\]
This confirms that the VI is the reference scheduler. This also shows that the low complexity schedulers PQ and OL perform better than the naive RR. Consequently, the analysis will focus on the performance of OL and PQ in comparison with that of VI.
The configurations {1,1,2},{2,2,2} result in \(F_{\text{OL}}\approx F_{\text{PQ}}\) whereas the other configurations {1,1,3},{2,2,3} result in \(F_{\text{OL}}<F_{\text{PQ}}\) in a more obvious manner. As an example, at \(\tau=10\), \(F_{\text{PQ}}(\{1,1,3\})=3F_{\text{OL}}(\{1,1,3\})\) and at \(\tau=12\), \(F_{\text{PQ}}(\{2,2,3\})=4F_{\text{OL}}(\{2,2,3\})\). Also, the y-distance between \(F_{\text{OL}}\) and \(F_{\text{PQ}}\) increases when increasing \(\tau\) in a faster way with \(C_{3}=3\) than with \(C_{3}=2\). For example, \(F_{\text{PQ}}(\{2,2,2\})\approx 1.5F_{\text{OL}}(\{2,2,2\})\) at \(\tau=10\) and \(F_{\text{PQ}}(\{2,2,2\})\approx F_{\text{OL}}(\{2,2,2\})\) at \(\tau=6\) while \(F_{\text{PQ}}(\{2,2,3\})\approx 2.6F_{\text{OL}}(\{2,2,3\})\) at \(\tau=10\) and \(F_{\text{PQ}}(\{2,2,3\})\approx F_{\text{OL}}(\{2,2,3\})\) at \(\tau=6\). This expansion effect is also observed between OL and VI, e.g., \(F_{\text{OL}}(\{2,2,3\})\approx 4.2F_{\text{VI}}(\{2,2,3\})\) at \(\tau=10\) and \(F_{\text{OL}}(\{2,2,3\})\approx F_{\text{VI}}(\{2,2,3\})\) at \(\tau=6\). As \(F_{\text{OL}}\leq F_{\text{PQ}}\), though, OL can be said to be more robust than PQ when increasing the payload of the worse channel agent. In other words, enlarging the payload of the worse channel agent leads to select OL rather than PQ.
Alternatively, we can compare the schedulers with respect to the survival time they require to reach a target failure rate \(F_{\text{trg}}\). As \(\tau\) is an integer, we round \(\tau\) to the nearest greater integer for comparison purpose. For example, when fixing \(F_{\text{trg}}=10^{-4}\), we see that for {1,1,2},{1,1,3},{2,2,2},{2,2,3}, respectively, VI fulfills the target with \(\tau=7,8,9,10\), OL fulfills the target with \(\tau=7,9,9,11\), and PQ fulfills the target with \(\tau=8,9,10,12\). PQ is then always one time slot ahead from VI whereas OL may reach the same survival time.
Consequently, OL provides nearly optimal performance with less constrains on the application compared with PQ.
Finally, we focus on the curves showing the performance of deep Q learning with {2,2,3}. We observe that \(F_{\text{DQ}^{(1)}},F_{\text{DQ}^{(2)}}\) are both greater than \(F_{\text{VI}}\), i.e., they do not reach the optimal policy. However, when \(\tau\geq 11\), we see that \(F_{\text{DQ}^{(2)}}\leq F_{\text{PQ}}\) and \(F_{\text{DQ}^{(2)}}\leq F_{\text{OL}}\). Extrapolating the curves to \(\tau\geq 12\), we expect DQ\({}^{(1)}\) and DQ\({}^{(2)}\) to perform even better than PQ and OL. In other words, when going further than the \(\tau\) value used for training, DQ seems to well behave whereas running DQ for \(\tau\) values lower than that of training, DQ is clearly suboptimal. We also observe that DQ\({}^{(2)}\), trained with the model mismatch, performs better than the standard training DQ\({}^{(1)}\). This highlights that the rare channel errors is indeed a problem for deep Q learning, and having a model mismatch for the training improves the performance. Nevertheless, this raises a new problem: How to chose the most adequate model parameters for the training? These preliminary performance results also indicate, though, that it might be possible to approach the VI performance provided good training parameters. Therefore, as DQ is less complex than VI and less memory consuming than VI, this opens the room for scaling the scheduling problem to a larger state space, e.g., with more agents, with greater values of \(\tau\), and with greater values of \(C_{k}\).
## VI Conclusions
In this paper, we formalized the scheduling problem in the framework of MDP and we showed how to find the optimal scheduling strategy. This enables to assess the performance of candidate lower complexity schedulers. Indeed, the optimal scheduler suffers from complexity and storage issues. Among the lower complexity scheduler, the deep Q learning approach based on neural networks is investigated. We observed that training the neural network is not straightforward because of the scarcity of error events of some agents. Nevertheless, the performance can be improved by introducing a model mismatch for the training step. These preliminary results indicate that it may be possible to approach the optimal performance with this lower complexity scheduler. This may offer the possibility to scale the scheduler and therefore address larger systems.
|
2305.13323 | Rescaling strange-cluster stars and its implications on
gravitational-wave echoes | Solid states of strange-cluster matter called strangeon matter can form
strangeon stars that are highly compact. We show that strangeon matter and
strangeon stars can be recast into dimensionless forms by a simple
reparametrization and rescaling, through which we manage to maximally reduce
the number of degrees of freedom. With this dimensionless scheme, we find that
strangeon stars are generally compact enough to feature a photon sphere that is
essential to foster gravitational-wave (GW) echoes. Rescaling the dimension
back, we illustrate its implications on the expanded dimensional parameter
space, and calculate the GW echo frequencies associated with strangeon stars,
showing that the minimum echo frequency is $\sim 8$ kHz for empirical parameter
space that satisfies the GW170817 constraint, and can reduce to $\mathcal
O(100)$ Hertz at the extended limit. | Chen Zhang, Yong Gao, Cheng-Jun Xia, Renxin Xu | 2023-05-16T20:23:17Z | http://arxiv.org/abs/2305.13323v2 | # Rescaling Strangeon Stars and its Implications on Gravitational-wave Echoes
###### Abstract
Solid states of strange-cluster matter called strangeon matter can form strangeon stars that are highly compact. We show that strangeon matter and strangeon stars can be recast into dimensionless forms by a simple reparametrization and rescaling, through which we manage to maximally reduce the number of degrees of freedom. With this dimensionless scheme, we find that strangeon stars are generally compact enough to feature a photon sphere that is essential to foster gravitational-wave (GW) echoes. Rescaling the dimension back, we illustrate its implications on the expanded dimensional parameter space, and calculate the GW echo frequencies associated with strangeon stars, showing that the minimum echo frequency is \(\sim 8\) kHz for empirical parameter space that satisfies the GW170817 constraint, and can reduce to \(\mathcal{O}(100)\) Hertz at the extended limit.
## I Introduction
Recent gravitational wave (GW) observations from compact binary mergers by LIGO/Virgo collaborations [1; 2; 3; 4; 5; 6; 7] have significantly advanced our understanding of black holes and compact stars. These binary merger events have inspired many studies on exotic compact objects (ECOs), which are black hole mimickers that share a similar compactness but lack an event horizon [8; 9; 10; 11; 12; 13; 14]. Most interests on probes of ECOs are focused on the distinctive signatures from gravitational wave echoes in the postmerger signals [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]1, in which a wave that falls inside the gravitational potential barrier (near the photon sphere) travels to a reflecting boundary of the ECO before returning to the barrier after some time delay.
Footnote 1: For probes of ECOs using other methods, see Ref. [33; 34; 35; 36].
We want to explore the possibility of GW echoes in the context of realistic compact stars, given the detected binary neutron star merger events. To generate GW echoes, a star object must feature a photon sphere at \(R_{P}=3M\), where \(M\) is the object's mass. The minimum radius for compact stars should be above the Buchdahl's limit \(R_{B}=9/4M\)[37]. Therefore, GW echo signals are possible if \(R_{B}<R<R_{P}\). This compactness criterion excludes the realistic neutron stars [38; 39]. To achieve ultra-compact stellar structure, people commonly assumed ad-hoc exotic equations of state (EOS) [39; 40; 41; 42; 43] or modified gravity [44; 45; 46].
Quark matter, a state comprised of deconfined free-flowing quarks, can possibly exist inside the neutron star core (i.e. hybrid stars [47; 48]) or the crust (i.e. inverted hybrid stars [49]), or constitutes entire star called quark star. Strange quark stars [50; 51; 52; 53; 54; 55] composed of strange quark matter (SQM) [56; 57; 58; 59] and up-down quark stars [60; 61; 62; 63; 64; 65; 66; 67; 68] composed of up-down quark matter (\(ud\)QM) [69], can be more compact than neutron stars. As Ref. [70] has shown, physically-motivated quark stars can feature GW echoes, but require perturbative QCD corrections [71; 72] and color superconductivity [74; 75; 76; 77; 78; 79] effects to be compact enough. It is interesting to explore whether we can have more compact objects from other physical grounds.
Strangeon matter is similar to strange quark matter where both are composed of a nearly equal number of \(u,d,s\) quarks [80; 81; 82; 83]. However, strangeon matter has quarks localized as clusters, in a state more like a solid. Strangeon stars [80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90] composed of strangeon matter have intrinsic stiff EOS and large compactness, and they had already been proposed to support massive pulsars (\(>2M_{\odot}\)[83]) before the announcement of the first massive pulsar PSR J1614-2230 [91]. It is then natural to explore whether strangeon stars can feature GW echoes.
As for the organization of this paper, we first work out
a general dimensionless rescaling for the Lennar-Jones model of strangeon matter. This greatly reduced the number of model parameters from three to one, enabling us to perform a simple but general analysis over the whole parameter space. Then we apply the rescaling scheme to the studies of strangeon stars and GW echoes.
## II Dimensionless rescaling of strangeon
The mass density \(\rho\) and pressure density \(p\) of zero-temperature dense matter composed of strangeons derived from Lennard-Jones potential [83; 92] reads
\[\rho =2\epsilon\left(A_{12}\sigma^{12}n^{5}-A_{6}\sigma^{6}n^{3} \right)+nN_{\rm q}m_{\rm q}\,, \tag{1}\] \[p =n^{2}\frac{{\rm d}(\rho/n)}{{\rm d}n}=4\epsilon\left(2A_{12} \sigma^{12}n^{5}-A_{6}\sigma^{6}n^{3}\right)\,, \tag{2}\]
where \(A_{12}=6.2\), \(A_{6}=8.4\), and \(n\) is the number density of strangeons. \(N_{\rm q}m_{\rm q}\) is the mass of a strangeon with \(N_{\rm q}\) being the number of quarks in a strangeon and \(m_{q}\) being the average constituent quark mass. The contributions from degenerate electrons and vibrations of the lattice are neglected due to their expected smallness.
At the surface of strangeon stars, the pressure becomes zero, and we obtain the surface number density of strangeons as \(\left[A_{6}/(2A_{12}\sigma^{6})\right]^{1/2}\). For convenience, it is transformed into baryon number density, i.e.,
\[n_{\rm s}=\left(\frac{A_{6}}{2A_{12}}\right)^{1/2}\frac{N_{\rm q}}{3\sigma^{3 }}\,, \tag{3}\]
so that the EOS can be rewritten into a form that depends on parameter set (\(\epsilon\), \(n_{s},N_{q}\)):
\[\rho =\frac{1}{9}\epsilon\frac{A_{6}^{2}}{A_{12}}\left(\frac{{N_{q}}^{ 4}}{18n_{s}^{4}}n^{5}-\frac{N_{q}^{2}}{n_{s}^{2}}n^{3}\right)+m_{q}N_{q}n, \tag{4}\] \[p =\frac{2}{9}\epsilon\frac{A_{6}^{2}}{A_{12}}\left(\frac{{N_{q}^{ 4}}}{9n_{s}^{4}}n^{5}-\frac{N_{q}^{2}}{n_{s}^{2}}n^{3}\right). \tag{5}\]
We find that one can further remove the parameters \(n_{s}\) and \(N_{q}\) by doing the following dimensionless rescaling:
\[\bar{\rho}=\frac{\rho}{m_{q}\,n_{s}},\;\bar{p}=\frac{p}{m_{q}\,n_{s}},\,\bar{n }=\frac{N_{q}\,n}{n_{s}},\,\bar{\epsilon}=\frac{\epsilon}{N_{q}\,m_{q}}, \tag{6}\]
so that
\[\bar{\rho} =\frac{a}{9}\bar{\epsilon}\left(\frac{1}{18}\bar{n}^{5}-\bar{n} ^{3}\right)+\bar{n}, \tag{7}\] \[\bar{p} =\frac{2\,a}{9}\bar{\epsilon}\left(\frac{1}{9}\bar{n}^{5}-\bar{n} ^{3}\right), \tag{8}\]
where \(a=A_{6}^{2}/A_{12}=8.4^{2}/6.2\approx 11.38\). We thus managed to reduce the parameter degree of freedom from 3 (\(n_{s}\), \(\epsilon\), \(N_{q}\)) to simply 1 (\(\bar{\epsilon}\)). Besides, we note that the rescaled number density at zero pressure always remain \(\bar{n}=3\). Requiring \(\bar{\rho}\) to be positive at zero pressure set a theoretical upper bound for \(\bar{\epsilon}\): \(\bar{\epsilon}_{\rm max}^{\rm theo}=2/a\approx 0.1757\). However, the value of this upper bound is slightly beyond the empirical expectation \(\bar{\epsilon}_{\rm max}\sim 120{\rm MeV}/(3\times 310{\rm MeV})\approx 0.13\). In the following we will adopt the empirical upper bound \(\bar{\epsilon}_{\rm max}^{\rm em}=0.13\) on this physical ground, with additional comments about results from \(\bar{\epsilon}_{\rm max}^{\rm theo}\) at proper places.
## III Rescaling strangeon stars
Inspecting the Tolman-Oppenheimer-Volkoff (TOV) equation [93; 94],
\[\frac{dm}{dr} =4\pi\rho r^{2}\,, \tag{9}\] \[\frac{dp}{dr} =(\rho+p)\frac{m+4\pi pr^{3}}{2mr-r^{2}},\]
we note that the mass and radius can also be rescaled into dimensionless forms in geometric units (\(G=c=1\))2:
Footnote 2: Note that \(m_{q}n_{s}\), which is in units MeV/fm\({}^{3}\) in natural units, is in the dimension of \([L^{-2}]\) in geometric units here.
\[\bar{m}=m\sqrt{m_{q}\,n_{s}},\quad\bar{r}=r\sqrt{m_{q}\,n_{s}}, \tag{10}\]
so that the TOV equation can be converted into the dimensionless form (simply replace nonbarred symbols with barred ones). Solving the dimensionless TOV equation, we obtain the results for the rescaled \(\bar{M}-\bar{R}\) relation shown in Fig. 1. One can easily recast it into dimensional form by reversing the rescaling relation Eq. (10). At \(\bar{\epsilon}=\bar{\epsilon}_{\rm max}^{\rm em}=0.13\), we have \((\bar{M}_{\rm TOV},\bar{R}_{\rm TOV})\approx(0.149,\,0.348)\). Lifting \(\bar{\epsilon}\) to \(0.175\) that is close to \(\bar{\epsilon}_{\rm max}^{\rm theo}\), we obtain \((\bar{M}_{\rm TOV},\bar{R}_{\rm TOV})\approx(1.28,\,2.89)\) correspondingly.
The detailed dependences of maximum compactness \(M_{\rm TOV}/R_{\rm TOV}\) and the maximum (rescaled) mass
on the single parameter \(\bar{\epsilon}\) are illustrated more explicitly in Fig 23. We find that all \(\bar{M}-\bar{R}\) configurations are compact enough to feature a photon sphere while not exceeding the Buchdahl's limit for a large range of \(\bar{\epsilon}\) variations.4 Besides, we see clear positive correlations. As \(\bar{\epsilon}\) increases to \(\bar{\epsilon}^{\rm theo}_{\rm max}\), the maximum mass rapidly grows, while the maximum compactness reaches the Buchdahl's limit.
Footnote 3: Note that here we have extended \(\bar{\epsilon}\) to \(\bar{\epsilon}^{\rm theo}_{\rm max}\) to have a general view.
Footnote 4: We examined \(\bar{\epsilon}\) as low as \(10^{-8}\) order.
The rescaled results on tidal deformabilites are shown in Fig 3. We see that at a given mass, a larger \(\bar{\epsilon}\) increases tidal deformability due to a larger radius, as can be observed from Fig 1. Besides, we also see that a larger \(\bar{\epsilon}\) yields a smaller tidal deformability at the corresponding maximum mass point due to the associated larger compactness.
## III GW echoes from strangeon stars
The effective potential of axial gravitational perturbations \(\Psi_{s,l}\) in curved background has the general form [43; 21]:
\[V(r) = B(r)\Big{\{}\frac{l(l+1)}{r^{2}}+\frac{1-s^{2}}{2rA(r)}\left( \frac{B^{\prime}(r)}{B(r)}-\frac{A^{\prime}(r)}{A(r)}\right) \tag{11}\] \[+ 8\pi(p(r)-\rho(r))\delta_{s,2}\Big{\}},\]
where the azimuthal quantum number \(l\geq s\) with \(s=0,\pm 1,\pm 2\) for scalar, vector and tensor modes, respectively. And
\[B(r)=e^{2\Phi(r)}\,,\,A(r)=\frac{1}{1-2m(r)/r} \tag{12}\]
as the metric factors of curved line element describing spherical symmetric spacetime: \(ds^{2}=-B(r)dt^{2}+A(r)dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2}\). \(\Phi(r)\) is solved via
\[\frac{d\Phi}{dr}=-\frac{1}{\rho+p}\frac{dp}{dr}, \tag{13}\]
together with the TOV equation Eq. (9). Apparently, we can apply rescaling relation Eq. (10) and Eq. (6), and perform a further rescaling \(\bar{V}=V/(m_{q}\,n_{s})\) to convert the whole program into a dimensionless form as other barred quantaties. We obtain the effective potential of the lowest axial gravitational perturbation mode (\(l=s=2\)) in
Figure 1: \(\bar{M}\)-\(\bar{R}\) of strangeon stars for various \(\bar{\epsilon}\), sampling \(0.0001\sim 0.13\) in equal \(\Delta\bar{\epsilon}\) spacing from the lighter black line to the darker black lines, respectively. Solid dots denote the maximum mass configurations.
Figure 3: \(\Lambda\)-\(\bar{M}\) of strangeon stars for various \(\bar{\epsilon}\). The line-color convention follows that of Fig 1.
strangeon star background shown in Figure 45, which abruptly changes at the star surface, diverges towards star center with an outside peak around \(3\bar{M}\), forming a trapping cavity for gravitational waves. We see clearly the trend how the trapping cavity develops and evolves as the parameter \(\bar{\epsilon}\) increases.
Footnote 5: Note that we normalized \(\bar{V}(\bar{r})\) respect to \(\bar{V}(\bar{r}=3\bar{M})\), and \(\bar{r}\) respect to \(\bar{M}\), where the rescaling factors cancel and thus would yield same result for the dimensional version.
The characteristic echo time is the light time from the star center to the photon sphere [16; 17; 18],
\[\tau_{\rm echo}=\int_{0}^{3M}\!\!\frac{dr}{\sqrt{e^{2\Phi(r)}\left(1-\frac{2m( r)}{r}\right)}}\;, \tag{14}\]
We can also do the dimensionless rescaling
\[\bar{\tau}_{\rm echo}=\tau_{\rm echo}\sqrt{m_{q}\,n_{s}}, \tag{15}\]
such that Eq. (14) can also be calculated in a dimensionless approach. After obtaining the echo time, we directly get the GW echo frequency from the relation [16; 17; 18]
\[f_{\rm echo}=\frac{\pi}{\tau_{\rm echo}}, \tag{16}\]
and similarly, we can rescale it into the dimensionless form \(\bar{f}_{\rm echo}\) via the relation
\[\bar{f}_{\rm echo}=\frac{f_{\rm echo}}{\sqrt{m_{q}\,n_{s}}}. \tag{17}\]
In Fig. 5, we show the results of rescaled GW echo frequencies \(\bar{f}_{\rm echo}\) versus the rescaled center pressure \(\bar{p}_{c}\) for the stellar configurations of Fig. 1 that can generate echoes. Note that each curve's left and right ends are truncated at the point where \(\bar{R}=3\bar{M}\), and at the point of maximum mass, respectively.
As \(\bar{\epsilon}\) and the central pressure \(\bar{p}_{c}\) increases, \(\bar{f}_{\rm echo}\) decreases (due to the increasing compactness) with a lower bound \(\bar{f}_{\rm echo}^{\rm min}(\bar{\epsilon})\) set at the \(\bar{p}_{c}\) of the maximum mass point. At \(\bar{\epsilon}=\epsilon_{\rm max}^{\rm em}=0.13\), we have the minimum echo frequency \(\bar{f}_{\rm echo}^{\rm min}=0.655\), translating to
\[f_{\rm echo}^{\rm min}\approx 2\left(\frac{n_{s}}{0.24\,{\rm fm}^{-3}} \right)^{1/2}\,{\rm kHz}\quad\text{for }\bar{\epsilon}=\bar{\epsilon}_{\rm max}^{\rm em}. \tag{18}\]
Lifting \(\bar{\epsilon}\) to \(0.175\) that is close to \(\bar{\epsilon}_{\rm max}^{\rm theo}\), we obtain \(\bar{f}_{\rm echo}^{\rm min}=0.023\), mapping to
\[f_{\rm echo}^{\rm min}\approx 67\left(\frac{n_{s}}{0.24\,{\rm fm}^{-3}} \right)^{1/2}\,{\rm Hz}\quad\text{for }\bar{\epsilon}\approx\bar{\epsilon}_{\rm max}^{\rm theo}. \tag{19}\]
Interestingly, we see that in this extreme limit, the minimum echo frequencies lie well within the sensitivity range of LIGO [43; 1].
## III Dimensional parameter space
In Fig 6, we show the derived quantites in dimensional forms (\(f_{\rm echo}\), \(M_{\rm TOV}\)) and (\(\Lambda\), \(C\)) in dimensional parameter space of (\(\epsilon\), \(n_{s}\)) by rescaling back previous simple dimensionless results using relations Eq. (6) and Eq. (10).
Figure 6 manifests the apparent scaling behaviour:
Figure 4: Radial profiles of effective potentials for axial gravitational perturbations of the \(l=s=2\) mode in strangeon-star background at \(M_{TOV}\) points for various \(\bar{\epsilon}\). The color convention of black lines follows that of Fig 1. The red line denotes the \(\bar{\epsilon}=0.175\approx\bar{\epsilon}_{\rm max}^{\rm theo}\) limit.
* Decreasing \(\epsilon\) for given \(N_{q}\) or increasing \(N_{q}\) for given \(\epsilon\) is equivalent in terms of \(\bar{\epsilon}\).
* Dimensionless quantities like the compactness \(C=M/R\) should be independent of \(n_{s}\). This explains why the blue dotted lines are flat.
* Dimensional quantities follows the scaling relation dictated by Eq. (6) and Eq. (10). For example, the maximum mass \(M_{\rm TOV}\) (and corresponding radius \(R_{\rm TOV}\)) scale as \(\sqrt{n_{s}}\).
From Fig 6, we see explicitly that the minimum echo frequency is \(\sim 8\) kHz for parameter space of large \(\epsilon\) and small \(n_{s}\) while satisfies the GW170817 tidal deformability constraint \(\Lambda(1.4\,M_{\odot})\lesssim 800\), and can be reduced to 5 kHz if GW170817 constraint is dropped, i.e., if assuming the detected stars in binary merger not strangeon stars.
## IV Summary
We worked out a first rescaling scheme that enables us to maximally reduce the number of free parameters into a single parameter \(\bar{\epsilon}\) for strangeon matter. Utilizing this scheme, we demonstrated that strangeon stars composed of strangeon matter generally have very large compactness with large \(\bar{\epsilon}\) in most of its parameter space. We showed that all strangeon stars can meet the compactness condition for generating GW echoes, i.e., they feature a photon sphere within Buchdahl's limit. The minimum echo frequencies are of a few kilohertz for the empirical range of \(\bar{\epsilon}\), and can reduce to \(O(100)\) Hertz if \(\bar{\epsilon}\) is extended to its allowed limit. We explicitly constructed the corresponding dimensional parameter space of \(\epsilon\) and \(n_{s}\) with variations of \(N_{q}\) in their empirical range, and demonstrated that \(f_{\rm echo}^{\rm min}\approx 8\) kHz for realisitic parameter space that satisfy astrophysical constraints like GW170817, and can reduce to 5 kHz if the latter constraint is dropped.
It is generally expected that including the star-rotation effect can slightly reduce the echo frequencies [39; 43]. For strangeon stars, we expect rotation would yield a similar reduction of \(f_{\rm echo}\), potentially reducing frequencies to what LIGO can detect, considering our obtained \(f_{\rm echo}^{\rm min}\approx 5\sim 8\) kHz for the realistic non-rotating case is not very far from its detection limit. We leave this interesting possibility for future studies.
**Acknowledgments.** C. Zhang greatly thank Prof. Renxin Xu for the visit invitation to Peking University and is very grateful for the hospitality during the visit. C. Zhang is supported by the Institute for Advanced Study at The Hong Kong University of Science and Technology. C.J Xia is supported by National Natural Science Foundation of China (Grant No. 12275234) and National SKA Program of China (Grant No. 2020SKA0120300). R.X Xu is supported by the National SKA Program of China (2020SKA0120100).
Figure 6: Physical parameter space for (a) \(N_{q}=18\) and (b)\(N_{q}=9\). Black lines denote \(f_{\rm echo}\)/kHz, with red lines denoting the maximum masses \(M_{\rm TOV}/M_{\odot}\), green lines for the tidal deformabilities \(\Lambda\) at \(1.4\,{\rm M}_{\odot}\) and blue dotted lines for the maximal compactness \(M_{\rm TOV}/R_{\rm TOV}\). |
2304.03219 | Modification of Lie's transform perturbation theory for charged particle
motion in a magnetic field | It is pointed out that the conventional Lie transform perturbation theory for
the guiding center motion of charged particles in a magnetic field needs to be
modified for ordering inconsistency. There are two reasons. First, the ordering
difference between the temporal variation of gyrophase and that of the other
phase space coordinates needs to be taken into account. Second, it is also
important to note that the parametric limit of the derivative of a function is
not equivalent to the derivative of the limit function. When these facts are
taken into account, the near identity transformation rule for one form related
to the Lagrangian is modified. With the modified near identity transformation
rule, the drift motion of charged particles can be described in the first
order, instead of in the second order and beyond through a tedious expansion
process as in the conventional formulation. This resolves the discrepancy
between the direct and Lie transform treatments in the Lagrangian perturbation
theory for charged particle motion in a magnetic field. | Linjin Zheng | 2023-04-06T16:51:20Z | http://arxiv.org/abs/2304.03219v2 | # Modification of Lie's transform perturbation theory
###### Abstract
It is pointed out that the conventional Lie transform perturbation theory for the guiding center motion of charged particles in a magnetic field needs to be modified for ordering inconsistency. There are two reasons. First, the ordering difference between the temporal variation of gyrophase and that of the other phase space coordinates needs to be taken into account. Second, it is also important to note that the parametric limit of the derivative of a function is not equivalent to the derivative of the limit function. When these facts are taken into account, the near identity transformation rule for one form related to the Lagrangian is modified. With the modified near identity transformation rule, the drift motion of charged particles can be described in the first order, instead of in the second order and beyond through a tedious expansion process as in the conventional formulation. This resolves the discrepancy between the direct and Lie transform treatments in the Lagrangian perturbation theory for charged particle motion in a magnetic field.
pacs: 52.53.Py, 52.55.Fa, 52.55.Hc +
[FOOTNO
Introduction
The guiding center motion of charged particles in a magnetic field is a fundamental topic for magnetically confined fusion research. The standard guiding center theory has been developed since the 1950s as reviewed in reference [1]. The topic was later revisited using the Hamiltonian and Lagrangian theories, for example in references [2; 3; 4; 5; 6]. This is mainly related to the development of nonlinear gyrokinetic simulation as studied or reviewed in references [8; 9; 10]. This is due to the concerns that the standard guiding-center theory does not preserve Liouville's theorem [11] and the energy conservation law for time-independent systems. These properties are especially important for long-time simulations. This has motived the further development of the Hamiltonian and Lagrangian theories for guiding center motion.
One of the important developments in the Hamiltonian and Lagrangian theories for guiding center motion of charged particles is the introduction of the phase space Lagrangian theories and the Lie transform perturbation technique. The pioneer contributions in this procedure can be found in references [4; 5; 6; 7] and references therein. The detailed Lie transformation procedure for guiding center motion of charged particles, which were omitted in the original work in reference [4], were given in reference [12]. The Lie transform method provides a systematic perturbation theory for guiding center motion and has many advantages, such as the near identity transform process allowing the expansion generator be determined in the order-by-order analyses. Also, the backward transformation can be obtained easily from the forward transformation.
However, there is a discrepancy between the direct and Lie transform treatments in the Lagrangian perturbation theory in phase space for charged particle motion in a magnetic field. In the direct method, the phase space Lagrangian valid to the first order is given as follows [6]
\[{}^{d}\Gamma=\left(\frac{e}{mc}\mathbf{A}+u\mathbf{b}\right)\;\cdot\;d\mathbf{ X}+\frac{mc}{e}\mu d\zeta-\left(\frac{u^{2}}{2}+\mu B+\frac{e}{m}\varphi \right)dt, \tag{1}\]
while the standard Lie transform theory [4], which is detailed in the appendix of reference [12] (Eq. (B18) with the zeroth order contribution added), yields in the same order
\[{}^{d}\Gamma\;=\;\left(\frac{e}{mc}\mathbf{A}+u\mathbf{b}\right)\;\cdot\;d \mathbf{X}-\left(\frac{u^{2}}{2}+\mu B+\frac{e}{m}\varphi\right)dt. \tag{2}\]
Here, the general phase space coordinate system \(\vec{Z}=\{{\bf X},\mu,u,\zeta;t\}\) is used, \({\bf X}\) is related to the guiding center coordinate, \(\mu=v_{\perp}^{2}/2B\) is the magnetic moment, \({\bf v}\) denotes the particle velocity with \(v_{\perp}\) being the perpendicular component and \(u\) the parallel component, and \(t\) represents time, \({\bf A}\) is the vector potential, \({\bf B}\) is the magnetic field, \({\bf b}={\bf B}/B\), \(\varphi\) denotes the electric scalar potential, \(e\) is the charge, \(c\) is the speed of light, the boldface denotes the vector in configuration space, \(\bar{(\cdot)}\) and \(\bar{(\cdot)}\) are introduced to represent respectively the covariant and contravariant vectors in the phase space with time included. The constant factor \(m\) for Lagrangian, for example in Eqs. (1) and (2), has been cast aside. To distinguish one form (a covariant vector)\(\bar{\Gamma}=\{\Gamma_{\mu}\}\) and \(\Gamma_{\mu}dZ^{\mu}\), the notation for the scalar zero form \({}^{d}\Gamma=\Gamma_{\mu}dZ^{\mu}\) is introduced. In these analyses, the gyrofrequency \(\Omega=eB/mc\) is assumed to be high, i.e., \((v/R)/\Omega\sim(\partial/\partial t)/\Omega\sim{\cal O}(\epsilon)\ll 1\), where \(R\) is the scale of electromagnetic field, which is larger than the Larmor radius by an order of magnitude. It is also assumed that \(v_{\bf E}=|{\bf E}\times{\bf B}|/B^{2}\ll v\).
Comparing Eqs. (1) and (2) one can see that the term "\((mc/e)\mu d\zeta\)" is missing in the conventional Lie transformation treatment in the first order. Note that
\[\frac{(mc/e)\mu d\zeta}{u{\bf b}\cdot d{\bf X}} \sim \frac{(mc/e)(v_{\perp}^{2}/B)(d\zeta/dt)}{u{\bf b}\cdot(d{\bf X}/ dt)} \tag{3}\] \[\sim \frac{(v_{\perp}^{2}/(eB/mc))(d\zeta/dt)}{u{\bf b}\cdot(d{\bf X}/ dt)}\sim\frac{v_{\perp}^{2}}{u^{2}}\sim 1.\]
Here, it has been used that \(d\zeta/dt\sim eB/mc\) and \(d{\bf X}/dt\sim{\bf v}\). The ordering estimate in Eq. (3) shows that one cannot regard \((mc/e)\mu d\zeta\) as \(O(\epsilon)\) as compared to \(u{\bf b}\cdot d{\bf X}\). Equation (2) is therefore ordering inconsistent because the term \(u{\bf b}\cdot d{\bf X}\) is kept but the term \((mc/e)\mu d\zeta\) is dropped, noting that they are of the same order.
Note that the detailed derivation process was omitted in Ref. [4]. This leads us to discuss directly Ref. [12]. Nevertheless, as pointed in Ref. [12], the detailed derivation given in the Appendix B of the paper is similar to that in Ref. [4] except the rotation effects being added. It is especially noted that the ordering inconsistency in Eq. (B18) of Ref. [12] discussed above appears also in Eq. (20) of Ref. [4]. Like a mathematical theorem, it persists unless a counter-proof is given. The Lie transform theory has become a standard perturbation theory after thorough reviews for example in the journal Reviews of modern physics (see for instance Refs. [5; 6; 7; 10]). From the citation list of relevant articles one can see that the classical works in Lie transform are still actively used.
As will be seen, this is because in the conventional Lie transform perturbation theory the following type of deduction has been employed
\[\left.\frac{\partial F}{\partial Z}\right|_{\epsilon=0}=\frac{\partial\left.F \right|_{\epsilon=0}}{\partial Z}. \tag{4}\]
Apparently, this is not generally applicable. The parametric limit of the derivative of a function is not equivalent to the derivative of the limit function. This becomes serious for the system with rapidly varying coordinate. For example for the case with \(F=\left.F\right|_{\epsilon=0}+\epsilon f(Z)\) and \(\partial f/\partial Z\sim 1/\epsilon\), one has
\[\left.\frac{\partial F}{\partial Z}\right|_{\epsilon=0}=\frac{\partial\left.F \right|_{\epsilon=0}}{\partial Z}+\frac{\partial f}{\partial Z}\neq\frac{ \left.\partial\left.F\right|_{\epsilon=0}}{\partial Z}.\right. \tag{5}\]
The terms \(\frac{\partial\left.F\right|_{\epsilon=0}}{\partial Z}\) and \(\frac{\partial f}{\partial Z}\) are actually of the same order in this case. The Larmor radius expansion exactly has this feature. Although the Larmor radius is small, but the gyrophase varies in time rapidly.
It is interesting to point out that a similar case in gyro-fluid models was pointed out earlier in Ref. [13] and further clarified later in Refs. [14; 15]. It is also related to the use of the parametric expansion with respect to the small parameter \((\partial/\partial t)/\Omega\ll 1\). When it is applied to the second order fluid moment of Vlasov equation the inclusion of first order Finite-Larmor-Radius (FLR) corrections to a double-adiabatic closure in a fluid modelling causes the dispersion relation of magneto-acoustic waves propagating perpendicularly to a background magnetic field to get a wrong sign in their spatial dispersion. The unphysical result is fixed only if the \(\epsilon\ll 1\) expansion is directly performed on the fluid moment equations which account for the full anisotropy of the second and third order velocity moments.
In this paper, we modify the conventional Lie transformation formalism for the guiding center motion of charged particles in a magnetic field by taking into account the ordering difference between the temporal variations of gyrophase and that of the other phase space coordinates and the fact that the parametric limit of the derivative of a function is not equivalent to the derivative of the limit function. This leads us to resolve the discrepancy between the direct and Lie transform treatments in the Lagrangian perturbation theory in the phase space. The modification presented in this paper is expected to affect generally the Lie transform perturbation formulation for the systems with a rapidly varying coordinate.
The manuscript is organized as follows. In Sec. II a brief review of the conventional Lie transform theory is given; in Sec. III. the modification of Lie's transform perturbation theory
for charged particle motion in a magnetic field is described; In the last section, conclusions and discussion are presented. Appendix A is introduced to double confirm the newly derived transformation rule for one form.
## II Review of the conventional Lie transform perturbation theory
In this section, we briefly review the conventional Lie transform perturbation theory for the charged particle motion in a magnetic field [4; 6]. This will pave the way for the modified theory to be described in the next section.
With a small parameter \(\epsilon\), the near-identity coordinate transform can be generally expressed as
\[Z^{\mu} = z^{\mu}+\epsilon Z^{\mu}_{1f}(\vec{z})+\epsilon^{2}Z^{\mu}_{2f}( \vec{z})+\cdots\]
Here, the subscript "f" denotes the forward transformation from the current to the new coordinates. For the charged particle motion in a magnetic field, the forward transformation is simply the transformation to the guiding center. The forward transformation can be generally denoted as
\[Z^{\mu}=Z^{\mu}_{f}(\vec{z},\epsilon). \tag{6}\]
We also introduce the backward transformation as
\[z^{\mu}=Z^{\mu}_{b}(\vec{Z}_{f}(\vec{z},\epsilon),\epsilon). \tag{7}\]
In the Lie transformation, the coordinate transformation is specified through a generator \(g^{\mu}\) such that
\[\frac{\partial Z^{\mu}_{f}(\vec{z},\epsilon)}{\partial\epsilon} = g^{\mu}(\vec{Z}_{f}(\vec{z},\epsilon))\quad\text{and}\quad Z^{\mu}_{f}(\vec{ z},0)=z^{\mu}. \tag{8}\]
Applying \(\partial/\partial\epsilon\) on the backward transformation in Eq. (7), one obtains
\[\frac{\partial Z^{\mu}_{b}}{\partial Z^{\nu}_{f}}\frac{\partial Z ^{\nu}_{f}}{\partial\epsilon}+\frac{\partial Z^{\nu}_{b}}{\partial\epsilon}=0.\]
Here, the summation for repeated indices is implied as usual. Using Eq. (8), one obtains
\[\frac{\partial Z^{\mu}_{b}(\vec{z},\epsilon)}{\partial\epsilon} = -g^{\nu}(\vec{Z})\frac{\partial Z^{\mu}_{b}}{\partial Z^{\nu}_{f}}. \tag{9}\]
We first consider the application of the Lie transformation on a scalar. Suppose there is a forward transform from a scalar \(s(\vec{z})\) to a new scalar \(S(\vec{Z},\epsilon)\) such that
\[S(\vec{Z},\epsilon)=s(\vec{Z}_{b}(\vec{Z},\epsilon)). \tag{10}\]
Applying \(\partial/\partial\epsilon\) on it, one obtains
\[\frac{\partial S(\vec{Z},\epsilon)}{\partial\epsilon}=\frac{\partial s(\vec{Z }_{b}(\vec{Z},\epsilon))}{\partial Z_{b}^{\mu}}\frac{\partial Z_{b}^{\mu}}{ \partial\epsilon}=\frac{\partial S(\vec{Z},\epsilon)}{\partial Z_{b}^{\mu}} \frac{\partial Z_{b}^{\mu}}{\partial\epsilon}.\]
Using Eq. (9) and the chain rule, one obtains
\[\frac{\partial S(\vec{Z},\epsilon)}{\partial\epsilon}=-g^{\mu}(\vec{Z})\frac{ \partial S(\vec{Z},\epsilon)}{\partial Z^{\mu}}. \tag{11}\]
Defining the operator \(L_{g}\equiv g^{\mu}(\partial/\partial Z^{\mu})\). Eq. (11) becomes
\[\frac{\partial S}{\partial\epsilon}=-L_{g}S. \tag{12}\]
This further gives that
\[\frac{\partial^{n}S}{\partial\epsilon^{n}}=\left(-L_{g}\right)^{n}S. \tag{13}\]
We now expand \(S(\vec{Z},\epsilon)\) in a Taylor series:
\[S(\vec{Z},\epsilon)\;=\;\sum_{n=0}^{+\infty}\frac{\epsilon^{n}}{n!}\frac{ \partial^{n}S}{\partial\epsilon^{n}}\Bigg{|}_{\epsilon=0}.\]
If the limit and derivative are commutable, this equation becomes
\[S(\vec{Z},\epsilon)\;=\;\sum_{n=0}^{+\infty}\frac{\epsilon^{n}}{n!}\frac{ \partial^{n}S|_{\epsilon=0}}{\partial\epsilon^{n}}=\sum_{n=0}^{+\infty}\frac{ \epsilon^{n}}{n!}\frac{\partial^{n}s}{\partial\epsilon^{n}}. \tag{14}\]
Using Eqs. (13) and (14), one obtains the Lie transformation of a scalar:
\[S=e^{-\epsilon L_{g}}s. \tag{15}\]
As pointed out in the introduction section that the parametric limit of the derivative of a function is not equivalent to the derivative of the limit function, this transformation rule needs to be verified case by case. Because of the operator expression, the inverse transform is simply
\[s=e^{\epsilon L_{g}}S, \tag{16}\]
evaluating at \(\epsilon=0\).
The coordinate transformation defined in equations (6) and (7) indicates that \(z^{\alpha}=Z_{b}^{\alpha}(\vec{Z},\epsilon)\). Comparing with Eq. (10), one can see that the coordinate transformation is just a special case of scalar transformation. Let us introduce the coordinate function \(I^{\alpha}\) defined by \(I^{\alpha}(\vec{z})=z^{\alpha}\) for a particular \(\alpha\). One then has
\[Z_{b}^{\alpha}=e^{-\epsilon L_{g}}I^{\alpha}(\vec{z}). \tag{17}\]
Explicitly, one has [12]
\[Z^{\alpha}(\vec{Z},\epsilon) = z^{\alpha}-\epsilon L_{g}z^{\alpha}+\frac{1}{2}\epsilon^{2}L_{g} \left(L_{g}z^{\alpha}\right)+\cdots \tag{18}\] \[= z^{\alpha}-\epsilon g_{1}^{\alpha}-\epsilon^{2}\left(g_{2}^{ \alpha}-\frac{1}{2}g_{1}^{\beta}\frac{\partial g_{1}^{\alpha}}{\partial z^{ \beta}}\right)+\cdots,\]
where \(g_{\cdots}\) are the functions of \(\vec{z}\). Since the transformation is invertible, the inverse transformation is given by
\[z^{\alpha}(\vec{z},\epsilon) = Z^{\alpha}+\epsilon g_{1}^{\alpha}+\epsilon^{2}\left(g_{2}^{ \alpha}+\frac{1}{2}g_{1}^{\beta}\frac{\partial g_{1}^{\alpha}}{\partial Z^{ \beta}}\right)+\cdots, \tag{19}\]
where \(g_{\cdots}\) are the functions of \(\vec{Z}\).
Next, we consider the Lie transform on a 1-form in the phase space: \(\gamma_{\mu}\). For a coordinate transformation \(\vec{z}\rightarrow\vec{Z}\), the new 1-form \(\Gamma_{\mu}\) follows the invariant relation: \(\Gamma_{\mu}dZ^{\mu}=\gamma_{\mu}dz^{\mu}\). This is completely equivalent to the usual rule for transforming a covariant vector:
\[\Gamma_{\mu}(\vec{Z},\epsilon) = \frac{\partial Z_{b}^{\nu}(\vec{Z},\epsilon)}{\partial Z^{\mu}} \gamma_{\nu}(\vec{Z}_{b}(\vec{Z},\epsilon)). \tag{20}\]
Applying \(\partial/\partial\epsilon\) on it and using Eq. (11), one obtains
\[\frac{\partial\Gamma_{\mu}(\vec{Z},\epsilon)}{\partial\epsilon} = -\frac{\partial}{\partial Z^{\mu}}\left[g^{\lambda}(\vec{Z}) \frac{\partial Z_{b}^{\nu}(\vec{Z},\epsilon)}{\partial Z^{\lambda}}\right] \gamma_{\nu}(\vec{Z}_{b}(\vec{Z},\epsilon)) \tag{21}\] \[-g^{\lambda}(\vec{Z})\frac{\partial Z_{b}^{\nu}(\vec{Z},\epsilon )}{\partial Z^{\mu}}\frac{\partial\gamma_{\nu}(\vec{Z}_{b}(\vec{Z},\epsilon)) }{\partial Z^{\lambda}}\] \[= -g^{\lambda}(\vec{Z})\left[\frac{\partial\Gamma_{\mu}(\vec{Z}, \epsilon)}{\partial Z^{\lambda}}-\frac{\partial\Gamma_{\lambda}(\vec{Z}, \epsilon)}{\partial Z^{\mu}}\right]-\frac{\partial}{\partial Z^{\mu}}\left[g^ {\lambda}(\vec{Z})\Gamma_{\gamma}(\vec{Z},\epsilon)\right].\]
Let \(\xi_{\mu}\) be an arbitrary 1-form and \(L_{g}\bar{\xi}\) another 1-form whose components given by
\[(L_{g}\bar{\xi})_{\mu} = g^{\nu}\left(\partial_{\nu}\xi_{\mu}-\partial_{\mu}\xi_{\nu} \right). \tag{22}\]
To be specific, we point out that Eq. (22) is just Eq.(46) in Ref. [5], in which the last term of Eq. (21) has been dropped in view of that the total derivative does not contribute to the variational principle. With this definition, Eq. (21) can be expressed as
\[\frac{\partial\bar{\Gamma}}{\partial\epsilon}\;=\;-L_{g}\bar{\Gamma}-\bar{ \partial}(\vec{g}\cdot\bar{\Gamma}), \tag{23}\]
which is just Eq.(47) in Ref. [5]. Here, \(\bar{\partial}=\{\partial/\partial Z^{\mu}\}\).
Noting the symmetry property, one can prove that \(L_{g}\bar{\partial}\) always vanishes (similar to \(\nabla\,\times\,\nabla=0\) in the three dimensional case) and \(g^{\mu}(L_{g}\bar{\xi})_{\mu}=0\). Therefore, Eq. (23) can be applied inductively to yield
\[\frac{\partial^{n}\bar{\Gamma}}{\partial\epsilon^{n}}\;=\;\left(-L_{g}\right) ^{n}\bar{\Gamma}+\left(-\bar{\partial}\vec{g}\cdot\,\right)^{n}\bar{\Gamma}. \tag{24}\]
If the limit and and derivative are commutable, Eq. (24) leads to
\[\bar{\Gamma}(\vec{Z},\epsilon)\;=\;\sum_{n=0}^{+\infty}\frac{\epsilon^{n}}{n! }\frac{\partial^{n}\bar{\Gamma}}{\partial\epsilon^{n}}\Bigg{|}_{\epsilon=0}= \sum_{n=0}^{+\infty}\frac{\epsilon^{n}}{n!}\frac{\partial^{n}\bar{\Gamma}|_{ \epsilon=0}}{\partial\epsilon^{n}}=\sum_{n=0}^{+\infty}\frac{\epsilon^{n}}{n! }\frac{\partial^{n}\bar{\gamma}}{\partial\epsilon^{n}}.\]
Using this equation and Eq. (13), one obtains
\[\bar{\Gamma}\;=\;e^{-\epsilon L_{g}}\bar{\gamma}+\bar{\partial}S \tag{25}\]
Here, \(\bar{\partial}S\) results from the second term on the right hand side of Eq. (24) and gives rise to an exact differential in the variational principle. The inverse of Eq. (25) is simply
\[\bar{\gamma}\;=\;e^{\epsilon L_{g}}\bar{\Gamma}+\bar{\partial}s, \tag{26}\]
where \(s\) is a scalar different from \(S\). As pointed out in the introduction section that the parametric limit of the derivative of a function is not equivalent to the derivative of the limit function, this transformation rule needs to be verified case by case.
Using the transformation formula for scalar and 1-form in equations (15) and (25) (together with Eq. (22)) respectively, one can develop the high order perturbation theory. We assume that the 1-form in the phase space which can be expanded as follows
\[\bar{\gamma}=\bar{\gamma}^{(0)}+\epsilon\bar{\gamma}^{(1)}+\epsilon^{2}\bar{ \gamma}^{(2)}+\cdots. \tag{27}\]
It is also assumed that the lowest-order dynamics with \(\bar{\gamma}^{(0)}\) is well understood, i.e., its solutions are known or can be easily obtained. Therefore, the lowest-order trajectory can be used to find the solutions of higher orders.
In order to simplify the 1-form to sufficient orders, the following overall transformation operator, which is a composition of individual Lie transforms, is introduced
\[T\;=\;\cdots T_{3}T_{2}T_{1} \tag{28}\]
with
\[T_{n}\;=\;e^{-\epsilon^{n}L_{n}}. \tag{29}\]
Here, \(L_{n}\) denotes \(L_{g_{n}}\) as defined for scalar and 1-form in equations (15) and (25) (together with Eq. (22)) respectively. The generators \(g_{n}^{\mu}\) with \(n=1,\cdots,n\) will be used to simply the fundamental 1-form in Eq. (27) to order \(n\). The inverse of the transformation vector is
\[T^{-1}\;=\;T_{1}^{-1}T_{2}^{-1}T_{3}^{-1}\cdots \tag{30}\]
with
\[T_{n}^{-1}\;=\;e^{\epsilon^{n}L_{n}}. \tag{31}\]
When successive Lie transforms are applied in this manner, Eq. (25) becomes, noting that \(T_{i}\bar{\partial}\) always vanishes,
\[\bar{\Gamma}\;=\;T\bar{\gamma}+\bar{\partial}S, \tag{32}\]
where \(S\) collects all possible scalar contributions. Expanding \(\bar{\Gamma}\) and \(S\) in powers of \(\epsilon\) as well as \(\bar{\gamma}\) in Eq. (27) and collecting terms in each order, one obtains
\[\bar{\Gamma}^{(0)} = \bar{\gamma}^{(0)}, \tag{33}\] \[\bar{\Gamma}^{(1)} = \bar{\partial}S^{(1)}-L_{1}\bar{\gamma}^{(0)}+\bar{\gamma}^{(1)},\] (34) \[\bar{\Gamma}^{(2)} = \bar{\partial}S^{(2)}-L_{2}\bar{\gamma}^{(0)}+\bar{\gamma}^{(2)} -L_{1}\bar{\gamma}^{(1)}+\frac{1}{2}L_{1}^{2}\bar{\gamma}^{(0)},\] (35) \[\bar{\Gamma}^{(3)} = \bar{\partial}S^{(3)}-L_{3}\bar{\gamma}^{(0)}+\bar{\gamma}^{(3)} -L_{2}L_{1}\bar{\gamma}^{(0)}+\frac{1}{6}L_{1}^{3}\bar{\gamma}^{(0)}\] (36) \[-L_{2}\bar{\gamma}^{(1)}+\frac{1}{2}L_{1}^{2}\bar{\gamma}^{(1)} -L_{1}\bar{\gamma}^{(2)},\]
and so on.
These complete the basic theoretical review of the conventional Lie transform theory in references [4; 6]. The conventional Lie transform theory was applied to study the charged
particle motion in a magnetic field [4; 12]. In the Appendix B of Ref. [12], the Lagrangian in the zeroth order was obtained as follows
\[{}^{d}\Gamma^{(0)}\:=\:\frac{e}{mc}\mathbf{A}\,\cdot\,d\mathbf{X}, \tag{37}\]
while the first order Lagrangian is given as follows (Eq. (B18) in Ref. [12])
\[{}^{d}\Gamma^{(1)}\:=\:u\mathbf{b}\,\cdot\,d\mathbf{X}-\left(\frac{u^{2}}{2}+ \mu B+\frac{e}{m}\varphi\right)dt. \tag{38}\]
Combining Eqs. (37) and (38) yields Eq. (2) discussed in the introduction.
In these reviews, the correction as pointed out in the introduction section has not been included, especially as will be seen in the next section, the expansions of the phase space Lagrangian in Eqs. (33) - (36) will be modified. Consequently, the first order Lagrangian in Eq. (38) will be modified. The modification is related to the difference between Eq. (38) and the result by the direct approach in Eq. (1). The term \((mc/e)\mu d\zeta\) is missing in Eq. (38), although it is of the same order as the term \(u\mathbf{b}\,\cdot\,d\mathbf{X}\) as shown in the ordering analysis in Eq. (3). The term \((mc/e)\mu d\zeta\) is only picked up in the next order in the conventional Lee transform approach (Eq. (B30) in Ref. [12] or Eq.(29) in Ref. [4]). Note that Eq. (38) is obtained by strictly following the standard Lie transform formulation. As explained in the next section, the problem lies in that the commutation between the limit and derivative in deriving Eq. (25) is illegitimate in the case with a fast varying coordinate. In the derivation of Eq. (25), the commutation as shown in Eq. (4) was used to reduce \(\bar{\Gamma}\) to \(\bar{\gamma}\) in the conventional Lie transform formulation. As explained alternatively in Appendix A, the illegitimate commutation is equivalent to assume that \(\Gamma_{\mu}dZ^{\mu}=\gamma_{\mu}dZ^{\mu}\), which is apparently invalid. The correct one should be \(\Gamma_{\mu}dZ^{\mu}=\gamma_{\mu}dz^{\mu}\).
## III Modification of Lie Transformation
In this section, we describe the modification of the conventional Lie transformation formulation for the systems containing rapidly varying coordinates. For the guiding center motion of a charged particle in a magnetic field, the gyrophase is this type of coordinates. This helps solve the inconsistency in the phase-space Lagrangian perturbation theories between the direct derivation and the Lie transform formulation as pointed out in the introduction section.
Strictly speaking, the transformation rule for 1-form in Eq. (22) is correct. However, when applying this rule in equations (34)-(36), the ordering inconsistency occurs. One needs to take into account the difference between \(\Gamma_{\nu}\) and \(\gamma_{\nu}\). This is because the parametric limit of the derivative of a function is not equivalent to the derivative of the limit function. Equation (25) does not apply to the systems containing a rapidly varying coordinate. To correct the transformation rule, we continue the derivation of the transformation rule in Eq. (21) (or Eq. (22)) to include the transformation from \(\Gamma_{\nu}\) to \(\gamma_{\nu}\) in Eq. (20). This is carried out as follows
\[-g^{\lambda}(\vec{Z})\left(\frac{\partial\Gamma_{\nu}(\vec{Z}, \epsilon)}{\partial Z^{\lambda}}-\frac{\partial\Gamma_{\lambda}(\vec{Z}, \epsilon)}{\partial Z^{\nu}}\right)dZ^{\nu}\] \[= -g^{\lambda}(\vec{Z})\left(\frac{\partial\gamma_{\mu}(\vec{Z}_{ b}(\vec{Z},\epsilon))}{\partial Z^{\lambda}}\frac{\partial Z^{\mu}_{b}}{ \partial Z^{\nu}}-\frac{\partial\gamma_{\mu}(\vec{Z}_{b}(\vec{Z},\epsilon))} {\partial Z^{\nu}}\frac{\partial Z^{\mu}_{b}}{\partial Z^{\lambda}}\right)dZ ^{\nu}.\]
Using the coordinate transformation rule in Eq. (19), one obtains the modified transform rule for 1 form
\[(L_{g}\bar{\Gamma})_{\mu} = g^{\nu}\left(\partial_{\nu}\gamma_{\mu}-\partial_{\mu}\gamma_{ \nu}\right) \tag{39}\] \[-g^{\nu}\left[\left(\partial_{\nu}\gamma_{\delta}\right)\left( \partial_{\mu}g^{\delta}_{1}\right)-\left(\partial_{\mu}\gamma_{\delta}\right) \left(\partial_{\nu}g^{\delta}_{1}\right)\right]+\cdots.\]
Here, the second term seems to be formally one order smaller than the first term on the right. However, if the components of \(dz^{\nu}\) are different in order, the second term on the right hand side of Eq. (39) has to be kept for rapidly varying components for ordering consistency. The gyrophase is an example of rapidly varying coordinates in considering the charged particle motion in a magnetic field. Since the transformation rule in Eq. (39) is fundamentally important, an alternative derivation is provided in Appendix A. In the Appendix A, it is also pointed out that the conventional transform rule for 1 form reviewed in the previous section is actually obtained from the formula \(\Gamma_{\mu}dZ^{\mu}=\gamma_{\mu}dZ^{\mu}\), instead of \(\Gamma_{\mu}dZ^{\mu}=\gamma_{\mu}dz^{\mu}\). This is the consequence of the non-consistent exchange between the limit (\(\epsilon\to 0\)) and derivative.
The phase space Lagrangian for a charged particle motion in the electromagnetic field is given in references [4; 6]
\[{}^{d}\gamma = \left(\frac{e}{mc}A_{\mu}+v_{\mu}\right)dz^{\mu}+\left(\frac{v^{2 }}{2}+\frac{e}{m}\varphi\right)dt.\]
Here again, \({}^{(d)}\gamma\)" has been used in order to show the individual components of the one form \(\bar{\gamma}\) explicitly. In the perturbation analyses, \({}^{d}\gamma\) is expanded in \(\epsilon\) as follows
\[{}^{d}\gamma_{0} = \frac{e}{mc}A_{\mu}(z)dz^{\mu}, \tag{40}\] \[{}^{d}\gamma_{1} = v_{\mu}dz^{\mu}+\left(\frac{v^{2}}{2}+\frac{e}{m}\varphi\right)dt. \tag{41}\]
In the zeroth order, one has
\[{}^{d}\Gamma^{(0)} = dS^{(0)}+\frac{e}{mc}A_{\mu}(Z)\frac{\partial z^{\mu}}{\partial Z ^{\nu}}dZ^{\nu} \tag{42}\] \[= dS^{(0)}+\frac{e}{mc}A_{\mu}(Z)dZ^{\mu}-\frac{e}{mc}A_{\mu}(Z)dg_ {1}^{\mu}.\]
Here, \(dS=\partial_{\mu}SdZ^{\mu}\), the last term on the right is kept, since \(dg_{1}^{\mu}\) can be of order \(1/\epsilon\) as can be proved _a posteriori_. Letting \(S_{0}=A_{\mu}g_{1}^{\mu}\), one has \(dS^{(0)}-\epsilon A_{\mu}dg_{1}^{\mu}=\epsilon g_{1}^{\mu}dA_{\mu}\). Noting that \(dA_{\mu}\) is of order unity, one obtains
\[{}^{d}\Gamma^{(0)} = \frac{e}{mc}\mathbf{A}(\mathbf{X})\cdot d\mathbf{X}. \tag{43}\]
This derivation is in fact similar to the direct derivation of Eq. (3.41) in reference [6].
The first order Lagrangian in Eq. (34) becomes
\[{}^{d}\Gamma^{(1)} = dS^{(1)}-L_{1}^{conv\ d}\gamma^{(0)}+{}^{d}\gamma^{(1)}-v_{\mu} dg_{1}^{\mu}-g_{1}^{\lambda}\frac{\partial\gamma_{\mu}^{(0)}}{\partial Z^{ \lambda}}dg_{1}^{\mu}. \tag{44}\]
Here, \(L_{1}^{conv}\) is the conventional operator given in Eq. (22). The last two terms are the additional terms as compared to the conventional result as reviewed in Eq. (34). The fourth term on the right derives from the correction of the term \(v_{\mu}dz^{\mu}\), similar to the last term on the right hand side of Eq. (42). The last term comes from the correction to the 1-form transformation rule as shown in Eq. (39).
As shown in the appendix of reference [12], the conventional contribution can be reduced as fowllows
\[L_{1}^{conv\ d}\gamma_{0} = -\frac{e}{mc}g_{1}^{i}\left(\frac{\partial\mathbf{A}_{j}}{ \partial X^{i}}-\frac{\partial\mathbf{A}_{i}}{\partial X^{j}}\right)dX^{j} \tag{45}\] \[= -\frac{e}{mc}\mathbf{g}_{1}^{\mathbf{X}}\,\times\,\mathbf{B}\, \cdot\,d\mathbf{X},\]
where \(\mathbf{B}=\nabla\,\times\,\mathbf{A}\) has been used. Thus, the one form in Eq. (44) becomes
\[{}^{d}\Gamma^{(1)} = dS^{(1)}+\left[(u\mathbf{b}+\mathbf{v}_{\perp})-\frac{e}{mc} \mathbf{B}\,\times\,\mathbf{g}_{1}^{\mathbf{X}}\right]\,\cdot\,d\mathbf{X}- \mathbf{v}\,\cdot\,d\mathbf{g}_{1}^{\mathbf{X}}-\frac{e}{mc}\mathbf{g}_{1}^{ \mathbf{X}}\,\cdot\,\nabla\mathbf{A}\,\cdot\,d\mathbf{g}_{1}^{\mathbf{X}} \tag{46}\] \[-\left(\frac{u^{2}}{2}+\mu B+\frac{e}{m}\varphi\right)dt.\]
Similar to the direct reduction procedure in reference [6], noting further that
\[\frac{1}{2}d\left({\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\,\cdot \,{\bf g}_{1}^{\bf X}\right) = \frac{1}{2}\left(d{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\,\cdot \,{\bf g}_{1}^{\bf X}+{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\,\cdot\,d{\bf g }_{1}^{\bf X}\right)\] \[+{\bf g}_{1}^{\bf X}\,\cdot\,\left(d\nabla{\bf A}\right)\,\cdot \,{\bf g}_{1}^{\bf X},\]
one obtains
\[{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\,\cdot\,d{\bf g}_{1}^{ \bf X} = -\frac{1}{2}\left(d{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\, \cdot\,{\bf g}_{1}^{\bf X}-{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\,\cdot\,d {\bf g}_{1}^{\bf X}\right)\] \[+\frac{1}{2}\left(d{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\, \cdot\,{\bf g}_{1}^{\bf X}+{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\,\cdot\,d {\bf g}_{1}^{\bf X}\right)\] \[= -\frac{1}{2}\left(d{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\, \cdot\,{\bf g}_{1}^{\bf X}-{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\,\cdot \,d{\bf g}_{1}^{\bf X}\right)+\frac{1}{2}d\left({\bf g}_{1}^{\bf X}\,\cdot\, \nabla{\bf A}\,\cdot\,{\bf g}_{1}^{\bf X}\right)\] \[-{\bf g}_{1}^{\bf X}\,\cdot\,\left(d\nabla{\bf A}\right)\,\cdot\, {\bf g}_{1}^{\bf X}.\]
Excluding the exact derivative and \(O(\epsilon)\) terms, one has
\[{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\,\cdot\,d{\bf g}_{1}^{ \bf X} = -\frac{1}{2}{\bf g}_{1}^{\bf X}\,\cdot\,\left(d{\bf g}_{1}^{\bf X }\,\cdot\,\nabla{\bf A}-{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\right) \tag{47}\] \[= \frac{1}{2}{\bf g}_{1}^{\bf X}\,\times\,d{\bf g}_{1}^{\bf X}\, \cdot\,{\bf B}+O(\epsilon).\]
Here, it has been noted that \(d{\bf g}_{1}^{\bf X}\,\times\,\nabla\,\times\,{\bf A}=\nabla{\bf A}\,\cdot \,d{\bf g}_{1}^{\bf X}-d{\bf g}_{1}^{\bf X}\,\cdot\,\nabla{\bf A}\). Therefore, one form in Eq. (46) is further reduced to
\[{}^{d}\Gamma^{(1)} = dS^{(1)}+\left[(u{\bf b}+{\bf v}_{\perp})-\frac{e}{mc}{\bf B}\, \times\,{\bf g}_{1}^{\bf X}\right]\,\cdot\,d{\bf X}-{\bf v}\,\cdot\,d{\bf g}_ {1}^{\bf X}-\frac{1}{2}\frac{e}{mc}{\bf g}_{1}^{\bf X}\,\times\,d{\bf g}_{1}^{ \bf X}\,\cdot\,{\bf B} \tag{48}\] \[-\left(\frac{u^{2}}{2}+\mu B+\frac{e}{m}\varphi\right)dt.\]
Again similar to the direct reduction procedure in reference [6], one can see that, to reduce \({}^{d}\Gamma_{1}\) one can choose
\[{\bf g}_{1}^{\bf X}\,=\,-\mathbf{\rho}, \tag{49}\]
where \(-\mathbf{\rho}={\bf v}\,\times\,{\bf b}/\Omega\). In this case, the term \({\bf v}_{\perp}\,\cdot\,d{\bf X}\) term is cancelled, the term \({\bf v}\,\cdot\,d{\bf g}_{1}^{\bf X}\) becomes of order \(\epsilon\), and the term \(-\frac{1}{2}\frac{e}{mc}{\bf g}_{1}^{\bf X}\,\times\,d{\bf g}_{1}^{\bf X}\, \cdot\,{\bf B}\) is reduced to \((e/mc)\mu d\zeta\). Combing the contributions from \({}^{d}\Gamma^{(0)}\) and \({}^{d}\Gamma^{(1)}\), one finally obtains from Eq. (48)
\[{}^{d}\Gamma=\left(\frac{e}{mc}{\bf A}+u{\bf b}\right)\,\cdot\,d{\bf X}+\frac {mc}{e}\mu d\zeta-\left(\frac{u^{2}}{2}+\mu B+\frac{e}{m}\varphi\right)dt. \tag{50}\]
This is the result obtained with the modified transform rule in Eq. (39).
Equation (50) agrees with the result using the direct approach in Eq. (1) [6], This is different from the conventional results in references [4; 12], as cited in Eqs. (37) and (38),
in which the same result (i.e., the term \((mc/e)\mu d\zeta\)) is only obtained in the second order, instead of the first order. As indicated in the ordering analyses in Eq. (3), the conventional result in Eq. (38) with \(u{\bf b}\,\cdot\,d{\bf X}\) being kept but the term \((mc/e)\mu d\zeta\) dropped is ordering inconsistent.
Equation Eq. (50) is obtained through the modified transformation rule for one form in Eq. (39). Two key factors are taken into account in deriving Eq. (39). First, the ordering difference between the temporal variation of gyrophase and that of the other phase space coordinates needs to be taken into account. Second, it is also important to note that the limit and derivative cannot be commuted in general. One can expect that not only the transformation rule for one form is changed, but also any higher forms involving \(dz^{\mu}\) (for example 2 or 3 forms, etc..) are affected as well. Nevertheless, it is noteworthy to note that since the main change for guiding center motion theory lies in the transformation rule for one form, i.e., the Lagrangian, and one usually does not need a backward transformation for Lagrangian. The usual transformation rule for zero form remains basically valid. This leads the forward and backward guiding center coordinate transformation in the conventional Lie transform formulation in Eq. (17) remain applicable. This indicates that the backward coordinate transformation can still be easily obtained by the inversion of the near identity exponential operator \(e^{-\epsilon L_{g}}\) from the forward transformation. Therefore, the modified framework for the case with a fast varying coordinate remains convenient for practical applications.
## IV Conclusions and discussion
In this paper, we show that the conventional Lie transform perturbation theory for the guiding center motion of charged particles in a magnetic field needs to be modified for ordering inconsistency. The reasons are two folds. First, the components of \(dz^{\mu}\) can be different in order. In the case of the guiding center motion of charged particles in a magnetic field, the temporal variation of gyrophase is much faster than other phase space coordinates. This is actually noted in the non-Lie-transform formulation in reference [6]. The other is related to the basic calculus rule. The parametric limit of the derivative of a function is not equivalent to the derivative of the limit function. This leads to the change of the near identity transformation rule.
With this ordering correction made, it is shown that the Lie transform approach can achieve the same phase space Lagrangian of guiding center motion in the first order as obtained by the direct approach in reference [6]. Without the correction, the same Lagrangian can only be obtained by the expansion in the second order and beyond [4; 12]. This is a mathematical problem. Its results were confirmed in multiple ways. The recovery of the results with direct approach in Eq. (1) is a direct justification. The ordering analyses in Eq. (3) justify the current result in Eq. (50), instead of the conventional one in Eq. (38). Also, the key result in the paper, i.e., the transformation rule for one form in Eq.(39), is double confirmed by an alternative derivation in Appendix A.
Let's discuss this further. The conventional Lie transform theory [4; 12] actually belongs to the regular perturbation theory. Because \(d{\bf X}\) and \(d\zeta\) are different in order, there needs to be a singular perturbation theory. This is similar to the treatment of boundary layer problem in the fluid theory, the boundary layer theory applied to tearing modes in plasma physics [16], and the renormalization process in dealing with the divergent issue in the quantum field theory. In many fields of physics, people have experienced such type of perturbation theory evolution. When the regular perturbation theory was found to be incorrect, the singular perturbation theory is developed with some kind of "renormalization". In this regard, J. R. Cary and A. J. Brizard made an important contribution in Ref. [6].
The importance of current work lies in the modification of the transformation rule for one form or the Lagrangian for the system with a fast-varying coordinate. In mathematics, differential forms give a unified description for defining integrands over curves, surfaces, solids, and higher-dimensional manifolds [7; 17]. It has many applications, especially in physics, geometry, and topology. This certainly includes plasma physics due to pioneer contributions by R. G. Littlejohn, J. R. Cary, A. J. Brizard, et al. [5]. For example, Hamilton's principle of least action is directly related to one form. The physics system often needs perturbation analyses. The Lie transform formalism for near identity transformation in the phase space provides a unique and powerful tool for analyzing the Lagrangian system. In plasma physics, it has been used to study the charged particle motion in a magnetic field, the nonlinear gyrokinetics, the magnetic field flow, et al. (see for instance the review articles in Refs. [5; 6; 10]). The Lie transform perturbation theory deals with the variations of various forms, i.e., the integrands, with the change of the variable differentials taken into account. The change of the one-form transform rule pointed out in this paper is critically
important to the Lie transform framework. To justify the need for this modification on a solid basis, the charged particle motion in a magnetic field is used as an example for demonstration since it can be compared with the results derived directly and verified by the obvious ordering analyses. The correction to the near identity transformation rule affects not only the theory of the guiding center motion of charged particles in a magnetic field, but also generally to the systems with a rapidly varying coordinate. In plasma physics, for example, one can see that the earlier derivation of the nonlinear gyrokinetic equation needs to be repaired since the last term in the newly derived transform rule in Eq. (39) has not been taken into consideration. This also affects the applications in the classical mechanics and even the fundamental mathematical formulation of Lie transformation as well. The principle for the modification of one form pointed out in this paper affects the transform rules for other forms, for example the two form related to the exterior derivative. Again, two factors are needed to consider for other forms in the perturbation analyses. First, one cannot just treat the integrands, i.e., various forms, order by order, but ignore the ordering difference between the variable differentials in the integration. Second, one cannot simply commute the limit and derivative. This requires a systematical reformulation of Lie transform for higher forms and will be addressed in the future work. Nevertheless, for studying the Lagrangian system, the treatment of one form given in this work is usually sufficient. These discussion indicate that the current work has a severe impact on the Lie perturbation theory. It basically limits the applicability of the conventional Lie transform theory only to the system without a fast-varying coordinate. Note that one of the most important applications of Lie transform is to treat the system with a fast-varying coordinate to obtain the averaged effects over the fast-varying coordinate. These indicate that the results in this paper are important.
This research is supported by Department of Energy Grants DE-FG02-04ER54742.
## Appendix A Alternative derivation
In this Appendix, we provide an alternative derivation of the transformation rule in Eq. (39). Note that
\[\Gamma_{\mu}dZ^{\mu}=\gamma_{\mu}dz^{\mu}. \tag{42}\]
Using the transformation rule for scale function in Eq. (12), one obtains
\[\frac{\partial}{\partial\epsilon}\left(\Gamma_{\mu}dZ^{\mu}\right) = \frac{\partial}{\partial\epsilon}\left(\gamma_{\mu}dz^{\mu}\right) \tag{12}\] \[= \gamma_{\mu}d\frac{\partial z^{\mu}}{\partial\epsilon}+\frac{ \partial\gamma_{\mu}}{\partial\epsilon}dz^{\mu}\] \[= -d\gamma_{\mu}\frac{\partial z^{\mu}}{\partial\epsilon}+\frac{ \partial\gamma_{\mu}}{\partial\epsilon}\frac{\partial z^{\mu}}{\partial Z^{ \nu}}dZ^{\nu}+d\left(\gamma_{\mu}\frac{\partial z^{\mu}}{\partial\epsilon}\right)\] \[\rightarrow -\frac{\partial\gamma_{\mu}}{\partial Z^{\nu}}\frac{\partial z^{ \mu}}{\partial\epsilon}dZ^{\nu}+\frac{\partial\gamma_{\mu}}{\partial\epsilon} \frac{\partial z^{\mu}}{\partial Z^{\nu}}dZ^{\nu}.\]
Here, the total derivative has been dropped, since we consider the variational principle. Using Eqs. (9) and (11), one has
\[\frac{\partial}{\partial\epsilon}\left(\Gamma_{\mu}dZ^{\mu}\right) = \frac{\partial\gamma_{\mu}}{\partial Z^{\nu}}\left(g^{\lambda}\frac{ \partial z^{\mu}}{\partial Z^{\lambda}}\right)dZ^{\nu}-g^{\lambda}\frac{ \partial\gamma_{\mu}}{\partial Z^{\lambda}}\frac{\partial z^{\mu}}{\partial Z ^{\nu}}dZ^{\nu} \tag{13}\]
Carrying out the expansion for \(\partial z^{\nu}/\partial Z^{\nu}\) to sufficient order, the transformation rule for one form in Eq. (39) is recovered.
Similar to the derivation of Eq. (12), it can be shown that the conventional result
\[\frac{\partial}{\partial\epsilon}\left(\Gamma_{\mu}dZ^{\mu}\right) = g^{\mu}\left(\frac{\partial\gamma_{\nu}}{\partial Z^{\mu}}-\frac{ \partial\gamma_{\mu}}{\partial Z^{\nu}}\right)dZ^{\nu}\]
is actually obtained from the formula \(\Gamma_{\mu}dZ^{\mu}=\gamma_{\mu}dZ^{\mu}\), instead of \(\Gamma_{\mu}dZ^{\mu}=\gamma_{\mu}dz^{\mu}\) in Eq. (11). This is the consequence of the non-consistent exchange between the limit (\(\epsilon\to 0\)) and derivative.
|
2304.14765 | LostPaw: Finding Lost Pets using a Contrastive Learning-based
Transformer with Visual Input | Losing pets can be highly distressing for pet owners, and finding a lost pet
is often challenging and time-consuming. An artificial intelligence-based
application can significantly improve the speed and accuracy of finding lost
pets. In order to facilitate such an application, this study introduces a
contrastive neural network model capable of accurately distinguishing between
images of pets. The model was trained on a large dataset of dog images and
evaluated through 3-fold cross-validation. Following 350 epochs of training,
the model achieved a test accuracy of 90%. Furthermore, overfitting was
avoided, as the test accuracy closely matched the training accuracy. Our
findings suggest that contrastive neural network models hold promise as a tool
for locating lost pets. This paper provides the foundation for a potential web
application that allows users to upload images of their missing pets, receiving
notifications when matching images are found in the application's image
database. This would enable pet owners to quickly and accurately locate lost
pets and reunite them with their families. | Andrei Voinea, Robin Kock, Maruf A. Dhali | 2023-04-28T11:23:44Z | http://arxiv.org/abs/2304.14765v1 | # LostPaw: Finding Lost Pets using a Contrastive Learning-based Transformer with Visual Input
###### Abstract
Losing pets can be highly distressing for pet owners, and finding a lost pet is often challenging and time-consuming. An artificial intelligence-based application can significantly improve the speed and accuracy of finding lost pets. In order to facilitate such an application, this study introduces a contrastive neural network model capable of accurately distinguishing between images of pets. The model was trained on a large dataset of dog images and evaluated through 3-fold cross-validation. Following 350 epochs of training, the model achieved a test accuracy of 90%. Furthermore, overfitting was avoided, as the test accuracy closely matched the training accuracy. Our findings suggest that contrastive neural network models hold promise as a tool for locating lost pets. This paper provides the foundation for a potential web application that allows users to upload images of their missing pets, receiving notifications when matching images are found in the application's image database. This would enable pet owners to quickly and accurately locate lost pets and reunite them with their families.
contrastive learning neural networks object detection transformers
## 1 Introduction
Losing a beloved pet can be a traumatic experience for their owners. Pet owners often go to great lengths to find their lost pets. These efforts include posting flyers, searching online, and hiring private investigators. Unfortunately, these methods are often unsuccessful, as it can be difficult to exhaustively search an area, especially when the pets take longer to be found. In addition, one of the challenges of finding lost pets is that they can travel long distances from their homes, especially if they become disoriented or afraid. In many cases, pets that go missing are found a short distance from their homes, often within a few blocks or even just a few houses away. However, it is not uncommon for pets to travel much further, especially if humans or other animals chase them or the smells and sights of new environments attract them. In such cases, owners must overcome additional hurdles to find their pets and thus cannot rely on traditional search methods.
In many situations, owners rely on information from other people who might not have seen the original request for help. Unfortunately, attempts to help in this scenario are often limited, as there are no unified channels that could be accessed both by volunteers and worried owners. In such cases, using artificial intelligence can be especially helpful, as it can analyze images of pets from any location and help identify and reunite them with their owners. However, comparing two images containing pets is not trivial and can most often fool human volunteers.
In recent years, contrastive learning has emerged as a promising solution to the problem of differentiating between two or more input data classes in computer vision (Chen et al., 2020). This approach involves training a machine learning model to identify subtle differences between images by comparing pairs of data samples. This technique has demonstrated notable efficacy in various visual recognition tasks, such as image classification, where models are trained to differentiate between objects or scenes based on visual features.
There are several advantages to using contrastive learning methods. One of the primary benefits is that they allow for the efficient learning of high-dimensional representations of data, as the model is able to learn the relevant features for each class by comparing them to others. Additionally, contrastive learning can be used to understand representations invariant to certain types of transformations, such as rotation or scaling, which can be helpful in tasks such as object recognition. For example, using this approach, a model can be trained on a large dataset of images of various pets by comparing their characteristics to distinguish between them and identify which ones are most likely to be the missing pet. This could save pet owners time and effort in their search for their lost animal and be a valuable tool for animal shelters and other organizations.
A contrastive neural network model leverages latent spaces to analyze and compare pairs of images to identify their similarities and differences. More specifically, by extracting and encoding image features into a high-dimensional space, the model can identify unique patterns and similarities between images, making it possible to distinguish between different breeds and individuals accurately. This technology can potentially revolutionize the way lost pets are found, making it easier for owners to locate their missing companions and reunite with them. Therefore, in this paper, we will delve further into the technical details of such a contrastive neural network model and discuss its potential applications for solving the problem of searching for a lost pet.
## 2 Related works
Several components are required to create a contrastive learning model capable of differentiating between images of pets. A fundamental part of such a model is a neural network architecture that can learn a robust and effective data representation. In this study, we employed the Vision Transformer model as the foundation of our contrastive learning model. In addition, we used the Detection Transformer model to extract the pets from the images and the AutoAugment feature to augment the images. Finally, to optimize the model, we utilize a contrastive loss function, which allows the model to learn the underlying structure of the data by contrasting similar and dissimilar examples. In the following sections, we provide a more in-depth description of these technologies and their implementation in our contrastive learning transformer model.
### Transformer models
Transformer models are a type of neural network architecture widely successful in various natural language processing tasks and have achieved state-of-the-art results on a large selection of benchmarks (Vaswani et al., 2017). One key feature of transformers is self-attention mechanisms, which allow the model to attend to different parts of the input data at different times while processing it. This allows the model to effectively capture long-range dependencies in the data, which is particularly useful for tasks such as language translation, where the meaning of a word can depend on the context in which it is used.
In addition to self-attention mechanisms, transformers also use multi-headed attention, which allows the model to attend to multiple parts of the input data simultaneously. This allows the model to process the data and improve its performance on tasks such as language translation. Overall, transformers have proven to be a powerful and effective neural network architecture for a wide range of natural language processing tasks and have been applied to a variety of other tasks as well, including image classification and object detection.
### Detection Transformer
The Detection Transformer (DETR) is a set-based object detector that utilizes a Transformer encoder-decoder architecture (Carion et al., 2020). This architecture is designed to be an end-to-end object detector, meaning it can perform object classification and bounding box regression. The model consists of a convolutional neural network backbone,
which extracts features from the input image. These features are then flattened and supplemented with a positional encoding before being passed through a Transformer encoder. Finally, the Transformer encoder processes the features and generates a set of feature maps representing the objects in the image.
The output of the Transformer encoder is then passed to a Transformer decoder, which inputs several fixed, learned positional embeddings called _object queries_. The Transformer decoder attends to the encoder output and generates a set of embeddings, one for each object query. Each embedding is passed through a shared feedforward network that predicts either a detection or a "no object" class. In the case of detection, the model returns the class of the object (e.g., a cat or a dog) and a bounding box that represents where the object is in the image.
### Vision Transformer
Vision Transformer (ViT) is a neural network architecture designed to perform image classification tasks by processing raw pixel values as input rather than using convolutional layers as in traditional image classification models (Dosovitskiy et al., 2021). ViT consists of a series of transformer blocks, each containing a self-attention mechanism. This allows the network to process input tokens in a contextualized manner by weighting the importance of different tokens based on their relationships to other tokens. More specifically, the input tokens are represented by 16\(\times\)16 pixel patches from an image. The transformer blocks are used to process these patches and extract relevant features for image classification. The patches are first embedded into a high-dimensional space using a learned linear transformation and then passed through twelve transformer blocks. Each transformer block takes in a sequence of patches as input and produces a new sequence of patches as output.
As described in the study of Dosovitskiy et al., the output of the transformer blocks is then passed through a linear layer and a softmax function to produce the final class probabilities. ViT is trained using supervised learning, where the ground truth class labels are used to compute the cross-entropy loss and backpropagate the gradients through the model to update the weights. Therefore, it is possible to use a model trained on a classification task to fine-tune ViT for various tasks, such as comparing pairs of images.
### AutoAugment
AutoAugment (Cubuk et al., 2019) is a method for automating the process of data augmentation, which is the practice of applying various transformations to images in a dataset. This step is performed in order to artificially increase the size of the dataset and improve the robustness of machine learning models trained on the given data. AutoAugment formulates the problem of finding the optimal data augmentation policy as a discrete search problem and implements a search algorithm that uses a recurrent neural network as a controller. Given this controller, AutoAugment samples a policy that specifies which image processing operations to use, the probability of using each operation in each batch, and the magnitude of the operations.
The search algorithm is trained using policy gradient methods, which allow the algorithm to update the controller based on the validation accuracy of a neural network trained with the frozen architecture and the sampled policy. As such, various pre-configured AutoAugment policies are available, including various transformation functions that take an image as input and return an altered image. These operations include shear, translation, rotation, and various contrasting colors, brightness, and sharpness adjustments. As such, AutoAugment is an effective way to augment data, which can improve the performance of machine learning models, and is particularly effective for image classification tasks.
### Contrastive loss
Contrastive loss is a loss function commonly used in machine learning models designed for unsupervised learning (Hadsell et al., 2006). It is based on the idea of contrastive learning, where the goal is to learn a representation of the data that captures its underlying structure, such as the relationships between different classes or the differences between similar and dissimilar examples.
Contrastive loss is often used in conjunction with a neural network architecture known as a Siamese network (Koch et al., 2015), a type of network consisting of two or more identical subnetworks that are trained to process different input data. The subnetworks are typically trained using the same weights, which allows them to learn a shared representation of the data. This shared representation is then used to compute the contrastive loss, which is used to update the weights of the network.
As proposed by Hadsell et al., the contrastive loss function can be seen in Equation 1. Here, \(d\) represents a function that calculates the Euclidean distance between the two vectors representing each pet, and \(m\) is a contrastive margin that controls how sensitive the model is to marking images as similar. The loss function in this approach minimizes the
distance between feature vectors of the same class (i.e., the same pet) while simultaneously maximizing the distance between feature vectors of different classes (i.e., different pets). Furthermore, the loss function ensures that the distance between the feature vectors of dissimilar examples exceeds the margin given by the hyperparameter \(m\).
\[\mathcal{L}(X_{1},X_{2})=\frac{1}{2}\;\mathbf{E}\left[\begin{cases}\max\{m-d(X_{ 1},X_{2}),0\}^{2}&\text{different pet}\\ d(X_{1},X_{2})^{2}&\text{same pet}\end{cases}\right] \tag{1}\]
## 3 Methodology
In this study, we designed a contrastive neural network model to differentiate between pictures of pets. The model was trained on a large dataset of images of various breeds of dogs and was evaluated on a held-out test set of images. The model was implemented using the PyTorch framework Paszke et al. (2017), and we used a combination of supervised learning techniques to train the model. In the following section, we will describe the dataset used for training and evaluation, the model architecture and training procedure, and the evaluation metrics used to assess the model's performance.
### Dataset
To create the dataset used in this study, we obtained images of pets from adoption websites such as AdoptAPet ado. Each image was fed through the DETR model, and the resulting bounding boxes of pets were used to crop them from the image. For this study, we focused only on images of dogs, resulting in 31 860 pets being stored with an average of 2.47 images per pet (78 702 total images). The cropped images were then resized to fit a square of \(384\times 384\) pixels, and if the image was not wide or tall enough, the missing area was filled with black. Each image was then augmented twice using a pre-trained AutoAugment model, which followed the policies CIFAR10, ImageNet, and SVHN. A test set was created by setting aside images extracted from 3595 pets, with a total of 8854 images. The augmented dataset contained 236 106 train images and 26 562 test images, which were used to train and evaluate the contrastive neural network model developed in this study. For a schematic view of the data pipeline, see Figure 1.
To enable the use of our data for contrastive learning, we needed to further combine the images into pairs, forming a pairwise dataset. Each pair was labeled as either _different_ or _same_, and contained two images of size \(384\times 384\). The pairwise dataset was compiled using a random number generator to select labels and images, and a seed value was used to ensure that the dataset is reproducible between runs.
To sample a pair during the dataset generation, we first chose whether the label is _same_ or _different_ based on a similarity probability. A value of 50% was chosen for this probability in order to create an approximately equal number of the same and different pairs. By selecting pairs to be the same or different with equal probability, we ensured that the contrastive ViT model was exposed to a balanced set of training labels. This can help prevent the model from becoming biased towards one type of example or the other and improve the model's generalization performance on the held-out test set.
Figure 1: Data collection process. The top nodes represent the individual steps that were taken for each image. The diagrams at the bottom show possible configurations of each step.
Following this, the images for the pair were selected from the cropped image set described earlier. If the label was _different_, we randomly selected two different pets from the dataset (using the same seed described earlier), and then selected an image for each pet. If the label was _same_, a random pet was chosen, and the pair of images was made up of two different images of the same pet. Furthermore, as each image in the dataset was augmented twice, we ensure that we never choose two augmentations of the same image. The process of selecting pairs was repeated to generate an extensive set of training pairs for the contrastive ViT model, an example of which can be observed in Figure 2.
As the dataset is dynamically created during the training process, methods to validate a trained model require additional consideration to account for the stochastic nature of the data. In this study, we opt to use \(k\)-fold cross-validation with the pairwise dataset, where each pair is assigned sequentially to one of \(k\) distinct folds. Given that one of the \(k\) folds is used for testing and the remaining ones for training, the model is trained \(k\) times, with each fold being selected only once for testing. Doing so ensures that the model is not exposed to the same pairs for training and testing. However, since the data is partitioned into folds based on the pairs, there exists a possibility that the same images may be present during training and cross-validation. More specifically, due to the randomized process of selecting a second image, pairs may lead to an image being compared with one that was met before. However, given the great size of the dataset, encountering duplicate pairs is an unlikely outcome during training and cross-validation. Nevertheless, we present additional results on a held-out test set containing completely novel pets to alleviate this issue.
### Model architecture
To develop the contrastive ViT model, we used ViT as the backbone of the model Wu et al. (2020). The output of ViT was flattened, and we appended three fully connected layers to the end of the model, which reduced the size of the output to the desired latent vector size. The last three layers of the contrastive ViT model consisted of two hidden layers and a final output layer. The hidden layers contained twice the number of neurons as the size of the latent space, while the final output layer contained the same number of neurons as the size of the latent space. These layers were used to transform the output of the ViT backbone into a compact representation of the data that captured the underlying structure of the image. The ELU activation function was used between the final layers to allow the propagation of negative values, which might become available during the use of the contrastive loss function. For a graphical overview of the model architecture, see Figure 3.
During training, only the last three layers of the model were fine-tuned, while the backbone parameters were kept frozen. While this may lead to limitations in the ability of the model to learn new lower-level features, it results in a more stable training process. Furthermore, this reduces the risk of overfitting the data due to the fewer parameters being optimized. To support our decision to freeze the backbone, we conducted an ablation study where the entire network parameters, including the ViT backbone, were updated during training. The results of this study indicated a significant decrease in performance, which is discussed in detail in the Results section.
The hyperparameters for the contrastive ViT model were chosen through an empirical process of parameter sweeps. This involved training the model with a range of different values for each hyperparameter and then evaluating the model's performance on a validation set. By comparing the results of the different runs, we were able to identify the
Figure 2: Example data pairs with labels underneath. Some of the images have been augmented.
values of the hyperparameters that resulted in the best performance. This process allowed us to fine-tune the model and ensure it achieved the best possible performance in differentiating between pictures of pets. Table 1 shows a list of these hyperparameters.
### Evaluation metrics
As we briefly described earlier, we used \(k\)-fold cross-validation to evaluate the performance of the contrastive learning model. This is a standard method for evaluating machine learning models that help to mitigate the risk of overfitting. In \(k\)-fold cross-validation, the training data is divided into \(k\) subsets, or _folds_, and the model is trained \(k\) times, each time using \(k-1\) subsets for training and the leftover subset for testing. This allows the model to be evaluated on various test sets, which helps provide a more robust estimate of its generalization performance.
In our study, we used \(k=3\) for our cross-validation, which resulted in 3 different models being trained and evaluated. We trained the model for a fixed number of epochs for each fold and used the validation set to tune the model's hyperparameters. Once the model was trained, we evaluated it on the test set and recorded its performance in terms of accuracy, the type I and type II errors, as well as the \(F_{1}\) score. The type I error represents the proportion of false positives, while the type II error represents the probability of false negatives. Using these values, we calculate the precision and recall of our model, which are used to obtain the \(F_{1}\) score value (see Equation 2). The precision and recall values represent the performance of the classification model on the given dataset.
\begin{table}
\begin{tabular}{l l} \hline Name & Value \\ \hline Epochs & 350 \\ Latent Space Size & 512 \\ Batch Size & 8 \\ Batch Count per Epoch & 128 \\ Test Batch Size & 8 \\ Test Batch Count & 128 \\ Optimizer & AdamW \\ Learning Rate & 5.0e-5 \\ Weight Decay & 2.0e-4 \\ Contrastive Margin & 1.66 \\ \hline \end{tabular}
\end{table}
Table 1: Hyperparameters for the contrastive ViT model.
Figure 3: Architecture of the Contrastive Vision Transformer model.
Precision is the proportion of true positive predictions out of all positive predictions made by the model and measures how accurate the model is when it predicts positive instances. A high precision score indicates that it is usually correct when the model predicts a positive instance. Recall, on the other hand, is the proportion of true positive predictions out of all actual positive instances in the dataset, and measures how well the model is able to detect all positive instances in the dataset. A high recall score indicates that the model is able to identify most of the positive instances. Finally, the \(F_{1}\) score, as described in Equation 2, is a harmonic mean of the precision and recall values. It combines precision and recall into a single score that represents the model's overall performance. By examining these metrics, we were able to get a more detailed understanding of the model's performance.
\[F_{1}=2\cdot\frac{\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{ Recall}} \tag{2}\]
After evaluating the model on each of the three folds, we averaged the test metrics to obtain a final score indicating whether our method is overfitting to the provided data. This allowed us to get a more reliable estimate of the model's generalization performance.
## 4 Results
As described earlier, we used 3-fold cross-validation to evaluate the performance of our contrastive learning model. Similarly, a held-out test set was used to assess the model's generalizability. Throughout 350 epochs, an average \(F_{1}\) score of 88.8% on the cross-validation set was achieved. In addition, the model was trained on a large dataset and did not appear to be overfitting, as the validation accuracy closely followed the training accuracy (see Figure 3(a)).
In addition to the accuracy results, we further examined the loss values of the model during training (see Figure 3(b)). We observed that the loss value steadily dropped from a starting value of approximately 1.16 to a final value of approximately 0.04. This trend generally indicates the model is steadily learning a better representation of the data throughout training. Furthermore, the low loss value suggests that the model was able to learn an adequate representation of the data, which could potentially allow the model to make accurate decisions on unseen samples.
When examining the errors of the model (see Figure 5), we observed that the model initially classified every pair of pet images as the same pet. However, over the course of training, the model learned to differentiate between different pets, and the type I error decreased. Furthermore, the type II error was very close to zero for most of the training period. These results suggest that the model could learn a robust and relatively effective representation of the data, which could distinguish between different pets.
When examining the outcomes of the models on the held-out test set, as illustrated in Table 2, we noted that the average \(F_{1}\) score was 91.1% (SD=0.41%). Similarly, the mean type I error was 9.7%, and the type II error was 0.06%. These outcomes indicate an improvement over the metric values recorded on the train and validation sets. This may be
Figure 4: Mean train accuracy and loss of the contrastive ViT model, averaged over three model runs. The data for accuracy was smoothed by averaging the values every five epochs.
attributed to a variety of reasons, which are discussed in detail in the subsequent section. However, the results still suggest that the model has effectively generalized to new data.
Furthermore, during the ablation study, we observed that a fully-trained contrastive ViT model achieved a cross-validation \(F_{1}\) score of 80.0% and a held-out test set \(F_{1}\) score of 78.6%. This provides strong support for our decision to set the layers of the backbone model as fixed. In particular, the model appears to overfit the data more than it does with frozen layers since the held-out test set performance is inferior to the validation performance. For more results of the ablation study, see Figure 6 in Appendix A.
## 5 Discussion and conclusions
In this study, we trained a contrastive neural network model to differentiate between pictures of dogs and evaluated its performance on a held-out test set. The results of the 3-fold cross-validation have shown that the contrastive learning model can differentiate between pictures of pots with acceptable accuracy and can generalize to unseen data. In the following section, we will discuss the results of the evaluation in more detail, as well as the potential implications of these results for the use of artificial intelligence in the search for lost pets.
One issue with the results is the comparatively high number of false positives. While this might initially seem like a negative aspect of the model's performance, it could be beneficial in finding lost pets. Suppose there are only a relatively small number of missing pets in a given area at any given time. In that case, a high rate of false positives may not be a significant issue, as these can easily be dismissed.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Metric} & \multicolumn{4}{c}{Dataset} \\ \cline{2-5} & Training & Cross-val. & Held-out & Held-out std. \\ \hline Accuracy & 0.8687 & 0.8737 & 0.9028 & 0.0036 \\ Type I error & 0.1309 & 0.1261 & 0.0966 & 0.0036 \\ Type II error & 0.0004 & 0.0002 & 0.0006 & 0.0003 \\ \(F_{1}\) score & 0.8838 & 0.8880 & 0.9108 & 0.0041 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy and errors of the model, given the various sets employed.
Figure 5: Type I and II errors of the model on the test set at every epoch. The data for the errors were smoothed by averaging the values every five epochs.
Another issue to consider is the use of the AutoAugment feature, which sometimes inverts the color of the pet images. While this could potentially skew the training and testing accuracy, it could also lead to better generalization. By introducing variations in the color of the images, the model may be able to learn more robust features that are less sensitive to changes in the appearance of the images. This could help the model to perform better on real-world data that may contain variations in color and lighting conditions.
A potential issue has been observed in the accuracy metrics of the model, where the cross-validation accuracy and held-out test set accuracy are higher than the training accuracy, which requires further investigation. The higher cross-validation accuracy may be a result of random fluctuations around the true accuracy, as the two accuracies frequently cross each other during training, as depicted in Figure 4a. The reason for the improvement of over 1% in the performance of the held-out test set compared to the cross-validation set is uncertain. This discrepancy could be attributed to the data distribution or the reduced number of pets in the held-out test set. However, it is worth noting that adequate measures have been taken to eliminate any systematic errors that could have influenced the observed performance gains.
One potential direction for future research would be to expand the network to include other types of pets. This could be done by first using DETR to identify which pet is present in the image (e.g., a cat or a dog). Once the image has been identified as containing a specific pet, it could be passed through a separate fine-tuned model that is specialized in comparing pets within each class. This approach would allow the network to take advantage of the strengths of both the DETR and ViT models and could lead to a more robust model due to having more contrastive data available.
Similarly, while this study focused on a specific dataset of dog images, the contrastive learning approach described in this paper can also be applied to other datasets. By training on a diverse set of images, the model can learn to differentiate between various classes of images, which could be useful in multiple applications beyond pet identification. For example, the contrastive ViT model could be applied to the classification of medical images, the identification of wildlife species, or the comparison of handwriting styles. The potential use cases are numerous, and the results of this study suggest that contrastive learning can be an effective tool for improving the accuracy of image classification models.
|
2305.18276 | Development of a ROS-based Architecture for Intelligent Autonomous on
Demand Last Mile Delivery | This paper presents the development of the JKU-ITS Last Mile Delivery Robot.
The proposed approach utilizes a combination of one 3D LIDAR, RGB-D camera, IMU
and GPS sensor on top of a mobile robot slope mower. An embedded computer,
running ROS1, is utilized to process the sensor data streams to enable 2D and
3D Simultaneous Localization and Mapping, 2D localization and object detection
using a convolutional neural network. | Georg Novtony, Walter Morales-Alvarez, Nikita Smirnov, Cristina Olaverri-Monreal | 2023-05-29T17:49:48Z | http://arxiv.org/abs/2305.18276v1 | # Development of a ROS-based Architecture for Intelligent Autonomous on Demand Last Mile Delivery
###### Abstract
This paper presents the development of the JKU-ITS Last Mile Delivery Robot. The proposed approach utilizes a combination of one 3D LIDAR, RGB-D camera, IMU and GPS sensor on top of a mobile robot slope mower. An embedded computer, running ROS1, is utilized to process the sensor data streams to enable 2D and 3D Simultaneous Localization and Mapping, 2D localization and object detection using a convolutional neural network.
Keywords:Last Mile Delivery Mobile Robot Sensors Sensor-Fusion ROS
## 1 Introduction
The use of mobile robots as delivery aids for postal delivery has seen an upswing in recent years. In addition to Amazon, there are several other manufacturers specializing in "last mile delivery" [16]. The ITS-Chair Sustainable Transport Logistics 4 0. has developed several concepts and solutions to contribute to a more sustainable delivery of goods that requires less traffic [6, 8]. On one hand, this is because the last mile of the delivery accounts for up to 75% of the total supply chain costs [14] and, on the other hand, customer needs and consumer behavior have changed significantly in times of e-commerce and mobile shopping.
Two global megatrends in particular, urbanization and e-commerce, are strong drivers of ever-increasing demand for last-mile delivery services. Urbanization refers to the trend of more and more people moving to urban areas in general and to "megacities" with 10 million inhabitants and more in particular. It is estimated that between 82 and 90% of the world's population, depending on the region, will live in major cities by 2050 [14]. In addition, e-commerce is steadily
increasing, and more and more retail goods are being ordered online. In 2021, the revenue of B2C eCommerce in Germany alone grew by 16% and is expected to reach $7.385 billion by 2025 [12]. Thus, greater geographic concentration and increasing online orders per person trigger a rapid increase in the amount of packages that need to be handled. For Germany, for example, it is predicted that 5.68 billion shipments will need to be handled annually by 2025 compared to 2.167 billion in 2012 [4].
The increasing demand for parcels in cities leads to a much higher number of delivery trucks in city centers, which puts additional strain on the existing infrastructure, causes congestion, and negatively impacts on health, environment, and safety. As a result, growing customer awareness and new government regulations force courier services to increase their efforts to operate in a sustainable and environmentally friendly manner. To overcome these challenges, we present an autonomous delivery robot that can navigate in an urban environment. To this end, we developed a software and hardware architecture for a mobile robot for the delivery of packages and letters within the campus of the Johannes Kepler University in Linz Upper Austria (JKU).
The remainder of this paper is structured as follows: In sections 2 and 3 we describe the system concepts, including the hardware, sensor, and software setup. In section 4 we present the results, finally section 5 concludes the paper outlining future research.
## 2 Hardware Setup
Fig. 1 gives an overview of the implemented hardware components which are described in detail in the next section. Although the mobile robot has autonomous capabilities, an operator station is needed to provide a safety fallback and a teleoperation system. In addition, the mobile robot itself needs to be equipped with numerous sensors ranging from 3D LIDAR for obstacle avoidance and mapping, and a front facing camera for obstacle classification as well as for teleoperation.
The LMDBot was built upon a prototype of the "Spider ILD01" slope lawnower [1] as a base platform which was equipped with a wooden parcel station, to store the packages to be delivered. The original holonomic lawnower has been transformed into a quasi Ackermann robot in which the chain drive responsible for steering the four wheels has been placed on only two wheels.
### Sensor suit
To allow the LMDBot to perceive the environment and move throught it, we provided the robot with the common sensor suit that can be found in autonomous driving. This configuration included several types of sensors whose data guarantee a secure driving through an urban environment (Fig. 1). Specifically the sensors are:
**LIDAR Sensor:** The robot is provided with a Light Detection And Ranging (LIDAR) on the roof that serves to obtain 3D distance information of the environment that is used to localize the robot and detect pedestrians. The selected sensor is the 128 layer LIDAR OS-1 manufactured by Ouster. This sensor has uniform distribution of lights, an effective range of 120 m, a vertical field of view of 45\({}^{\ast}\) and a horizontal field of view of 360\({}^{\ast}\). We placed the LIDAR on a Fused Filament Fabrication (FFF) printed platform of 0.3 m over the roof of the robot to minimize the points that are detected due to light ray colliding with the roof of the robot.
**Depth Camera:** We equipped the LMDBot with the depth camera Intel Resalense D435 RGB-D to detect pedestrians and extract dense depth information of the near objects in front of the vehicle. This sensor extracts depth information using two IR cameras for stereo-vision that are overlayed with an IR projector to aid the stereo vision in low light and low feature scenes. Additionally, the sensor possesses a RGB camera that we used to implement an object detection algorithm.
**GNSS INS:** To localize the LMDBot in the environment and track its movements, we provided the robot with the combination of one u-blox GPS [15], one
Figure 1: Last Mile Delivery Robot (LMDBot) hardware setup
(1) Ouster OS1, (2) Ublox C94-M8P, (3) Phidgets Spatial 3/3/3 Inertial Measurement Unit, (4) Intel Realsense D435, (5) 2 \(\times\) 12V Lead-Acid Batteries, (6) Steering Encoder, (7) Propulsion Encoder
Phidgets Spatial IMU [10]. With this system we can obtain the position of the robot in local and global coordinates.
**Encoders:** Finally, we also equipped the LMDBot with two FOTEK rotary encoders, one for the propulsion and one for the steering motor. These rotary encoders connect to a low level controller to track the speed and steering of the robot.
### Processing
We had to minimize the size and weight of the processing units because the LMDBot main cargo is supposed to be the deliveries that will be placed inside the vehicle. We also had to ensure that the processing units could operate without interruption, given the large volume of data collected by the sensors. For these reasons, we selected two embedded processors, one with a dedicated GPU to ensure quick image data processing and one that requires low energy. The processing units are as follows:
#### 2.2.1 Main computer
We chose the Nvidia Jetson AGX Xavier Developer Kit [9] to perform the mapping, localization and, path planning. It additionally provided us CUDA capabilites that also allowed us to deploy the deep learning models to perform pedestrian detection.
#### 2.2.2 Low level control computer
We chose the Raspberry Pi 3B+ for the low level control due to its simplicity. It communicated with the main computer via ethernet and received the speed and steering commands from the main computer. The low level controller used these commands to calculate the amount of voltage that was needed by the motor actuators of the robot to achieve the desired speed and steering commands.
### Network Communication
To transfer data between the LIDAR the main computer and the low level control computer we used an router of 1 Gbps per channel. The router allows the different components of the system to communicate using the TCP/IP protocol. On the other hand, the GPS INSS and the camera interface with the main computer through USB 3.2 which provides rapid data transmission.
### Power management
We equipped the robot with two batteries of 95 AH and 850 A peak current each, since the actuators of the robot require 24V DC to operate, and require high current due to the the robot's weight. To segregate the power channels, we linked the robot's components via a fuse box for safety. We also attached a switch to
turn the robot on and off, as well as a secondary switch to the DC motors to stop the robot without turning it off. Finally, a circuit breaker safeguards the systems from current spikes that might damage the hardware. The connection diagram can be seen in Fig. 2.
## 3 Software Architecture
The following section will provide information about the various software components utilized on the LMDBot.
Figure 3: Overview of software components and interaction between them
Figure 2: Power management diagram of the robot.
### ROS Architecture
The Robot Operating System (ROS) [11] is used as high-level API to evaluate sensor data and control actuators via our developed low-level control either via keyboard or joystick inputs. The Fig. 3 visualizes an overview of the implemented software components.
#### 3.1.1 Sensor Calibration:
The Intel Realsense D435 RGB-D is a camera whose intrinsic parameters are already provided by the manufacturer. As a result, there was no need to use any intrinsic calibrator package. The extrinsic calibration, providing translation in x, y, and z as well as roll, pitch and yaw between the camera and the LIDAR, was created following the algorithm described in [1]. To ease the extrinsic calibration of the IMU we mounted it exactly below the origin of the LIDAR using the aforementioned FFF printed structure.
#### 3.1.2 Detection:
To detect objects that lie along the path of the robot, we applied the Convolutional Neural Network (CNN) _YOLOV4_[2] to the image stream from the RGB-D camera. To improve the adaptability of the CNN to our use case, we re-configured it to detect only people, dogs, cats, ducks, scooters, and bicyclists, as these are the main dynamic objects on the campus of the JKU.
#### 3.1.3 Mapping:
The mapping process of the campus was performed using two different methods. On the one hand, we performed classical 2D mapping with _Hector-SLAM_[5], on the other hand, we created a 3D map using _LIO-SAM_[13]. To create the mandatory 2D LIDAR for _Hector-SLAM_ we cut the 3D information of the Ouster OS1 to a 2D plane utilizing the _pointcloud_to_laserscan_ ROS package.
#### 3.1.4 Localization:
The localization was done based on the _Adaptive Monte Carlo localization (AMCL)_[3] as well as the _robot_localization_ package. _AMCL_ takes over the global localization in the 2D map and the _robot_localization_ package fuses the sensor data of the wheel encoders, IMU and the GPS signal by means of Extended Kalman Filter (EKF) and then feeds them into _AMCL_.
#### 3.1.5 Low Level Control:
To control our robot, we implemented a PID controller for the drive motor and a PID controller for the control motor, where the controlled distance was the linear and angular velocity in x and around the z axis, respectively. For initial tuning of the PID parameters we relied on Visual Odometry from RTAB-Map [7]. Furthermore, for low-level control, we simplified our vehicle model to that of a bicycle with the origin lying in the middle of the rear axis.
## 4 Results
A 2D map was generated by relying on the Hector-SLAM [5]. The corresponding 3D map was produced using LIO-SAM [13]. Finally, the objects in the vicinity were detected relying on the YOLO CNN V4 [2]. The results are visualized in Fig. 4.
As visible in Fig. 4 b) the 2D map is quite noisy, we believe the high number of glass fronts in combination with dynamic objects (pedestrians) played a significant role here. As figure 4 c) depicts the generated 3D map extends far beyond the actual campus of JKU, which can result in better localization as the buildings in the background can be used as landmarks.
## 5 Conclusion and Outlook
In this paper a prototype of a last mile delivery robot has been presented. We introduced our hardware as well as software stack and presented results in terms of generated 3D and 2D maps.
Further work will deal with creating more precise maps, to ease the path planning, and evaluating the performance of autonomous delivery between two or more positions in the geographic coordinate system available at the JKU campus. Further, we will investigate route optimization methods for the parcel delivery framework.
#### 5.0.1 Acknowledgements
This work was supported by the Austrian Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK) Endowed Professorship for Sustainable Transport Logistics 4.0., IAV France S.A.S.U., IAV GmbH, Austrian Post AG and the UAS Technikum Wien.
Figure 4: a) Object detection, b) Generated 2D map of the JKU campus, c) Dense 3D map of the JKU campus |
2308.09387 | Multi-Level Compositional Reasoning for Interactive Instruction
Following | Robotic agents performing domestic chores by natural language directives are
required to master the complex job of navigating environment and interacting
with objects in the environments. The tasks given to the agents are often
composite thus are challenging as completing them require to reason about
multiple subtasks, e.g., bring a cup of coffee. To address the challenge, we
propose to divide and conquer it by breaking the task into multiple subgoals
and attend to them individually for better navigation and interaction. We call
it Multi-level Compositional Reasoning Agent (MCR-Agent). Specifically, we
learn a three-level action policy. At the highest level, we infer a sequence of
human-interpretable subgoals to be executed based on language instructions by a
high-level policy composition controller. At the middle level, we
discriminatively control the agent's navigation by a master policy by
alternating between a navigation policy and various independent interaction
policies. Finally, at the lowest level, we infer manipulation actions with the
corresponding object masks using the appropriate interaction policy. Our
approach not only generates human interpretable subgoals but also achieves
2.03% absolute gain to comparable state of the arts in the efficiency metric
(PLWSR in unseen set) without using rule-based planning or a semantic spatial
memory. | Suvaansh Bhambri, Byeonghwi Kim, Jonghyun Choi | 2023-08-18T08:38:28Z | http://arxiv.org/abs/2308.09387v2 | # Multi-Level Compositional Reasoning for Interactive Instruction Following
###### Abstract
Robotic agents performing domestic chores by natural language directives are required to master the complex job of navigating environment and interacting with objects in the environments. The tasks given to the agents are often composite thus are challenging as completing them require to reason about multiple subtasks, _e.g._, bring a cup of coffee. To address the challenge, we propose to divide and conquer it by breaking the task into multiple subgoals and attend to them individually for better navigation and interaction. We call it _Multi-level Compositional Reasoning Agent (MCR-Agent)_. Specifically, we learn a three-level action policy. At the highest level, we infer a sequence of human-interpretable subgoals to be executed based on language instructions by a high-level _policy composition controller_. At the middle level, we discriminatively control the agent's navigation by a _master policy_ by alternating between a navigation policy and various independent interaction policies. Finally, at the lowest level, we infer manipulation actions with the corresponding object masks using the appropriate _interaction policy_. Our approach not only generates human interpretable subgoals but also achieves 2.03% absolute gain to comparable state of the arts in the efficiency metric (PLWSR in unseen set) without using rule-based planning or a semantic spatial memory.
## Introduction
For the long-awaited dream of building a robot to assist humans in daily life, we now witness rapid advances in various embodied AI tasks such as visual navigation [1, 13, 14], object interaction [15, 16], and interactive reasoning [17, 18, 1]. Towards building an ideal robotic assistant, the agent should be capable of all of these tasks to address more complex problems. A typical approach for combining these abilities is to build a unified model [15, 14] to jointly perform different sub-tasks. However, the reasoning for navigation can differ significantly from the one for object interaction; the former needs to detect navigable space and explore to reach a target location while the latter requires detecting objects and analysing their distances and states [14].
Meanwhile, the human cognition process learns to divide a task into sub-objectives such as navigation or interaction, which enables humans to facilitate complex reasoning in various circumstances [11]. Inspired by this, we propose a multi-level compositional reasoning agent (MCR-Agent) that disentangles the task into high-level subgoals; then learns and infers a low-level policy for each sub-task. Specifically, we propose a multi-level agent comprised of (1) a policy composition controller (PCC) that specifies a sub-policy sequence, (2) a master policy (MP) that specialises in navigation, and (3) a set of interaction policies (IP) that execute interactions. This disentanglement enables easier analysis of subtasks with shorter horizons (See Sec. 'Multi-Level Policy vs. Flat Policy' for empirical evidence).
In addition, to interact with multiple objects in a long sequence, the agent should be able to keep track of the current target object at each time instance. Inspired by [14, 15], we additionally propose an object encoding module (OEM) that provides target object information which is used as a navigational subgoal monitor, _i.e._, stopping criterion for the navigation policy.
In our empirical evaluations with a long horizon instruction following task with the condition of not requiring additional depth supervision and perfect egomotion assumption, usually not available for real world deployment, we observe that MCR-Agent outperforms most prior arts in literature by large margins. We summarize our contributions as follows:
* We propose a multi-level hierarchical framework, MCR-Agent, that decomposes a compositional task into semantic subgoals and effectively addresses them with corresponding submodules.
* We propose an object encoding module (OEM) that encodes object information from natural language instructions for effective navigation.
* By extensive quantitative analyses on a challenging interactive instruction following benchmark [15], we show that MCR-Agent yields competitive performance with higher efficiency than prior arts that do not assume perfect egomotion and extra depth supervision.
## Related Work
There are numerous task setups and benchmarks proposed for developing an agent to complete complicated tasks given natural language directives, such as agents trained to navigate [14, 15] or solve household tasks [2]. However, the vast majority of approaches for these tasks employ flat reasoning [20, 21], in which the agent decides on the low-level actions accessible while moving through the environment [11, 12]. When the prior arts seek to define subtasks, some define them with two layers of hierarchy [13, 14, 15, 16, 17, 18]. However, these strategies require a good amount of data due to the semantic gap between abstract natural language instructions and concrete executions [12]. Natural language is subjective and even a seemingly simple command can contain several unstated meanings. Because of this semantic gap, most approaches [1, 15, 16] require either a large amount of labeled data or trial-and-error learning to map language to low-level actions. In contrast, we propose to use deeper hierarchical knowledge for better control of embodied agents. Thanks to the modular structure, our agent reasons and accomplishes tasks along longer paths, spanning numerous subgoals.
The described task requires not only navigation but also interaction. [21] proposes a CNN-LSTM-based baseline agent with progress tracking [15]. [20] offers a modular strategy for factorising action prediction and mask generation while [17] offers a system that encodes language and visual state, and performs action prediction using independently trained modules. [13] propose a transformer-based hierarchical agent whereas [20] presents a transformer-based agent that uses object landmarks for navigation. [20] also presents a transformer-based agent that uses a multimodal transformer for exploiting the multiple input modalities.
Recent work proposes to construct semantic maps and leverage the relative localization for improved navigation where [16] uses a 3D map to encode spatial semantic representation, [12] suggests a SLAM-based approach that keeps observed information in a 2D top-down map while [13] presents a planning-based approach that keeps a semantic spatial graph to encode visual inputs and the agent's poses.
Finally, a modular policy with two levels of hierarchy has been proposed by [14] which does not perform well on a long-horizon task. In contrast, our policy operates at three hierarchical levels, exploiting the fact that navigation and interaction are semantically diverse activities that require independent processing.
## Model
Observing that the visual information for navigation considerably varies over time while interacting with objects is largely stationary, we argue that the agent benefits from learning different policy modules for these two different tasks as follows. The navigation needs to reason about the temporal history and global environment information. The interaction with objects requires focusing on local visual cues for precise object localization. In addition, there is a
Figure 1: The proposed ‘Multi-level compositional reasoning’ contrasted to ‘Flat policy reasoning’. The flat policy reasoning has been employed in prior arts [20, 21, 20, 21], training an agent to directly learn the low-level actions. On the contrary, our multi-level policy decomposes a long-horizon task into multiple subtasks and leverages the high-level abstract planning, which enables an agent to better address long-horizon planning.
sample imbalance between navigation and interaction actions as navigation actions are far more frequent than interaction actions. This would bias a learned model towards more frequent actions, _i.e_., navigation.
Based on these observations, we design an architecture with three levels of compositional learning; (1) a high-level policy composition controller (PCC) that uses language instructions to generate a sequence of sub-objectives, (2) a master policy that specialises in navigation and determines when and where the agent is required to perform interaction tasks, and (3) interaction policies (IP) that are a collection of subgoal policies that specialise in precise interaction tasks.
Specifically, MCR-Agent first analyzes each language instruction and uses the information to determine the basic high-level policy sequence required to perform the task. Following the predicted sequence, the control of the agent is shifted between (1) the master policy and (2) different interaction policies for object interaction. Moreover, all interaction policies are compositional and independent, which allows formulating an instance-specific high-level action sequence. In particular, we learn multiple interaction policies, each of which specialises in a different sub-objective and can be integrated in a precise order to complete long-horizon tasks. We illustrate the model overview in Fig. 2.
### Policy Composition Controller
The nature of the long-horizon instruction following is highly complex. To address this, we argue that it is beneficial to first generate a high-level subgoal sequence, and then tackle each subgoal individually. Specifically, the trajectories are first divided into meaningful subgoals based on the given language instruction (called'step-by-step' instruction) [15]. For inferring the subgoals, we propose a policy composition controller (PCC), shown as dark cyan box in Fig. 2, that predicts a subgoal \(\mathcal{S}=\{s_{i}\}\) (where \(s_{i}\) belongs to a set of predefined subgoals) for each'step-by-step' instruction. The PCC's predictions correlate to semantic subgoals, subjecting the agent's logic to observation. It gives the intuition on what the agent is attempting to accomplish at any particular instance. This enables us to track the progress of task completion by the agent.
Specifically, we first encode the language instructions with a Bi-LSTM, followed by a self-attention module. Each encoded step-by-step language instruction \(\hat{x}_{i}\) is used as input for the PCC to generate the subgoal sequences. The agent completes these subgoals in the specified order to accomplish the goal task. Formally, for each language encoding \(\hat{x}_{i}\), the PCC predicts the subgoal action as:
\[s_{i}=\arg\max_{k}(FC_{1}(\hat{x}_{i})),\quad\text{where }k\in[1,N_{subgoals}], \tag{1}\]
where \(FC_{1}\) denotes a single layer perceptron, and \(N_{subgoals}\) denotes the number of subgoals. We train the PCC module using imitation learning, with the associated subgoal labels. On the validation split used in [15], the controller achieves 98.5% accuracy. We provide further details on these subgoals in the supplementary for space's sake.
### Master Policy
As we discussed in Sec. 'Model,' the reasoning required for navigation is significantly varied from the interaction. To this end, we propose to use a dedicated module for navigation, which we call'master policy' (illustrated by the upper-right blue box in Fig. 2). It not only performs navigation but simultaneously also marks the locations for object interaction along the way. In other words, it generates the navigational action sequence based on the multi-modal inputs.
Specifically, let \(\mathcal{A}_{n}\) denote the set of primitive navigation actions {Moveahead, RotateRight, RotateLeft, LookUp, LookDown}. The master policy learns to navigate in the environment by learning a distribution over \(\mathcal{A}_{n}\cup\) <MANIPULATE>, where <MANIPULATE> is the abstract
Figure 2: Model Architecture. \(I_{t}^{d}\) denotes an RGB frame from an explorable direction, \(d\in[0,D]\), at the time step, \(t\), where \(d=0\) indicates the egocentric direction. We encode \(I_{t}^{d}\) using a pretrained ResNet and acquire a visual feature, \(v_{t}^{d}\). \(\hat{x}_{i}\) denotes each step-by-step instruction. \(\hat{l}_{T,v}\), \(\hat{l}_{T,m}\) denotes the encoded instruction for the ‘interactive perception module’ and ‘action prediction module’ respectively. \(\hat{l}_{T:T+1,n}\) denotes the encoded ‘subtask’ instruction (Sec. ‘Master Policy’). \(T\) refers to the index of the current subgoal. In our master policy, OEM outputs object encoding, \(o_{t}\), using \(\hat{l}_{T:T+1,n}\). ‘VL-Ground’ uses dynamic filters to capture the correspondence between visual and language features and outputs attended visual features, \(\hat{v}_{t}^{pan}\) and \(\hat{v}_{t}^{ego}\).
token we introduce for the agent to signify when to move control to the next level of the hierarchy, _i.e_., the interaction policies, for completing manipulation subgoals. It comprises of two modules: (1) an _object encoding module_ that provides information about the object the agent needs to locate for interaction, and (2) a navigation policy that outputs the navigation action sequence based on the multi-modal input for traversing the environment. For the instruction to be used in the master policy, we additionally propose a new way for combining subtask language instructions.
Subtask language encoding.The language instructions for a given task can be divided into two types; (1) navigation and (2) interaction. We observed that for completing a given compositional task, the agent needs to navigate to the necessary locations and then interact with relevant objects. An embodied task would consist of multiple combinations of such pairs with varying locations and interaction subgoals.
We further propose a method for encoding the combination of instructions for navigation. In particular, we regard the subtask instruction as a combination of (1) navigation to discover the relevant object and (2) corresponding interactions. For instance, in the subtask, _"Turn around and walk to the garbage bin by the TV. Pick up the blue credit card on the TV stand."_, the agent needs to interact with the credit card, which is crucial information for the agent and also serves as a navigational criterion, _i.e_., the agent should stop if it encounters the credit card in close vicinity. We observe that this information is often missing in a navigation command but present in the next interaction instruction. We encode these language instruction combinations in a similar manner as PCC. Here, \(\hat{l}_{T:T+1,n}\) refers to the encoded feature of the combined subtask instruction of the navigation subgoal \(T\) and the corresponding interaction subgoal \(T+1\).
Object encoding module (OEM) (box in orange).Locating the required objects in an essential part of navigation. Trying to interact with incorrect objects can lead to catastrophic failure. To find the correct object, we propose an object encoding module that takes as input the subtask language instruction \(l_{T:T+1,n}\) and gives the target object that the agent must locate for interaction. This guides the agent's navigation by acting as a _navigation subgoal monitor_ that indicates the end of the navigation subgoal and shifts control to the next interaction policy. The object encoder is composed of a Bi-LSTM with a two-layer perceptron which outputs the object class (Eq. 2). During navigation, the subgoal monitor uses a pretrained object detector [10] that validates if the relevant object is present in the current view or not. If the agent spots the item, it switches to the appropriate interaction policy; otherwise, it continues to navigate.
Navigation policy (box in yellow).The second component of the master policy is the navigation policy that generates the sequence of navigable low-level actions using the processed multi-modal data as input. The architecture is based on the action prediction module of [20]. It uses visual features, subtask instruction features, object encoding and the embedding of the preceding time step action as inputs. The goal of the navigation policy is locating the correct object for interaction. Therefore, it utilises the subtask combination instruction \(l_{T:T+1}\) as input which provides low-level information relevant for navigation as well as the information about the object that the agent needs to interact with. This aids the agent in arriving at the correct location. To capture the relationship between the visual observation and language instructions, we dynamically generate filters based on the attended language features and convolve visual features with the filters, denoted by "VL-Ground" in Fig. 2. To summarise, the LSTM hidden state \(h_{t,n}\) of the master policy decoder, LSTM\({}_{n}\), is updated with four different features concatenated together as:
\[\begin{split} o_{t}&=\operatorname*{argmax}_{k^{ \prime}}(\text{FC}_{o}(\hat{l}_{T:T+1,n}))\quad k^{\prime}\in[1,N_{objects}]\\ h_{t,n}&=\text{LSTM}_{n}([\hat{v}_{t}^{pan}; \ \hat{l}_{T:T+1,n};\ a_{t-1,n};\ o_{t}])\\ a_{t,n}&=\operatorname*{argmax}_{k}(\text{FC}_{n} ([\hat{v}_{t}^{pan};\hat{l}_{T:T+1,n};a_{t-1,n};o_{t};h_{t,n}]))\\ &\text{where }k\in[1,|\mathcal{A}_{n}|+1]\end{split} \tag{2}\]
where \(\hat{v}_{t}^{pan}\) denotes the attended visual features for surrounding views (See supp.) at time step \(t\); \(\hat{l}_{T:T+1,n}\) the attended subtask language features for the navigation subgoal \(T\) and the corresponding interaction subgoal \(T+1\); \(a_{t-1,n}\) the action given by master policy in the previous time step; and \(o_{t}\) the object encoding given by the OEM.
Loop escape (box in gray).In addition, we use subgoal progress monitor and overall progress monitor similar to [14] to train the navigation policy and also utilize a heuristic loop escape module for escaping the deadlock conditions. We provide details in the supplementary.
### Interaction Policy
To abstract a visual observation to a consequent action, the agent requires a global scene-level comprehension of the visual observation whereas, for the localisation task, the agent needs to focus on both global as well local object-specific information. Following [20], we exploit separate streams for action prediction and object localization due to the contrasting nature of the two tasks, illustrated as 'Interaction Policy\({}_{\text{k}}\)' in Fig. 2. Each interaction policy consists of an action policy module which is responsible for predicting the sequence of actions corresponding to the interaction subgoal, and an interaction perception module which generates the pixel-level segmentation mask for objects that the agent needs to interact with at a particular time step.
The task requires the execution of varied subgoals with different levels of complexity. For instance, a Heat subgoal might require interaction with either a stove or a microwave whereas for a Pickup subgoal, there is a variety of receptacles but the action sequence is simpler. To focus on individual sub-objectives, we train an interaction policy, for each subgoal where \(k\in[1,N_{subgoals}]\). We observed that each interaction has its own properties and the navigation information history is irrelevant to the task, which allows us to keep an isolated hidden state for each interaction subgoal. We provide further details about the architecture and the training process for interaction policies in the supplementary.
## Experiments
**Dataset.** To evaluate our approach in challenging scenarios, we focus on the problem of interactive instruction following in the ALFRED benchmark Shridhar et al. (2020), which poses numerous challenges including long-term planning, partial observability, and irreversible state changes. To complete a task successfully, an agent needs to navigate through very long horizons. Along the trajectory, the agent can interact with 118 objects in novel environments, which requires a thorough comprehension of both visual observations and their relation with the natural language directives. It provides expert trajectories for the agents performing household tasks in simulated environments on AI2-THOR Kolve et al. (2017). The dataset is divided into three splits; 'train', 'validation', and 'test' set. To evaluate the generalisation ability of an embodied agent to novel environments, the benchmark further divides 'validation' and 'test' trajectories into _seen_ and _unseen_ splits. _Unseen_ comprises a set of rooms that are held out during training and scenes that are exposed to the agent during training are termed as _seen_. For each task, ALFRED provides a goal statement with multiple (4+) step-by-step instructions describing each subtask. (Supp. Sec. 'Subgoal Evaluation').
**Metrics.** We use the widely used evaluation metrics in literature Shridhar et al. (2020); Padmakumar et al. (2022), success rate (SR) is the ratio of the successfully completed episodes to the total episodes. The path length weighted success rate (PLWSR) penalizes the success rate by the length of the trajectory traversed by the agent, which indicates the efficiency of the embodied agent. The goal-condition success rate (Goal-Cond.) is the ratio of the satisfied conditions among the total goal conditions for tasks, which takes into account the partial task completion ability of the agent.
### Comparison with State of the Arts
First, we conduct a quantitative analysis of task success rates (SR) and path length weighted success rates (PLWSR) Anderson et al. (2018) by comparing our approach with prior arts on the interactive instruction following task Shridhar et al. (2020) and summarize the results in Table 1. We indicate the highest value for each metric in bold font among methods that are comparable to ours that do not use rule-based planning or semantic memories for a fair comparison. We also present recent methods that use expensive external supervision or well-designed planners for reference.
We observe that in unseen environments, MCR-Agent outperforms most prior-arts in terms of PLWSR for both test and validation folds. This demonstrates the ability of our agent to accomplish tasks in novel environments with higher efficiency. For seen environments in the test fold, MCR-Agent shows comparable performance with LWIT and EmBERT in terms of SR and PLWSR but these works exhibit relatively stronger bias towards seen environments, which is evidenced by the significant drop in their unseen SR (_i.e_., 69.5% and 76.3% relative drop, respectively). Similarly, E.T. decently performs in seen environments but shows a drastic drop (77.7% relative) of SR in unseen environments. Note that E.T. utilises extra synthetic training data.
### Bias Towards Seen Environment
It is previously observed that embodied agents relying on low-level visual features for perception generally exhibit bias towards seen environments Zhang et al. (2020). Unfortunately, MCR-Agent also exhibits similar bias towards seen environments but the degree is significantly lower than other works (E.T., LAV, MOCA).
To mitigate this bias, recent works such as HLSM, MAT, FILM, and EPA utilize spatial semantic representations
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \multicolumn{1}{c}{} & \multicolumn{2}{c}{**Language**} & \multicolumn{2}{c}{**Model**} & \multicolumn{4}{c}{**Validation**} & \multicolumn{4}{c}{**Test**} \\ \cline{2-13} \multicolumn{1}{c}{Model} & \multicolumn{1}{c}{\multirow{2}{*}{Goal-Only}} & \multicolumn{1}{c}{\multirow{2}{*}{\begin{tabular}{c} Rule-based \\ Planning \\ \end{tabular} }} & \multicolumn{1}{c}{\multirow{2}{*}{\begin{tabular}{c} Semantic \\ Memory \\ \end{tabular} }} & \multicolumn{2}{c}{\multirow{2}{*}{
\begin{tabular}{c} Subtask \\ Division \\ \end{tabular} }} & \multicolumn{2}{c}{_Seen_} & \multicolumn{2}{c}{_Unseen_} & \multicolumn{2}{c}{_Seen_} & \multicolumn{2}{c}{_Unseen_} \\ & & & & SR & PLWSR & SR & PLWSR & SR & PLWSR & SR & PLWSR \\ \hline Seq2Seq Shridhar et al. (2020) & ✗ & ✗ & ✗ & - & 3.70 & 2.10 & 0.00 & 3.98 & 2.02 & 0.39 & 0.80 \\ MOCA Singh et al. (2021) & ✗ & ✗ & - & 25.85 & 18.95 & 5.36 & 3.19 & 26.81 & 19.52 & 7.65 & 4.21 \\ EmBERT Suglia et al. (2021) & ✗ & ✗ & ✗ & - & 37.44 & **28.81** & 5.73 & 3.09 & 31.77 & 23.41 & 7.52 & 3.58 \\ E.T. Pashevich et al. (2021) & ✗ & ✗ & - & **46.59** & - & 7.32 & - & **38.42** & **27.78** & 8.57 & 4.10 \\ LWIT Nguyen et al. (2021) & ✗ & ✗ & ✗ & - & 33.70 & 28.40 & 9.70 & 7.30 & 30.92 & 25.90 & 9.42 & 5.60 \\ HiTUT Zhang and Chai (2021) & ✗ & ✗ & ✗ & Subgoal & 25.24 & 12.20 & 12.44 & 6.85 & 21.27 & 11.10 & 13.87 & 5.86 \\ M-Track Song et al. (2022) & ✗ & ✗ & ✗ & Binary & 26.70 & - & 17.29 & - & 24.79 & 13.88 & 16.29 & 7.66 \\ \hline
**MCR-Agent (Ours)** & ✗ & ✗ & Subgoal & 34.39 & 23.04 & **20.08** & **10.84** & 30.13 & 21.19 & **17.04** & **9.69** \\ \hline LAV Nottingham et al. (2021) & ✓ & ✗ & Subgoal & 12.70 & 5.9 & - & - & 13.35 & 6.31 & 6.38 & 3.12 \\ HLSM Bluks et al. (2021) & ✓ & ✗ & ✓ & Primitive & 29.63 & - & 18.28 & - & 29.94 & 8.74 & 20.27 & 5.55 \\ MAT Ishikawa and Sugiura (2022) & ✓ & ✗ & ✓ & Primitive & 30.98 & - & 17.66 & - & 33.01 & - & 21.84 & - \\ FILM Min et al. (2022) & ✗ & ✓ & ✓ & Primitive & 38.51 & 15.06 & 27.67 & 11.23 & 27.67 & 11.23 & 26.49 & 10.55 \\ EPA Liu et al. (2022) & ✓ & ✓ & ✓ & - & - & - & - & 39.96 & 2.56 & 36.07 & 2.92 \\ \end{tabular}
\end{table}
Table 1: Task and Goal-Condition Success Rates. ✓ in “Goal-Only” column under “Language” indicates that the corresponding approach uses only goal statements. “Rule-based Planning” indicates if a model exploits rule-based planning such as shortest path algorithms. “Semantic Memory” denotes if the approach requires external memory for storing semantic information (_e.g_., object positions, classes, _etc_.) using data structures (_e.g_., grid maps, graphs, _etc_.). “Subtask Division” represents if an agent breaks a task into subtasks (Primitive/Subgoal/Binary) or not (-). A subtask can be a “Primitive” interaction action, a set of “Subgoal” actions, or a “Binary” indicator for navigation/interaction. Our MCR-Agent achieves the highest unseen SR and PLWSR in both validation and test folds compared to prior works without rule-based planning or semantic memories. We indicate the highest values in bold among them.
based on the depth followed by additional depth supervision and semantic segmentation data by assuming perfect egomotion that enables retrieval of accurate camera poses for estimation of the environment layout. These assumptions limit the approaches' deployment capabilities since perfect egomotion may not be accessible in a real-world scenario and such spatial representations may lead to an exponential increase in memory requirements when deployed in larger environments. Note that our approach outperforms all these works and shows comparable performance with FILM in terms of PLWSR score without requiring additional memory and perfect egomotion for generating spatial maps.
Furthermore, HLSM and MAT redefine the agent's action space to adopt a pretrained grid-based navigation system on 3D semantic maps for effective navigation. Similarly, FILM and EPA are equipped with rule-based algorithms for obstacle-free path planning. These agents incorporate heuristics for performance gains whereas MCR-Agent utilises purely learning-based algorithms. While the heuristics may help task completion (improved SR), they adversely affect the efficiency and generalisation of the agents as evidenced by the drop in unseen PLWSR for HLSM and EPA.
### Multi-Level Policy _vs._ Flat Policy
We compare the learning efficiency for the hierarchical and flat policy agents for seen and unseen environments. The performance of our hierarchical agent and the flat agent are compared quantitatively in relation to the number of iterations (expressed in terms of epochs), and the results are presented in Fig. 3. As shown, the multi-level hierarchical policy gives a major improvement over the flat policy. Higher success rates in unseen scenarios evidence its ability to perform in novel environments. As depicted in Table 2 (#(a) _vs._#(c)), for seen and unseen task SR, the hierarchical agent outperforms the flat policy by 8.04% and 7.65%, respectively. In both seen and unseen 'Goal-Cond', the hierarchical approach outperforms, with improvements of 10.51% and 9.38%, respectively. The greater performance of the hierarchical approach on both overall task success rate and goal condition suggests its comprehension of both short-term subtasks and long-horizon whole tasks.
The multi-level hierarchical agent converges significantly faster than the flat agent (25th epoch _vs._\(37\)th epoch), as shown in Fig. 2(a), demonstrating the computational efficiency of our approach. Our policies are trained in two stages. We train interaction policies first, which collectively takes two epochs to converge. We provide the details on the convergence of the interaction policies in the supplementary. We include them in computation and begin the hierarchical agent's curve from the 3rd epoch, which is effectively the 1st epoch for the master policy.
Fig. 2(b) represents the average lengths of a successful trajectory traversed by the hierarchical and flat policy agents, for different task types, contrasting the efficiency of each agent. The hierarchical agent consists of the master policy that is dedicated solely to navigation, giving it a significant advantage over the flat agent that learns everything using the same network parameters. It was observed that due to the wide action space, the flat agent occasionally executes irrelevant interactions along the trajectory, which is not the case with MCR-Agent. The dedicated action sets for the master policy and interaction policies improve MCR-Agent by allowing the agent to avoid any unnecessary interactions while traversing to discover the desired object. The interaction policies also perform significantly better because they only master certain short-horizon tasks, which speeds up and simplifies the learning process. We also provide the subgoal performance for each module in the supplementary.
### Interpretable Subgoals
The interpretability of embodied agents recently gain attention in the literature for the transparency of their reasoning process [21, 14]. Despite recent advances in the domain, many approaches still provide little to no transparency about the agent's actions due to their primitive action space that cannot fully represent the intention of the agents. To demystify the agent's behavior, MCR-Agent generates semantically meaningful subgoals that subject the agent's logic to observation ('What is the agent attempting to accomplish right now?'). This makes it easier for humans to monitor the progress of task.
Figure 3: Multi-level policy learns faster and more effective action sequences. Plot (a) shows the learning curves (success rates vs. epochs) of the hierarchical and flat policy agents for unseen and seen environments. Plot (b) presents the average length of an episode traversed by a hierarchical or flat policy for the seven task types [14]. The flat policy denotes the NIH ablated agent, #(c) in Table 2.
The generated subgoals are far more interpretable than low-level action sequences. For instance, a low-level Put action might be associated with any of the subgoals such as Heat, Cool, or Put. The high-level semantics are reasoned about by the hierarchical agent, and the agent's intent is considerably clearer. For instance, if the agent is performing a Cool subgoal, then it is more likely to interact with the refrigerator. If it is a Heat subgoal, then it is more likely to interact with a microwave or stove. The subgoal information provided by the PCC provides extra useful information to the multi-level agent as well as the observer. In contrast, the flat policy agent considers it as a single atomic action regardless of the object or receptacle involved.
### Ablation Study
We conduct a series of ablation analyses on the proposed components of MCR-Agent and report the results in Table 2 to evaluate the significance of each module. In the supplementary, we further provide ablation studies for model input, design components, task types, and subgoal types.
Without object encoding module (OEM).We ablate the navigation subgoal monitor and train the navigation policy without object information. The agent can complete some objectives, but it lacks object information, which functions as a stopping criterion, preventing proper navigation. Hence, it is unable to completely comprehend the relationship between the step-by-step instruction and the visual trajectory. This limits the agent's capacity to explore and connect various interaction policies required for task completion, leading to a significant performance drop (Table 2 #(a) _vs._ #(b)).
Without navigation interaction hierarchy (NIH).Next, we demonstrate the importance of hierarchy between navigation and interaction policies _i.e._ the second level of hierarchy in our framework. For this, we utilize the same network for learning navigation and interaction action prediction. For interaction mask generation, we preserve the interaction perception module. To ablate the benefits of the subtask language encoding, we use the concatenation of all step-by-step instructions as language input and conduct action and mask prediction while leaving the other modules unaltered. The ablated model's task success rates drop significantly (Table 2 #(a) _vs._ #(c)), showing that it is unable to effectively utilise the available inputs.
Without modular interaction policy (MIP).In modular networks, the decision-making process is separated into numerous modules. Each module is designed to perform a certain function and is put together in a structure that is unique to each trajectory instance. Because of their compositional nature, such networks with the help of specialised modules often perform better in new environments than their flat counterparts [19, 17]. We present a quantitative comparison of interaction policies' modular structure (Table 2 #(a) _vs._ #(d)). For this experiment, we train a single policy module to learn all interaction tasks. The decoupled pipeline for action and mask prediction, as well as the rest of the settings, are preserved. The modular agent outperforms the non-modular agent by 3.31% and 4.26% in seen and unseen task SR, respectively. It also performs significantly well in both seen and unseen 'Goal-Cond.' criteria, with gains of 2.70% and 7.61%, respectively. The greater performance of the modular policy in both task and goal-condition metrics highlights the benefits of modular structure in long-horizon planning tasks. Next, we provide the individual performance of the two major components of our framework which brings the most empirical gain, OEM and NIH in the absence of other components.
Object encoding module only.In this ablation, we evaluate the effect of the object encoding module (OEM) in the absence of the hierarchical and modular structure. This makes the agent flat and thus analogous to [20] except for OEM. The agent (Table 2 #(e)) demonstrates significantly higher performance than [20], highlighting the relevance of target object information for navigation and the effectiveness of the proposed OEM.
Navigation interaction hierarchy only.While ablating the modular structure and object encoding module, we observe degradation in performance (Table 2 #(f)) which implies that the multi-level hierarchical architecture needs to include other proposed components for optimal performance. The overall performance improves when these components are combined (#(a) _vs._ #(f)) indicating that the proposed components are complementary to each other.
## Conclusion
We address the problem of interactive instruction following. To effectively tackle the long horizon task, we propose a multi-level compositional approach to learn agents that navigate and manipulate objects in a divide-and-conquer manner for the diverse nature of the entailing task. To improve navigation performance, we propose an object encoding module to explicitly encode target object information during internal state updates. Our approach yields competitive performance with higher efficiency than prior arts in novel environments without extra supervision and well-designed planners.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & \multicolumn{3}{c}{**Components**} & \multicolumn{3}{c}{**Validation-Seen**} & \multicolumn{2}{c}{**Validation-Unseen**} \\ \cline{2-7} \# & MIP & NIH & OEM & Task & Goal-Cond. & Task & Goal-Cond. \\ \hline a) & ✓ & ✓ & ✓ & \(34.39(0.2)\) & \(41.96(0.5)\) & \(20.08(0.3)\) & \(38.42(0.2)\) \\ \hline b) & ✓ & ✓ & ✗ & \(28.61(0.4)\) & \(32.96(0.3)\) & \(13.31(0.9)\) & \(29.13(0.3)\) \\ c) & ✓ & ✗ & ✓ & \(26.35(0.9)\) & \(31.45(0.9)\) & \(12.43(0.7)\) & \(29.04(0.9)\) \\ d) & ✗ & ✓ & ✓ & \(31.08(0.9)\) & \(39.26(0.8)\) & \(15.82(0.5)\) & \(30.81(0.3)\) \\ e) & ✗ & ✗ & ✓ & \(20.59(1.4)\) & \(25.13(2.1)\) & \(7.45(1.1)\) & \(14.05(1.0)\) \\ f) & ✗ & ✓ & ✗ & \(23.54(1.8)\) & \(31.61(1.5)\) & \(10.30(0.7)\) & \(25.42(1.1)\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study for components of MCR-Agent. We report the task success rate for each ablation. ✓ and ✗ denote that the corresponding component is present/absent in MCR-Agent. MIP (Modular Interaction Policy) denotes the subgoal modules for interaction policies. NIH (Navigation Interaction Hierarchy) denotes the third level of hierarchy between navigation and interaction policies. OEM (Object Encoding Module) denotes the object encoding module. We report averages of 5 runs with random seeds with standard deviations depicted in sub-script parentheses (_e.g._, (0.2)).
## Acknowledgments
This work is partly supported by the NRF grant (No.2022R1A2C4002300), IITP grants (No.2020-0-01361-003, AI Graduate School Program (Yonsei University) 5%, No.2021-0-02068, AI Innovation Hub 5%, 2022-0-00077, 15%, 2022-0-00113, 15%, 2022-0-00959, 15%, 2022-0-00871, 20%, 2022-0-00951, 20%) funded by the Korea government (MSIT).
|
2310.13195 | A Class of Forward-Backward Stochastic Differential Equations Driven by
Lévy Processes and Application to LQ Problems | In this paper, our primary focus lies in the thorough investigation of a
specific category of nonlinear fully coupled forward-backward stochastic
differential equations involving time delays and advancements with the
incorporation of L\'{e}vy processes, which we shall abbreviate as FBSDELDAs.
Drawing inspiration from diverse examples of linear-quadratic (LQ) optimal
control problems featuring delays and L\'{e}vy processes, we proceed to employ
a set of domination-monotonicity conditions tailored to this class of
FBSDELDAs. Through the application of the continuation method, we achieve the
pivotal results of unique solvability and the derivation of a pair of estimates
for the solutions of these FBSDELDAs. These findings, in turn, carry
significant implications for a range of LQ problems. Specifically, they are
relevant when stochastic Hamiltonian systems perfectly align with the FBSDELDAs
that fulfill the domination-monotonicity conditions. Consequently, we are able
to establish explicit expressions for the unique optimal controls by utilizing
the solutions of the corresponding stochastic Hamiltonian systems. | Maozhong Xu, Maoning Tang, Qingxin Meng | 2023-10-19T23:08:11Z | http://arxiv.org/abs/2310.13195v1 | A Class of Forward-Backward Stochastic Differential Equations Driven by Levy Processes and Application to LQ Problems +
###### Abstract
In this paper, our primary focus lies in the thorough investigation of a specific category of nonlinear fully coupled forward-backward stochastic differential equations involving time delays and advancements with the incorporation of Levy processes, which we shall abbreviate as FBSDELDAs. Drawing inspiration from diverse examples of linear-quadratic (LQ) optimal control problems featuring delays and Levy processes, we proceed to employ a set of domination-monotonicity conditions tailored to this class of FBSDELDAs. Through the application of the continuation method, we achieve the pivotal results of unique solvability and the derivation of a pair of estimates for the solutions of these FBSDELDAs. These findings, in turn, carry significant implications for a range of LQ problems. Specifically, they are relevant when stochastic Hamiltonian systems perfectly align with the FBSDELDAs that fulfill the domination-monotonicity conditions. Consequently, we are able to establish explicit expressions for the unique optimal controls by utilizing the solutions of the corresponding stochastic Hamiltonian systems.
**Keywords**: Forward-backward stochastic differential equation with delay; Levy processes; Method of continuation; Domination-monotonicity condition; Stochastic linear-quadratic problem
## 1 Introduction
Since the seminal work of Pardoux and Peng [27] on backward stochastic differential equations (BSDEs), as well as the contributions of Antonelli [2] regarding coupled forward-backward stochastic differential equations (FBSDEs), these equations have garnered substantial attention. They have become an essential subject of study not only due to their classical structure but also their extensive applicability across various domains, including stochastic control, finance, and economics [1, 9, 33]. In 1993, Antonelli [2] introduced coupled FBSDEs and established solvability results for a limited time interval. However, he also presented a counterexample, illustrating that the same conclusion might not hold over an extended time interval when relying solely on Lipschitz conditions. To address this challenge, many scholars have introduced additional monotonicity conditions and employed the method of continuation. This method was first introduced by Hu and Peng [11] and subsequently expanded upon by Yong [29, 38], and others. Moreover, various other conditions and research methodologies have been proposed [8, 18, 19, 28, 41].
For the research of stochastic differential equations (SDEs) with Levy processes, one of the most improtant results was given by Nualart and Schoutens [24]. In their paper, by constructing a set of pairwise strongly orthonormal martingales associated with Levy processes called Teugels martingales, they gave a martingale representation theorem related to Levy processes. Based on this, they proved in [25] the existence and uniqueness of solution for BSDEs with Levy processes, and then their results were extended to the BSDEs driven by Teugels martingales and an independent multi-dimensional Brownian motion by Bahlali, Eddahbi and Essaky [3]. Subsequently, many scholars conducted more in-depth studies on BSDE driven by Teugels martingale and obtained abundant research results, seeing the reference [10, 31, 43] and so
on. Later, there emerged a great deal of research on the stochastic control system driven by Teugels martingales and an independent Brownian motion, including the forward system, the backward system and the forward-backward system. For these results, we can refer to Meng and Tang [21], Tang and Zhang [34], Zhang et al. [42] and so on.
However, in the natural and social phenomena, there exist a large number of processes whose development not only depends on their present state, but also their previous situation. Therefore, it is necessary to to study the stochastic control system with delay. In 2000, \(\Oksendal\) and Sulem [26] obtained the stochastic maximum principle for this type of system. To our knowledge, the adjoint equation for the delayed system is also a new type of BSDE which is called anticipated BSDE. It was introduced by Peng and Yang [30] and they gave the proof of the unique solvability result of solution for such BSDE. Subsequently, Chen and Wu conducted a lot of research on this basis. In 2010, they studied the time-delayed SDEs in [4], where they obtained the maximum principle for this problem by virtue of the duality method and the anticipated BSDEs and the related application was also presented. In the following year, Chen and Wu [5] continued to study a type of general FBSDEs with time-delayed SDEs as the forward equations and time-advanced BSDEs as the backward equations. Besides, [6] and [7] are also their research results on the time-delayed system. For the follow-up research developments, we can refer to [12, 13, 20, 22, 23, 37, 39] and so on.
As fa as we know, the Hamiltonian systems for the stochastic control problem with Levy processes involving time delays or advancements are all described by coupled FBSDELDs. However, to the best of our knowledge, there is very little research on this type of FBSDEs. Therefore, in this paper, we consider the following FBSDELDA:
\[\left\{\begin{aligned} & dx(t)=b(t,\theta(t),\theta_{-}(t),y_{+}(t),z_{+} (t),k_{+}(t))dt+\sigma(t,\theta(t),\theta_{-}(t),y_{+}(t),z_{+}(t),k_{+}(t)) dW(t)\\ &\qquad+\sum_{i=1}^{\infty}g^{(i)}(t,\theta(t-),\theta_{-}(t-),y _{+}(t-),z_{+}(t),k_{+}(t))dH^{(i)}(t),\quad t\in[0,T],\\ & dy(t)=f(t,\theta(t),x_{-}(t),\theta_{+}(t))dt+z(t)dW(t)+\sum_{ i=1}^{\infty}k^{(i)}(t)dH^{(i)}(t),\quad t\in[0,T],\\ & x(t)=\lambda(t),\quad y(t)=\mu(t),\quad z(t)=\rho(t),\quad k(t )=\varsigma(t),\quad t\in[-\delta,0],\\ & y(T)=\Phi(x(T)),\\ & x(t)=y(t)=z(t)=k(t)=0,\quad t\in(T,T+\delta],\end{aligned}\right. \tag{1.1}\]
where we denote \(\theta(\cdot)=(x(\cdot)^{\top},y(\cdot)^{\top},z(\cdot)^{\top},k(\cdot)^{ \top})^{\top}\) with \(k(\cdot):=(k^{(1)}(\cdot)^{\top},k^{(2)}(\cdot)^{\top},\cdots)^{\top},\theta_ {-}(\cdot)=(x_{-}(\cdot)^{\top},y_{-}(\cdot)^{\top},z_{-}(\cdot)^{\top},\\ k_{-}(\cdot)^{\top})^{\top}=(x(\cdot-\delta)^{\top},y(\cdot-\delta)^{ \top},z(-\delta)^{\top},k(-\delta)^{\top})^{\top},\theta_{+}(\cdot)=(x_{+}( \cdot)^{\top},y_{+}(\cdot)^{\top},z_{+}(\cdot)^{\top},k_{+}(\cdot)^{\top})^{ \top}=(\mathbb{E}^{\mathcal{F}_{t}}[x(+\delta)]^{\top},\mathbb{E}^{\mathcal{F }_{t}}[y(+\delta)]^{\top},\mathbb{E}^{\mathcal{F}_{t}}[z(\cdot+\delta)]^{\top },\mathbb{E}^{\mathcal{F}_{t}}[z(\cdot+\delta)]^{\top},\mathbb{E}^{\mathcal{F }_{t}}[k(\cdot+\delta)]^{\top},\mathbb{E}^{\mathcal{F}_{t}}[k(\cdot+\delta)]^{ \top})^{\top}\). Especially, \(\theta(\cdot)=(x(\cdot)^{\top},y(\cdot)^{\top},z(\cdot)^{\top},k(\cdot)^{ \top})^{\top}\), \(\theta_{-}(\cdot)=(x_{-}(\cdot)^{\top},y_{-}(\cdot)^{\top},z(\cdot)^{\top},k (\cdot)^{\top})^{\top}\), \(z_{-}(\cdot)^{\top},k_{-}(\cdot)^{\top})^{\top}=(x\big{(}(\cdot-\delta)- \big{)}^{\top},y((\cdot-\delta)-\big{)}^{\top},\\ y_{+}(\cdot)=\mathbb{E}^{\mathcal{F}_{t}}[y((\cdot+\delta)-)]\), where \(\mathbb{E}^{\mathcal{F}_{t}}[\cdot]=\mathbb{E}[\cdot|\mathcal{F}_{t}]\) and \(\top\) denotes the transpose of matrices. Moreover, we denote \(\Lambda(\cdot)=(\lambda(\cdot),\mu(\cdot),\rho(\cdot),\varsigma(\cdot))\). Let \(\delta>0\) be a given constant and denote the time delay. Furthermore, we define \(\mathcal{F}_{t}=\mathcal{F}_{0}\) for all \(t\in[-\delta,0]\). \(\left\{W_{t}:t\in[0,T]\right\}\) is a d-dimensional standard Brownian motion. \(\left\{H_{t}^{(i)}:t\in[0,T]\right\}_{i=1}^{\infty}\) are Teugels martingales associated with the Levy process. For the convenience of later use, we continue to denote
\[\Gamma(\cdot):=(f(\cdot)^{\top},b(\cdot)^{\top},\sigma(\cdot)^{\top},g(\cdot) ^{\top})^{\top}\quad\text{with}\quad g(\cdot)^{\top}:=(g^{(1)}(\cdot)^{\top},g ^{(2)}(\cdot)^{\top},\cdots)^{\top}. \tag{1.2}\]
Then all of the coefficients of FBSDELDA (1.1) are collected by \((\Lambda,\Phi,\Gamma)\).
In 2014, Li and Wu [15] initiated the investigation of anticipated recursive stochastic optimal control problems involving delays and Levy processes. Their research focused on control systems described by anticipated FBSDE with delays and Levy processes (AFBSDEDLs). In their pioneering work, they established unique solvability results for SDEs with delay and Levy processes (SDEDLs) and anticipated BSDEs with Levy processes (ABSDELs). These findings provided a solid foundation for similar results in the context of uncoupled AFBSDEDLs. Building upon their ground-breaking work, Li and Wu extended their research to address the Linear Quadratic (LQ) optimal control problem for systems characterized by delays and Levy processes in a subsequent publication [16]. This endeavor led to the solvability of stochastic Hamiltonian systems and the derivation of unique optimal control representations. It is important to note that the AFBSDEDLs they investigated in these earlier studies were uncoupled, and our research herein explores the considerably distinct fully coupled scenarios. To prove the existence and uniqueness of solutions for fully coupled FBSDELDAs, we employ and further develop the method of continuation. Additionally, Li and Wu [15] demonstrated the continuous dependence property of solutions for ABSDELs. Therefore, our current study aims to build upon their work
and delve into the continuous dependence theorem for both SDEDLs and fully coupled FBSDELDs, as expounded in Lemma 2.1 and Theorem 3.1.
In 2022, in order to solve more general coupled FBSDEs which can be applied to solve various stochastic LQ problems, Yu [40] has introduced various matrices, matrix-valued random variables and matrix-valued stochastic processes to present a domination-monotonicity framework and this kind of domination-monotonicity conditions are more accurate and general forms of the traditional monotonicity conditions which strengthen the Lipschitz condition and weaken the monotonicity conditions at the same time. Actually, this new framework can not only cover most of situations related to the method of continuation in the literature, but also contain many others beyond the literature. More importantly, this framework can precisely correspond to four types of LQ problems which has been demonstrated in detail in Section 4 of Yu [40]. In their paper, a unique solvability result and a pair of estimates for coupled FBSDEs are obtained. Due to the wider applicability of this kind of framework, it has been applied to some well-known literature [17, 35, 36, 40] and so on.
Recently, a class of coupled FBSDEs involving time delays and time advancements on infinite horizon is studied by Yang and Yu [37], in which the unique solvability of infinite horizon FBSDEs is obtained under a randomized Lipschitz condition and a randomized monotonicity condition. In comparison to Yang and Yu [37], we develop the domination-monotonicity conditions introduced by Yu [40] to the framework that addresses time delays and advancements associated with Levy processes. This extension allowed us to tackle the unique solvability of FBSDELDAs and apply our approach to solve more general stochastic LQ problems that involve cost functionals with cross terms. Here it is worth mentioning that under some type of domination-monotonicity conditions, Li, Wang and Wu [14] studied the uniqueness and existence of the solutions for a particular class of anticipated forwardbackward stochastic differential delayed equations, where their domination-monotonicity conditions differ significantly from ours (see Assumption 3.2) and in Remark 3.1, the corresponding difference has been discussed in detail.
As an application of these findings, we shall re-examine stochastic LQ problems involving Levy processes with time delays or advancements. LQ problems represent a quintessential category within the realm of stochastic optimal control problems, intensively studied by numerous scholars. When delving into the study of these LQ problems, it is imperative to engage with Hamiltonian systems, a type of linear FBSDELDAs. Leveraging the unique solvability results derived for FBSDELDAs, we obtain analogous outcomes for Hamiltonian systems in the context of LQ problems. It is noteworthy that, particularly in the case of forward LQ problems, we confront the complexity of cost functionals that include cross terms. It is also of significance to highlight that, in order to address these cross terms, we have introduced a pivotal lemma (refer to Lemma 4.1) to establish the uniqueness of the optimal control.
The rest of this paper is organized as follows. In Section 2, we introduce and establish essential notations for our analysis. Additionally, we present two key lemmas related to SDEDLs and ABSDELs. These lemmas will prove invaluable for our subsequent analysis. In Section 3, we delve into the examination of FBSDELDA (1.1), subject to domination-monotonicity conditions. Our primary focus is on establishing the unique solvability of this equation. We also provide a pair of estimates, which are instrumental for our theoretical framework. These critical results are encapsulated in Theorem 3.1. In Section 4, we build upon the findings from previous sections to address two distinct types of LQ problems concerning systems involving Levy processes and incorporating time delays or advancements. We successfully derive the explicit forms of unique optimal control strategies in these scenarios.
## 2 Notations and Preliminaries
Let \(\mathbb{R}^{n}\) be the \(n\)-dimensional Euclidean space with the norm \(|\cdot|\) and the inner product \(\langle\cdot,\cdot\rangle\). Let \(\mathbb{S}^{n}\) be the set of all symmetric matrices in \(\mathbb{R}^{n\times n}\). Let \(\mathbb{R}^{n\times m}\) be the collection of all \(n\times m\) matrices with the norm \(|A|=\sqrt{\mathrm{tr}(AA^{\top})}\), for \(\forall A\in\mathbb{R}^{n\times m}\) and the inner product:
\[\langle A,B\rangle=\mathrm{tr}(AB^{\top}),\quad A,B\in\mathbb{R}^{n\times m}.\]
Let \(T>0\) and \([0,T]\) denotes the finite time horizon. Let \((\Omega,\mathcal{F},\mathbb{F},\mathbb{P})\) be a complete filtered probability space with a filtration \(\mathbb{F}=\left\{\mathcal{F}_{t}:0\leq t\leq T\right\}\) satisfying the usual conditions of right-continuity and \(\mathbb{P}\)- completeness. Besides, let the filtration \(\mathbb{G}=\left\{\mathcal{G}_{t}:\mathcal{G}_{t}=\mathcal{F}_{t-\delta},0 \leq t\leq T\right\}\). Let \(\left\{W_{t}:0\leq t\leq T\right\}\) be a d-dimensional standard Brownian motion with respect to \(\mathbb{F}\) and \(\left\{S_{t}:0\leq t\leq T\right\}\) be a 1-dimensional real-valued cadlag trajectory called Levy processes with stationary and independent increments, which is independent of \(\left\{W_{t}:0\leq t\leq T\right\}\). It is well-known that \(S_{t}\) has a characteristic
function of the following form
\[E(e^{i\omega S_{t}})=\exp\left[ia\omega t-\frac{1}{2}\varrho^{2}\omega^{2}t+t\int_ {\mathbb{R}}\left(e^{i\omega s}-1-i\omega\mathbf{1}_{\{|s|<1\}}\right)v(dx) \right],\]
where \(a\in\mathbb{R},\varrho>0\), and \(v\) is a measure on \(\mathbb{R}\) with \(\int_{\mathbb{R}}\left(1\wedge x^{2}\right)v(dx)\leq\infty\). We will assume that the Levy measure \(v\) satisfies
\[\int_{(-\varepsilon,\varepsilon)^{c}}\left(e^{k|x|}\right)v(dx)\leq\infty,\]
for every \(\varepsilon>0\) and some constant \(k>0\).
We assume that
\[\mathcal{F}_{t}=\sigma\left(S_{s},s\leq t\right)\vee\sigma\left(W_{s},s\leq t \right)\vee\mathcal{N},\]
where \(\mathcal{N}\) denotes the totality of \(P\)-null sets.
We denote by \(\left\{H_{t}^{(i)}:0\leq t\leq T\right\}_{i=1}^{\infty}\) the Teugels martingales associated with the Levy process \(\left\{S_{t}:0\leq t\leq T\right\}\). \(H_{t}^{(i)}\) is given by
\[H_{t}^{(i)}=c_{i,i}Y_{t}^{(i)}+c_{i,i-1}Y_{t}^{(i-1)}+\cdots+c_{i,1}Y_{t}^{(1 )},\]
where \(Y_{t}^{(i)}=S_{t}^{(i)}-\mathbb{E}[S_{t}^{(i)}]\) for all \(i\geq 1\), \(S_{t}^{(i)}\) are so-called power jump processes with \(S_{t}^{(1)}=S_{t}\), \(S_{t}^{(i)}=\sum\limits_{0\leq s\leq t}(\Delta S_{s})^{i}\) for \(i\geq 2\) and the coefficients \(c_{i,j}\) correspond to the orthonormalization of polynomials \(1,x,x^{2},\cdots\) with respect to the measure \(\mu(dx)=x^{2}v(dx)+\sigma^{2}\delta_{0}(dx)\). Furthermore, it is well-known that the Teugels martingales \(\left\{H_{t}^{(i)}\right\}_{i=1}^{\infty}\) are pairwise strongly orthogonal and their predictable quadratic variation processes are given by
\[\left\langle H_{t}^{(i)},H_{t}^{(j)}\right\rangle=\delta_{ij}t,\]
where
\[\delta_{ij}=\begin{cases}1&i=j\\ 0&i\neq j.\end{cases}\]
The reader can refer to [24, 25] for more details about Teugels martingales.
Let \(\mathbb{H}\) be a Hilbert space with norm \(\|\cdot\|_{\mathbb{H}}\), then we introduce some notations as follows:
\(\bullet\)\(l^{2}\): the space of all real-valued sequences \(x=(x_{n})_{n\geq 1}\) satisfying
\[\|x\|_{l^{2}}:=\Big{(}\sum\limits_{i=1}^{\infty}x_{i}^{2}\Big{)}^{1/2}<\infty.\]
\(\bullet\)\(l^{2}(\mathbb{H})\): the space of all \(\mathbb{H}\)-valued sequences \(f=\left\{f^{i}\right\}_{i\geq 1}\) satisfying
\[\|f\|_{l^{2}(\mathbb{H})}:=\Big{(}\sum\limits_{i=1}^{\infty}\|f^{i}\|_{ \mathbb{H}}^{2}\Big{)}^{1/2}<\infty.\]
\(\bullet\)\(C(s,r;\mathbb{H})\): the space of continuous functions form \([s,r]\) into \(H\).
\(\bullet\)\(L^{2}(s,r;\mathbb{H})\): the space of all \(\mathbb{H}\)-valued Lebesgue measurable functions \(\xi(\cdot)\) satisfying
\[\|\xi(\cdot)\|_{L^{2}(s,r;\mathbb{H})}:=\bigg{[}\int_{s}^{r}|\xi(t)|_{\mathbb{ H}}^{2}dt\bigg{]}^{1/2}<\infty.\]
\(\bullet\)\(L^{2}_{\mathcal{F}_{T}}(\Omega;\mathbb{H})\): the space of all \(\mathbb{H}\)-valued and \(\mathcal{F}_{T}\)-measurable random variables \(\xi\) satisfying
\[\|\xi\|_{L^{2}_{\mathcal{F}_{T}}(\Omega;\mathbb{H})}:=\left[\mathbb{E}\|\xi\| _{\mathbb{H}}^{2}\right]^{1/2}<\infty.\]
\(\bullet\)\(L^{\infty}_{\mathcal{F}_{T}}(\Omega;\mathbb{H})\): the space of all \(\mathbb{H}\)-valued and \(\mathcal{F}_{T}\)-measurable essentially bounded variables.
* \(L^{2}_{\mathbb{F}}(s,r;\mathbb{H})\): the space of all \(\mathbb{H}\)-valued and \(\mathbb{F}\)-predictable processes \(f(\cdot)\) satisfying \[\|f(\cdot)\|_{L^{2}_{\mathbb{F}}(0,T;\mathbb{H})}:=\left[\mathbb{E}\bigg{(}\int _{s}^{r}\|f(t)\|_{\mathbb{H}}^{2}ds\bigg{)}\right]^{1/2}<\infty.\]
* \(M^{2}_{\mathbb{F}}(s,r;\mathbb{H})\): the space of all \(\mathbb{H}\)-valued and \(\mathbb{F}\)-adapted processes \(f(\cdot)\) satisfying \[\|f(\cdot)\|_{L^{2}_{\mathbb{F}}(0,T;\mathbb{H})}:=\left[\mathbb{E}\bigg{(}\int _{s}^{r}\|f(t)\|_{\mathbb{H}}^{2}ds\bigg{)}\right]^{1/2}<\infty.\]
* \(L^{2}_{\mathbb{F}}(s,r;l^{2}(\mathbb{H}))\): the space of all \(l^{2}(\mathbb{H})\)-valued and \(\mathbb{F}\)-predictable processes \(f(\cdot)=\left\{f^{i}(\cdot)\right\}_{i\geq 1}\) satisfying \[\|f(\cdot)\|_{L^{2}_{\mathbb{F}}(0,T;l^{2}(\mathbb{H}))}:=\left[\mathbb{E} \bigg{(}\int_{s}^{r}\sum_{i=1}^{\infty}\|f^{i}(t)\|_{\mathbb{H}}^{2}ds\bigg{)} \right]^{1/2}<\infty.\]
* \(L^{\infty}_{\mathbb{F}}(s,r;\mathbb{H})\): the space of all \(\mathbb{H}\)-valued and \(\mathbb{F}\)-predictable essentially bounded processes.
* \(\mathcal{S}^{2}_{\mathbb{F}}(s,r;\mathbb{H})\): the space of all \(\mathbb{H}\)-valued and \(\mathbb{F}\)-adapted cadlag processes \(f(\cdot)\) satisfying \[\|f(\cdot)\|_{\mathcal{S}^{2}_{\mathbb{F}}(s,r;\mathbb{H})}:=\left[\mathbb{E} \bigg{(}\sup_{t\in[s,r]}\|f(t)\|_{\mathbb{H}}^{2}\bigg{)}\right]^{1/2}<\infty.\]
For the sake of simplicity of notation, we will also present some product space as follows:
* \(N^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{m})):= \mathcal{S}^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n})\times\mathcal{S}^{2}_{ \mathbb{F}}(0,T;\mathbb{R}^{n})\times M^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n \times d})\times L^{2}_{\mathbb{F}}(0,T;l^{2}(\mathbb{R}^{n}))\). For any \(\theta(\cdot)=(x(\cdot)^{\top},y(\cdot)^{\top},z(\cdot)^{\top},k(\cdot)^{ \top})^{\top}\in N^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}( \mathbb{R}^{n}))\), its norm is given by \[\|\theta(\cdot)\|_{N^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}( \mathbb{R}^{n}))}:=\left\{\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|x(t)|^{2}+\sup_{ t\in[0,T]}|y(t)|^{2}+\int_{0}^{T}|z(t)|^{2}dt+\int_{0}^{T}\|k(t)\|_{l^{2}( \mathbb{R}^{n})}^{2}dt\bigg{]}\right\}^{1/2}.\]
* \(\mathcal{N}^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^ {n})):=L^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n})\times L^{2}_{\mathbb{F}}(0,T; \mathbb{R}^{n})\times M^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n\times d})\times L^{ 2}_{\mathbb{F}}(0,T;l^{2}(\mathbb{R}^{n}))\). For any \(\rho(\cdot)=(\varphi(\cdot)^{\top},\psi(\cdot)^{\top},\gamma(\cdot)^{\top}, \beta(\cdot)^{\top})^{\top}\in\mathcal{N}^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n(2 +d)}\times l^{2}(\mathbb{R}^{n}))\), its norm is given by \[\|\rho(\cdot)\|_{\mathcal{N}^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n(2+d)}\times l ^{2}(\mathbb{R}^{n}))}:=\left\{\mathbb{E}\bigg{[}\int_{0}^{T}|\varphi(t)|^{2}dt +\int_{0}^{T}|\psi(t)|^{2}dt+\int_{0}^{T}|\gamma(t)|^{2}dt+\int_{0}^{T}\|\beta( t)\|_{l^{2}(\mathbb{R}^{n})}^{2}dt\bigg{]}\right\}^{1/2}.\]
* \(\mathcal{Q}(-\delta,0;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n}))\times L ^{2}_{\mathcal{F}_{T}}(\Omega;\mathbb{R}^{n})\times\mathcal{N}^{2}_{\mathbb{F}} (0,T;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n}))\). For any \((\pi(\cdot),\eta,\rho(\cdot))\in\mathcal{H}[-\delta,T]\), its norm is given by \[\|(\pi(\cdot),\eta,\rho(\cdot))\|_{\mathcal{H}[-\delta,T]}:=\left\{\|\pi(t)\|_{ \mathcal{Q}(-\delta,0;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n}))}^{2}+ \|\eta\|_{L^{2}_{\mathcal{F}_{T}}(\Omega;\mathbb{R}^{n})}^{2}+\|\rho(\cdot)\|_{ \mathcal{N}^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^ {n}))}^{2}\right\}^{1/2}.\]
In what follows, we shall present some basic results on SDEDL and ABSDEL.
Firstly, we study the following SDEDL:
\[\begin{cases}dx_{t}=b(t,x_{t},x_{t}^{\prime})dt+\sigma(t,x_{t},x_{t}^{\prime}) dW_{t}+\sum_{i=1}^{\infty}g^{(i)}(t,x_{t-},x_{t-}^{\prime})dH_{t}^{(i)},\quad t\in[0,T],\\ x_{t}=\lambda_{t},\quad t\in[-\delta,0],\end{cases} \tag{2.1}\]
where \(x_{t}^{\prime}=x_{t-\delta}\) and \(x_{t-}^{\prime}=x_{(t-\delta)-}\).
The coefficients \((b,\sigma,g,\lambda)\) are assumed to satisfy the following conditions:
**Assumption 2.1**.: \(\lambda(\cdot)\in C(-\delta,0;\mathbb{R}^{n})\) and \((b,\sigma,g)\) are three given random mappings
\[b:[0,T]\times\Omega\times\mathbb{R}^{n}\times\mathbb{R}^{n}\to \mathbb{R}^{n},\] \[\sigma:[0,T]\times\Omega\times\mathbb{R}^{n}\times\mathbb{R}^{n} \to\mathbb{R}^{n\times d},\] \[g=(g^{(i)})_{i=1}^{\infty}:[0,T]\times\Omega\times\mathbb{R}^{n }\times\mathbb{R}^{n}\to l^{2}(\mathbb{R}^{n})\]
satisfying
(i)For any \(x,x^{\prime}\in\mathbb{R}^{n}\), \(b(\cdot,x,x^{\prime})\), \(\sigma(\cdot,x,x^{\prime})\) and \(g(\cdot,x,x^{\prime})\) are \(\mathbb{F}\)-progressively measurable. Moreover, \(b(\cdot,0,0)\in L_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n})\), \(\sigma(\cdot,0,0)\in L_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n\times d})\), \(g(\cdot,0,0)\in L_{\mathbb{F}}^{2}(0,T;l^{2}(\mathbb{R}^{n}))\).
(ii)The mappings \(b\), \(\sigma\) and \(g\) are uniformly Lipschitz continuous with respect to \((x,x^{\prime})\), i.e., for any \(x,\bar{x},x^{\prime},\bar{x}^{\prime}\in\mathbb{R}^{n}\), there exists a constant \(L>0\) such that
\[|b(t,x,x^{\prime})-b(t,\bar{x},\bar{x}^{\prime})|+|\sigma(t,x,x^{\prime})- \sigma(t,\bar{x},\bar{x}^{\prime})|+\|g(t,x,x^{\prime})-g(t,\bar{x},\bar{x}^{ \prime})\|_{l^{2}(\mathbb{R}^{n})}\leq L(|x-\bar{x}|+|x^{\prime}-\bar{x}^{ \prime}|).\]
**Lemma 2.1**.: _Under Assumption 2.1, SDEDL (2.1) with coefficients \((b,\sigma,g,\lambda)\) admits a unique solution \(x(\cdot)\in\mathcal{S}_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n})\). Moreover, we have the following estimate:_
\[\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|{x_{t}}|^{2}\bigg{]}\leq K\mathbb{E}\bigg{[} \sup_{t\in[-\delta,0]}|\lambda_{t}|^{2}+\int_{0}^{T}|b(t,0,0)|^{2}dt+\int_{0} ^{T}|\sigma(t,0,0)|^{2}dt+\int_{0}^{T}\|g(t,0,0)\|_{l^{2}(\mathbb{R}^{n})}^{2} dt\bigg{]}, \tag{2.2}\]
_where \(K\) is a positive constant depending only on \(T\) and the Lipschitz constant \(L\). Furthermore, let \((\bar{b},\bar{\sigma},\bar{g},\bar{\lambda})\) be another set of coefficients satisfying Assumption 2.1, and assume that \(\bar{x}(\cdot)\in\mathcal{S}_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n})\) is a solution to SDEDL (2.1) corresponding the coefficients \((\bar{b},\bar{\sigma},\bar{g},\bar{\lambda})\). Then the following estimate holds:_
\[\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|{x_{t}}-\bar{x}_{t}|^{2}\bigg{]}\leq K\mathbb{E}\bigg{[}\sup_{t\in[-\delta,0]}|\lambda_{t}-\bar{ \lambda}_{t}|^{2}+\int_{0}^{T}|b(t,\bar{x}_{t},\bar{x}_{t}^{\prime})-\bar{b}(t,\bar{x}_{t},\bar{x}_{t}^{\prime})|^{2}dt \tag{2.3}\] \[+\int_{0}^{T}|\sigma(t,\bar{x}_{t},\bar{x}_{t}^{\prime})-\bar{ \sigma}(t,\bar{x}_{t},\bar{x}_{t}^{\prime})|^{2}dt+\int_{0}^{T}\|g(t,\bar{x}_{ t},\bar{x}_{t}^{\prime})-\bar{g}(t,\bar{x}_{t},\bar{x}_{t}^{\prime})\|_{l^{2}( \mathbb{R}^{n})}^{2}dt\bigg{]},\]
_where \(K\) is also a positive constant which only depends only on \(T\) and the Lipschitz constant \(L\)._
Proof.: Firstly, the existence of uniqueness of the solution to SDEDL (2.1) has been proved in Theorem 3.1 of Li and Wu [15], so we just need to prove the estimate (2.2) and (2.3). In the following proof, the constant \(K\) may be changed line to line.
For simplicity, we denote by
\[\begin{cases}\widehat{x}_{s}=x_{s}-\bar{x}_{s},\quad\widehat{x}_{t}=x_{t}-\bar {x}_{t},\\ \widehat{x}_{s}^{\prime}=x_{s}^{\prime}-\bar{x}_{s}^{\prime},\quad\widehat{ \lambda}_{s}=\lambda_{s}-\bar{\lambda}_{s},\\ \widehat{b}_{s}=b(s,x_{s},x_{s}^{\prime})-\bar{b}(s,\bar{x}_{s},\bar{x}_{s}^{ \prime}),\\ \widehat{\sigma}_{s}=\sigma(s,x_{s},x_{s}^{\prime})-\bar{\sigma}(s,\bar{x}_{s}, \bar{x}_{s}^{\prime}),\\ \widehat{g}_{s}=g(s,x_{s},x_{s}^{\prime})-\bar{g}(s,\bar{x}_{s},\bar{x}_{s}^{ \prime}).\end{cases}\]
Using Ito's formula to \(|\widehat{x}_{s}|^{2}\) leads to
\[|\widehat{x}_{t}|^{2}= |\widehat{x}_{0}|^{2}+2\int_{0}^{t}\left\langle\widehat{x}_{s}, \widehat{b}_{s}\right\rangle ds+\int_{0}^{t}|\widehat{\sigma}_{s}|^{2}ds+\sum_{i,j=1}^{\infty}\int_{0}^{t}\left\langle\widehat{g}_{s}^{(i)},\widehat{g}_{s}^{(j )}\right\rangle d[H_{i},H_{j}]_{s} \tag{2.4}\] \[+2\int_{0}^{t}\left\langle\widehat{x}_{s},\widehat{\sigma}_{s}dW_{ s}\right\rangle+2\sum_{i=1}^{\infty}\int_{0}^{t}\left\langle\widehat{x}_{s-}, \widehat{g}_{s-}^{(i)}\right\rangle dH_{s}^{(i)}.\]
Taking expectation on both sides and applying the elementary inequality \(2ab\leq a^{2}+b^{2}\), we can get
\[\mathbb{E}\big{[}|\widehat{x}_{t}|^{2}\big{]}\leq\mathbb{E}\Bigg{\{}|\widehat{x }_{0}|^{2}+\int_{0}^{t}\big{(}|\widehat{x}_{s}|^{2}+|\widehat{b}_{s}|^{2}+| \widehat{\sigma}_{s}|^{2}+\|\widehat{g}_{s}\|_{l^{2}(\mathbb{R}^{n})}^{2}\big{)} ds\Bigg{\}}. \tag{2.5}\]
It is easy to verify that
\[\int_{0}^{t}|\widehat{x}_{s}^{\prime}|^{2}ds=\int_{-\delta}^{t-\delta}|\widehat{x }_{s}|^{2}ds=\int_{-\delta}^{0}|\widehat{x}_{s}|^{2}+\int_{0}^{t-\delta}| \widehat{x}_{s}|^{2}ds\leq\delta\sup_{t\in[-\delta,0]}|\widehat{\lambda}_{s}|^ {2}+\int_{0}^{t}|\widehat{x}_{s}|^{2}ds. \tag{2.6}\]
With the Lipschitz condition in Assumption 2.1, the inequality (2.6) and the elementary inequality \((a+b)^{2}\leq 2a^{2}+2b^{2}\), we find that
\[\int_{0}^{t}|\widehat{b}_{s}|^{2}ds= \int_{0}^{t}\big{|}b(s,x_{s},x_{s}^{\prime})-b(s,\bar{x}_{s}, \bar{x}_{s}^{\prime})+b(s,\bar{x}_{s},\bar{x}_{s}^{\prime})-\bar{b}(s,\bar{x}_ {s},\bar{x}_{s}^{\prime})\big{|}^{2}ds \tag{2.7}\] \[\leq 2\int_{0}^{t}\big{|}b(s,x_{s},x_{s}^{\prime})-b(s,\bar{x}_{s}, \bar{x}_{s}^{\prime})\big{|}^{2}ds+2\int_{0}^{t}\big{|}b(s,\bar{x}_{s},\bar{x} _{s}^{\prime})-\bar{b}(s,\bar{x}_{s},\bar{x}_{s}^{\prime})\big{|}^{2}ds\] \[\leq 2L^{2}\int_{0}^{t}\big{(}|\widehat{x}_{s}|+|\widehat{x}_{s}^{ \prime}|\big{)}^{2}ds+2\int_{0}^{t}\big{|}b(s,\bar{x}_{s},\bar{x}_{s}^{\prime })-\bar{b}(s,\bar{x}_{s},\bar{x}_{s}^{\prime})\big{|}^{2}ds\] \[\leq 4L^{2}\int_{0}^{t}|\widehat{x}_{s}|^{2}ds+4L^{2}\int_{0}^{t}| \widehat{x}_{s}^{\prime}|^{2}ds+2\int_{0}^{t}\big{|}b(s,\bar{x}_{s},\bar{x}_{ s}^{\prime})-\bar{b}(s,\bar{x}_{s},\bar{x}_{s}^{\prime})\big{|}^{2}ds\] \[\leq 8L^{2}\int_{0}^{t}|\widehat{x}_{s}|^{2}ds+4\delta L^{2}\sup_{t \in[-\delta,0]}|\widehat{\lambda}_{s}|^{2}+2\int_{0}^{t}\big{|}b(s,\bar{x}_{ s},\bar{x}_{s}^{\prime})-\bar{b}(s,\bar{x}_{s},\bar{x}_{s}^{\prime})\big{|}^{2}ds.\]
Likewise,
\[\int_{0}^{t}|\widehat{\sigma}_{s}|^{2}ds\leq 8L^{2}\int_{0}^{t}|\widehat{x}_{s}|^{2}ds+4\delta L^{2} \sup_{t\in[-\delta,0]}|\widehat{\lambda}_{s}|^{2}+2\int_{0}^{t}\big{|}\sigma( s,\bar{x}_{s},\bar{x}_{s}^{\prime})-\bar{\sigma}(s,\bar{x}_{s},\bar{x}_{s}^{ \prime})\big{|}^{2}ds, \tag{2.8}\] \[\int_{0}^{t}\|\widehat{g}_{s}\|^{2}_{l^{2}(\mathbb{R}^{n})}ds\leq 8L^{2}\int_{0}^{t}|\bar{x}_{s}|^{2}ds+4\delta L^{2} \sup_{t\in[-\delta,0]}|\widehat{\lambda}_{s}|^{2}+2\int_{0}^{t}\big{\|}g(s, \bar{x}_{s},\bar{x}_{s}^{\prime})-\bar{g}(s,\bar{x}_{s},\bar{x}_{s}^{\prime}) \big{\|}^{2}_{l^{2}(\mathbb{R}^{n})}ds. \tag{2.9}\]
Putting (2.7), (2.8) and (2.9) into (2.5), we derive
\[\mathbb{E}\big{[}|\widehat{x}_{t}|^{2}\big{]}\leq K\mathbb{E}\Bigg{\{}\int_{0}^{t}|\widehat{x}_{s}|^{2}ds+\sup_{t\in[- \delta,0]}|\widehat{\lambda}_{s}|^{2}+\int_{0}^{t}\big{|}b(s,\bar{x}_{s},\bar{ x}_{s}^{\prime})-\bar{b}(s,\bar{x}_{s},\bar{x}_{s}^{\prime})\big{|}^{2}ds \tag{2.10}\] \[+\int_{0}^{t}\big{|}\sigma(s,\bar{x}_{s},\bar{x}_{s}^{\prime})- \bar{\sigma}(s,\bar{x}_{s},\bar{x}_{s}^{\prime})\big{|}^{2}ds+\int_{0}^{t} \big{\|}g(s,\bar{x}_{s},\bar{x}_{s}^{\prime})-\bar{g}(s,\bar{x}_{s},\bar{x}_{s} ^{\prime})\big{\|}^{2}_{l^{2}(\mathbb{R}^{n})}ds\Bigg{\}}.\]
Thus, applying Gronwalls inequality gives
\[\sup_{t\in[0,T]}\mathbb{E}\big{[}|\widehat{x}_{t}|^{2}\big{]}\leq K\mathbb{E}\Bigg{\{}\sup_{t\in[-\delta,0]}|\widehat{\lambda}_{s}|^{2}+ \int_{0}^{T}\big{|}b(s,\bar{x}_{s},\bar{x}_{s}^{\prime})-\bar{b}(s,\bar{x}_{s}, \bar{x}_{s}^{\prime})\big{|}^{2}ds+\int_{0}^{T}\big{|}\sigma(s,\bar{x}_{s}, \bar{x}_{s}^{\prime})-\bar{\sigma}(s,\bar{x}_{s},\bar{x}_{s}^{\prime})\big{|}^ {2}ds \tag{2.11}\] \[+\int_{0}^{T}\big{\|}g(s,\bar{x}_{s},\bar{x}_{s}^{\prime})-\bar{g}( s,\bar{x}_{s},\bar{x}_{s}^{\prime})\big{\|}^{2}_{l^{2}(\mathbb{R}^{n})}ds\Bigg{\}}.\]
From (2.4), (2.7) to (2.9) and (2.11), we apply the Burkholder-Davis-Gundy inequality and derive that
\[\mathbb{E}\big{[}\sup_{t\in[0,T]}|\widehat{x}_{t}|^{2}\big{]}\leq K\mathbb{E}\Bigg{\{}\sup_{t\in[-\delta,0]}|\widehat{ \lambda}_{s}|^{2}+\int_{0}^{T}\big{(}|\widehat{x}_{s}|^{2}+|\widehat{\vartheta}_{s}| ^{2}+|\widehat{\vartheta}_{s}|^{2}_{l^{2}(\mathbb{R}^{n})}\big{)}ds \tag{2.12}\] \[\qquad+\sup_{t\in[0,T]}2\int_{0}^{t}\widehat{\sigma}_{s},\widehat {\sigma}_{s}dW_{s})+\sup_{t\in[0,T]}2\sum_{i=1}^{\infty}\int_{0}^{t}\Big{\langle} \widehat{x}_{s-},\widehat{g}_{s-}^{(i)}\Big{\rangle}\,dH_{s}^{(i)}\Bigg{\}}\] \[\leq K\mathbb{E}\Bigg{\{}\sup_{t\in[-\delta,0]}|\widehat{\lambda}_{s}|^ {2}+\int_{0}^{T}\big{|}b(s,\bar{x}_{s},\bar{x}_{s}^{\prime})-\bar{b}(s,\bar{x}_{s}, \bar{x}_{s}^{\prime})\big{|}^{2}ds+\int_{0}^{T}\big{|}\sigma(s,\bar{x}_{s},\bar{ x}_{s}^{\prime})-\bar{\sigma}(s,\bar{x}_{s},\bar{x}_{s}^{\prime})\big{|}^{2}ds\] \[\qquad+\int_{0}^{T}\big{\|}g(s,\bar{x}_{s},\bar{x}_{s}^{\prime})- \bar{g}(s,\bar{x}_{s},\bar{x}_{s}^{\prime})\big{\|}^{2}_{l^{2}(\mathbb{R}^{n})} ds\Bigg{\}}+\frac{1}{2}\mathbb{E}\big{[}\sup_{t\in[0,T]}|\widehat{x}_{t}|^{2}\big{]}.\]
Then we can easily deduce the desired estimate (2.3) and we take \((\bar{b},\bar{\sigma},\bar{g},\bar{\lambda})=(0,0,0,0)\) such taht the estimate (2.2) holds. For the proof of the existence of uniqueness of the solution, we can also get it directly from the estimate (2.3) by the method of continuation.
Secondly, we consider the ABSDEL as follows:
\[\begin{cases}dy_{t}=f(t,y_{t},z_{t},k_{t},y^{\prime}_{t},z^{\prime}_{t},k^{ \prime}_{t})dt+z_{t}dW_{t}+\sum_{i=1}^{\infty}k_{t}^{(i)}dH_{t}^{(i)},\quad t \in[0,T],\\ y_{T}=\nu,\\ y_{t}=z_{t}=k_{t}=0,\quad t\in(T,T+\delta],\end{cases} \tag{2.13}\]
where \(y^{\prime}_{t}=\mathbb{E}^{\mathcal{F}_{t}}[y_{t+\delta}]\), \(z^{\prime}_{t}=\mathbb{E}^{\mathcal{F}_{t}}[z_{t+\delta}]\), \(k^{\prime}_{t}=\mathbb{E}^{\mathcal{F}_{t}}[k_{t+\delta}]\).
The coefficients \((\nu,f)\) are assumed to satisfy the following assumptions:
**Assumption 2.2**.: \(\nu\in L^{2}_{\mathcal{F}_{T}}(\Omega;\mathbb{R}^{n})\) and \(f\) is a given random mapping
\[f:[0,T]\times\Omega\times\mathbb{R}^{n}\times\mathbb{R}^{n\times d}\times l^ {2}(\mathbb{R}^{n})\times\mathbb{R}^{n}\times\mathbb{R}^{n\times d}\times l^ {2}(\mathbb{R}^{n})\rightarrow\mathbb{R}^{n}\]
satisfying
(i)For any \((y,z,k,y^{\prime},z^{\prime},k^{\prime})\in\mathbb{R}^{n}\times\mathbb{R}^{n \times d}\times l^{2}(\mathbb{R}^{n})\times\mathbb{R}^{n}\times\mathbb{R}^{n \times d}\times l^{2}(\mathbb{R}^{n})\), \(f(\cdot,y,z,k,y^{\prime},z^{\prime},k^{\prime})\) is \(\mathbb{F}\)-progressively measurable. Besides, \(f(\cdot,0,0,0,0,0,0)\in L^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n})\).
(ii)The mapping \(f\) is uniformly Lipschitz continuous with respect to \((y,z,k,y^{\prime},z^{\prime},k^{\prime})\), i.e., for any \(y,\bar{y},y^{\prime},\bar{y}^{\prime}\in\mathbb{R}^{n}\), \(z,\bar{z},z^{\prime},\bar{z}^{\prime}\in\mathbb{R}^{n\times d}\), \(k,\bar{k},k^{\prime},\bar{k}^{\prime}\in l^{2}(\mathbb{R}^{n})\), there exists a constant \(L>0\) such that
\[|f(t,y,z,k,y^{\prime},z^{\prime},k^{\prime})-f(t,\bar{y},\bar{z},\bar{k},\bar{y}^{\prime},\bar{z}^{\prime},\bar{k}^{\prime})|\] \[\leq L\big{(}|y-\bar{y}|+|z-\bar{z}|+\|k-\bar{k}\|_{l^{2}(\mathbb{ R}^{n})}+|y^{\prime}-\bar{y}^{\prime}|+|z^{\prime}-\bar{z}^{\prime}|+\|k^{ \prime}-\bar{k}^{\prime}\|_{l^{2}(\mathbb{R}^{n})}\big{)}.\]
**Lemma 2.2**.: _Under Assumption 2.2, ABSDEL (2.13) with coefficients \((\nu,f)\) admits a unique solution \((y(\cdot),z(\cdot),k(\cdot))\in\mathcal{S}^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n })\times M^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n\times d})\times L^{2}_{\mathbb{ F}}(0,T;l^{2}(\mathbb{R}^{n}))\). Moreover, the following estimate holds:_
\[\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|y_{t}|^{2}+\int_{0}^{T}(|z_{t}|^{2}+\|k_{ t}\|^{2}_{l^{2}(\mathbb{R}^{n})})dt\bigg{]}\leq K\mathbb{E}\bigg{[}|\nu|^{2}+\int_{0}^{T}|f(t,0,0,0,0,0,0)|^{2}dt \bigg{]}, \tag{2.14}\]
_where \(K\) is a positive constant depending only on \(T\) and the Lipschitz constant \(L\) of the mapping \(f\). Furthermore, suppose that \((\bar{\nu},\bar{f})\) is another set of coefficients, and assume that \((\bar{y}(\cdot),\bar{z}(\cdot),\bar{k}(\cdot))\in\mathcal{S}^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n})\times M^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{n\times d})\times L ^{2}_{\mathbb{F}}(0,T;l^{2}(\mathbb{R}^{n}))\) is a solution to ABSDEL (2.13) with coefficients \((\bar{\nu},\bar{f})\) satisfying Assumption 2.2. Then we have the following estimate:_
\[\begin{split}&\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|y_{t}-\bar{y}_{t}|^{2 }+\int_{0}^{T}(|z_{t}-\bar{z}_{t}|^{2}+\|k_{t}-\bar{k}_{t}\|^{2}_{l^{2}(\mathbb{ R}^{n})})dt\bigg{]}\\ &\leq K\mathbb{E}\bigg{[}\int_{0}^{T}|f(t,\bar{y}_{t},\bar{z}_{t}, \bar{k}_{t},\bar{y}^{\prime}_{t},\bar{z}^{\prime}_{t},\bar{k}^{\prime}_{t})- \bar{f}(t,\bar{y}_{t},\bar{z}_{t},\bar{k}_{t},\bar{y}^{\prime}_{t},\bar{z}^{ \prime}_{t},\bar{k}^{\prime}_{t})|^{2}dt+|\nu-\bar{\nu}|^{2}\bigg{]},\end{split} \tag{2.15}\]
_where \(\bar{y}^{\prime}_{t}=\mathbb{E}^{\mathcal{F}_{t}}[\bar{y}_{t+\delta}]\), \(\bar{z}^{\prime}_{t}=\mathbb{E}^{\mathcal{F}_{t}}[\bar{z}_{t+\delta}]\), \(\bar{k}^{\prime}_{t}=\mathbb{E}^{\mathcal{F}_{t}}[\bar{k}_{t+\delta}]\) and the positive constant \(K\) is similar to the previous one._
Proof.: In fact, Theorem 3.2 of Li and Wu [15] has also proved the unique solvability result of solution to ABSDEL (2.13) by means of the Fix Point Theorem. Therefore, here we will only give the proof of the estimate of the solution.
Similarly, the constant \(K\) can also be changed line to line. Furthermore, to simplify our notation, we denote by
\[\begin{cases}\widehat{y}_{s}=y_{s}-\bar{y}_{s},\quad\widehat{z}_{s}=z_{s}-\bar{ z}_{s},\quad\widehat{k}_{s}=k_{s}-\bar{k}_{s},\\ \widehat{y}^{\prime}_{s}=y^{\prime}_{s}-\bar{y}^{\prime}_{s},\quad\widehat{z}^{ \prime}_{s}=z^{\prime}_{s}-\bar{z}^{\prime}_{s},\quad\widehat{k}^{\prime}_{s}=k^{ \prime}_{s}-\bar{k}^{\prime}_{s},\\ \widehat{f}_{s}=f(s,y_{s},z_{s},k_{s},y^{\prime}_{s},z^{\prime}_{s},k^{\prime}_{s})- \bar{f}(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s},\bar{y}^{\prime}_{s},\bar{z}^{ \prime}_{s},\bar{k}^{\prime}_{s}),\\ \widehat{\nu}=\nu-\bar{\nu}.\end{cases}\]
Firstly, applying the Ito's formula to \(|\widehat{y}_{s}|^{2}\) gives
\[\begin{split}&|\widehat{y}_{t}|^{2}+\int_{t}^{T}|\widehat{z}_{t}|^{ 2}+\sum_{i,j=1}^{\infty}\int_{t}^{T}\left\langle\widehat{k}_{s}^{(i)},\widehat {k}_{s}^{(j)}\right\rangle d[H_{i},H_{j}]_{s}\\ =&\ |\widehat{\nu}|^{2}-2\int_{t}^{T}\left\langle \widehat{y}_{s},\widehat{f}_{s}\right\rangle ds-2\int_{t}^{T}\left\langle \widehat{y}_{s},\widehat{z}_{s}dW_{s}\right\rangle-2\sum_{i=1}^{\infty}\int_{t }^{T}\left\langle\widehat{y}_{s},\widehat{k}_{s}^{(i)}\right\rangle dH_{s}^{( i)}.\end{split} \tag{2.16}\]
Taking expectation on both sides, we have
\[\begin{split}&\ \mathbb{E}\bigg{[}|\widehat{y}_{t}|^{2}+\int_{t}^{T} \big{(}|\widehat{z}_{s}|^{2}+\|\widehat{k}_{s}\|_{I^{2}(\mathbb{R}^{n})}^{2} \big{)}ds\bigg{]}\\ &\leq\mathbb{E}\Bigg{\{}\big{|}\widehat{\nu}\big{|}^{2}+\int_{t} ^{T}2|\widehat{y}_{s}||\widehat{f}_{s}|ds\Bigg{\}}\\ &\leq\mathbb{E}\Bigg{\{}\big{|}\widehat{\nu}\big{|}^{2}+\frac{1}{ \varepsilon}\int_{t}^{T}|\widehat{y}_{s}|^{2}ds+\varepsilon\int_{t}^{T}| \widehat{f}_{s}|^{2}ds\Big{\}}\\ &=\mathbb{E}\Bigg{\{}\big{|}\widehat{\nu}\big{|}^{2}+\frac{1}{ \varepsilon}\int_{t}^{T}|\widehat{y}_{s}|^{2}ds+\varepsilon\int_{t}^{T}|f(s, y,z,k,y^{\prime},z^{\prime},k^{\prime})-f(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s}, \bar{y}_{s}^{\prime},\bar{z}_{s}^{\prime},\bar{k}_{s}^{\prime})\\ &\qquad\quad+f(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s},\bar{y}_{s}^ {\prime},\bar{z}_{s}^{\prime},\bar{k}_{s}^{\prime})-\bar{f}(s,\bar{y}_{s}, \bar{z}_{s},\bar{k}_{s},\bar{y}_{s}^{\prime},\bar{z}_{s}^{\prime},\bar{k}_{s}^ {\prime})|^{2}ds\Bigg{\}}\\ &\leq\mathbb{E}\Bigg{\{}\big{|}\widehat{\nu}\big{|}^{2}+\frac{1} {\varepsilon}\int_{t}^{T}|\widehat{y}_{s}|^{2}ds+2\varepsilon\int_{t}^{T}|f(s, y,z,k,y^{\prime},z^{\prime},k^{\prime})-f(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s}, \bar{y}_{s}^{\prime},\bar{z}_{s}^{\prime},\bar{k}_{s}^{\prime})|^{2}ds\\ &\qquad\quad+2\varepsilon\int_{t}^{T}|f(s,\bar{y}_{s},\bar{z}_{s },\bar{k}_{s},\bar{y}_{s}^{\prime},\bar{z}_{s}^{\prime},\bar{k}_{s}^{\prime})- \bar{f}(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s},\bar{y}_{s}^{\prime},\bar{z}_{s }^{\prime},\bar{k}_{s}^{\prime})|^{2}ds\Bigg{\}},\end{split} \tag{2.17}\]
where the elementary inequality \(2ab\leq\frac{1}{\varepsilon}a^{2}+\varepsilon b^{2}\) and \((a+b)^{2}\leq 2a^{2}+2b^{2}\) for any \(a>0\), \(b>0\), \(\varepsilon>0\) have been used. Then applying the Lipschitz condition in Assumption 2.2 and the Cauchy-Schwarz inequality yields
\[\begin{split}&\ \mathbb{E}\bigg{[}|\widehat{y}_{t}|^{2}+\int_{t}^{T} \big{(}|\widehat{z}_{s}|^{2}+\|\widehat{k}_{s}\|_{I^{2}(\mathbb{R}^{n})}^{2} \big{)}ds\bigg{]}\\ \leq&\ \mathbb{E}\Bigg{\{}\big{|}\widehat{\nu}\big{|}^{2}+ \frac{1}{\varepsilon}\int_{t}^{T}|\widehat{y}_{s}|^{2}ds+2\varepsilon\int_{t} ^{T}|f(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s},\bar{y}_{s}^{\prime},\bar{z}_{s} ^{\prime},\bar{k}_{s}^{\prime})-\bar{f}(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s}, \bar{y}_{s}^{\prime},\bar{z}_{s}^{\prime},\bar{k}_{s}^{\prime})|^{2}ds\\ &\qquad+12\varepsilon L^{2}\int_{t}^{T}\Big{[}|\widehat{y}_{s}|^{ 2}+|\widehat{z}_{s}|^{2}+\|\widehat{k}_{s}\|_{I^{2}(\mathbb{R}^{n})}^{2}+| \widehat{y}_{s}^{\prime}|^{2}+|\widehat{z}_{s}^{\prime}|^{2}+\|\widehat{k}_{s} ^{\prime}\|_{I^{2}(\mathbb{R}^{n})}^{2}\Big{]}ds\Bigg{\}}.\end{split} \tag{2.18}\]
By virtue of the Jensen's inequality and time-shifting transformation, we notice that
\[\mathbb{E}\Big{[}\int_{t}^{T}|\widehat{y}_{s}^{\prime}|^{2}ds\Big{]}=\mathbb{E} \Big{[}\int_{t}^{T}\big{|}\mathbb{E}\big{[}\widehat{y}_{s+\delta}|\mathcal{F}_{s }\big{]}\big{|}^{2}ds\Big{]}\leq\mathbb{E}\Big{[}\int_{t}^{T}\mathbb{E}\big{[} |\widehat{y}_{s+\delta}|^{2}\mathcal{F}_{s}\big{]}ds\Big{]}=\mathbb{E}\Big{[} \int_{t+\delta}^{T+\delta}|\widehat{y}_{s}|^{2}ds\Big{]}\leq\mathbb{E}\Big{[} \int_{t}^{T}|\widehat{y}_{s}|^{2}ds\Big{]}. \tag{2.19}\]
Similarly, we have
\[\mathbb{E}\Big{[}\int_{t}^{T}|\widehat{z}_{s}^{\prime}|^{2}ds\Big{]}\leq \mathbb{E}\Big{[}\int_{t}^{T}|\widehat{z}_{s}|^{2}ds\Big{]}, \tag{2.20}\]
\[\mathbb{E}\Big{[}\int_{t}^{T}\|\widehat{k}_{s}^{\prime}\|_{I^{2}(\mathbb{R}^{n})}^{2} ds\Big{]}\leq\mathbb{E}\Big{[}\int_{t}^{T}\|\widehat{k}_{s}\|_{I^{2}( \mathbb{R}^{n})}^{2}ds\Big{]}. \tag{2.21}\]
Taking (2.19), (2.20), (2.21) into (2.18), we can get
\[\begin{split}&\mathbb{E}\bigg{[}|\widehat{y}_{t}|^{2}+\int_{t}^{T} \big{(}|\widehat{z}_{s}|^{2}+\|\widehat{k}_{s}\|_{I^{\left(\mathbb{R}^{n} \right)}}^{2}\big{)}ds\bigg{]}\\ &\leq\mathbb{E}\bigg{\{}|\widehat{\nu}|^{2}+2\varepsilon\int_{t}^ {T}|f(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s},\bar{y}_{s}^{\prime},\bar{z}_{s}^ {\prime},\bar{k}_{s}^{\prime})-\bar{f}(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s}, \bar{y}_{s}^{\prime},\bar{z}_{s}^{\prime},\bar{k}_{s}^{\prime})|^{2}ds\\ &\quad+\frac{1}{\varepsilon}\int_{t}^{T}|\widehat{y}_{s}|^{2}ds+ 24\varepsilon L^{2}\int_{t}^{T}\Big{[}|\widehat{y}_{s}|^{2}+|\widehat{z}_{s}|^ {2}+\|\widehat{k}_{s}\|_{I^{\left(\mathbb{R}^{n}\right)}}^{2}\Big{]}ds\bigg{\}}.\end{split} \tag{2.22}\]
Selecting \(\varepsilon\) small enough such that \(24\varepsilon L^{2}<1\) leads to
\[\begin{split}&\mathbb{E}\bigg{[}|\widehat{y}_{t}|^{2}+\int_{t}^ {T}\big{(}|\widehat{z}_{s}|^{2}+\|\widehat{k}_{s}\|_{I^{\left(\mathbb{R}^{n} \right)}}^{2}\big{)}ds\bigg{]}\\ \leq&\ K\mathbb{E}\Bigg{\{}\int_{t}^{T}|f(s,\bar{y} _{s},\bar{z}_{s},\bar{k}_{s},\bar{y}_{s}^{\prime},\bar{z}_{s}^{\prime},\bar{k }_{s}^{\prime})-\bar{f}(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s},\bar{y}_{s}^{ \prime},\bar{z}_{s}^{\prime},\bar{k}_{s}^{\prime})|^{2}ds+\big{|}\widehat{\nu }\big{|}^{2}+\int_{t}^{T}|\widehat{y}_{s}|^{2}ds\Bigg{\}}.\end{split} \tag{2.23}\]
By using Gronwalls inequality, we deduce
\[\begin{split}&\sup_{t\in[0,T]}\mathbb{E}\big{[}|\widehat{y}_{t}|^{2 }\big{]}+\mathbb{E}\bigg{[}\int_{0}^{T}\big{(}|\widehat{z}_{s}|^{2}+\|\widehat {k}_{s}\|_{I^{\left(\mathbb{R}^{n}\right)}}^{2}\big{)}ds\bigg{]}\\ \leq&\ K\mathbb{E}\Bigg{\{}\int_{0}^{T}|f(s,\bar{y} _{s},\bar{z}_{s},\bar{k}_{s},\bar{y}_{s}^{\prime},\bar{z}_{s}^{\prime},\bar{k }_{s}^{\prime})-\bar{f}(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s},\bar{y}_{s}^{ \prime},\bar{z}_{s}^{\prime},\bar{k}_{s}^{\prime})|^{2}ds+\big{|}\widehat{\nu }\big{|}^{2}\Bigg{\}}.\end{split} \tag{2.24}\]
Based on all of the above analysis, we combine (2.16), (2.24) and apply the Burkholder-Davis-Gundy inequality to get
\[\begin{split}&\ \mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{y}_{t}|^{2} \bigg{]}\\ \leq&\ K\mathbb{E}\Bigg{\{}|\widehat{\nu}|^{2}+ \frac{1}{\varepsilon}\int_{0}^{T}|\widehat{y}_{s}|^{2}ds+\varepsilon\int_{0}^ {T}|\widehat{f}_{s}|^{2}ds+2\sup_{t\in[0,T]}\bigg{|}\int_{t}^{T}\big{\langle} \widehat{y}_{s},\widehat{z}_{s}dW_{s}\big{\rangle}\,\bigg{|}+2\sup_{t\in[0,T]} \bigg{|}\sum_{i=1}^{\infty}\int_{t}^{T}\Big{\langle}\widehat{y}_{s},\widehat {k}_{s}^{(i)}\Big{\rangle}\,dH_{s}^{(i)}\bigg{|}\Bigg{\}}\\ \leq&\ K\mathbb{E}\Bigg{\{}|\widehat{\nu}|^{2}+\int_{0 }^{T}|f(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s},\bar{y}_{s}^{\prime},\bar{z}_{s}^ {\prime},\bar{k}_{s}^{\prime})-\bar{f}(s,\bar{y}_{s},\bar{z}_{s},\bar{k}_{s}, \bar{y}_{s}^{\prime},\bar{z}_{s}^{\prime},\bar{k}_{s}^{\prime})|^{2}ds\Bigg{\}} \\ &\quad+\frac{1}{2}\mathbb{E}\Big{[}\sup_{t\in[0,T]}|\widehat{y}_ {t}|^{2}\Big{]}+K\mathbb{E}\bigg{[}\int_{0}^{T}\big{(}|\widehat{z}_{s}|^{2}+\| \widehat{k}_{s}\|_{I^{\left(\mathbb{R}^{n}\right)}}^{2}\big{)}ds\bigg{]}.\end{split} \tag{2.25}\]
Finally, combining (2.24) and (2.25) leads to the estimate (2.15). Then we take \((\bar{\nu},\bar{f})=(0,0)\) to get the estimate (2.14). Moreover, for the unique solvability, we can also infer it from the estimate (2.15) by the method of continuation.
## 3 FBSDELDAs with domination-monotonicity conditions
In this section, we will be committed to studying the FBSDELDA (1.1). Similar to the cases of SDEDL (2.1) and ABSDEL (2.13), we still have to make the following assumptions for the coefficients \((\Lambda,\Phi,\Gamma)\) of FBSDELDA (1.1).
**Assumption 3.1**.: (i)For any \(x\in\mathbb{R}^{n}\), \(\Phi(x)\) is \(\mathcal{F}_{T}\)-measurable. Furthermore, for any \(\theta,\theta_{-},\theta_{+}\in\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n})\), \(\Gamma(\cdot,\theta,\theta_{-},\theta_{+})\) is \(\mathbb{F}\)-progressively measurable. Moreover, \((\Lambda(0),\Phi(0),\Gamma(\cdot,0,0,0))\in\mathcal{H}[-\delta,T]\); (ii)The mappings \(\Phi\), \(\Gamma\) are uniformly Lipschitz continuous, i.e., for any \(x,\bar{x}\in\mathbb{R}^{n}\), \(\theta,\bar{\theta},\theta_{-},\bar{\theta}_{-},\theta_{+},\bar{\theta}_{+}\in \mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n})\), there exists a constant \(L>0\) such that
\[\begin{cases}|\Phi(x)-\Phi(\bar{x})|\leq L|x-\bar{x}|,\\ |h(t,\theta,\theta_{-},y_{+},z_{+},k_{+})-h(t,\bar{\theta},\bar{\theta}_{-}, \bar{y}_{+},\bar{z}_{+},\bar{k}_{+})|\leq L(|\theta-\bar{\theta}|+|\theta_{-}- \bar{\theta}_{-}|+|y_{+}-\bar{y}_{+}|+|z_{+}-\bar{z}_{+}|+\|k_{+}-\bar{k}_{+} \|_{I^{\left(\mathbb{R}^{n}\right)}}),\\ |f(t,\theta,x_{-},\theta_{+})-f(t,\bar{\theta},\bar{x}_{-},\bar{\theta}_{+})| \leq L(|\theta-\bar{\theta}|+|x_{-}-\bar{x}_{-}|+|\theta_{+}-\bar{\theta}_{+}|), \end{cases}\]
where \(h=b,\sigma,g^{(i)}\).
In addition to the Assumption 3.1 above, we continue to introduce the following domination-monotonicity conditions on the coefficients \((\Lambda,\Phi,\Gamma)\) for later study.
**Assumption 3.2**.: There exist two constants \(\mu\geq 0\), \(v\geq 0\), a matrix-valued random variable \(G\in L^{\infty}_{\mathcal{F}_{x}}(\Omega;\mathbb{R}^{\hat{m}\times n})\), and a series of matrix-valued processes \(A(\cdot),\bar{A}(\cdot),B(\cdot),\bar{B}(\cdot)\in L^{\infty}_{\mathbb{F}}(0,T; \mathbb{R}^{m\times n})\), \(C(\cdot)=(C_{1}(\cdot)),\cdots,C_{d}(\cdot)),\bar{C}(\cdot)=(\bar{C}_{1}(\cdot )),\cdots,\bar{C}_{d}(\cdot))\in L^{\infty}_{\mathbb{F}}(0,T;\mathbb{R}^{d \times mn})\), \(D(\cdot)=(D_{j}(\cdot))_{j=1}^{\infty},\bar{D}=(\bar{D}_{j}(\cdot))_{j=1}^{ \infty}\in L^{\infty}_{\mathbb{F}}(0,T;l^{2}(\mathbb{R}^{m\times n}))\) (where \(\bar{m},m\in\mathbb{N}\) are given and \(\bar{A}(t)=\bar{B}(t)=\bar{C}_{i}(t)=\bar{D}_{j}(t)=0(i=1,2,\cdots,d;j=1,2, \cdots,)\) when \(t\in[0,\delta]\)) such that we have the following conditions:
(i) One of the following two cases holds true. Case 1: \(\mu>0\) and \(v=0\).
(ii) (domination condition) For almost all \((t,\omega)\in[0,T]\times\Omega\), and any \(x,\bar{x},x_{-},\bar{x}_{-},x_{+},\bar{x}_{+},y,\bar{y},y_{-},\bar{y}_{-},y_{ +},\bar{y}_{+}\in\mathbb{R}^{n}\), \(z,\bar{z},z_{-},\bar{z}_{-},z_{+},\bar{z}_{+}\in\mathbb{R}^{n\times d}\), \(k,\bar{k},k_{-},\bar{k}_{-},k_{+},\bar{k}_{+}\in l^{2}(\mathbb{R}^{n})\) (the argument t is suppressed),
\[\begin{cases}|\Phi(x)-\Phi(\bar{x})|\leq\frac{1}{v}|G\widehat{x}|,\\ \int_{0}^{T}\Big{(}|f(x,y,z,k,x_{-},x_{+},y_{+},z_{+},k_{+})-f(\bar{x},y,z,k, \bar{x}_{-},\bar{x}_{+},y_{+},z_{+},k_{+})|\Big{)}dt\\ \leq\int_{0}^{T}\Big{(}\frac{1}{v}\big{|}A\widehat{x}+\mathbb{E}^{ \mathcal{F}_{t}}[\bar{A}_{+}\widehat{x}_{+}]\big{|}\Big{)}dt,\\ \int_{0}^{T}\Big{(}|h(x,y,z,k,x_{-},y_{-},z_{-},k_{-},y_{+},z_{+},k_{+})-h(x, \bar{y},\bar{z},\bar{k},x_{-},\bar{y}_{-},\bar{z}_{-},\bar{k}_{-},\bar{y}_{+},\bar{z}_{+},\bar{k}_{+})|\Big{)}dt\\ \leq\int_{0}^{T}\Big{(}\frac{1}{\mu}\Big{|}B\widehat{y}+C\widehat{z}+\sum_{j =1}^{\infty}D_{j}\widehat{k}^{(j)}+\mathbb{E}^{\mathcal{F}_{t}}\Big{[}\bar{B} _{+}\widehat{y}_{+}+\bar{C}_{+}\widehat{z}_{+}+\sum_{j=1}^{\infty}\bar{D}_{j +}\widehat{k}_{+}^{j}\Big{]}\Big{|}\Big{)}dt,\end{cases} \tag{3.1}\]
where \(h=b,\sigma,g^{(i)}\), and \(\widehat{x}=x-\bar{x}\), \(\widehat{y}=y-\bar{y}\), \(\widehat{z}=z-\bar{z}\), \(\widehat{k}=k-\bar{k}\) (\(i=1,2,3,\cdots\)), etc. \(\bar{A}_{+}(\cdot)=\bar{A}(\cdot+\delta)\), \(\bar{B}_{+}(\cdot)=\bar{B}(\cdot+\delta)\), etc.
It should be noticed that there is a little abuse of notations in the conditions above, when \(\mu=0\) (resp. \(v=0\)), \(1/\mu\) (resp. \(1/v\)) means \(+\infty\). In other words, if \(\mu=0\) or \(v=0\), the corresponding domination constraints will vanish.
(iii) (monotonicity condition) For almost all \((t,\omega)\in[0,T]\times\Omega\) and any \(\theta,\bar{\theta},\theta_{-},\bar{\theta}_{-},\theta_{+},\bar{\theta}_{+}\in \mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n})\) (the argument t is suppressed),
\[\begin{cases}\langle\Phi(x)-\Phi(\bar{x}),\widehat{x}\rangle\geq v|G\widehat{ x}|^{2},\\ \int_{0}^{T}\Big{(}\Big{\langle}\Gamma(\theta,\theta_{-},\theta_{+})- \Gamma(\bar{\theta},\bar{\theta}_{-},\bar{\theta}_{+}),\widehat{\theta}\Big{)} \,\Big{)}dt\\ \leq\int_{0}^{T}\bigg{(}-v\big{|}A\widehat{x}+\mathbb{E}^{ \mathcal{F}_{t}}[\bar{A}_{+}\widehat{x}_{+}]\big{|}^{2}-\mu\Big{|}B\widehat{y} +C\widehat{z}+\sum_{j=1}^{\infty}D_{j}\widehat{k}^{(j)}+\mathbb{E}^{\mathcal{F }_{t}}\Big{[}\bar{B}_{+}\widehat{y}_{+}+\bar{C}_{+}\widehat{z}_{+}+\sum_{j=1}^ {\infty}\bar{D}_{j+}\widehat{k}_{+}^{(j)}\Big{]}\Big{|}^{2}\bigg{)}dt,\end{cases} \tag{3.2}\]
where \(\widehat{\theta}=\theta-\theta^{\prime}\) and \(\Gamma(t,\theta,\theta_{-},\theta_{+})\) is given by (1.2).
**Remark 3.1**.: (i) In Assumption 3.2-(ii), the constant \(1/\mu\) and \(1/v\) can be replaced by \(K/\mu\) and \(K/v\quad(K>0)\). However, for simplicity, we prefer to omit the constant \(K\);
(ii) There exists a symmetrical version of Assumption 3.2-(iii) as follows:
For almost all \((t,\omega)\in[0,T]\times\Omega\) and any \(\theta,\bar{\theta},\theta_{-},\bar{\theta}_{-},\theta_{+},\bar{\theta}_{+}\in \mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n})\) (the argument t is suppressed),
\[\begin{cases}\langle\Phi(x)-\Phi(\bar{x}),\widehat{x}\rangle\leq-v|G\widehat{ x}|^{2},\\ \int_{0}^{T}\Big{(}\Big{\langle}\Gamma(\theta,\theta_{-},\theta_{+})- \Gamma(\bar{\theta},\bar{\theta}_{-},\bar{\theta}_{+}),\widehat{\theta}\Big{)} \,\Big{)}dt\\ \geq\int_{0}^{T}\bigg{(}v\big{|}A\widehat{x}+\mathbb{E}^{ \mathcal{F}_{t}}[\bar{A}_{+}\widehat{x}_{+}]\big{|}^{2}+\Big{|}B\widehat{y}+C \widehat{z}+\sum_{j=1}^{\infty}D_{j}\widehat{k}^{(j)}+\mathbb{E}^{\mathcal{F}_{ t}}\Big{[}\bar{B}_{+}\widehat{y}_{+}+\bar{C}_{+}\widehat{z}_{+}+\sum_{j=1}^{\infty}\bar{D}_{j+} \widehat{k}_{+}^{(j)}\Big{]}\Big{|}^{2}\bigg{)}dt.\end{cases} \tag{3.3}\]
It is easy to verify the symmetry between the two and we omit the detailed proofs. For similar proofs, we can refer to Yu [40].
(iii) In the framework of Brownian motion only in the diffusion term, Li, Wang and Wu [14] have studied a kind of anticipated fully coupled forwardbackward stochastic differential delayed equations, where they introduced the following monotonicity condition:
\[\begin{split}&\int_{0}^{T}\left\langle A(t,x_{t-2\delta},\lambda_{t}, \lambda_{t-\delta},\lambda_{t+\delta},x_{t+2\delta},y_{t+2\delta})-A(t,\bar{x }_{t-2\delta},\bar{\lambda}_{t},\bar{\lambda}_{t-\delta},\bar{\lambda}_{t+ \delta},\bar{x}_{t+2\delta},\bar{y}_{t+2\delta}),\lambda_{t}-\bar{\lambda}_{t} \right\rangle dt\\ &\leq\int_{0}^{T}\bigg{(}-\mu\bigg{|}B\widehat{y}+D\widehat{z}+ \mathbb{E}^{\mathcal{F}_{t}}\Big{[}\bar{B}_{+}\widehat{y}_{+}\Big{]}\bigg{|}^ {2}\bigg{)}dt.\end{split} \tag{3.4}\]
Compared with this monotonicity condition, our monotonicity condition (3.2) contains the terms \(\big{|}A\widehat{x}+\mathbb{E}^{\mathcal{F}_{t}}[\bar{A}_{+}\widehat{x}_{+}] \big{|}^{2}\), and \(\mathbb{E}^{\mathcal{F}_{t}}[\bar{D}_{+}\widehat{z}_{+}]\) which will be founded that these conditions are necessary in our application in section 4.
For the convenience of later use, we would like to give some notations as follows (the argument \(t\) is suppressed):
\[\begin{cases}P(x)=Ax+\mathbb{E}^{\mathcal{F}_{t}}[\bar{A}_{+}x_{+}],\\ P_{-}(x)=A_{-}x_{-}+\mathbb{E}^{\mathcal{G}_{t}}[\bar{A}x],\\ P(\widehat{x})=A\widehat{x}+\mathbb{E}^{\mathcal{F}_{t}}[\bar{A}_{+}\widehat{ x}_{+}],\\ P_{-}(\widehat{z})=A_{-}\widehat{x}_{-}+\mathbb{E}^{\mathcal{G}_{t}}[\bar{A} \bar{x}],\\ Q(y,z,k)=By+Cz+\sum_{j=1}^{\infty}D_{j}k^{(j)}+\mathbb{E}^{\mathcal{F}_{t}} \Big{[}\bar{B}_{+}y_{+}+\bar{C}_{+}z_{+}+\sum_{j=1}^{\infty}\bar{D}_{j+}k_{+}^ {j}\Big{]},\\ Q_{-}(y,z,k)=B_{-}y_{-}+C_{-}z_{-}+\sum_{j=1}^{\infty}D_{j-}k_{-}^{(j)}+ \mathbb{E}^{\mathcal{G}_{t}}\Big{[}\bar{B}y+\bar{C}z+\sum_{j=1}^{\infty}\bar{ D}_{j}k^{j}\Big{]},\\ Q(\widehat{y},\widehat{z},\widehat{k})=B\widehat{y}+C\widehat{z}+\sum_{j=1}^ {\infty}D_{j}\widehat{k}^{(j)}+\mathbb{E}^{\mathcal{F}_{t}}\Big{[}\bar{B}_{ +}\widehat{y}_{+}+\bar{C}_{+}\widehat{z}_{+}+\sum_{j=1}^{\infty}\bar{D}_{j+} \widehat{k}_{+}^{j}\Big{]},\\ Q_{-}(\widehat{y},\widehat{z},\widehat{k})=B_{-}\widehat{y}_{-}+C_{-} \widehat{z}_{-}+\sum_{j=1}^{\infty}D_{j-}\widehat{k}_{-}^{(j)}+\mathbb{E}^{ \mathcal{G}_{t}}\Big{[}\bar{B}\widehat{y}+\bar{C}\widehat{z}+\sum_{j=1}^{ \infty}\bar{D}_{j}\bar{k}^{j}\Big{]},\end{cases} \tag{3.5}\]
where \(x_{-}(t)=x(t-\delta)\), \(x_{+}(t)=x(t+\delta)\), \(A_{-}(t)=A(t-\delta)\), \(A_{+}(t)=A(t+\delta)\), etc.
Now, we give the main results of this section.
**Theorem 3.1**.: _Let \((\Lambda,\Phi,\Gamma)\) be a set of coefficients satisfying Assumption 3.1 and Assumption 3.2. Then FBSDELDA (1.1) admits a unique solution \(\theta(\cdot)\in N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}( \mathbb{R}^{n}))\). Moreover, we have the following estimate:_
\[\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|x_{t}|^{2}+\sup_{t\in[0,T]}|y_{t}|^{2}+\int _{0}^{T}|z_{t}|^{2}dt+\int_{0}^{T}\|k_{t}\|_{l^{2}(\mathbb{R}^{n})}^{2}dt \bigg{]}\leq K\mathbb{E}[\mathrm{I}], \tag{3.6}\]
_where_
\[\begin{split}\mathrm{I}=&|\Phi(0)|^{2}+\int_{0}^{T}| b(t,0,0,0,0,0)|^{2}dt+\int_{0}^{T}|\sigma(t,0,0,0,0,0)|^{2}dt+\int_{0}^{T}\|g(t,0,0,0,0,0) \|_{l^{2}(\mathbb{R}^{n})}^{2}dt\\ &+\int_{0}^{T}|f(t,0,0,0)|^{2}dt+\sup_{t\in[-\delta,0]}|\lambda_ {t}|^{2}+\sup_{t\in[-\delta,0]}|\mu_{t}|^{2}+\int_{-\delta}^{0}|\rho_{t}|^{2} dt+\int_{-\delta}^{0}\|_{\xi}\|_{l^{2}(\mathbb{R}^{n})}^{2}dt,\end{split} \tag{3.7}\]
_and \(K\) is a positive constant depending only on \(T\), the Lipschitz constants, \(\mu\), \(v\) and the bound of all \(G\), \(A(\cdot)\), \(\bar{A}(\cdot)\), \(B(\cdot)\), \(\bar{B}(\cdot)\), \(C(\cdot)\), \(\bar{C}(\cdot)\), \(D_{j}(\cdot)\), \(\bar{D}_{j}(\cdot)\) (\(j=1,2,\cdots\)), \(S(\cdot)\), \(\bar{S}(\cdot)\). Furthermore, let \((\bar{\Lambda},\bar{\Phi},\bar{\Gamma})\) be another set of coefficients, and \(\bar{\theta}(\cdot)\in N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}( \mathbb{R}^{n}))\) be a solution to FBSDELDA (1.1) with the coefficients \((\bar{\Lambda},\bar{\Phi},\bar{\Gamma})\). Then we continue to assume that \(\big{(}\Lambda(\cdot),\bar{\Phi}(\bar{x}_{T}),\bar{\Gamma}(\cdot,\theta(\cdot),\theta_{-}(\cdot),\theta_{+}(\cdot)\big{)}\in\mathcal{H}[-\delta,T]\). Then the following estimate holds:_
\[\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{x}_{t}|^{2}+\sup_{t\in[0,T]}| \widehat{y}_{t}|^{2}+\int_{0}^{T}|\widehat{z}_{t}|^{2}dt+\int_{0}^{T}\| \widehat{k}_{t}\|_{l^{2}(\mathbb{R}^{n})}^{2}dt\bigg{]}\leq K\mathbb{E}[ \widehat{\mathrm{I}}], \tag{3.8}\]
_where we denote \(\widehat{x}=x-\bar{x}\), \(\widehat{y}=y-\bar{y}\), \(\widehat{z}=z-\bar{z}\), \(\widehat{k}=k-\bar{k}\)\((i=1,2,3,\cdots)\), etc, and_
\[\widehat{\mathrm{I}}= |\Phi(\bar{x}_{T})-\bar{\Phi}(\bar{x}_{T})|^{2}+\int_{0}^{T}|b(t, \bar{\theta}(t),\bar{\theta}_{-}(t),\bar{y}_{+}(t),\bar{z}_{+}(t),\bar{k}_{+}( t))-\bar{b}(t,\bar{\theta}(t),\bar{\theta}_{-}(t),\bar{y}_{+}(t),\bar{z}_{+}(t), \bar{k}_{+}(t))|^{2}dt\] \[+\int_{0}^{T}|\sigma(t,\bar{\theta}(t),\bar{\theta}_{-}(t),\bar{ y}_{+}(t),\bar{z}_{+}(t),\bar{k}_{+}(t))-\bar{\sigma}(t,\bar{\theta}(t),\bar{ \theta}_{-}(t),\bar{y}_{+}(t),\bar{z}_{+}(t),\bar{k}_{+}(t))|^{2}dt\] \[+\int_{0}^{T}\|g(t,\bar{\theta}(t),\bar{\theta}_{-}(t),\bar{y}_{ +}(t),\bar{z}_{+}(t),\bar{k}_{+}(t))-\bar{g}(t,\bar{\theta}(t),\bar{\theta}_{- }(t),\bar{y}_{+}(t),\bar{z}_{+}(t),\bar{k}_{+}(t))\|_{I^{2}(\mathbb{R}^{n})}^{2} \tag{3.9}\] \[+\int_{0}^{T}|f(t,\bar{\theta}(t),\bar{x}_{-}(t),\bar{\theta}_{+} (t))-\bar{f}(t,\bar{\theta}(t),\bar{x}_{-}(t),\bar{\theta}_{+}(t))|^{2}dt\] \[+\sup_{t\in[-\delta,0]}|\widehat{\lambda}_{t}|^{2}+\sup_{t\in[- \delta,0]}|\widehat{\mu}_{t}|^{2}+\int_{-\delta}^{0}|\widehat{\rho}_{t}|^{2}dt +\int_{-\delta}^{0}\|\widehat{\varsigma}\|_{I^{2}(\mathbb{R}^{n})}^{2}dt,\]
_and \(K\) is the same constant as in (3.6)._
Next, we are devoted to proving Theorem 3.1. Due to the symmetry of monotonicity condition (3.2) and (3.3), we only give the detailed proofs under monotonicity condition (3.2).
For any \(\big{(}\pi(\cdot),\eta,\rho(\cdot)\big{)}\in\mathcal{H}[-\delta,T]\) with \(\pi(\cdot)=\big{(}\xi(\cdot)^{\top},\vartheta(\cdot)^{\top},\tau(\cdot)^{ \top},\chi(\cdot)^{\top}\big{)}^{\top}\) and \(\rho(\cdot)=(\varphi(\cdot)^{\top},\psi(\cdot)^{\top},\gamma(\cdot)^{\top}, \beta(\cdot)^{\top})^{\top}\) where \(\beta(\cdot)=(\beta^{(1)}(\cdot)^{\top},\beta^{(2)}(\cdot)^{\top},\cdots)^{\top}\), we continue to introduce a family of FBSDELDAs parameterized by \(\alpha\in[0,1]\) as follows:
\[\begin{cases}dx^{\alpha}(t)=&\big{[}b^{\alpha}(t,\theta^{\alpha}(t),\theta^{ \alpha}_{-}(t),y^{\alpha}_{+}(t),z^{\alpha}_{+}(t),k^{\alpha}_{+}(t))+\psi(t) \big{]}dt\\ &\quad+\big{[}\sigma^{\alpha}(t,\theta^{\alpha}(t),\theta^{\alpha}_{-}(t),y^ {\alpha}_{+}(t),z^{\alpha}_{+}(t),k^{\alpha}_{+}(t))+\gamma(t)\big{]}dW(t)\\ &\quad+\sum_{i=1}^{\infty}\big{[}g^{(i)\alpha}(t,\theta^{\alpha}(t-),\theta^ {\alpha}_{-}(t-),y^{\alpha}_{+}(t-),z^{\alpha}_{+}(t),k^{\alpha}_{+}(t))+ \beta^{(i)}(t)\big{]}dH^{(i)}(t),\quad t\in[0,T],\\ dy^{\alpha}(t)=&\big{[}f^{\alpha}(t,\theta^{\alpha}(t),x^{\alpha}_{-}(t), \theta^{\alpha}_{+}(t))+\varphi(t)\big{]}dt+z^{\alpha}(t)dW(t)+\sum_{i=1}^{ \infty}k^{(i)\alpha}(t)dH^{(i)}(t),\quad t\in[0,T],\\ x^{\alpha}(t)=&\lambda^{\alpha}(t)+\xi(t),\quad y^{\alpha}(t)=\mu^{\alpha}(t)+ \vartheta(t),\\ z^{\alpha}(t)=&\rho^{\alpha}(t)+\tau(t),\quad k^{\alpha}(t)=\varsigma^{\alpha }(t)+\chi(t),\quad t\in[-\delta,0],\\ y^{\alpha}(T)=&\Phi^{\alpha}(x^{\alpha}(T))+\eta,\\ x^{\alpha}(t)=&y^{\alpha}(t)=z^{\alpha}(t)=k^{\alpha}(t)=0,\quad t\in(T,T+ \delta],\end{cases} \tag{3.10}\]
where \(\theta^{\alpha}(t)=(x^{\alpha}(t)^{\top},y^{\alpha}(t)^{\top},z^{\alpha}(t)^{ \top},k^{\alpha}(t)^{\top})^{\top}\) with \(k^{\alpha}(t):=(k^{(1)\alpha}(t)^{\top},k^{(2)\alpha}(t)^{\top},\cdots)^{\top}\), \(\theta^{\alpha}_{-}(t)=(x^{\alpha}_{-}(t)^{\top},y^{\alpha}_{-}(t)^{\top},z^{ \alpha}_{-}(t)^{\top},k^{\alpha}_{-}(t)^{\top},\)\(k^{\alpha}_{-}(t)^{\top},\)\(\theta^{\alpha}_{+}(t)^{\top},y^{\alpha}_{+}(t)^{\top},z^{\alpha}_{+}(t)^{\top},k^{ \alpha}_{+}(t)^{\top})^{\top}\), and \(\theta^{\alpha}(t-)\), \(\theta^{\alpha}_{-}(t-)\) have the similar definition as that in LBS-DELDA (1.1) which is to say that \(t-\) only works on \(x\), \(x_{-}\) and \(y\), \(y_{-}\). For any \((t,\omega,\theta,\theta_{-},\theta_{+})\in[-\delta,T]\times\Omega\times\big{(} \mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n})\big{)}\times\big{(}\mathbb{R}^{n (2+d)}\times l^{2}(\mathbb{R}^{n})\big{)}\times\big{(}\mathbb{R}^{n(2+d)} \times l^{2}(\mathbb{R}^{n})\big{)},\)
\[\begin{cases}\Phi^{\alpha}(x)=\alpha\Phi(x)+(1-\alpha)vG^{\top}Gx,\\ \lambda^{\alpha}(t)=\lambda(t),\quad\mu^{\alpha}(t)=\mu(t),\quad\rho^{\alpha}(t )=\rho(t),\quad\varsigma^{\alpha}(t)=\varsigma(t),\\ f^{\alpha}(t,\theta,x_{-},\theta_{+})=\alpha f(t,\theta,x_{-},\theta_{+})-(1- \alpha)v\bigg{[}A(t)^{\top}P(t,x)+\bar{A}(t)^{\top}P_{-}(t,x)\bigg{]},\\ b^{\alpha}(t,\theta,\theta_{-},y_{+},z_{+},k_{+})=\alpha b(t,\theta,\theta_{-},y_{+},z_ {+},k_{+})-(1-\alpha)\mu\bigg{[}B(t)^{\top}Q(t,y,z,k)+\bar{B}(t)^{\top}Q_{-}(t,y,z,k)\bigg{]},\\ \sigma^{\alpha}(t,\theta,\theta_{-},y_{+},z_{+},k_{+})=\alpha\sigma(t,\theta, \theta_{-},y_{+},z_{+},k_{+})-(1-\alpha)\mu\bigg{[}C(t)^{\top}Q(t,y,z,k)+\bar{C} (t)^{\top}Q_{-}(t,y,z,k)\bigg{]},\\ g^{(i)\alpha}(t,\theta,\theta_{-},y_{+},z_{+},k_{+})=\alpha g^{(i)}(t,\theta, \theta_{-},y_{+},z_{+},k_{+})-(1-\alpha)\mu\bigg{[}D_{i}(t)^{\top}Q(t,y,z,k)+ \bar{D}_{i}(t)^{\top}Q_{-}(t,y,z,k)\bigg{]},\end{cases} \tag{3.11}\]
where \(P(t,x)\), \(P_{-}(t,x)\), \(Q_{-}(t,y,z,k)\), \(Q_{-}(t,y,z,k)\) are defined by (3.5). Similarly, we continue to denote \(\Gamma^{\alpha}(\cdot):=(f^{\alpha}(\cdot)^{\top},b^{\alpha}(\cdot)^{\top},\sigma^{ \alpha}(\cdot)^{\top},g^{\alpha}(\cdot)^{\top})^{\top}\) with \(g^{\alpha}(\cdot)^{\top}:=(g^{(1)\alpha}(\cdot)^{
Without loss of generality, we assume that the Lipschitz constants of the coefficients \((\Phi,\Gamma)\) are larger than
\[\max\left\{\mu,v\right\}\Bigg{(}|G|+\|A(\cdot)\|_{L^{\infty}_{F}(0,T; \mathbb{R}^{m\times n})}+\|\bar{A}(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{m \times n})}+\|B(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{m\times n})}+\|\bar{B} (\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{m\times n})}\] \[\qquad\qquad+\|C(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{d\times mn })}+\|\bar{C}(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{d\times mn})}+\sum_{j=1} ^{\infty}\|D_{j}(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{m\times n})}+\sum_{j =1}^{\infty}\|\bar{D}_{j}(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{m\times n })}\Bigg{)}^{2},\]
and the constant \(\mu\) and \(v\) in Assumption 3.2-(i) satisfy the following condition:
\[(\frac{1}{\mu})^{2},(\frac{1}{v})^{2}\geq\max\Bigl{\{} |G|,\|A(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{m\times n})},\|\bar{A}(\cdot )\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{m\times n})},\|B(\cdot)\|_{L^{\infty}_{F} (0,T;\mathbb{R}^{m\times n})},\|\bar{B}(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R }^{m\times n})},\] \[\qquad\qquad\qquad\qquad\|C(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R }^{d\times mn})},\|\bar{C}(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{d\times mn })},\|D_{j}(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{m\times n})},\|\bar{D}_{j }(\cdot)\|_{L^{\infty}_{F}(0,T;\mathbb{R}^{m\times n})}\Bigr{\}}.\]
Then we can easily verify that for any \(\alpha\in[0,1]\), the new coefficients \((\Lambda^{\alpha},\Phi^{\alpha},\Gamma^{\alpha})\) also satisfy Assumption 3.1 and Assumption 3.2 with the same Lipschitz constants, \(\mu\), \(v\), \(G\), \(A(\cdot)\), \(\bar{A}(\cdot)\), \(B(\cdot)\), \(\bar{B}(\cdot)\), \(C(\cdot)\), \(\bar{C}(\cdot)\), \(D_{j}(\cdot)\)\(\bar{D}_{j}(\cdot)\)\((j=1,2,\cdots)\) as the original coefficients \((\Lambda,\Phi,\Gamma)\).
Obviously, when \(\alpha=0\), FBSDELDA (3.10) can be rewritten in the following form:
\[\left\{\begin{aligned} dx^{0}(t)=&\bigg{\{}- \mu\Bigl{[}B(t)^{\top}Q(t,y_{t}^{0},z_{t}^{0},k_{t}^{0})+\bar{B}(t)^{\top}Q_{ -}(t,y_{t}^{0},z_{t}^{0},k_{t}^{0})\Bigr{]}+\psi(t)\bigg{\}}dt\\ &+\bigg{\{}-\mu\Bigl{[}C(t)^{\top}Q(t,y_{t}^{0},z_{t}^{0},k_{t}^ {0})+\bar{C}(t)^{\top}Q_{-}(t,y_{t}^{0},z_{t}^{0},k_{t}^{0})\Bigr{]}+\gamma(t) \bigg{\}}dW(t)\\ &+\sum_{i=1}^{\infty}\bigg{\{}-\mu\Bigl{[}D_{i}(t)^{\top}Q(t,y_{ t-}^{0},z_{t}^{0},k_{t}^{0})+\bar{D}_{i}(t)^{\top}Q_{-}(t,y_{t-}^{0},z_{t}^{0},k_{t} ^{0})\Bigr{]}+\beta^{(i)}(t)\bigg{\}}dH^{(i)}(t),\\ dy^{0}(t)=&\bigg{(}-v\Bigl{[}A(t)^{\top}P(t,x_{t}^{0} )+\bar{A}(t)^{\top}P_{-}(t,x_{t}^{0})\Bigr{]}+\varphi(t)\bigg{)}dt\\ &+z^{0}(t)dW(t)+\sum_{i=1}^{\infty}k^{(i)0}(t)dH^{(i)}(t),\quad t \in[0,T],\\ x^{0}(t)=&\lambda^{0}(t)+\xi(t),\quad y^{0}(t)=\mu^{ 0}(t)+\vartheta(t),\quad z^{0}(t)=\rho^{0}(t)+\tau(t),\quad k^{0}(t)=\varsigma ^{0}(t)+\chi(t),\quad t\in[-\delta,0],\\ y^{0}(T)=& vG^{\top}Gx^{0}(T)+\eta,\quad x^{0}(t)=y^ {0}(t)=z^{0}(t)=k^{0}(t)=0,\quad t\in(T,T+\delta].\end{aligned}\right. \tag{3.12}\]
In fact, we can easily find that FBSDELDA (3.12) is in a decoupled form. In detail, when Assumption 3.2-(i)-Case 1 holds (i.e., \(\mu>0\) and \(v=0\)), we can firstly solve \((y^{0}(\cdot),z^{0}(\cdot),k^{0}(\cdot))\) from the backward equation, then substitute \((y^{0}(\cdot),z^{0}(\cdot),k^{0}(\cdot))\) into the forward equation and solve \(x^{0}(\cdot)\). Similarly, when Assumption 3.2-(i)-Case 2 holds (i.e., \(\mu=0\) and \(v>0\)), we can firstly solve the forward equation and then the backward equation. In short, under Assumption 3.1 and Assumption 3.2, FBSDELDA (3.12) admits a unique solution \(\theta^{0}(\cdot)\in N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}( \mathbb{R}^{n}))\).
It is clear that when \(\alpha=1\) and \((\pi(\cdot),\eta,\rho(\cdot))\) vanish, FBSDELDA (3.10) and FBSDELDA (1.1) are identical. Next, we will illustrate that if for some \(\alpha_{0}\in[0,1)\), FBSDELDA (3.10) is uniquely solvable for any \(\left(\pi(\cdot),\eta,\rho(\cdot)\right)\in\mathcal{H}[-\delta,T]\), then there exists a fixed step length \(\delta_{0}>0\) such that the same conclusion still holds for any \(\alpha\in[\alpha_{0},\alpha_{0}+\delta_{0}]\). As long as this has been proved to be true, we can gradually increase the parameter \(\alpha\) until \(\alpha=1\). This method is called the method of continuation which is introduced initially by Hu and Peng [11].
For this goal, we shall first establish a priori estimate for the solution of FBSDELDA (3.10) which plays an important role in the subsequent proofs.
**Lemma 3.2**.: _Let the given coefficients \((\Lambda,\Phi,\Gamma)\) satisfy Assumption 3.1 and Assumption 3.2. Let \(\alpha\in[0,1]\), \(\left(\pi(\cdot),\eta,\rho(\cdot)\right)\), \(\left(\bar{\pi}(\cdot),\bar{\eta},\bar{\rho}(\cdot)\right)\in\mathcal{H}[-\delta,T]\). Assume that \(\theta(\cdot)\in N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}( \mathbb{R}^{n}))\) is the solution of FBSDELDA (3.10) with the coefficients \((\Lambda^{\alpha}+\pi,\Phi^{\alpha}+\eta,\Gamma^{\alpha}+\rho)\) and \(\bar{\theta}(\cdot)\in N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}( \mathbb{R}^{n}))\) is also the solution to FBSDELDA (3.10) with the coefficients \((\Lambda^{\alpha}+\bar{\pi},\Phi^{\alpha}+\bar{\eta},\Gamma^{\alpha}+\bar{\rho})\). Then the following estimate holds:_
\[\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{x}_{t}|^{2}+\sup_{t\in[0,T]}| \widehat{y}_{t}|^{2}+\int_{0}^{T}|\widehat{z}_{t}|^{2}dt+\int_{0}^{T}|\widehat{k}_{t} |_{l^{2}(\mathbb{R}^{n})}^{2}dt\bigg{]}\leq K\mathbb{E}[\widehat{\mathfrak{J}}], \tag{3.13}\]
_where we denote_
\[\widehat{\mathfrak{J}}= |\widehat{\eta}|^{2}+\int_{0}^{T}|\widehat{\varphi}_{t}|^{2}dt+ \int_{0}^{T}|\widehat{\psi}_{t}|^{2}dt+\int_{0}^{T}|\widehat{\gamma}_{t}|^{2} dt+\int_{0}^{T}\|\widehat{\beta}_{t}\|_{I^{2}(\mathbb{R}^{n})}^{2}dt \tag{3.14}\] \[+\sup_{t\in[-\delta,0]}|\widehat{\xi}_{t}|^{2}+\sup_{t\in[-\delta,0]}|\widehat{\vartheta}_{t}|^{2}+\int_{-\delta}^{0}|\widehat{\tau}_{t}|^{2} dt+\int_{-\delta}^{0}\|\widehat{\chi}_{t}\|_{I^{2}(\mathbb{R}^{n})}^{2}dt,\]
_and \(\widehat{\xi}=\xi-\bar{\xi}\), \(\widehat{\varphi}=\varphi-\bar{\varphi}\), etc. Here \(K\) is a positive constant that satisfies the same conditions as the constant \(K\) in Theorem 3.1._
Proof.: In the following proofs, the argument \(t\) is suppressed for simplicity. Besides, it should be noted that the positive constant \(K\) could be changed line to line.
By the estimate (2.3) in Lemma 2.1, we have
\[\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{x}|^{2}\bigg{]} \leq K\mathbb{E}\Bigg{\{}\int_{0}^{T}\bigg{|}\alpha\big{(}b(\bar{x},y,z,k,\bar{x}_{-},y_{-},z_{-},k_{-},y_{+},z_{+},k_{+})-b(\bar{\theta},\bar{ \theta}_{-},\bar{y}_{+},\bar{z}_{+},\bar{k}_{+})\big{)} \tag{3.15}\] \[\qquad-(1-\alpha)\mu\Big{[}B^{\top}Q(\widehat{y},\widehat{z}, \widehat{k})+\bar{B}^{\top}Q_{-}(\widehat{y},\widehat{z},\widehat{k})\Big{]}+ \widehat{\psi}\bigg{|}^{2}dt\] \[+\int_{0}^{T}\Bigg{|}\alpha\big{(}\sigma(\bar{x},y,z,k,\bar{x}_{ -},y_{-},z_{-},k_{-},y_{+},z_{+},k_{+})-\sigma(\bar{\theta},\bar{\theta}_{-}, \bar{y}_{+},\bar{z}_{+},\bar{k}_{+})\big{)}\] \[\qquad-(1-\alpha)\mu\Big{[}C^{\top}Q(\widehat{y},\widehat{z}, \widehat{k})+\bar{C}^{\top}Q_{-}(\widehat{y},\widehat{z},\widehat{k})\Big{]}+ \widehat{\gamma}\Bigg{|}^{2}dt\] \[+\int_{0}^{T}\Bigg{\|}\alpha\big{(}g(\bar{x},y,z,k,\bar{x}_{-},y_ {-},z_{-},k_{-},y_{+},z_{+},k_{+})-g(\bar{\theta},\bar{\theta}_{-},\bar{y}_{ +},\bar{z}_{+},\bar{k}_{+})\big{)}\] \[\qquad-(1-\alpha)\mu\Big{[}D_{i}^{\top}Q(\widehat{y},\widehat{z},\widehat{k})+\bar{D}_{i}^{\top}Q_{-}(\widehat{y},\widehat{z},\widehat{k}) \Big{]}+\widehat{\beta}\Bigg{\|}_{I^{2}(\mathbb{R}^{n})}^{2}dt+\sup_{t\in[- \delta,0]}|\widehat{\xi}|^{2}\Bigg{\}}.\]
Similarly, by applying the estimate (2.15) in Lemma 2.2, we can derive
\[\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{y}|^{2}+\int_{0}^{T} |\widehat{z}|^{2}dt+\int_{0}^{T}\|\widehat{k}\|_{I^{2}(\mathbb{R}^{n})}^{2}dt\bigg{]} \tag{3.16}\] \[\leq K\mathbb{E}\Bigg{\{}\Big{|}\alpha\big{(}\Phi(x_{T})-\Phi( \bar{x}_{T})\big{)}+(1-\alpha)vG^{\top}G\dot{x}_{T}+\widehat{\eta}\Big{|}^{2}\] \[\qquad-(1-\alpha)v\Big{[}A^{\top}P(\widehat{x})+\bar{A}^{\top}P_ {-}(\widehat{x})\Big{]}+\widehat{\varphi}\Big{|}^{2}dt\Bigg{\}}.\]
Furthermore, applying Ito's formula to \(\langle\widehat{x}(\cdot),\widehat{y}(\cdot)\rangle\) yields
\[\mathbb{E}\Bigg{\{}(1-\alpha)v|G\widehat{x}_{T}|^{2}+\alpha\, \langle\Phi(x_{T})-\Phi(\bar{x}_{T}),\widehat{x}_{T}\rangle+(1-\alpha)\mu \int_{0}^{T}\big{|}Q(\widehat{y},\widehat{z},\widehat{k})\big{|}^{2}dt \tag{3.17}\] \[\qquad+(1-\alpha)v\int_{0}^{T}\big{|}P(\widehat{x})\big{|}^{2}dt- \alpha\int_{0}^{T}\Big{\langle}\Gamma(\theta,\theta_{-},\theta_{+})-\Gamma(t, \bar{\theta},\bar{\theta}_{-},\bar{\theta}_{+}),\widehat{\theta}\Big{\rangle} \,dt\Bigg{\}}\] \[= \,\mathbb{E}\Bigg{\{}-\langle\widehat{\eta},\widehat{x}_{T} \rangle+\Big{\langle}\widehat{\xi}_{0},\widehat{\vartheta}_{0}\Big{\rangle}+ \int_{0}^{T}\Big{[}\,\langle\widehat{\varphi},\widehat{x}\rangle+\Big{\langle} \widehat{\psi},\widehat{y}\Big{\rangle}+\langle\widehat{\gamma},\widehat{z} \rangle+\Big{\langle}\widehat{\beta},\widehat{k}\Big{\rangle}\,\Big{]}dt \Bigg{\}},\]
where the assumption that \(\bar{A}=\bar{B}=\bar{C}=\bar{D}_{j}=0\), when \(t\in[0,\delta]\) has been used. Therefore, combining with the monotonicity conditions in Assumption 3.2-(iii), (3.17) is reduced to
\[\begin{split}&\mathbb{E}\Bigg{\{}v|G\widehat{x}_{T}|^{2}+v\int_{0}^{T }\big{|}P(\widehat{x})\big{|}^{2}dt+\mu\int_{0}^{T}\big{|}Q(\widehat{y},\widehat {z},\widehat{k})\big{|}^{2}dt\Bigg{\}}\\ &\leq\mathbb{E}\Bigg{\{}-\langle\widehat{\eta},\widehat{x}_{T} \rangle+\Big{\langle}\widehat{\xi}_{0},\widehat{\vartheta}_{0}\Big{\rangle}+ \int_{0}^{T}\Big{[}\left\langle\widehat{\varphi},\widehat{x}\right\rangle+ \Big{\langle}\widehat{\psi},\widehat{y}\Big{\rangle}+\langle\widehat{\gamma},\widehat{z}\rangle+\Big{\langle}\widehat{\beta},\widehat{k}\Big{\rangle} \,\Big{]}dt\Bigg{\}},\end{split} \tag{3.18}\]
The following proofs will be divided into two cases according to Assumption 3.2-(i).
**Case 1**: \(\mu>0\) and \(v=0\). By applying the domination conditions (3.1) in Assumption 3.2-(ii) to the estimate (3.15), we get
\[\begin{split}\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{x}|^{2 }\bigg{]}&\leq K\mathbb{E}\Bigg{\{}\int_{0}^{T}|\widehat{\psi}|^{ 2}dt+\int_{0}^{T}|\widehat{\gamma}|^{2}dt+\int_{0}^{T}\|\widehat{\beta}\|_{I^ {2}(\mathbb{R}^{n})}^{2}dt+\sup_{t\in[-\delta,0]}|\widehat{\xi}|^{2}\\ &\quad+\int_{0}^{T}\big{|}Q(\widehat{y},\widehat{z},\widehat{k}) \big{|}^{2}dt+\int_{0}^{T}\big{|}Q_{-}(\widehat{y},\widehat{z},\widehat{k}) \big{|}^{2}dt\Bigg{\}}.\end{split} \tag{3.19}\]
By the time-shifting transformation and the property that \(\bar{B}=\bar{C}=\bar{D}_{j}=0\), when \(t\in[0,\delta]\), we derive that
\[\begin{split}&\int_{0}^{T}\big{|}Q_{-}(\widehat{y},\widehat{z}, \widehat{k})\big{|}^{2}dt\\ &=\int_{0}^{\delta}\big{|}Q_{-}(\widehat{y},\widehat{z}, \widehat{k})\big{|}^{2}dt+\int_{\delta}^{T}\big{|}Q_{-}(\widehat{y},\widehat{ z},\widehat{k})\big{|}^{2}dt\\ &=\int_{-\delta}^{0}\big{|}B\widehat{y}+C\widehat{z}+\sum_{j=1}^ {\infty}D_{j}\widehat{k}^{(j)}\big{|}^{2}dt+\int_{0}^{T-\delta}\big{|}Q( \widehat{y},\widehat{z},\widehat{k})\big{|}^{2}dt\\ &\leq K\bigg{\{}\sup_{t\in[-\delta,0]}|\widehat{\vartheta}|^{2} +\int_{-\delta}^{0}|\widehat{\tau}|^{2}dt+\int_{-\delta}^{0}\|\widehat{\chi} \|_{\mathbb{I}^{2}(\mathbb{R}^{n})}^{2}dt+\int_{0}^{T}\big{|}Q(\widehat{y}, \widehat{z},\widehat{k})\big{|}^{2}dt\bigg{\}}.\end{split} \tag{3.20}\]
Then by substituting (3.20) into (3.19), the estimate (3.19) can be rewritten as follows:
\[\begin{split}\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{x}|^{2 }\bigg{]}&\leq K\mathbb{E}\Bigg{\{}\int_{0}^{T}|\widehat{\psi}|^{ 2}dt+\int_{0}^{T}|\widehat{\gamma}|^{2}dt+\int_{0}^{T}\|\widehat{\beta}\|_{ \mathbb{I}^{2}(\mathbb{R}^{n})}^{2}dt+\int_{0}^{T}\big{|}Q(\widehat{y},\widehat {z},\widehat{k})\big{|}^{2}dt\\ &\quad+\sup_{t\in[-\delta,0]}|\widehat{\xi}|^{2}+\sup_{t\in[- \delta,0]}|\widehat{\vartheta}|^{2}+\int_{-\delta}^{0}|\widehat{\tau}|^{2}dt+ \int_{-\delta}^{0}\|\widehat{\chi}\|_{I^{2}(\mathbb{R}^{n})}^{2}dt\Bigg{\}}. \end{split} \tag{3.21}\]
Applying the Lipschitz condition and the time-shifting transformation to the estimate (3.16) leads to
\[\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{y}|^{2}+\int_{0}^{T}|\widehat{z}|^ {2}dt+\int_{0}^{T}\|\widehat{k}\|_{I^{2}(\mathbb{R}^{n})}^{2}dt\bigg{]}\leq K \mathbb{E}\bigg{\{}|\widehat{\eta}|^{2}+\int_{0}^{T}|\widehat{\varphi}|^{2}dt+ \sup_{t\in[0,T]}|\widehat{x}|^{2}+\sup_{t\in[-\delta,0]}|\widehat{\xi}|^{2} \bigg{\}}. \tag{3.22}\]
Hence, combining (3.21) and (3.22) yields
\[\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{x}|^{2}+\sup_{t\in[0,T]}|\widehat{ y}|^{2}+\int_{0}^{T}|\widehat{z}|^{2}dt+\int_{0}^{T}\|\widehat{k}\|_{I^{2}( \mathbb{R}^{n})}^{2}dt\bigg{]}\leq K\mathbb{E}\Bigg{\{}\widehat{\mathcal{I}} +\int_{0}^{T}\big{|}Q(\widehat{y},\widehat{z},\widehat{k})\big{|}^{2}dt \Bigg{\}}, \tag{3.23}\]
where \(\widehat{\mathcal{I}}\) is defined by (3.14). Finally, we continue to combine (3.18) and (3.23) and use the inequality \(ab\leq\frac{1}{4\varepsilon}a^{2}+\epsilon b^{2}\)
to get
\[\begin{split}&\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{x}|^{2}+ \sup_{t\in[0,T]}|\widehat{y}|^{2}+\int_{0}^{T}|\widehat{z}|^{2}dt+\int_{0}^{T} |\widehat{k}||_{I^{2}(\mathbb{R}^{n})}^{2}dt\bigg{]}\\ &\leq K\mathbb{E}\Bigg{\{}\widehat{\mathfrak{J}}-\langle\widehat{ \eta},\widehat{x}_{T}\rangle+\left\langle\widehat{\xi}_{0},\widehat{\vartheta} _{0}\right\rangle+\int_{0}^{T}\Big{[}\left\langle\widehat{\varphi},\widehat{x} \rangle+\left\langle\widehat{\psi},\widehat{y}\right\rangle+\langle\widehat{ \gamma},\widehat{z}\rangle+\left\langle\widehat{\beta},\widehat{k}\right\rangle \Big{]}dt\Bigg{\}}\\ &\leq K\mathbb{E}\Bigg{\{}\widehat{\mathfrak{J}}+|\widehat{\eta}| \Big{(}\sup_{t\in[0,T]}|\widehat{x}|\Big{)}+\Big{(}\sup_{t\in[-\delta,0]}| \widehat{\xi}|\Big{)}\Big{(}\sup_{t\in[-\delta,0]}|\widehat{\vartheta}| \Big{)}+\bigg{(}\int_{0}^{T}|\widehat{\varphi}|dt\bigg{)}\Big{(}\sup_{t\in[0,T]}|\widehat{x}|\Big{)}\\ &\qquad+\bigg{(}\int_{0}^{T}|\widehat{\psi}|dt\Big{)}\Big{(}\sup _{t\in[0,T]}|\widehat{y}|\Big{)}+\int_{0}^{T}|\widehat{\gamma}||\widehat{z}| dt+\int_{0}^{T}\|\widehat{\beta}\|_{I^{2}(\mathbb{R}^{n})}\|\widehat{k}\|_{I^{2}( \mathbb{R}^{n})}dt\Bigg{\}}\\ &\leq K\mathbb{E}\Bigg{\{}\widehat{\mathfrak{J}}+2\varepsilon \bigg{[}\sup_{t\in[0,T]}|\widehat{x}|^{2}+\sup_{t\in[0,T]}|\widehat{y}|^{2}+ \int_{0}^{T}|\widehat{z}|^{2}dt+\int_{0}^{T}|\widehat{k}||_{I^{2}(\mathbb{R}^ {n})}^{2}dt\bigg{]}\Bigg{\}}.\end{split} \tag{3.24}\]
By taking \(\varepsilon\) small enough such that \(2K\varepsilon<1\), we can easily obtain the desired estimate (3.13) and the proof in this case is finished.
**Case 2**: \(\mu=0\) and \(v>0\). Differently, we apply the Lipschitz conditions and the time-shifting transformation to the estimate (3.15) to get
\[\begin{split}\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{x}|^{2} \bigg{]}\leq& K\mathbb{E}\Bigg{\{}\sup_{t\in[0,T]}|\widehat{y}|^ {2}+\int_{0}^{T}|\widehat{z}|^{2}dt+\int_{0}^{T}|\widehat{k}||_{I^{2}(\mathbb{ R}^{n})}^{2}dt+\sup_{t\in[-\delta,0]}|\widehat{\xi}|^{2}+\sup_{t\in[- \delta,0]}|\widehat{\vartheta}|^{2}+\int_{-\delta}^{0}|\widehat{\tau}|^{2}dt \\ &+\int_{-\delta}^{0}\|\widehat{\chi}\|_{I^{2}(\mathbb{R}^{n})}^{2 }dt+\int_{0}^{T}|\widehat{\psi}|^{2}dt+\int_{0}^{T}|\widehat{\gamma}|^{2}dt+ \int_{0}^{T}\|\widehat{\beta}\|_{I^{2}(\mathbb{R}^{n})}^{2}dt\Bigg{\}}.\end{split} \tag{3.25}\]
By the domination conditions (3.1) in Assumption 3.2-(ii), we deduce from (3.16) and obtain
\[\begin{split}&\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{y}|^{2} +\int_{0}^{T}|\widehat{z}|^{2}dt+\int_{0}^{T}\|\widehat{k}\|_{I^{2}(\mathbb{R} ^{n})}^{2}dt\bigg{]}\\ &\leq K\mathbb{E}\bigg{\{}|G\widehat{x}_{T}|^{2}+\int_{0}^{T}|P( \widehat{x})|^{2}dt+\int_{0}^{T}|P_{-}(\widehat{x})|^{2}dt+|\widehat{\eta}|^{2} +\int_{0}^{T}|\widehat{\varphi}|^{2}dt\Bigg{\}}.\end{split} \tag{3.26}\]
Using the the time-shifting transformation and the property that \(\bar{A}=0\), when \(t\in[0,\delta]\), we have
\[\begin{split}\int_{0}^{T}|P_{-}(\widehat{x})|^{2}dt& =\int_{0}^{\delta}|P_{-}(\widehat{x})|^{2}dt+\int_{\delta}^{T}|P_ {-}(\widehat{x})|^{2}dt=\int_{-\delta}^{0}|Ax|^{2}dt+\int_{0}^{T-\delta}|P( \widehat{x})|^{2}dt\\ &\leq K\bigg{\{}\sup_{t\in[-\delta,0]}|\widehat{\xi}|^{2}+\int_{ 0}^{T}|P(\widehat{x})|^{2}dt\bigg{\}}.\end{split} \tag{3.27}\]
Then putting (3.27) into (3.26) yields
\[\begin{split}&\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{y}|^{2} +\int_{0}^{T}|\widehat{z}|^{2}dt+\int_{0}^{T}\|\widehat{k}\|_{I^{2}(\mathbb{R} ^{n})}^{2}dt\bigg{]}\\ &\leq K\mathbb{E}\bigg{\{}|G\widehat{x}_{T}|^{2}+\int_{0}^{T}|P( \widehat{x})|^{2}dt+\sup_{t\in[-\delta,0]}|\widehat{\xi}|^{2}+|\widehat{\eta}|^ {2}+\int_{0}^{T}|\widehat{\varphi}|^{2}dt\Bigg{\}}.\end{split} \tag{3.28}\]
Thus, combining (3.25) and (3.28), we can derive
\[\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{x}|^{2}+\sup_{t\in[0,T]}|\widehat{ y}|^{2}+\int_{0}^{T}|\widehat{z}|^{2}dt+\int_{0}^{T}\|\widehat{k}\|_{I^{2}( \mathbb{R}^{n})}^{2}dt\bigg{]}\leq K\mathbb{E}\Bigg{\{}\widehat{\mathfrak{J}}+|G \widehat{x}_{T}|^{2}+\int_{0}^{T}|P(\widehat{x})|^{2}dt\Bigg{\}}. \tag{3.29}\]
where \(\widehat{\mathfrak{J}}\) is defined by (3.14). At last, (3.18) and (3.29) work together to turn out
\[\begin{split}&\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{x}|^{2}+ \sup_{t\in[0,T]}|\widehat{y}|^{2}+\int_{0}^{T}|\widehat{z}|^{2}dt+\int_{0}^{T }\|\widehat{k}\|_{l^{2}(\mathbb{R}^{n})}^{2}dt\bigg{]}\\ &\leq K\mathbb{E}\Bigg{\{}\widehat{\mathfrak{J}}-\langle\widehat {\eta},\widehat{x}_{T}\rangle+\Big{\langle}\widehat{\xi}_{0},\widehat{\vartheta }_{0}\Big{\rangle}+\int_{0}^{T}\Big{[}\,\langle\widehat{\varphi},\widehat{x} \rangle+\Big{\langle}\widehat{\psi},\widehat{y}\Big{\rangle}+\langle\widehat{ \gamma},\widehat{z}\rangle+\Big{\langle}\widehat{\beta},\widehat{k}\Big{\rangle} \,\Big{]}dt\Bigg{\}}.\end{split} \tag{3.30}\]
The remaining proof is the same as (3.24) in Case 1 and then we finish the proof in this case. Consequently, the whole proof of the lemma is completed.
Next, we prove a continuation lemma based on the priori estimate in Lemma 3.2.
**Lemma 3.3**.: _Let Assumption 3.1 and Assumption 3.2 be satisfied. If for some \(\alpha_{0}\in[0,1)\), FBSDELDA (3.10) admits a unique solution in \(N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n}))\) for any \((\pi(\cdot),\eta,\rho(\cdot))\in\mathcal{H}[-\delta,T]\), then there exists an absolute constant \(\delta_{0}>0\) such that the same conclusion is also true for \(\alpha=\alpha_{0}+\delta\) with \(\delta\in(0,\delta_{0}]\) and \(\alpha\leq 1\)._
Proof.: Let \(\delta_{0}>0\) be determined below. For any \(\theta(\cdot)\in N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}( \mathbb{R}^{n}))\), we introduce the following FBSDELDA with unknow \(\Theta(\cdot):=(X(\cdot)^{\top},Y(\cdot)^{\top},Z(\cdot)^{\top},K(\cdot)^{ \top})\in N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n }))\) where \(K(\cdot):=(K^{(1)}(\cdot)^{\top},K^{(2)}(\cdot)^{\top},\cdots)^{\top}\) (compared to (3.10) with \(\alpha=\alpha_{0}+\delta\)):
\[\begin{split}\begin{cases}dX(t)=&\Big{\{}-(1-\alpha_{0}) \mu\Big{[}B(t)^{\top}Q(t,Y_{t},Z_{t},K_{t})+\bar{B}(t)^{\top}Q_{-}(t,Y_{t},Z_{ t},K_{t})\Big{]}\\ &\quad+\alpha_{0}b(t,\Theta(t),\Theta_{-}(t),Y_{+}(t),Z_{+}(t),K_{+}(t)) +\widetilde{\psi}(t)\Big{\}}dt\\ &\quad+\Big{\{}-(1-\alpha_{0})\mu\Big{[}C(t)^{\top}Q(t,Y_{t},Z_{ t},K_{t})+\bar{C}(t)^{\top}Q_{-}(t,Y_{t},Z_{t},K_{t})\Big{]}\\ &\quad+\alpha_{0}\sigma(t,\Theta(t),\Theta_{-}(t),Y_{+}(t),Z_{+}(t),K_{+}( t))+\widetilde{\gamma}(t)\Big{\}}dW(t)\\ &\quad+\sum_{i=1}^{\infty}\Big{\{}-(1-\alpha_{0})\mu\Big{[}D_{i}(t)^{ \top}Q(t,Y_{t-},Z_{t},K_{t})+\bar{D}_{i}(t)^{\top}Q_{-}(t,Y_{t-},Z_{t},K_{t}) \Big{]}\\ &\quad+\alpha_{0}g^{(i)}(t,\Theta(t-),\Theta_{-}(t-),Y_{+}(t-),Z_{ +}(t),K_{+}(t))+\widetilde{\beta}^{(i)}(t)\Big{\}}dH^{(i)}(t),\quad t\in[0,T],\\ dY(t)=&\Big{\{}-(1-\alpha_{0})v\Big{[}A(t)^{\top}P(t,X_{t})+ \bar{A}(t)^{\top}P_{-}(t,X_{t})\Big{]}\\ &\quad+\alpha_{0}f(t,\Theta(t),X_{-}(t),\Theta_{+}(t))+\widetilde{ \varphi}(t)\Big{\}}dt\\ &\quad+Z(t)dW(t)+\sum_{i=1}^{\infty}K^{(i)}(t)dH^{(i)}(t),\quad t\in[0,T],\\ X(t)=&\lambda(t),\quad Y(t)=\mu(t),\quad Z(t)=\rho(t),\quad K(t)= \varsigma(t),\quad t\in[-\delta,0],\\ Y(T)=&\alpha_{0}\Phi(X(T))+(1-\alpha_{0})v\sigma^{\top}GX(T)+\widetilde{ \eta},\\ X(t)=& Y(t)=Z(t)=K(t)=0,\quad t\in(T,T+\delta],\end{cases}\end{split} \tag{3.31}\]
where
\[\begin{cases}\widetilde{\psi}(t)=\psi(t)+\delta b(t,\theta(t), \theta_{-}(t),y_{+}(t),z_{+}(t),k_{+}(t))+\delta\mu\Big{[}B(t)^{\top}Q(t,y_{t}, z_{t},k_{t})+\bar{B}(t)^{\top}Q_{-}(t,y_{t},z_{t},k_{t})\Big{]},\\ \widetilde{\gamma}(t)=\gamma(t)+\delta\sigma(t,\theta(t),\theta_{-}(t),y_{+}(t ),z_{+}(t),k_{+}(t))+\delta\mu\Big{[}C(t)^{\top}Q(t,y_{t},z_{t},k_{t})+\bar{C}( t)^{\top}Q_{-}(t,y_{t},z_{t},k_{t})\Big{]},\\ \widetilde{\beta}^{(i)}(t)=\beta^{(i)}(t)+\delta g^{(i)}(t,\theta(t),\theta_{-}(t ),y_{+}(t),z_{+}(t),k_{+}(t))+\delta\mu\Big{[}D_{i}(t)^{\top}Q(t,y_{t},z_{t},k_ {t})+\bar{D}_{i}(t)^{\top}Q_{-}(t,y_{t},z_{t},k_{t})\Big{]},\\ \widetilde{\varphi}(t)=\varphi(t)+\delta f(t,\theta(t),x_{-}(t),\theta_{+}(t))+ \delta v\Big{[}A(t)^{\top}P(t,x_{t})+\bar{A}(t)^{\top}P_{-}(t,x_{t})\Big{]},\\ \widetilde{\eta}=\eta+\delta\Phi(x(T))-\delta vG^{\top}Gx(T).\end{cases} \tag{3.32}\]
Similar to before, for \(\Theta(t-)\) and \(\Theta_{-}(t-)\), \(t-\) only works on \(X\), \(X_{-}\) and \(Y\), \(Y_{-}\). Furthermore, we also denote \(\Lambda(\cdot)=(\lambda(\cdot),\mu(\cdot),\rho(\cdot),\varsigma(\cdot))\) and \(\widetilde{\rho}(\cdot)=(\widetilde{\varphi}(\cdot)^{\top},\widetilde{\psi}( \cdot)^{\top},\widetilde{\gamma}(\cdot)^{\top},\widetilde{\beta}(\cdot)^{\top} )^{\top}\). Then it is easy to check that \((\Lambda,\widetilde{\eta},\widetilde{\rho})\in\mathcal{H}[-\delta,T]\). By
our assumptions, the FBSDELDA (3.31) admits a unique solution \(\Theta(\cdot)\in N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R }^{n}))\). In fact, we have established a mapping
\[\Theta(\cdot)=\mathcal{T}_{\alpha_{0}+\delta}\big{(}\theta(\cdot)\big{)}:N_{ \mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n}))\to N_{ \mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n})).\]
In the following, we shall prove that the above mapping is contractive when \(\delta\) is small enough.
Let \(\theta(\cdot),\bar{\theta}(\cdot)\in N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d) }\times l^{2}(\mathbb{R}^{n}))\) and \(\Theta(\cdot)=\mathcal{T}_{\alpha_{0}+\delta}\big{(}\theta(\cdot)\big{)}\), \(\bar{\Theta}(\cdot)=\mathcal{T}_{\alpha_{0}+\delta}\big{(}\bar{\theta}(\cdot) \big{)}\). Similarly, we denote \(\widehat{\theta}(\cdot)=\theta(\cdot)-\bar{\theta}(\cdot)\), \(\widehat{\Theta}(\cdot)=\Theta(\cdot)-\bar{\Theta}(\cdot)\), etc. By applying Lemma 3.2, we have (the argument \(t\) is suppressed for simplicity)
\[\quad\big{|}\widehat{\Theta}\big{|}_{N_{\mathbb{F}}^{2}(0,T; \mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n}))}^{2}\] \[=\mathbb{E}\bigg{[}\sup_{t\in[0,T]}|\widehat{X}|^{2}+\sup_{t\in[ 0,T]}|\widehat{Y}|^{2}+\int_{0}^{T}|\widehat{Z}|^{2}dt+\int_{0}^{T}\|\widehat {K}\|_{l^{2}(\mathbb{R}^{n})}^{2}dt\bigg{]}\] \[\leq K\delta^{2}\mathbb{E}\Bigg{\{}\Big{|}\big{(}\Phi(x_{T})- \Phi(\bar{x}_{T})\big{)}-vG^{\top}G\widehat{x}_{T}\Big{|}^{2}\] \[\quad+\int_{0}^{T}\bigg{|}\big{(}b\big{(}\theta,\theta_{-},y_{+}, z_{+},k_{+})-b(\bar{\theta},\bar{\theta}_{-},\bar{y}_{+},\bar{z}_{+},\bar{k}_{+} )\big{)}+\mu\big{[}B^{\top}Q(\widehat{y},\widehat{z},\widehat{k})+\bar{B}^{ \top}Q_{-}(\widehat{y},\widehat{z},\widehat{k})\big{]}\bigg{|}^{2}dt\] \[\quad+\int_{0}^{T}\Bigg{|}\big{(}\sigma(\theta,\theta_{-},y_{+}, z_{+},k_{+})-\sigma(\bar{\theta},\bar{\theta}_{-},\bar{y}_{+},\bar{z}_{+},\bar{k}_{+} )\big{)}+\mu\big{[}C^{\top}Q(\widehat{y},\widehat{z},\widehat{k})+\bar{C}^{ \top}Q_{-}(\widehat{y},\widehat{z},\widehat{k})\big{]}\bigg{|}^{2}dt\] \[\quad+\int_{0}^{T}\sum_{i=1}^{\infty}\bigg{\|}\big{(}g^{(i)}( \theta,\theta_{-},y_{+},z_{+},k_{+})-g^{(i)}(\bar{\theta},\bar{\theta}_{-}, \bar{y}_{+},\bar{z}_{+},\bar{k}_{+})\big{)}+\mu\big{[}D_{i}^{\top}Q(\widehat{ y},\widehat{z},\widehat{k})+\bar{D}_{i}^{\top}Q_{-}(\widehat{y},\widehat{z}, \widehat{k})\big{]}\bigg{\|}^{2}_{l^{2}(\mathbb{R}^{n})}dt\] \[\quad+\int_{0}^{T}\bigg{|}\big{(}f(\theta,x_{-},\theta_{+})-f( \bar{\theta},\bar{x}_{-},\bar{\theta}_{+})\big{)}+v\big{[}A^{\top}P(\widehat{ x})+\bar{A}^{\top}P_{-}(\widehat{x})\big{]}\bigg{|}\Bigg{\}}.\]
Due to the Lipschitz continuity of \((\Phi,\Gamma)\) and the boundedness of \(G\), \(A(\cdot)\), \(\bar{A}(\cdot)\), \(B(\cdot)\), \(\bar{B}(\cdot)\), \(C(\cdot)\), \(\bar{C}(\cdot)\), \(D_{i}(\cdot)\), \(\bar{D}_{i}(\cdot)\), there exists a new constant \(K^{\prime}>0\) independent of \(\alpha_{0}\) and \(\delta\) such that
\[\|\widehat{\Theta}\|_{N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2} (\mathbb{R}^{n}))}^{2}\leq K^{\prime}\delta^{2}\|\widehat{\theta}\|_{N_{ \mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n}))}^{2}.\]
By selecting \(\delta_{0}=1/(2\sqrt{K^{\prime}})\), when \(\alpha\in(0,\delta_{0}]\), we can find that the mapping \(\mathcal{T}_{\alpha_{0}+\delta}\) is contractive. Thus, the mapping \(\mathcal{T}_{\alpha_{0}+\delta}\) admits a unique fixed point which is just the unique solution to FBSDELDA (3.10). The proof is finished.
We summarize the above analysis to give the following proof.
proof of Theorem 3.1.: Firstly, the unique solvability of FBSDELDA (1.1) in the space \(N_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n(2+d)}\times l^{2}(\mathbb{R}^{n}))\) is deduced from the unique solvability of FBSDELDA (3.12) and Lemma 3.3. Secondly, in Lemma 3.2, by taking \(\alpha=1\), \(\big{(}\pi(\cdot),\eta,\rho(\cdot)\big{)}=(0,0,0)\), and \(\big{(}\bar{\pi}(\cdot),\bar{\eta},\bar{\rho}(\cdot)\big{)}=\Big{(}\bar{ \Lambda}(\cdot)-\Lambda(\cdot),\bar{\Phi}(\bar{x}_{T})-\Phi(\bar{x}_{T}),\bar{ \Gamma}\big{(}\cdot,\bar{\theta}(\cdot),\bar{\theta}_{-}(\cdot),\bar{\theta}_{+ }(\cdot)\big{)}-\Gamma\big{(}\cdot,\bar{\theta}(\cdot),\bar{\theta}_{-}(\cdot), \bar{\theta}_{+}(\cdot)\big{)}\Big{)}\), we get the estimate (3.8) in Theorem 3.1 from the estimate (3.13) in Lemma 3.2. Finally, by selecting the coefficients \(\big{(}\bar{\Lambda},\bar{\Phi},\bar{\Gamma}\big{)}=(0,0,0)\), we get (3.6) from (3.8). Then the proof is completed.
## 4 Applications in linear quadratic problem
In this section, we will study two kinds of linear quadratic optimal control problems and then we find that the Hamiltonian systems arising from these linear quadratic problems are FBSDELDAs satisfying the domination-monotonicity conditions mentioned in Section 3. Therefore, we have the conclusion that they are uniquely solvable by virtue of Theorem 3.1. Actually, exploring the solvability of these Hamiltonian systems is also one of our motivations in this paper. It should be noted that we assume that the Brownian motion is \(1\)-dimensional in this section.
### Forward LQ stochastic control problem
Firstly, we give the following control system driven by a linear forward SDEDL:
\[\left\{\begin{aligned} dx_{t}=&\big{(}A_{t}x_{t}+ \bar{A}_{t}x_{t-\delta}+B_{t}v_{t}+\bar{B}_{t}v_{t-\delta}\big{)}dt+\big{(}C_{t }x_{t}+\bar{C}_{t}x_{t-\delta}+D_{t}v_{t}+\bar{D}_{t}v_{t-\delta}\big{)}dW_{t} \\ &+\sum_{i=1}^{\infty}\big{(}E_{t}^{(i)}x_{t-}+\bar{E}_{t}^{(i)}x_ {(t-\delta)-}+F_{t}^{(i)}v_{t}+\bar{F}_{t}^{(i)}v_{t-\delta}\big{)}dH_{t}^{(i)},\quad t\in[0,T],\\ x_{0}=& a,\quad x_{t}=\lambda_{t},\quad v_{t}=\zeta_ {t},\quad t\in[-\delta,0),\end{aligned}\right. \tag{4.1}\]
where \(\delta>0\) is a constant time delay, \(a\in\mathbb{R}^{n}\), \(\lambda_{t}\in C(-\delta,0;\mathbb{R}^{n})\), \(\zeta_{t}\in C(-\delta,0;\mathbb{R}^{m})\). Moreover, \(A_{t},C_{t},E_{t}^{(i)},\bar{A}_{t},\bar{C}_{t},\bar{E}_{t}^{(i)}\in L_{ \mathbb{F}}^{\infty}(0,T;\mathbb{R}^{n\times n})\), \(B_{t},D_{t},F_{t}^{(i)}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{R}^{n\times m})\), \(\bar{B}_{t},\bar{D}_{t},\bar{F}_{t}^{(i)}\in L_{\mathbb{F}}^{\infty}(0,T; \mathbb{R}^{n\times m})\), where \(\bar{B}_{t}=\bar{D}_{t}=\bar{F}_{t}=0\), when \(t\in[0,\delta]\). The admissible control set is denoted by \(\mathcal{V}_{ad}\), in which each element \(v(\cdot)\in\mathcal{V}_{ad}\) has the following form
\[\left\{\begin{aligned} v_{t}&=\zeta_{t},\quad t \in[-\delta,0),\\ v_{t}&=v_{t}&=v_{t}\in L_{\mathbb{F}}^{ \infty}(0,T;\mathbb{R}^{m}),\quad t\in[0,T],\end{aligned}\right.\]
which is called an admissible control. By Lemma 2.1, we know that SDEDL (4.1) admits a unique solution \(x(\cdot)\in\mathcal{S}_{\mathbb{F}}^{2}(0,T;\mathbb{R}^{n})\).
Next, we continue to give a cost functional in quadratic form as follows:
\[J\big{(}v(\cdot)\big{)}=\frac{1}{2}\mathbb{E}\bigg{\{}\int_{0}^{ T} \Big{(}\left\langle Q_{t}x_{t},x_{t}\right\rangle+\left\langle\bar{Q}_{t}x_{t-\delta},x _{t-\delta}\right\rangle+\left\langle R_{t}v_{t},v_{t}\right\rangle+\left\langle \bar{R}_{t}v_{t-\delta},v_{t-\delta}\right\rangle \tag{4.2}\] \[\qquad\qquad+2\left\langle\bar{S}_{t}x_{t},v_{t}\right\rangle+2 \left\langle\bar{S}_{t}x_{t-\delta},v_{t-\delta}\right\rangle\Big{)}dt+\left \langle Gx_{T},x_{T}\right\rangle\bigg{\}},\]
where \(Q_{t},\bar{Q}_{t}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{S}^{n})\), \(R_{t},\bar{R}_{t}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{S}^{m})\), \(S_{t},\bar{S}_{t}\in L_{\mathbb{F}}^{\infty}(0,T;\mathbb{R}^{n\times m})\) and \(G\) is an \(\mathcal{F}\)-measurable \(n\times n\) symmetric matrix-valued bounded random variable. In addition, \(\bar{Q}_{t}=\bar{R}_{t}=\bar{S}_{t}=0\), when \(t\in(T,T+\delta]\).
Now, we propose our main problem as follows:
**Problem(LQDL).**The problem is to find an admissible control \(u(\cdot)\in\mathcal{V}_{ad}\) such that
\[J\big{(}u(\cdot)\big{)}=\inf_{v(\cdot)\in\mathcal{V}_{ad}}J\big{(}v(\cdot)\big{)}. \tag{4.3}\]
Then such an admissible control \(u(\cdot)\) is called an optimal control, and \(x^{u}(\cdot)\) is called the corresponding optimal trajectory.
Moreover, we impose the following assumption:
**Assumption 4.1**.: (i)\(G\) is nonnegative definite;
(ii)For any \((\omega,t)\in\Omega\times[0,T]\), \(Q_{t}+\mathbb{E}^{\mathcal{F}_{t}}[\bar{Q}_{t+\delta}]\) is nonnegative definite;
(iii)For any \((\omega,t)\in\Omega\times[0,T]\), \(R_{t}+\mathbb{E}^{\mathcal{F}_{t}}[\bar{R}_{t+\delta}]\) is positive definite. Besides, there exists a constant \(\delta>0\) such that
\[\left\langle(R_{t}+\mathbb{E}^{\mathcal{F}_{t}}[R_{t+\delta}])v,v\right\rangle \geq\delta|v|^{2},\quad a.s.\]
for any \(v\in\mathbb{R}^{m}\) and for almost every \(t\in[-\delta,T]\);
(iv)\((Q_{t}+\mathbb{E}^{\mathcal{F}_{t}}[\bar{Q}_{t+\delta}])-(S_{t}+\mathbb{E}^{ \mathcal{F}_{t}}[\bar{S}_{t+\delta}])^{\top}(R_{t}+\mathbb{E}^{\mathcal{F}_{t }}[\bar{R}_{t+\delta}])^{-1}(S_{t}+\mathbb{E}^{\mathcal{F}_{t}}[\bar{S}_{t+ \delta}])\) is nonnegative definite.
**Remark 4.1**.: This type of LQ optimal control problem for the system with delay and Levy processes has been studied by Li and Wu [16]. However, the cost functional in this paper is more complex and general, where \(\left\langle\bar{Q}_{t}x_{t-\delta},x_{t-\delta}\right\rangle\), \(\left\langle\bar{S}_{t}x_{t},v_{t}\right\rangle\) and \(\left\langle\bar{S}_{t}x_{t-\delta},v_{t-\delta}\right\rangle\) are also considered.
Firstly, based on the result in Sun et al. [32, Lemma 2.3], we introduce the following lemma for later use.
**Lemma 4.1**.: _For any \(v(\cdot)\in\mathcal{V}_{ad}\), let \(x^{v}(\cdot)\) be the solution of the following equation:_
\[\left\{\begin{aligned} dx_{t}=&\big{(}A_{t}x_{t}+ \bar{A}_{t}x_{t-\delta}+B_{t}v_{t}+\bar{B}_{t}v_{t-\delta}\big{)}dt+\big{(}C_{t }x_{t}+\bar{C}_{t}x_{t-\delta}+D_{t}v_{t}+\bar{D}_{t}v_{t-\delta}\big{)}dW_{t} \\ &+\sum_{i=1}^{\infty}\big{(}E_{t}^{(i)}x_{t-}+\bar{E}_{t}^{(i)}x_ {(t-\delta)-}+F_{t}^{(i)}v_{t}+\bar{F}_{t}^{(i)}v_{t-\delta}\big{)}dH_{t}^{(i)},\quad t\in[0,T],\\ x_{0}=& 0,\quad x_{t}=0,\quad v_{t}=0,\quad t\in[-\delta,0), \end{aligned}\right.\]
_Then for any \(\Theta(\cdot)\in L^{2}(0,T;\mathbb{R}^{m\times n})\), there exists a constant \(\gamma>0\) such that_
\[\mathbb{E}\int_{0}^{T}|v_{t}-\Theta_{t}x_{t}^{v}|^{2}dt\geq\gamma\mathbb{E}\int_ {0}^{T}|v_{t}|^{2}dt,\qquad\forall v(\cdot)\in\mathcal{V}_{ad}.\]
Proof.: We first define a bounded linear operator \(\mathfrak{D}:\mathcal{V}_{ad}\to\mathcal{V}_{ad}\) by
\[\mathfrak{D}v=v-\Theta x^{v}.\]
Then \(\mathfrak{D}\) is bijective and its inverse \(\mathfrak{D}^{-1}\) is given by
\[\mathfrak{D}^{-1}v=v+\Theta\widetilde{x}^{v},\]
where \(\widetilde{x}^{v}(\cdot)\) is the solution of the following equation:
\[\left\{\begin{aligned} d\widetilde{x}_{t}^{v}=& \big{[}(A_{t}+B_{t}\Theta_{t})\widetilde{x}_{t}^{v}+(\bar{A}_{t}+\bar{B}_{t} \Theta_{t-\delta})\widetilde{x}_{t-\delta}^{v}+B_{t}v_{t}+\bar{B}_{t}v_{t- \delta}\big{]}dt\\ &+\big{[}(C_{t}+D_{t}\Theta_{t})\widetilde{x}_{t}^{v}+(\bar{C}_ {t}+\bar{D}_{t}\Theta_{t-\delta})\widetilde{x}_{t-\delta}^{v}+D_{t}v_{t}+\bar{ D}_{t}v_{t-\delta}\big{]}dW_{t}\\ &+\sum_{i=1}^{\infty}\big{(}(E_{t}^{(i)}+F_{t}^{(i)}\Theta_{t}) \widetilde{x}_{t-}^{v}+(\bar{E}_{t}^{(i)}+\bar{F}_{t}^{(i)}\Theta_{t-\delta}) \widetilde{x}_{(t-\delta)-}^{v}+F_{t}^{(i)}v_{t}+\bar{F}_{t}^{(i)}v_{t-\delta }\big{)}dH_{t}^{(i)},\quad t\in[0,T],\\ \widetilde{x}_{0}^{v}=& 0,\quad\widetilde{x}_{t}^{v}=0, \quad v_{t}=0,\quad t\in[-\delta,0).\end{aligned}\right.\]
Noticing the bounded inverse theorem, we derive that \(\mathfrak{D}^{-1}\) is bounded with norm \(\|\mathfrak{D}^{-1}\|>0\). Based on this, we have
\[\mathbb{E}\int_{0}^{T}|v_{t}|^{2}dt=\mathbb{E}\int_{0}^{T}|(\mathfrak{D}^{-1} \mathfrak{D}v)_{t}|^{2}dt\leq\|\mathfrak{D}^{-1}\|\mathbb{E}\int_{0}^{T}|( \mathfrak{D}v)_{t}|^{2}dt=\|\mathfrak{D}^{-1}\|\mathbb{E}\int_{0}^{T}|v_{t}- \Theta_{t}x_{t}^{v}|^{2}dt.\]
Finally, by taking \(\gamma=\|\mathfrak{D}^{-1}\|^{-1}\), we finish the proof.
Next, we will give the main result of this section. First of all, by the result of maximum principle for stochastic control system with delay and Levy processes in Li and Wu [15] and [16], we can deduce the stochastic Hamiltonian system of SDEDL (4.1) as follows:
\[\left\{\begin{aligned} dx_{t}=&\big{(}A_{t}x_{t}+ \bar{A}_{t}x_{t-\delta}+B_{t}u_{t}+\bar{B}_{t}u_{t-\delta}\big{)}dt+\big{(}C_{ t}x_{t}+\bar{C}_{t}x_{t-\delta}+D_{t}u_{t}+\bar{D}_{t}u_{t-\delta}\big{)}dW_{t}\\ &+\sum_{i=1}^{\infty}\big{(}E_{t}^{(i)}x_{t-}+\bar{E}_{t}^{(i)}x _{(t-\delta)-}+F_{t}^{(i)}v_{t}+\bar{F}_{t}^{(i)}v_{t-\delta}\big{)}dH_{t}^{(i )},\quad t\in[0,T],\\ dy_{t}=&-\Big{(}A_{t}^{\top}y_{t}+C_{t}^{\top}z_{t}+ \sum_{i=1}^{\infty}E_{t}^{(i)\top}k_{t}^{(i)}+Q_{t}x_{t}+S_{t}^{\top}u_{t}+ \mathbb{E}^{\mathbb{F}_{t}^{\prime}}\big{[}\bar{A}_{t+\delta}^{\top}y_{t+ \delta}+\bar{C}_{t+\delta}^{\top}z_{t+\delta}\\ &\qquad+\sum_{i=1}^{\infty}\bar{E}_{t+\delta}^{(i)\top}k_{t+ \delta}^{(i)}+\bar{Q}_{t+\delta}x_{t}+\bar{S}_{t+\delta}^{\top}u_{t}\Big{]} \Big{)}dt+z_{t}dW_{t}+\sum_{i=1}^{\infty}k_{t}^{(i)}dH_{t}^{(i)},\quad t\in[0,T],\\ x_{0}=& a,\quad x_{t}=\lambda_{t},\quad u_{t}=\zeta_{t },\quad t\in[-\delta,0),\\ y_{T}=& Gx_{T},\quad y_{t}=z_{t}=k_{t}=0,\quad t\in[- \delta,0)\cup(T,T+\delta],\\ \Big{(}R_{t}+&\mathbb{E}^{\mathcal{F}_{t}}[\bar{R}_{t +\delta}]\Big{)}u_{t}+\Big{(}B_{t}^{\top}y_{t}+D_{t}^{\top}z_{t}+\sum_{i=1}^{ \infty}F_{t}^{(i)\top}k_{t}^{(i)}+S_{t}x_{t}\\ &+\mathbb{E}^{\mathcal{F}_{t}}\big{[}\bar{B}_{t+\delta}^{\top}y_ {t+\delta}+\bar{D}_{t+\delta}^{\top}z_{t+\delta}+\sum_{i=1}^{\infty}\bar{F}_{ t+\delta}^{(i)\top}k_{t+\delta}^{(i)}+\bar{S}_{t+\delta}x_{t}]\Big{)}=0.\end{aligned}\right. \tag{4.4}\]
It is clear that the Hamiltonian system (4.4) is described by a FBSDELDA. Therefore, with the help of Theorem 3.1, we can get the following theorem.
**Theorem 4.2**.: _Under Assumption 4.1, the above Hamiltonian system (4.4) admits a unique solution \((\theta(\cdot),u(\cdot))\). Moreover, \(u(\cdot)\) is the unique optimal control of Problem(LQDL) and \(x(\cdot)\) is its corresponding optimal trajectory._
Proof.: Firstly, for simplicity, we denote by \(\widetilde{R}_{t}=R_{t}+\mathbb{E}^{F_{t}}[\bar{R}_{t+\delta}]\), \(\widetilde{Q}_{t}=Q_{t}+\mathbb{E}^{F_{t}}[Q_{t+\delta}]\), \(\widetilde{S}_{t}=S_{t}+\mathbb{E}^{F_{t}}[S_{t+\delta}]\). It is easy to find that \(\widetilde{R}_{t}\) is invertible, so we can solve \(u(\cdot)\) from the last equation of Hamiltonian system (4.4).
\[u_{t}=-\widetilde{R}_{t}^{-1}\Big{(}B_{t}^{\top}y_{t}+D_{t}^{\top}z_{t}+\sum_{ i=1}^{\infty}{F_{t}^{(i)}}^{\top}k_{t}^{(i)}+\widetilde{S}_{t}x_{t}+\mathbb{E}^{F_{t }}\big{[}\bar{B}_{t+\delta}^{\top}y_{t+\delta}+\bar{D}_{t+\delta}^{\top}z_{t+ \delta}+\sum_{i=1}^{\infty}\bar{F}_{t+\delta}^{(i)}{}^{\top}k_{t+\delta}^{(i) }\big{]}\Big{)}. \tag{4.5}\]
Putting (4.5) into the Hamiltonian system (4.4), we can get a FBSDELDA. Then we can easily verify that the coefficients of this FBSDELDA satisfy Assumption 3.1, Assumption 3.2-(i)-Case 1, and Assumption 3.2-(ii). As for Assumption 3.2-(iii), we give a detailed verification as follows:
\[\begin{split}&\int_{0}^{T}\Big{(}\left\langle\Gamma(\theta, \theta_{-},\theta_{+})-\Gamma(\bar{\theta},\bar{\theta}_{-},\bar{\theta}_{+}), \widehat{\theta}\right\rangle\Big{)}dt\\ =&\int_{0}^{T}\Big{(}\left\langle A\widehat{x}+ \bar{A}\widehat{x}_{-}+B\widehat{u}+\widehat{B}\widehat{u}_{-},\widehat{y} \right\rangle+\left\langle C\widehat{x}+\bar{C}\widehat{x}_{-}+D\widehat{u}+ \widehat{D}\widehat{u}_{-},\widehat{z}\right\rangle+\left\langle E^{(i)} \widehat{x}+\bar{E}^{(i)}\widehat{x}_{-}+F^{(i)}\widehat{u}+\bar{F}^{(i)} \widehat{u}_{-},\widehat{K}^{(i)}\right\rangle\\ &-\left\langle A^{\top}\widehat{y}+C^{\top}\widehat{z}+\sum_{ i=1}^{\infty}E^{(i)}{}^{\top}\widehat{k}^{(i)}+Q\widehat{x}+S^{\top}\widehat{u}+ \mathbb{E}^{F_{t}}\big{[}\bar{A}_{+}^{\top}\widehat{y}_{+}+\bar{C}_{+}^{\top }\widehat{z}_{+}+\sum_{i=1}^{\infty}\bar{E}_{+}^{(i)}{}^{\top}\widehat{k}_{+ }^{(i)}+\bar{Q}_{+}\widehat{x}+\bar{S}_{+}^{\top}\widehat{u}\big{]},\widehat{ x}\right\rangle\Big{)}dt\\ =&\int_{0}^{T}\Big{(}\left\langle\widehat{u},B^{ \top}\widehat{y}+D^{\top}\widehat{z}+\sum_{i=1}^{\infty}F^{(i)}{}^{\top} \widehat{k}^{(i)}-\widetilde{S}\widehat{x}+\mathbb{E}^{F_{t}}\big{[}\bar{B}_{ +}^{\top}\widehat{y}_{+}+\bar{D}_{+}^{\top}\widehat{z}_{+}+\sum_{i=1}^{\infty }\bar{F}_{+}^{(i)}{}^{\top}\widehat{k}_{+}^{(i)}\big{]}\right)-\left\langle \widetilde{Q}\widehat{x},\widehat{x}\right\rangle\Big{)}dt\\ =&\int_{0}^{T}\Big{(}-\left\langle\widetilde{R}^{-1} (Q(\widehat{y},\widehat{z},\widehat{k})+\widetilde{S}\widehat{x}),Q(\widehat {y},\widehat{z},\widehat{k})-\widetilde{S}\widehat{x}\right\rangle-\left\langle \widetilde{Q}\widehat{x},\widehat{x}\right\rangle\Big{)}dt\\ =&\int_{0}^{T}\Big{(}-\left\langle\widetilde{R}^{-1} (Q(\widehat{y},\widehat{z},\widehat{k}),Q(\widehat{y},\widehat{z},\widehat{k}) \right\rangle-\left\langle(\widetilde{Q}-\widetilde{S}^{\top}\widehat{R}^{-1 }\widetilde{S})\widehat{x},\widehat{x}\right\rangle\Big{)}dt,\end{split} \tag{4.6}\]
where \(Q(\widehat{y},\widehat{z},\widehat{k})\) is defined by (3.5). Then combining with Assumption 4.1, we can finish the verification of Assumption 3.2-(iii). Thus, by Theorem 3.1, we know that this FBSDELDA admits a unique solution, which is equivalent to that the Hamiltonian system (4.4) is uniquely solvable.
Next, we will prove the optimality of \(u(\cdot)\) in the form of (4.5). Firstly, let \(v(\cdot)\in\mathcal{V}_{ad}\) be any other control and \(x^{v}(\cdot)\) is the corresponding state. Then we shall explore the difference between \(J\big{(}v(\cdot)\big{)}\) and \(J\big{(}u(\cdot)\big{)}\):
\[\begin{split}& J\big{(}v(\cdot)\big{)}-J\big{(}u(\cdot)\big{)}\\ =&\frac{1}{2}\mathbb{E}\Bigg{\{}\int_{0}^{T}\Big{(} \left\langle Q_{t}x_{t}^{v},x_{t}^{v}\right\rangle-\left\langle Q_{t}x_{t},x_{ t}\right\rangle+\left\langle\bar{Q}_{t}x_{t-\delta}^{v},x_{t-\delta}^{v}\right\rangle- \left\langle\bar{Q}_{t}x_{t-\delta},x_{t-\delta}\right\rangle+\left\langle R_{ t}v_{t},v_{t}\right\rangle-\left\langle R_{t}u_{t},u_{t}\right\rangle\\ &\qquad+\left\langle\bar{R}_{t}v_{t-\delta},v_{t-\delta}\right\rangle -\left\langle\bar{R}_{t}u_{t-\delta},u_{t-\delta}\right\rangle\Big{)}dt+\left \langle Gx_{T}^{v},x_{T}^{v}\right\rangle-\left\langle Gx_{T},x_{T}\right\rangle +2\left\langle S_{t}x_{t}^{v},v_{t}\right\rangle-2\left\langle S_{t}x_{t},u_{t}\right\rangle \\ &\qquad+2\left\langle\bar{S}_{t}x_{t-\delta}^{v},v_{t-\delta}\right\rangle -2\left\langle\bar{S}_{t}x_{t-\delta},u_{t-\delta}\right\rangle\Bigg{\}}\\ =&\frac{1}{2}\mathbb{E}\Bigg{\{}\int_{0}^{T}\Big{(} \left\langle Q_{t}(x_{t}^{v}-x_{t}),x_{t}^{v}-x_{t}\right\rangle+\left\langle \bar{Q}_{t}(x_{t-\delta}^{v}-x_{t-\delta}),x_{t-\delta}^{v}-x_{t-\delta} \right\rangle+\left\langle R_{t}(v_{t}-u_{t}),v_{t}-u_{t}\right\rangle\\ &\qquad+\left\langle\bar{R}_{t}(v_{t-\delta}-u_{t-\delta}),v_{t- \delta}-u_{t-\delta}\right\rangle+2\left\langle S_{t}(x_{t}^{v}-x_{t}),v_{t}-u_{t} \right\rangle+2\left\langle\bar{S}_{t}(x_{t-\delta}^{v}-x_{t-\delta}),v_{t-\delta }-u_{t-\delta}\right\rangle\Big{)}dt\\ &\qquad+\left\langle G(x_{T}^{v}-x_{T}),x_{T}^{v}-x_{T}\right\rangle +\int_{0}^{T}\Big{(}2\left\langle Q_{t}x_{t},x_{t}^{v}-x_{t}\right\rangle+2 \left\langle\bar{Q}_{t}x_{t-\delta},x_{t-\delta}^{v}-x_{t-\delta}\right\rangle+2 \left\langle R_{t}u_{t},v_{t}-u_{t}\right\rangle\\ &\qquad+2\left\langle\bar{R}_{t}u_{t-\delta},v_{t-\delta}-u_{t- \delta}\right\rangle+2\left\langle S_{t}(x_{t}^{v}-x_{t}),u_{t}\right\rangle+2 \left\langle S_{t}x_{t},v_{t}-u_{t}\right\rangle+2\left\langle\bar{S}_{t}(x_{t- \delta}^{v}-x_{t-\delta}),u_{t-\delta}\right\rangle\\ &\qquad+2\left\langle\bar{S}_{t}x_{t-\delta},v_{t-\delta}-u_{t- \delta}\right\rangle\Big{)}dt+2\left\langle Gx_{T},x_{T}^{v}-x_{T}\right\rangle \Bigg{\}}.\end{split}\]
Since \(\bar{Q}=\bar{R}=\bar{S}=0\), when \(t\in(T,T+\delta]\), by the time-shifting transformation and the initial conditions in (4.4),
we have
\[\begin{split} J\big{(}v(\cdot)\big{)}-J\big{(}u(\cdot)\big{)}\\ =&\frac{1}{2}\mathbb{E}\Bigg{\{}\int_{0}^{T}\Big{(} \left\langle\widetilde{Q}_{t}(x_{t}^{v}-x_{t}),x_{t}^{v}-x_{t}\right\rangle+ \left\langle\widetilde{R}_{t}(v_{t}-u_{t}),v_{t}-u_{t}\right\rangle\\ &\qquad+2\left\langle\widetilde{S}_{t}(x_{t}^{v}-x_{t}),v_{t}-u_ {t}\right\rangle+\left\langle G(x_{T}^{v}-x_{T}),x_{T}^{v}-x_{T}\right\rangle \Bigg{\}}+\Delta,\end{split} \tag{4.7}\]
where
\[\begin{split}\Delta=&\,\mathbb{E}\Bigg{\{}\int_{0}^ {T}\Big{(}\left\langle\widetilde{Q}_{t}x_{t},x_{t}^{v}-x_{t}\right\rangle+ \left\langle\widetilde{R}_{t}u_{t},v_{t}-u_{t}\right\rangle+\left\langle \widetilde{S}_{t}(x_{t}^{v}-x_{t})\right\rangle,u_{t}\Big{)}\\ &\qquad+\left\langle\widetilde{S}_{t}x_{t},v_{t}-u_{t}\right \rangle\Big{)}dt+\left\langle Gx_{T},x_{T}^{v}-x_{T}\right\rangle\Bigg{\}}. \end{split}\]
Applying Ito's formula to \(\left\langle x_{t}^{v}-x_{t},y_{t}\right\rangle\) leads to
\[\begin{split}\mathbb{E}\Big{[}\left\langle x_{T}^{v}-x_{T},y_{T} \right\rangle\Big{]}=&\mathbb{E}\Bigg{\{}\int_{0}^{T}\Bigg{(} \left\langle v_{t}-u_{t},B_{t}^{\top}y_{t}+D_{t}^{\top}z_{t}+\sum_{i=1}^{ \infty}F_{t}^{(i)\top}k_{t}^{(i)}+S_{t}x_{t}\right\rangle\\ &\qquad+\left\langle v_{t-\delta}-u_{t-\delta},\bar{B}_{t}^{ \top}y_{t}+\bar{D}_{t}^{\top}z_{t}+\sum_{i=1}^{\infty}\bar{F}_{t}^{(i)\top}k_{ t}^{(i)}+\bar{S}_{t}x_{t-\delta}\right\rangle\\ &\qquad-\left\langle\widetilde{Q}_{t}x_{t},x_{t}^{v}-x_{t} \right\rangle-\left\langle\widetilde{S}_{t}(x_{t}^{v}-x_{t}),u_{t}\right\rangle \\ &\qquad-\left\langle S_{t}x_{t},v_{t}-u_{t}\right\rangle-\left\langle \widetilde{S}_{t}x_{t-\delta},v_{t-\delta}-u_{t-\delta}\right\rangle\Big{)}dt \Bigg{\}}.\end{split} \tag{4.8}\]
Moreover, due to the fact that \(v_{t}=u_{t}\), \(t\in[-\delta,0)\) and \(y_{t}=z_{t}=k_{t}=0\), \(t\in(T,T+\delta]\), (4.8) can be rewritten as follows:
\[\begin{split}&\,\mathbb{E}\Big{[}\left\langle x_{T}^{v}-x_{T},Gx_{T} \right\rangle\Big{]}\\ =&\mathbb{E}\Bigg{\{}\int_{0}^{T}\Bigg{(}\left\langle v _{t}-u_{t},B_{t}^{\top}y_{t}+D_{t}^{\top}z_{t}+\sum_{i=1}^{\infty}F_{t}^{(i) \top}k_{t}^{(i)}+S_{t}x_{t}\right.\\ &\qquad+\left.\mathbb{E}^{\mathcal{F}_{t}}\big{[}\bar{B}_{t+ \delta}^{\top}y_{t+\delta}+\bar{D}_{t+\delta}^{\top}z_{t+\delta}+\sum_{i=1}^{ \infty}\bar{F}_{t+\delta}^{(i)\top}k_{t+\delta}^{(i)}+\bar{S}_{t+\delta}x_{t} \big{]}\right)-\left\langle\widetilde{Q}_{t}x_{t},x_{t}^{v}-x_{t}\right\rangle \\ &\qquad-\left\langle\widetilde{S}_{t}(x_{t}^{v}-x_{t}),u_{t} \right\rangle-\left\langle\widetilde{S}_{t}x_{t},v_{t}-u_{t}\right\rangle \Bigg{)}dt\Bigg{\}}.\end{split} \tag{4.9}\]
Consequently, combining with (4.5), we can infer that \(\Delta=0\). Thus, applying the nonnegative definiteness of \(G\), we have
\[\begin{split}&\,J\big{(}v(\cdot)\big{)}-J\big{(}u(\cdot)\big{)}\\ =&\frac{1}{2}\mathbb{E}\bigg{\{}\int_{0}^{T}\Big{(} \left\langle\widetilde{Q}_{t}(x_{t}^{v}-x_{t}),x_{t}^{v}-x_{t}\right\rangle+ \left\langle\widetilde{R}_{t}(v_{t}-u_{t}),v_{t}-u_{t}\right\rangle\\ &\qquad+2\left\langle\widetilde{S}_{t}(x_{t}^{v}-x_{t}),v_{t}-u_ {t}\right\rangle\Big{)}dt+\left\langle G(x_{T}^{v}-x_{T}),x_{T}^{v}-x_{T} \right\rangle\bigg{\}}\\ \geq&\frac{1}{2}\mathbb{E}\bigg{\{}\int_{0}^{T} \Big{(}\left\langle\widetilde{Q}_{t}(x_{t}^{v}-x_{t}),x_{t}^{v}-x_{t}\right\rangle +\left\langle\widetilde{R}_{t}(v_{t}-u_{t}),v_{t}-u_{t}\right\rangle\\ &\qquad+2\left\langle\widetilde{R}_{t}^{-1}\widetilde{S}_{t}(x_{t} ^{v}-x_{t}),\widetilde{R}_{t}(v_{t}-u_{t})\right\rangle\Big{)}dt\bigg{\}}\\ =&\frac{1}{2}\mathbb{E}\bigg{\{}\int_{0}^{T}\Big{(} \left\langle(\widetilde{Q}_{t}-\widetilde{S}_{t}^{\top}\widetilde{R}_{t}^{-1} \widetilde{S}_{t})(x_{t}^{v}-x_{t}),x_{t}^{v}-x_{t}\right\rangle\]
\[+\Big{\langle}\widetilde{R}_{t}\big{[}(v_{t}-u_{t})+\widetilde{R}_{t}^{-1} \widetilde{S}_{t}(x_{t}^{v}-x_{t})\big{]},(v_{t}-u_{t})+\widetilde{R}_{t}^{-1} \widetilde{S}_{t}(x_{t}^{v}-x_{t})\Big{\rangle}\Big{)}dt\bigg{\}}.\]
By virtue of Assumption 4.1-(iii),(iv), it is easy to verify that \(J\big{(}v(\cdot)\big{)}-J\big{(}u(\cdot)\big{)}\geq 0\). This is to say that \(u(\cdot)\) is the optimal control of Problem(LQDL).
For the uniqueness, if there exists another optimal control \(\bar{u}(\cdot)\) and its corresponding state is \(x^{\bar{u}}(\cdot)\). Thus, \(J\big{(}\bar{u}(\cdot)\big{)}=J\big{(}u(\cdot)\big{)}\). Coming back to (4.10) and combining with Assumption 4.1 and Lemma 4.1, we derive
\[\begin{split} 0=& J\big{(}\bar{u}(\cdot)\big{)}-J \big{(}u(\cdot)\big{)}\\ \geq&\frac{1}{2}\mathbb{E}\bigg{\{}\int_{0}^{T} \Big{(}\left\langle(\widetilde{Q}_{t}-\widetilde{S}_{t}^{\top}\widetilde{R}_{t }^{-1}\widetilde{S}_{t})(x_{t}^{\bar{u}}-x_{t}),x_{t}^{\bar{u}}-x_{t}\right\rangle \tag{4.11}\]
Due to the nonnegative definiteness of \(\widetilde{R}_{t}\) and the fact that \(\gamma>0\), (4.11) implies that \(\bar{u}(\cdot)=u(\cdot)\). The proof of uniqueness is completed.
### Backward LQ stochastic control problem
In this section, the control system is given by a linear ABSDEL:
\[\begin{cases}dy_{t}=&\Big{(}A_{t}y_{t}+\bar{A}_{t}\mathbb{E}^{ \mathcal{F}_{t}}[y_{t+\delta}]+B_{t}z_{t}+\bar{B}_{t}\mathbb{E}^{\mathcal{F}_ {t}}[z_{t+\delta}]+\sum_{i=1}^{\infty}C_{t}^{(i)}k_{t}^{(i)}+\sum_{i=1}^{ \infty}\bar{C}_{t}^{(i)}\mathbb{E}^{\mathcal{F}_{t}}[k_{t+\delta}^{(i)}]+D_{t} v_{t}+\bar{D}_{t}v_{t-\delta}\Big{)}dt\\ &+z_{t}dW_{t}+\sum_{i=1}^{\infty}k_{t}^{(i)}dH_{t}^{(i)},\quad t \in[0,T],\\ y_{T}=&b,\quad y_{t}=z_{t}=k_{t}=0,\quad t\in(T,T+\delta],\\ v_{t}=&\iota_{t},\quad t\in[-\delta,0),\end{cases} \tag{4.12}\]
where \(b\in L^{2}_{\mathcal{F}_{T}}(\Omega;\mathbb{R}^{n})\) and \(\iota_{t}\in C(-\delta,0;\mathbb{R}^{m})\). Moreover, \(A_{t},\bar{A}_{t},B_{t},\bar{B}_{t},C_{t}^{(i)},\bar{C}_{t}^{(i)}\in L^{\infty }_{\mathbb{F}}(0,T;\mathbb{R}^{n\times n})\), \(D_{t}\in L^{\infty}_{\mathbb{F}}(0,T;\mathbb{R}^{n\times m})\), \(\bar{D}_{t}\in L^{\infty}_{\mathbb{G}}(0,T;\mathbb{R}^{n\times m})\), where \(\bar{D}_{t}=0\), when \(t\in[0,\delta]\). Let \(\mathcal{U}_{ad}\) denote the set of stochastic processes \(v(\cdot)\) with the form:
\[\begin{cases}v_{t}=\iota_{t},\quad t\in[-\delta,0),\\ v_{t}=v_{t}\in L^{2}_{\mathbb{F}}(0,T;\mathbb{R}^{m}),\quad t\in[0,T],\end{cases}\]
which is called an addmissible control.
Similar to before, we give a cost functional as follows:
\[\begin{split} J\big{(}v(\cdot)\big{)}=\frac{1}{2}\mathbb{E}\bigg{\{} &\int_{0}^{T}\Big{(}\left\langle Q_{t}y_{t},y_{t} \right\rangle+\left\langle\bar{Q}_{t}y_{t+\delta},y_{t+\delta}\right\rangle+ \left\langle L_{t}z_{t},z_{t}\right\rangle+\left\langle\bar{L}_{t}z_{t+ \delta},z_{t+\delta}\right\rangle+\sum_{i=1}^{\infty}\left\langle G_{t}^{(i)}k _{t}^{(i)},k_{t}^{(i)}\right\rangle\\ &\quad+\sum_{i=1}^{\infty}\left\langle\bar{G}_{t}^{(i)}k_{t+\delta }^{(i)},k_{t+\delta}^{(i)}\right\rangle+\left\langle R_{t}v_{t},v_{t}\right\rangle +\left\langle\bar{R}_{t}v_{t-\delta},v_{t-\delta}\right\rangle\Big{)}dt+ \langle My_{0},y_{0}\rangle\bigg{\}},\end{split} \tag{4.13}\]
where \(Q_{t},\bar{Q}_{t},L_{t},\bar{L}_{t},G_{t}^{(i)},\bar{G}_{t}^{(i)}\in L^{\infty} _{\mathbb{F}}(0,T;\mathbb{S}^{n})\), \(R_{t},\bar{R}_{t}\in L^{\infty}_{\mathbb{F}}(0,T;\mathbb{S}^{m})\) and \(M\) is an \(\mathcal{F}\)-measurable \(n\times n\) symmetric and nonnegative definite matrix-valued bounded random variable. Moreover, \(\bar{Q}_{t}=\bar{L}_{t}=\bar{G}_{t}=0\), when \(t\in[-\delta,0)\) and \(\bar{R}_{t}=0\), when \(t\in(T,T+\delta]\). Similarly, we assume that \(Q_{t}+\bar{Q}_{t-\delta}\), \(L_{t}+\bar{L}_{t-\delta}\), \(G_{t}+\bar{G}_{t-\delta}\) are all nonnegative definite and \(R_{t}+\mathbb{E}^{\mathcal{F}_{t}}[\bar{R}_{t+\delta}]\) has the similar condition as the \(R_{t}+\mathbb{E}^{\mathcal{F}_{t}}[\bar{R}_{t+\delta}]\) in subsection 4.1.
Now, our problem in this section is proposed as follows:
**Problem(LQAL).** Find an admissible control \(u(\cdot)\in\mathcal{U}_{ad}\) such that
\[J\big{(}u(\cdot)\big{)}=\inf_{v(\cdot)\in\mathcal{U}_{ad}}J\big{(}v(\cdot)\big{)}. \tag{4.14}\]
Likewise, we give the Hamiltonian system of ABSDEL (4.12) as follows:
\[\begin{cases}dx_{t}=&-\Big{(}A_{t}^{\top}x_{t}+\bar{A}_{t-\delta}^{\top}x_{t- \delta}+Q_{t}^{\top}y_{t}+\bar{Q}_{t-\delta}^{\top}y_{t}\Big{)}dt-\Big{(}B_{t }^{\top}x_{t}+\bar{B}_{t-\delta}^{\top}x_{t-\delta}+L_{t}^{\top}z_{t}+\bar{L}_ {t-\delta}^{\top}z_{t}\Big{)}dW_{t}\\ &-\Big{(}\sum_{i=1}^{\infty}(C_{t}^{(i)\top}x_{t-\bar{\gamma}}+\bar{C}_{t- \delta}^{(i)\top}x_{(t-\delta)-}+G_{t}^{(i)\top}k_{t}^{(i)}+\bar{G}_{t-\delta} ^{(i)\top}k_{t}^{(i)}\Big{)}dH_{t}^{(i)},\quad t\in[0,T],\\ dy_{t}=&\Big{(}A_{t}y_{t}+\bar{A}_{t}\mathbb{E}^{\mathcal{F}_{t}}[y_{t+\delta}] +B_{t}z_{t}+\bar{B}_{t}\mathbb{E}^{\mathcal{F}_{t}}[z_{t+\delta}]+\sum_{i=1}^{ \infty}C_{t}^{(i)}k_{t}^{(i)}+\sum_{i=1}^{\infty}\bar{C}_{t}^{(i)}\mathbb{E}^{ \mathcal{F}_{t}}[k_{t+\delta}^{(i)}]+D_{t}u_{t}+\bar{D}_{t}u_{t-\delta}\Big{)} dt\\ &+z_{t}dW_{t}+\sum_{i=1}^{\infty}k_{t}^{(i)}dH_{t}^{(i)},\quad t\in[0,T],\\ x_{0}=&-My_{0},\quad x_{t}=0,\quad u_{t}=\iota_{t},\quad t\in[-\delta,0),\\ y_{T}=&b,\quad x_{t}=y_{t}=z_{t}=k_{t}=0,\quad t\in(T,T+\delta],\\ (D_{t}^{\top}x_{t}+\mathbb{E}^{\mathcal{F}_{t}}[\bar{D}_{t+\delta}^{\top}x_{t +\delta}])+(R_{t}+\mathbb{E}^{\mathcal{F}_{t}}[\bar{R}_{t+\delta}])u_{t}=0. \end{cases} \tag{4.15}\]
Obviously, the Hamiltonian system (4.15) is also driven by a FBSDELDA which satisfies all the assumptions in section 3, so by Theorem 3.1, we have the following theorem.
**Theorem 4.3**.: _The Hamiltonian system (4.15) admits a unique solution \((\theta(\cdot),u(\cdot))\). Moreover, \(u(\cdot)\) is the unique optimal control of Problem(LQAL)._
Similar to the proof of Theorem 4.2, we will still explore the difference between \(J\big{(}v(\cdot)\big{)}\) and \(J\big{(}u(\cdot)\big{)}\), and then apply Ito's formula to \(\langle x_{t},y_{t}^{v}-y_{t}\rangle\). Due to the similarity between the proof of the two, we omit the detailed proof of the latter.
**Remark 4.2**.: (i) In fact, in subsection 4.1, the coefficients of the FBSDELDA in Hamiltonain system (4.4) are verified to satisfy Assumption 3.1 and Assumption 3.2-(i)-case 1, Assumption 3.2-(ii), Assumption 3.2-(iii), whereas Assumption 3.2-(i)-case 2 corresponds to backward LQ stochastic control problem; (ii) If we replace the positive definiteness and nonnegative definiteness of the coefficients of the cost functional in (4.2) and (4.13) with the negative definiteness and non-positive definiteness, then the coefficients of FBSDELDs in Hamiltonain system (4.4) and (4.15) satisfy Assumption 3.1, Assumption 3.2-(i), (ii) and the symmetrical version of Assumption 3.2-(iii) in Remark 3.1. In this situation, the problem of minimize the cost functional will become maximizing it. In summary, the domination-monotonicity conditions exactly correspond to four classes of optimal control problems.
## Acknowledgments
The authors would like to thank anonymous referees for helpful comments and suggestions which improved the original version of the paper. Q. Meng was supported by the Key Projects of Natural Science Foundation of Zhejiang Province (No. Z22A013952) and the National Natural Science Foundation of China ( No.12271158 and No. 11871121). Maoning Tang was supported by the Natural Science Foundation of Zhejiang Province (No. LY21A010001)
|
2306.06518 | Practical Problems of Statistical Learning | Statistical models have seen a significant rise in popularity in recent
years. Despite their undeniable success in various industry use cases such as
sabermetrics, investment portfolio management, and artificial intelligence,
there has been immense debate about the value of results produced by
statistical methods. This paper focuses on presenting the common issues
practitioners have when implementing statistical learning models, and why these
issues make it difficult to interpret results produced by such methods. | Joseph Andersen | 2023-06-10T20:24:30Z | http://arxiv.org/abs/2306.06518v1 | # Practical Problems of Statistical Learning
###### Abstract
Statistical models have seen a significant rise in popularity in recent years. Despite their undeniable success in various industry use cases such as sabermetrics, investment portfolio management, and artificial intelligence, there has been immense debate about the value of results produced by statistical methods. This paper focuses on presenting the common issues practitioners have when implementing statistical learning models, and why these issues make it difficult to interpret results produced by such methods.
###### Contents
* 1 Acknowledgements
* 2 Introduction
* 2.1 The Death of Theory
* 2.2 Statistics
* 2.3 Causal Inference
* 2.4 Prediction
* 3 Universal Problems of Statistical Learning
* 3.1 Internal vs. External Validity
* 3.2 The Importance of Context
* 3.3 Curse of Dimensionality
* 4 Problems Unique to Causal Inference
* 4.1 Small Problems, Small Significance
* 4.2 Hypothesis Testing
* 5 Problems Unique to Prediction
* 5.1 Correlation is Not Causation
* 5.2 Deep Learning
* 6 Conclusion
Acknowledgements
I would like to thank Professor Edward Rubin for his helpful guidance throughout the writing and research process of this paper. I would also like to thank ChatGPT for answering many of my technical questions.
## 2 Introduction
### The Death of Theory
The past couple decades have seen an unstoppable march away from principled and deductive thinking, to a data-centric approach whenever a problem presents itself. In sciences concerned with complex social phenomena, where causal relationships resemble a tangled web, an empirical approach is often the quickest way to finding answers to a problem. In 2008 journalist Chris Anderson published a highly provocative article titled _The End of Theory: The Data Deluge Makes the Scientific Method Obsolete_[1]. The article argues that with enough data and compute it will be more efficient to abandon theory in favor of models based on the frequency of past events. Rather than trying to understand how a single variable affects natural phenomena, we can simply target it as an outcome variable and induce what will happen. Anderson's article ignited a wave of debate about whether the scientific method is necessary in an era of cloud compute and data abundance. On one extreme, data scientists argue that statistical methods can replace the scientific method and called for the death of theory. On the other hand, epistemologists and academics who had dedicated their lives towards understanding social phenomena, argue that statistical methods are trivial, and don't contribute to actual scientific knowledge. They proclaim that such methods can only approximate the physical laws they attempt to discover.
This paper seeks to provide a broad overview of what problems prevent statistical learning from producing scientific knowledge. In doing so, we hope to better understand the extent to which statistical learning models are valuable. Out of scope of this paper are the social and political issues of how data is collected. Although this list is expansive, our paper is limited to practical issues of implementing statistical learning.
### Statistics
Statistics is a field of mathematics that primarily aims to transform raw data into refined information. Statistical inference is the process that allows scientists to induce knowledge from a sample onto a population. Although used in many contexts, this paper is only focused on looking at statistical inference in the causal inference and prediction settings.
### Causal Inference
The first setting of statistical learning is causal inference. This discipline focuses on developing mathematical models that allow us to understand the causal relationship between an independent and dependent variable. The fundamental problem of causal inference is that for each individual \(i\), we only observe outcome \(Y_{i}(1)\) or \(Y_{i}(0)\). As time is linear, we are not able to observe the counterfactual of an individual. Rather, we are only able to observe what would happen if an individual is treated or not treated. Due to the presence of covariates and confounders, this means a direct comparison between treated and untreated individuals is typically not equal. As a result it becomes difficult to extract the causal effect of a variable.
In order to address this problem, researchers use randomization and the law of large numbers to generate a control and treatment group. Since observations are randomized we are able to reasonably assume that biases are equally distributed between the groups, and are effectively able to create a counterfactual. Approaches to randomization generally fall into experimental or quasi-experimental methods.
Experimental methods, such as randomized controlled trials, randomize participants into control and treatment groups. If the sample size is large enough and the randomization process is sound, researchers do not need to worry about bias and thus the potential outcome \(Y(0)\) or \(Y(1)\) is independent of the treatment itself. As a result the average treatment effect can be correctly obtained with the regression model (1).
\[Y_{i}=\alpha+\tau D_{i}+\epsilon_{i} \tag{1}\]
In this model \(Y_{i}\) is the outcome, \(\alpha\) is the intercept, \(\tau\) is the average causal effect of the treatment on the outcome, \(D_{i}\) is the treatment variable, and \(\epsilon_{i}\) is the error of the model.
Pure experimental methods are often impractical due to economic issues, ethical concerns, or attrition. This is especially true for econometricians who study macroeconomic phenomena. In this situation, researchers resort to quasi-experimental methods in which they attempt to _approximate_ true randomization.
The first type of quasi-experimental method is _selection on unobservables_. In this approach, researchers attempt to control for bias by carefully selecting what observational data they use in their model. This approach boils down to finding a data generating process in which eligibility for treatment is continuous, and the variation of the unobservable characteristics is smooth at the discontinuity where observations become eligible for treatment. The principal assumption in selection on unobservables, is that the discontinuity creates random variation in which observations are treated. The data selection process in this method is
known as the _identification strategy_ and often requires an extremely deductive process. The validity of this assumption may be tested via a manipulation test or covariate smoothness test. This approach allows researchers to approximate the randomization of an RCT experiment, and then extract the average causal effect of a treatment by implementing research designs such as a regression discontinuity (2) or difference-in-differences approach (3).
\[Y_{i}=\alpha+\tau D_{i}+\beta_{1}(X_{i}-c)+\beta_{2}(X_{i}-c)D_{i}+\epsilon_{i} \tag{2}\]
\[Y_{i}=\alpha\mathds{1}(post)+\gamma\mathds{1}D_{i}+\tau\mathds{1}(post)* \mathds{1}D_{i}+\epsilon_{i} \tag{3}\]
In a local linear regression discontinuity model (2), the \(\beta_{1}\) coefficient controls for the slope estimate before the discontinuity \(c\). \(\beta_{2}\) controls for the slope estimate after the discontinuity in which observations become eligible for treatment. The estimate \(\tau\) is the average causal effect of the treatment assuming perfect compliance. If non-compliance is suspected, researchers may implement further controls such as a fuzzy regression discontinuity design.
In the difference-in-differences model (3), the dummy variable \(post\) indicates whether the observation was recorded before or after the treatment was implemented. Thus the average causal effect \(\tau\) is found by looking at the interaction between whether the observation is in the treatment group and whether the treatment has been applied yet.
Oftentimes the treatment in an experiment is endogenously determined. This happens when the treatment is correlated with the error \(Cov(D_{i},\epsilon_{i})\neq 0\). Informally, this means the assignment of a treatment is not randomly determined (exogenous) and is confounded by factors that influence both the treatment and outcome. To recover the causal effect a researcher must use an Instrumental Variable (IV) to isolate exogenous variation of the treatment variable. IV work by finding a valid instrument \(Z\) that is correlated with the treatment, yet uncorrelated with the error term \(Cov(D_{i},Z_{i})\neq 0\). Then researchers determine whether the instrument meets the exclusion restriction \(Cov(Z_{i},\epsilon_{i})\neq 0\). Informally, the exclusion restriction requires \(Z\) only affects \(Y\) through \(D\). This condition is not testable.
The IV estimate is then found by a two stage process. The first step is known as the _first stage regression_ where \(D\sim Z+X\). In the second step, researchers solve the _reduced form regression_ by \(Y\sim Z+X\). The IV estimate of the causal effect of \(D\) on \(Y\) is then found by dividing the reduced form by the first stage (4). This method is equivalent to the Complier Average Causal Effect (CACE) which is found by dividing the intent-to-treat by the compliance rate. However, the power of Instrumental Variables comes from the fact that knowledge of the compliance rate is not needed. This allows researchers to extract causal effects in quasi-experimental and uncontrolled scenarios.
\[\frac{Y\sim Z+X}{D\sim Z+X} \tag{4}\]
An example of how instrumental variables in combination with a quasi-experiment research design allows researchers to extract an average causal effect is in [E+17]. Ebenstein et al. used China's Huai River Policy as part of their identification strategy to determine the impact of air pollution on life expectancy. This policy heavily incentivized the use of coal for indoor heating on the Northern side of the river, while discouraging it on the Southern side. Central to identifying the causal effect, was the issue that life expectancy is endogenous to air pollution. Wealthier families that can afford better health care may have self selected into the Southern side of the river. Because of the assumption that living North of the river is correlated with pollution caused by coal heating, the authors were able to use the geographical location of observations as an instrument. This allowed the authors to create a regression discontinuity design to understand the causal effect of airborne particulate matter on life expectancy.
As a last resort when a researcher is unable to obtain a valid identification assumption, a _selection on observables_ quasi-experimental approach may be used. In this situation the outcome of the treatment is not suspected to be independent of the treatment itself \(Y_{i}(1),Y_{i}(0)\leavevmode\ \hbox{\small 1\kern-3.8pt\normalsize 1}\kern-3.8ptD_{i}\). Thus, finding the causal effect requires the the addition of controls. This is known as the Conditional Independence Assumption \(Y_{i}(1),Y_{i}(0)\leavevmode\ \hbox{\small 1\kern-3.8pt\normalsize 1}\kern-3.8ptD_{i} \leavevmode\ \hbox{\small 1\kern-3.8pt\normalsize 1}\kern-3.8ptX_{i}\). Controls may be implemented either by matching methods, or if the researcher believes they know the functional form of the synthetic control, they may implement a control \(X_{i}\) directly into the regression (5).
\[Y_{i}=\alpha+\tau D_{i}+\gamma X_{i}+\epsilon_{i} \tag{5}\]
This brief overview demonstrates how experimental and quasi-experimental research designs, in combination with instrumental variables, allow researchers to isolate the average causal effect \(\tau\) of a treatment variable \(D_{i}\) on an outcome \(Y_{i}\). In the next section we will provide a brief overview of how prediction modeling works, before discussing the practical problems of implementing both types of these models.
### Prediction
In contrast to causal inference the prediction setting is solely concerned with the model's ability to estimate the outcome \(Y_{i}\). Because of the disregard for causality, researchers are willing to accept the use of correlative variables if they have predictive power.
The methodology of a statistical engineer was almost indistinguishable between prediction and causal inference tasks up until the early 2000s. In [B01], Breiman
details how the approach to prediction was traditionally through _data modeling_. Through observations, engineers deduced the functional form of the relationship they were approximating, selected a parametric model with the correct form, and trained the model until they calculated parameters that reliably estimated the outcome variable. The proliferation of ridge regression exemplifies the data modeling approach. In order to prevent overfitting, a central problem in prediction, researchers simply added a penalty term to the linear regression model.
\[\min_{\beta^{R}}\sum_{i=1}^{n}(Y_{i}-\hat{Y}_{i})^{2}+\lambda\sum_{j=1}^{p} \beta_{j}^{2} \tag{6}\]
Although functionally different from (1), the concepts used in ridge regression prediction (6) are extremely similar to those in causal inference. Training this model returns parameters that minimize the squared error of a multiple linear regression model \(\hat{Y}_{i}\), whose functional form was decided _a priori_ to training. Parametric models such as lasso regression were ubiquitous throughout the data modeling era of prediction. By the early 2000s however, the prediction community began its shift to _algorithmic models_. In [1], Breiman writes how algorithmic modeling did not come from the statistics community but rather from a new discipline called machine learning.
In contrast to data modeling, the learners used in algorithmic modeling do not have strict parameterization. Algorithmic models were designed to allow the learner to find the functional form of the relationship it is approximating. The use of highly flexible algorithms that can determine their own functional form, in combination with a willingness to use correlative instead of causal data, resulted in the deductive process of causal inference and data modeling being stripped out in prediction. Many of the state of the art prediction models such as OpenAI's GPT-4 have hundreds of billions of parameters. This is in stark contrast to the causal inference models used by econometricians, which rarely have more than a couple thousand parameters in the most extreme cases of panel data and fixed effects models.
Algorithmic models include neural networks, k-nearest neighbors, decisions trees, and a subset of support vector machines. Although these models are more flexible and can approximate more complicated relationships, they are not without downsides. The training process for algorithmic models is extremely compute heavy and inefficient. This is because the complexity of the models makes them non-differentiable. As a result, models must estimate parameters using optimization algorithms such as gradient descent. These algorithms solve for partial derivatives and use the resulting vector to step the parameters towards a local minima. This process is not only time and resource intensive, but can cause the model to become stuck in sub-optimal minima. Algorithmic models also require much more training data as their flexibility makes them prone
to over-fitting. However, these problems are typically insignificant relative to algorithmic models performance over data models.
This overview demonstrates that current machine learning algorithms allow researchers to build highly predictive models as the models learn the functional form of the relationship themselves. In the next section, we discuss the practical problems of implementing statistical learning in both the causal inference and prediction setting.
## 3 Universal Problems of Statistical Learning
### Internal vs. External Validity
Statistical inference is considered to have _internal validity_ if the results are valid for the sample population. In the causal inference setting, this would mean that the coefficient \(\hat{\beta_{x}}\) is equivalent to the average causal effect of \(\hat{x}\) on \(\hat{y}\) in the sample. In the prediction setting, internal validity would mean that when the model is given the observed variables for \(y_{i}\), its parameters return an estimate \(\hat{y_{i}}\) that is close to the actual outcome \(y_{i}\).
On the other hand, statistical inference is only considered to have _external validity_ if the model can generalize to a broader population. In causal inference, this means that the treatment effect of \(x\) on \(y\) is applicable to populations and settings outside of the sample. The reason that external validity is extremely difficult to obtain in causal inference, is because of how a different setting introduces new sets of parameters and factors. Furthermore, causal inference typically focuses on cardinal rather than ordinal effects. This means that for a model to have external validity, the underlying distribution of the new population must be equivalent to the sample population it was modeled upon. For a prediction model to have external validity, it must be able to predict the outcome \(y_{i}\) given a vector of variables it was not trained on.
The reason external validity is so hard to achieve in causal inference, is because participants are often not representative of the broader population. Even in experimental designs such as randomized controlled trial, there are often many problems of finding suitable samples. In [10] the authors surveyed a number of RCT primary care experiments, and found that participants in clinical studies were often not representative in actual primary care settings. This is problematic because clinicians must have confidence that a drug will have the same effect on a patient, as it did on the participants in clinical trials. If there is significant heterogeneity between a patient versus those participating in clinical trials, it is likely that the results will be different.
The problem of external validity in the causal inference setting is only further exacerbated by an increasing use of quasi-experimental methods. Recall the selection on unobservables and selection on observables experiments discussed in section 2.3. These quasi-experimental designs rely on natural data generating
processes as opposed to true randomization. As a result, the covariates are specific to the geographical area that was studied, and often do not translate to broader populations. When assumptions in these experiments are further relaxed, such as non-compliance, external validity is further broken down by methods that attempt to address such issues.
This problem has caused the value of empirical economic experiments to be hotly debated. The notion of a local average treatment effect (LATE) was introduced to highlight how effects in these studies are often "local" to the subsample studied and can rarely generalize to a broader population. In [10] Guido W. Imbens provides an overview of how many notable theoretical economists struggle to place value on such estimates. Imbens writes that depending on the setting and subsample studied, there is significant heterogeneity in the estimands of empirical research. As a result a single estimate is unlikely to be useful for informing policy. Instead, researchers can obtain informative estimates by combining several studies based on multiple populations and settings. Although researchers are often unable to find the true population mean, the collection of multiple local average treatment effects allows for a conglomerate of experiments with individually weak external validity, to provide value to policy makers and researchers.
External validity in the prediction setting is very similar and yet very different to causal inference. Like causal inference, prediction faces the problem of gathering a training set that has the same distribution as the broader population. If a learner is trained on data that is unrepresentative it will perform poorly. However, this is not the only problem of prediction. Researchers must also be aware of underfitting and overfitting.
Machine learning algorithms are highly flexible and can fit non-parametric relationships in high dimensional spaces. Without restraining this flexibility, the algorithm will simply learn to interpolate every observation it is trained on. This phenomenon is known as overfitting, and is problematic because it results in the model learning the statistical noise of the relationship it is approximating, rather than the underlying relationship itself. When a model overfits it performs extremely well in-sample, but extremely poorly out-of-sample. In contrast when the model's flexibility becomes too restricted, the model underfits the relationship. For example consider a parametric model, such as a linear regression, attempting to approximate a non-parametric relationship.
To guard against the problem of overfitting and underfitting, machine learning algorithms rely on two approaches. The first is regularization which introduces a cost on the flexibility of a model. This can be an explicit cost such as lasso and ridge regularization. On the other hand, regularization can happen via randomization such as in drop out or feature subsampling. To achieve the correct amount of regularization, researchers will partition their data set so that they may _cross validate_. What this means is that they train and adjust the amount of regularization on one portion of the data, and then test it out-of-sample on the remaining data.
These examples highlight the problem of external validity in causal inference and prediction. Although there are methods to address generalization in both settings, it is important to not place too much value on any single model, even if it has high internal validity. By creating models on different sub populations, and observing the heterogeneity in results, we are better able to approximate the population mean.
### The Importance of Context
Statistical learning methods also face severe interpretability problems if the correct preprocessing and deduction processes are not undertaken. Although the exact effects of these problems differs in magnitude between causal inference and prediction, the problem can be boiled down to the simple fact that without context and interpretation, data is meaningless.
A commonly used example to explain the importance of context in statistical analysis, is a car driving down a road with varying slope. Consider a driver who is going down this hill and wants to maintain a constant speed. If the driver is skilled, pressing the brake will have no correlation with the speed despite the fact that it has a causal effect on the constant speed.
An observer with no knowledge of the causal relationships between these variables may conclude that there is no relationship between the brake and the speed. In causal inference this is known as endogeneity, where the treatment effect of the break is correlated with the error term. If the observer attempted to regress the speed on the break pressure and hill height, they would encounter multicollinearity and receive biased estimates. This example demonstrates how understanding a systems mechanics is vital to properly interpreting causal inference estimates.
Context is equally important in prediction. The example of the car and the hill demonstrates that a lack of correlation does not mean a lack of causation. As machine learning algorithms are trained on correlations, it is important to note that a learner with poor predictive accuracy does not mean that causal relationships are not present. Alternatively, just because a learner has high predictive accuracy does not mean that the correlation is not spurious.
Regardless of whether one is in the causal inference or prediction setting, the use of statistical learning without context is perilous. Simply throwing data at an algorithm will not make up for an improper understanding of how a system works.
### Curse of Dimensionality
All statistical learners face the curse of dimensionality. This problem encapsulates the challenges that arise from dealing with high-dimensional data. In causal inference, the curse of dimensionality applies when the model has many terms (also known as dimensions). Consider the following models:
\[Y_{i}\sim\alpha+\tau D_{i}+\beta_{1}X_{1}+\beta_{2}X_{2}+...+\beta_{100}X_{100}+ \epsilon_{i} \tag{8}\]
The parameters of an ordinary least squares regression are only differentiable when the model has fewer terms than data observations. Model (6) only requires three observations to find an estimate for \(\tau\) while model (7) requires one hundred three observations. However, this is not the only problem. Increased complexity of the model causes data to become sparse which may cause biased estimates. This is especially prevalent when the model must control for observable characteristics.
Matching involves pairing treatment observations to control observations based on whether they share the same covariates. If the treatment and control observations share the exact same observable characteristics, we can assume \(Y_{i}(1),Y_{i}(0)\ \mathbbm{1}\)\(D_{i}\ |\ X_{i}\). However as more and more dimensions are added, it becomes increasingly unlikely exact matches will be found. When this happens researchers must result to non-exact matches and use approaches such as k-nearest neighbors to find approximate matches. Alternatively, controls may be implemented using propensity score matching. In this method observations are matched based on whether they have the same conditional probability of receiving treatment. Although this approach allows researchers to reduce the amount of dimensions of their model, it introduces bias, can skew estimates, and introduces additional causality assumptions. Implementing the controls directly into the regression such as in (4) does not solve this problem either as it adds even more terms to the model.
In prediction, the curse of dimensionality presents a different set of problems. Recall the problem of finding a local minima discussed in section 2.4. Adding dimensions to your model makes computing the partial derivatives more and more compute heavy. Furthermore, adding more dimensions to your model, while retaining the same amount of observations, makes it more likely that your model will over fit. Dimensionality reduction techniques such as principal component analysis (PCA) and uniform manifold approximation and projection (UMAP) were invented to overcome these problems, however introduce a host of their own problems.
In summary, the curse of dimensionality is a universal problem of statistical learning. Understanding how potential solutions to the curse of dimensionality biases results is paramount to building useful models.
Problems Unique to Causal Inference
### Small Problems, Small Significance
Causal inference heavily constrains economists to the type of problems that may be studied. Economists and other social scientists are concerned with understanding social phenomena. These problems are often enormous in scope, and make it difficult to run experiments on. For example, what is the effect of an authoritarian versus democratic government on the GDP of a country?
The scope of these questions have been a large reason that most economic work has historically been concerned with structural models rather than empirical ones. In [10] while advocating for the continued use of econometric methods, Imbens presents the arguments raised in [D09] and [H+09]. Deaton, Heckman, and Urzura state that economic research has become excessively experimental. They advance their argument by presenting the difficulty of interpreting local average treatment effects, but also state that empirical methods hand-cuff economists from answering interesting economic questions. In [H+09] the authors write, "Proponents of [instrumental variable approaches] are less ambitious in the range of questions they seek to answer. The method often gains precision by asking narrower questions. The problem that plagues the IV approach is that the questions it answers are usually defined as probability limits of estimators and not by well formulated economic problems".
These necessary conditions restrict researchers using statistical learning methods from looking at broader questions. Distilling a question of interest into an independent and dependent variable, and then isolating exogenous variation of a treatment variable greatly constrains the set of all available problems. As a result, empirical economic work has been forced to focus on partial equilibriums that struggle to find a place in broader economic frameworks.
### Hypothesis Testing
Causal inference has additionally faced scrutiny because of its abuse of hypothesis testing. When determining the validity of an estimated parameter, researchers calculate P-values. These allow researchers to obtain a statistical measure of the likelihood of an observed outcome, assuming the null hypothesis is false.
In [G+13] Gelman and Loken give an overview of the statistical crisis in science. In the pursuit of publishing research, academics have come up with ingenious ways of producing statistical significance even when it is not present. This practice has given rise to the colloquialisms "P-hacking" and "hypothesis fishing". In [B+16] the authors examined top economic journals and found an unusual distribution of statistical significant results. This is attributed to researchers who artificially inflate the statistical significance of their research. This has not only raised many questions about the credibility of the researchers involved, but also of empirical economics as a whole.
The ability to examine data in many different ways makes it possible to achieve statistical significance from pure noise. Whether it be by excluding observations or selecting certain interactions in the regression model, a statistically significant hypothesis can be fit to the data instead of the other way around. This problem is especially prevalent in research that relies on instrumental variables. As the exclusion restriction assumption is not testable, researchers may try a variety of instrumental variables until they find one that produces a statistically significant estimate.
In response to this, many have advocated for researchers to make it easier for others to reproduce their work. By providing data sets as well as code for their methods, scientists hope to solve the statistical crisis. Alternatively, others have proposed to completely eliminate the Frequentist mindset of statistical significance. They call for researchers to report the posterior probabilities of obtaining a parameter, which may be calculated via Bayesian inference. This is in stark contrast to statistical significance which reports the probability of the data given a parameter.
## 5 Problems Unique to Prediction
### Correlation is Not Causation
Prediction models gain their power by utilizing variables that have strong correlative power. This is problematic because such variables may lack any explainable causal relationship with the outcome variable. This approach allows researchers to build extremely powerful models, but also makes it difficult to interpret the results they produce. This is because exploiting correlative relationships to build highly predictive models is sound only as long as the distribution of data does not shift. Simply making a prediction does not affect a system or the distribution of data that the prediction was pulled from. However the purpose of making predictions is to make better decisions which themselves may affect the system. Thus, making a decision based on a prediction can be circular in some settings. It is often difficult to discern whether a decision will have a causal effect on the the prediction it is predicated upon. Although this problem is not applicable to all situations, it demonstrates the issue of relying on associative rather than causal relationships.
The reliance on correlative relationships has also caused many to question the scientific value of prediction models. This question has especially been of interest to the Linguistics and Natural Language Processing communities. In a New York Times Op-Ed titled _The False Promise of ChatGPT_, Noam Chomsky writes "[ChatGPT is] a lumbering statistical engine for pattern matching, goring on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question" [C+23]. Chomsky's point is that large language models such as ChatGPT are able to generate realistic-sounding language but have no understanding of what it is
producing. This sentiment has been echoed by other linguists such as Emily M. Bender, who coined the term'stochastic parrot' in reference to LLMs.
On the other hand, computer scientists such as Peter Norvig point to the overwhelmingly dominant success of statistical models in industry use cases. In _On Chomsky and the Two Cultures of Statistical Learning_ Norvig writes, "the intellectual offspring of [Claude] Shannon's theory create several trillion dollars of revenue each year, while the offspring of Chomsky's theories generate well under a billion". Norvig is referencing Claude Shannon's paper _A Mathematical Theory of Communication_ which introduced the idea of modeling language using Markov (Markkoff) chains, a type of statistical model. Shannon's work laid the foundation for the statistical language models widely used in search engines, speech recognition, machine translation, and other applications. On the other hand, the language models developed by computational linguistics have seen very limited adoption. Rather than using probability, these models are based on logical rules and semantic parsing. In contrast to'stochastic parrots' these models are intended to have an understanding of what it is producing. Norvig's point illustrates the harsh truth. Although interpreting results generated from correlative variables may be difficult, probability-based models clearly outperform all other options. Despite the fact that logic-based models are intended to have a causal understanding of semantics, they simply can not compete with statistical methods.
### Deep Learning
Deep learning, a subset of machine learning networks, compounds the difficulty of interpreting correlative variables by transforming and interacting them in an extremely complicated manner. Deep learning utilizes neural networks with many hidden layers. Almost every notable artificial intelligence application including ChatGPT, AlphaGO, and Tesla's autopilot, are powered by these networks. Their power comes from the fact that each layer can capture and represent data in a different way. This lets the model build a hierarchy of transformations on the inputs, and capture even the most complex patterns.
Although highly effective, this hierarchy quickly becomes inaccessible to the human observer simply due to its sheer complexity. Attempting to understand how the model builds the hierarchy during the training process is equally frivolous. Neural networks don't work properly unless they are initialized with random weights and balances. This means that understanding how the inputs are represented and transformed from layer to layer, is typically impossible even from the start of the training process.
Researchers have made many attempts at advancing techniques that let us understand how each layer of a neural networks represents and transform inputs. Seminal work by Neel Nanda et al. [N+22] trained a single-layer transformer neural network to perform the math task \((a+b)\%p\). Nanda et al. found that at first the model simply memorized all of the training examples it was given. This
is not uncommon, and is an example of overfitting. However as training continued, the transformer stopped memorizing the examples and instead started to solve the task by using a complicated algorithm. Rather than learning the addition and modulo operators, the transformer built an algorithm based on trigonometric identities and discrete Fourier transforms. The development of this algorithm may be partly due to the fact that the hyperbolic functions used in the activation of each neuron are similar to waves. The results of this paper demonstrate how neural networks are prone to learning how to solve tasks in methods that are completely unintuitive to humans. Attempting to understand how a single layer transformer learned took the authors many weeks. As most neural networks used in production have orders of magnitude more layers, it becomes clear that understanding exactly how neural networks are representing data is most likely impossible.
For most tasks, it can be argued that understanding how a neural network works is not important. However for high-risk applications, a lack of understanding can be disastrous. Consider the neural networks used in high-risk computer vision tasks such as Tesla's autopilot. Understanding how the model recognizes a person could lead to the difference in life or death. Even worse is if machine learning is used in tasks for things such as identifying criminals. A paper titled _Automated Inference on Criminality Using Face Images_ built a model that predicted, with an accuracy of 89.5%, whether facial features could determine whether someone was a criminal [W+17]. Although the positive and negative instances used to train the model looked identical to the human observer, the learner picked up biases in both groups. In one group, most of the samples wore white collars while the other group did not. This may have reflected the light onto the samples faces, which while not observable to the naked eye, was picked up by the learner. Critics of the paper also argued that the criminal photos may have also contained micro-expressions due to the sample's recent incarceration.
Computer scientists have long been aware of the expression 'garbage in, garbage out'. However, the difficulty in understanding how deep learning networks represent data makes it much more difficult to determine whether training data really does contain biases. Without an understanding of how the data is being represented, racist, sexist, or other biases can make their way into production systems and have extreme consequences. Without a clear understanding of how data is represented in neural networks, it becomes difficult to understand the robustness of the model, and to what degree we can trust the predictions they produce.
Conclusion
Statistical learning has surged in importance over the past few years and become one of our most pivotal tools in our decision-making processes. Policymakers, pharmaceutical scientists, and advertisers have progressively embraced the utilization of causal inference to make informed decisions. Prediction continues to play a more and more important role in our lives as well. Deep learning has made immense breakthroughs in predicting protein structures, self-driving cars, and natural language processing. Other machine learning methods such as decision trees, have helped develop fraud detection systems and credit risk systems. Statistical learning's growing importance in our lives shows no signs of slowing.
Chris Anderson's _The Death of Theory_ envisions a future where we have no need for the scientific method. While the expanding influence of statistical learning may make it seem like this future is not far away, it is important to realize that these methods are not a silver bullet. Problems such as external validity, interpretation of independent variables, and the curse of dimensionality exist for both causal inference and prediction models. Additionally, both settings have an additional set of unique problems. It is crucial to acknowledge how these issues may prevent correct implementation of these methods, and how they may effect our interpretation of their results. Recognizing this allows us to create better yet still imperfect models. Additionally, staying cognizant of these issues allows us to make better value judgements of the results produced by statistical models, and use these models to their fullest extent. |
2305.11534 | Developing Multi-Agent Systems with Degrees of Neuro-Symbolic
Integration [A Position Paper] | In this short position paper we highlight our ongoing work on verifiable
heterogeneous multi-agent systems and, in particular, the complex (and often
non-functional) issues that impact the choice of structure within each agent. | Louise Dennis, Marie Farrell, Michael Fisher | 2023-05-19T08:58:26Z | http://arxiv.org/abs/2305.11534v1 | # Developing Multi-Agent Systems with
###### Abstract.
In this short position paper we highlight our ongoing work on verifiable heterogeneous multi-agent systems and, in particular, the complex (and often non-functional) issues that impact the choice of structure within each agent.
Louise Dennis Marie Farrell Michael Fisher, Department of Computer Science, University of Manchester, UK
## 1. Introduction
A traditional way to develop multi-agent systems is to take some overall goal for the system, say \(G\), and decompose this into a set of tasks (which could be any of actions, updates, sub-goals, etc) (Garrell and Fisher, 2010). These tasks are then undertaken by a set of agents, for simplicity one task per agent. In devising symbolic teams, each agent is symbolic, discharging its task appropriately (see Figure 1 (left)) giving us a collection of behaviour specifications (usually in symbolic logic) for the agents which can then be combined appropriately to provide the overall description in the form of a symbolic logic formula.
An alternative approach would be to decompose our collective goal into a set of 'neural' agents; see Figure 1 (right). These agents could be any of (deep) learning components, adaptive control components, optimisation components, etc (Beng et al., 2016). Again, tasks would be implemented with these agents and the result would be combined to give an overall outcome. The difference here is that the behaviour of an individual 'neural' component is described using some stochastic notation, either logical or differential equations.
In reality, however, our multi-agent system is unlikely to be wholly symbolic or wholly neural and so we will get a more heterogeneous multi-agent system such as in Figure 2.
## 2. Position
The particular issue that we are concerned with here is how, and why, we choose one particular type of agent over another when constructing our agent team. Symbolic and 'neural' agents generally have quite distinct properties1
Footnote 1: For the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), A. Ricci, W. York, N. Agmon, R. An (eds.), May 29 – June 2, 2003, London, United Kingdom. © 2003 International Foundation for Autonomous Agents and Multiagent Systems (www.filmans.org). All rights reserved.
* neural -- fast, able to cope with vast amounts of data/input, opaque/stochastic, -- symbolic -- logical, transparent, explainable, verifiable, much slower, may be overwhelmed by data, --
So, our aim is to capture, in the goal specification \(\mathbf{G}\), key aspects that need to be considered/achieved relating to this goal. Specifically:
1. Speed -- Although this might relate to the Multi-Agent System (MAS) as a whole, it is more likely to refer to how quickly a particular task (a decision, recognition, or action) should be completed;
2. Transparency -- Again, while this might refer to the whole MAS, it will more usually relate to specific decisions and actions, exposing exactly what the agents do and why they do it.
Note: This transparency then forms a basis for both explainability and verifiability (Krause et al., 2017).
3. Accuracy -- This refers to decisions that are taken on the basis of potentially inaccurate measurements (e.g. sensor readings, images, etc.). Accuracy-related properties will likely be captured using probabilistic reasoning and/or statistical measures.
Consequently, in specifying the system goal, \(\mathbf{G}\), the above aspects must be highlighted. Then, as we decompose this goal into tasks for specific agents, we can assess how well each agent not only can achieve its task but can satisfy the additional speed, transparency, and accuracy requirements. This will surely affect how the MAS is composed and, in many practical cases, we will develop a heterogeneous multi-agent system as in Figure 2.
Each agent will be selected to satisfy its speed, transparency, or accuracy requirement and each will have a description of its behaviour. Typically, symbolic agents will have logical descriptions while 'neural' agents will have stochastic descriptions (though this may not always be the case).
**Point 1:** we highlight the speed/transparency/accuracy requirements in the MAS development process, attempting to ensure that the most appropriate style of agent is utilised.
While the above approach can provide us with a description of the distinct agent recommendations, we also need to check that these requirements are actually being achieved. We do this by also adding _runtime monitors_ to check each agent's behaviour and, in particular, whether it is matching the speed/transparency/accuracy requirements. We check speed, in the obvious way by timing progress, and check transparency by interrogating the agent structures to ensure that "what" is done and "why" are explicit.
**Point 2:** since agents may well act sub-optimally, we utilise runtime verification (Krause et al., 2017) to assess how well each agent is
Figure 1: Decomposing goals into collations of symbolic (left) and neural (right) agents
Figure 2: Decomposing goals into heterogeneous collations of agents
actually matching its speed/transparency/accuracy requirements.
## 3. Examples
Autonomous VehicleWe consider an autonomous vehicle to actually be a MAS, comprising a wide range of communicating components. We want sensing and recognition to be as fast as possible and so speed is the main criterion here. Although we may veer towards symbolic agents if we require some explanation of recognition, it is much more likely that these aspects will be implemented as some learning component. On the other hand, the decision-making component(s) that handle the high-level decisions that a human driver/pilot/operator used to make will very likely be symbolic agent(s). This facilitates verification, for example against human-level rules, and explanations, for example to authorities or driver who is taking control (Bowuho et al., 2019) Lower level actions will undoubtedly be implemented in some adaptive or 'neural' control component.
An interesting element concerns obstacle avoidance. If this is required to be very quick, for example in an emergency situation, then symbolic agents will be too slow. However, if the vehicle has some time to "think" then a symbolic agent making a reasoned and explainable decision might be more appropriate. It might well be that both aspects are in the system, with the level of urgency dictating which agent to invoke.
Tele-Operated RobotIn this situation, speed, accuracy and compliance with the human operator's commands will be the most important aspects. Consequently, very few of the agents need be symbolic. Perhaps the few that are will be concerned with situation awareness (for example, explaining to the distant operator what is happening in the robot's context) or fault diagnosis (for example, explaining what the robot believes has occurred).
Social/Healthcare TeamsThese very complex teams comprise a wide range of agents: some are human (healthcare specialists, etc); some are low-level sensors (detecting movement, temperature, etc); and many are AI-based agents somewhere in between. We could have:
* simple cleaning robots, where accuracy (and safety) is central;
* health monitoring software, where recognising problems (quickly, accurately) is important but so is explaining to, and interacting with, humans in the environment or team (transparency);
* social robots, able to have complex dialogue with humans and able to explain and expand on topics (transparency); and so on.
In many cases, the strong reliability/verification of the agent, often based on its transparency, will be important.
CybersecurityIn this case, a cybersecurity system may use symbolic representations of known attack patterns (based on experience and/or threat analysis) and neural networks to detect previously unknown attacks by analyzing network traffic and system logs. It is especially difficult to distinguish an attack from a software error/failure (Bowuho et al., 2019) so the complex computations performed here by the neural components is beneficial. Here, transparency and speed are particularly important: speed to identify attacks quickly so that mitigations might be invoked and transparency to provide reasoning behind why some behaviours were identified as potential attacks. The latter is useful to build up a knowledge base of why and how previously unidentified attacks occur. A similar approach could be used in other fields such as natural language processing.
## 4. Summary
Decomposing an overall goal into a set of agents that together can achieve this goal, is well understood. We want to expose additional constraints, particularly those of speed, accuracy, and transparency, and then use these in the decomposition/development process to help us select the variety of agent that is most suitable.
We recognise that, in realistic scenarios, "things will go wrong" and so we need to provide dynamic checks that the agents are living up to their requirements. We do this by inserting a range of runtime monitors to assess both the speed and transparency of the agents.
The problems related to this involve
* describing speed, accuracy, and transparency in a flexible but precise way,
* developing effective, but not intrusive, runtime monitors to assess these, and
* extending towards other properties such as flexibility, being able to cope with vast amounts of data, etc.
Then, within these complexities we aim to ensure strong verification where appropriate, so supporting the assurance and trustworthiness of complex autonomous teams (Bourbooh et al., 2019; Bourbooh et al., 2019; Bourbooh et al., 2019)
|
2306.08535 | Cyclic proofs for arithmetical inductive definitions | We investigate the cyclic proof theory of extensions of Peano Arithmetic by
(finitely iterated) inductive definitions. Such theories are essential to proof
theoretic analyses of certain `impredicative' theories; moreover, our cyclic
systems naturally subsume Simpson's Cyclic Arithmetic.
Our main result is that cyclic and inductive systems for arithmetical
inductive definitions are equally powerful. We conduct a metamathematical
argument, formalising the soundness of cyclic proofs within second-order
arithmetic by a form of induction on closure ordinals, thence appealing to
conservativity results. This approach is inspired by those of Simpson and Das
for Cyclic Arithmetic, however we must further address a difficulty: the
closure ordinals of our inductive definitions (around Church-Kleene) far exceed
the proof theoretic ordinal of the appropriate metatheory (around
Bachmann-Howard), so explicit induction on their notations is not possible. For
this reason, we rather rely on formalisation of the theory of (recursive)
ordinals within second-order arithmetic. | Anupam Das, Lukas Melgaard | 2023-06-14T14:34:01Z | http://arxiv.org/abs/2306.08535v1 | # Cyclic proofs for arithmetical inductive definitions
###### Abstract
We investigate the cyclic proof theory of extensions of Peano Arithmetic by (finitely iterated) inductive definitions. Such theories are essential to proof theoretic analyses of certain 'impredicative' theories; moreover, our cyclic systems naturally subsume Simpson's Cyclic Arithmetic.
Our main result is that cyclic and inductive systems for arithmetical inductive definitions are equally powerful. We conduct a metamathematical argument, formalising the soundness of cyclic proofs within second-order arithmetic by a form of induction on closure ordinals, thence appealing to conservativity results. This approach is inspired by those of Simpson and Das for Cyclic Arithmetic, however we must further address a difficulty: the closure ordinals of our inductive definitions (around Church-Kleene) far exceed the proof theoretic ordinal of the appropriate metatheory (around Bachmann-Howard), so explicit induction on their notations is not possible. For this reason, we rather rely on formalisation of the theory of (recursive) ordinals within second-order arithmetic.
cyclic proofs, inductive definitions, arithmetic, fixed points, proof theory 10.4230/LIPIcs.FSCD.2023.23This work was supported by a UKRI Future Leaders Fellowship, 'Structure vs. Invariants in Proofs', project reference MR/S035540/1.
The authors would like to thank Graham Leigh and Colin Riba for several interesting conversations about (arithmetical) inductive definitions.
## 1 Introduction
_Cyclic proof theory_ studies 'proofs' whose underlying dependency graph may not be well-founded, but are nonetheless regular. Soundness for such systems is controlled by an appropriate 'correctness criterion', usually an \(\omega\)-regular property on infinite branches, defined at the level of formula ancestry. Cyclic proofs are a relatively recent development in proof theory (and related areas), with origins in seminal work of Niwinski and Walukiewicz for the modal \(\mu\)-calculus [18]. Inspired by that work, Brotherston and Simpson studied the extension of first-order logic by (ordinary) inductive definitions [7, 9, 10]. More recently, Simpson has proposed _Cyclic Arithmetic_ (\(\mathsf{CA}\)), an adaptation of usual Peano Arithmetic (\(\mathsf{PA}\)) to the cyclic setting [21].
One of the recurring themes of cyclic proof theory is the capacity for non-wellfounded reasoning to simulate inductive arguments with apparently simpler (and often _analytic_) invariants. Indeed this difference in expressivity has been made formal in various settings [3, 6] and has been exploited in implementations [8, 20, 26, 27]. Within the setting of arithmetic, we have a more nuanced picture: while Simpson showed that \(\mathsf{CA}\) and \(\mathsf{PA}\) are equivalent as theories [21], Das has shown that indeed the logical complexity of invariants required in \(\mathsf{CA}\) is indeed strictly simpler than in \(\mathsf{PA}\)[11]. These arguments typically follow a metamathematical approach, formalising the soundness argument of cyclic proofs themselves within arithmetic and relying on a _reflection_ principle (though there are alternative approaches too, cf. [4, 5]). Due to the infinitary nature of non-wellfounded proofs and the complexity of
correctness, such arguments require a further detour through the reverse mathematics of \(\omega\)-automata theory, cf. [14, 15].
In this work we somewhat bridge the aforementioned traditions in the \(\mu\)-calculus, first-order logic with inductive definitions, and arithmetic. In particular we present a cyclic proof system \(\mathsf{CID}_{<\omega}\) over the language of (finitely iterated) arithmetical inductive definitions: the closure of the language of arithmetic under formation of (non-parametrised) fixed points. Such languages form the basis of important systems in proof theory, in particular \(\mathsf{ID}_{<\omega}\), which allows for an ordinal analysis of impredicative second-order theories such as \(\Pi^{1}_{1}\mathsf{-CA}_{0}\). Our cyclic system \(\mathsf{CID}_{<\omega}\) over this language is essentially recovered by directly importing analogous definitions from the \(\mu\)-calculus and first-order inductive definitions.
Our main result is the equivalence between \(\mathsf{CID}_{<\omega}\) and its inductive counterpart \(\mathsf{ID}_{<\omega}\). While subsuming inductive proofs by cyclic proofs is a routine construction, the converse direction constitutes a generalisation of ideas from the setting of arithmetic, cf. [21, 11]. One particular nuance here is that the soundness of cyclic proofs with forms of inductive definitions typically reduces to a form of induction on the corresponding _closure ordinals_. For the setting of even unnested inductive definitions, \(\mathsf{ID}_{1}\), closure ordinals already exhaust all the recursive ordinals (up to Church-Kleene, \(\omega_{1}^{\mathrm{CK}}\)). On the other hand the proof theoretical ordinal of \(\mathsf{ID}_{1}\) is only the Bachmann-Howard ordinal, so we cannot formalise the required induction principle on explicit ordinal notations. Instead we rely on a (known) formalisation of (recursive) ordinal theory _within_ appropriate fragments of second-order arithmetic.
This paper is structured as follows. In Section 2 we recall the syntax and semantics of first-order logic with inductive definitions, as well as the Knaster-Tarski fixed point theorem specialised to \(\mathcal{P}(\mathbb{N})\). In Section 3 we recall \(\mathsf{PA}\) and \(\mathsf{ID}_{<\omega}\), recast in the sequent calculus to facilitate the definition of \(\mathsf{CID}_{<\omega}\). The latter is presented in Section 4 where we also show its simulation of \(\mathsf{ID}_{<\omega}\). In Section 5 we show that the system \(\mathsf{CID}_{<\omega}\) is indeed sound for the standard model. In Sections 6 and 7 we formalise aspects of inductive definitions, truth, order theory and fixed point theory within suitable fragments of second-order arithmetic. Finally in Section 8 we present the converse simulation, from \(\mathsf{CID}_{<\omega}\) to \(\mathsf{ID}_{<\omega}\), by essentially arithmetising the soundness argument of Section 5.
Due to space constraints, most proofs and auxiliary material are omitted.
## 2 Syntax and semantics of arithmetical inductive definitions
### First-order logic (with equality)
In this work we shall work in predicate logic over various languages, written \(\mathcal{L},\mathcal{L}^{\prime}\) etc. We write \(x,y\) etc. for (first-order) variables and \(s,t\) etc. for terms, and \(\varphi,\psi\) etc. for formulas (including equality). For later convenience, we shall write formulas in _De Morgan normal form_, with negations only in front of atomic formulas. I.e. formulas are generated from 'atomic' formulas \(P(\vec{t}),\neg P(\vec{t}),s=t,\neg s=t\) under \(\vee,\wedge,\exists,\forall\). From here we use standard abbreviations for negation and other connectives.
In order to interpret 'inductive definitions' in the next section, it will be useful to consider a variation of usual Henkin semantics that interprets (relativised) formulas as operators on a structure. Given a language \(\mathcal{L}\), We write \(\mathcal{L}(X)\) for the extension of \(\mathcal{L}\) by the fresh predicate symbol \(X\). For instance formulas of \(\mathcal{L}(X)\), where \(X\) is unary, include all those of \(\mathcal{L}\), new 'atomic' formulas of the form \(X(t)\) and \(\neg X(t)\), and are closed under usual logical operations.
Fix a language \(\mathcal{L}\) and \(\mathcal{L}\)-structure \(\mathfrak{M}\) with domain \(M\). Let \(X\) be a fresh \(k\)-ary predicate symbol and let \(\vec{x}=x_{1},\ldots,x_{l}\) be distinguished variables. Temporarily expand \(\mathcal{L}\) to include each \(a\in M\) as a constant symbol and each \(A\subseteq M^{k}\) as a predicate symbol and fix \(a^{\mathfrak{M}}:=a\)
and \(A^{\mathfrak{M}}:=A\). We interpret formulas \(\varphi(X,\vec{x})\) of \(\mathcal{L}(X)\) as functions \(\varphi^{\mathfrak{M}}:\mathcal{P}(M^{k})\to\mathcal{P}(M^{l})\) by setting \(\vec{a}\in\varphi^{\mathfrak{M}}(A)\) just if \(\mathfrak{M}\vDash\varphi[A/X][\vec{a}/\vec{x}]\).
Let us call a formula \(\varphi(X)\)_positive_ in \(X\) if it has no subformula of the form \(\neg X(\vec{t})\). The following result motivates the 'positive inductive definitions' we consider in the next section: [Positivity implies monotonicity] Let \(\mathcal{L}\), \(\mathfrak{M}\), \(X\), \(\vec{x}\) be as above. If \(\varphi\), a formula of \(\mathcal{L}(X)\), is positive in \(X\) then \(\varphi^{\mathfrak{M}}\) is monotone: \(A\subseteq B\implies\varphi^{\mathfrak{M}}(A)\subseteq\varphi^{\mathfrak{M}}(B)\).
Proof idea. By straightforward induction on the structure of \(\varphi\).
### Languages of arithmetic and (finitely iterated) inductive definitions
The _language of arithmetic (with inequality)_ is \(\mathcal{L}_{A}:=\{0,\mathsf{s},+,\times,<\}\). Here, as usual, \(0\) is a constant symbol (i.e. a \(0\)-ary function symbol), \(\mathsf{s}\) is a unary function symbol, \(+\) and \(\times\) are binary function symbols, and \(<\) is a binary relation symbol.
Throughout this paper we shall work with (certain extensions of) \(\mathcal{L}_{A}\): [(Finitely iterated) inductive definitions] \(\mathcal{L}_{<\omega}\) is the smallest language containing \(\mathcal{L}_{A}\) and closed under:
* if \(\varphi\) is a formula of \(\mathcal{L}_{<\omega}(X)\) positive in \(X\), for \(X\) a fresh unary predicate symbol, and \(x\) is a distinguished variable, then \(I_{\varphi,X,x}\) is a unary predicate symbol of \(\mathcal{L}_{<\omega}\).
Note that we only take the case that \(X\) is unary above since we can always code \(k\)-ary predicates using unary ones within arithmetic. When \(X,x\) are clear from context, we shall simply write \(I_{\varphi}\) instead of \(I_{\varphi,X,x}\). We shall also frequently suppress free variables and parameters (i.e. predicate symbols), e.g. writing interchangably \(\varphi(X,x)\) and \(\varphi\), when it is convenient and unambiguous.
Let us introduce some running examples for this work.
[Naturals, evens and odds] We define the following formulas of \(\mathcal{L}_{A}(X)\):
* \(n(X,x):=x=0\lor\exists y(X(y)\wedge x=\mathsf{s}y)\).
* \(e(X,x):=x=0\lor\exists y(X(y)\wedge x=\mathsf{s}y)\).
* \(o(X,x):=x=1\land\exists y(X(y)\wedge x=\mathsf{s}y)\) (where \(1:=\mathsf{s}0\)).
By definition \(\mathcal{L}_{<\omega}\) contains the symbols \(N:=I_{n}\), \(E:=I_{e}\) and \(O:=I_{o}\). Now, writing,
* \(m(X,x):=e(X,x)\lor(\forall y(E(y)\to X(y))\wedge x=1)\)
we also have that \(M:=I_{m}\) is a symbol of \(\mathcal{L}_{<\omega}\), by the closure property of the language.
All our theories are interpreted by the'standard model' of arithmetic \(\mathfrak{N}=(0,\mathsf{s},+,\times,<)\), which we extend to a \(\mathcal{L}_{<\omega}\)-structure by:
* \(I^{\mathfrak{M}}_{\varphi,X,x}:=\bigcap\{A\subseteq\mathbb{N}:\varphi^{ \mathfrak{M}}(A)\subseteq A\}\)
### On Knaster-Tarski: inductive definitions as fixed points
We conclude this section by making some comments about the interpretation of inductive definitions as fixed points. Let us first state a version of the well-known Knaster-Tarski theorem specialised to the setting at hand: [Knaster-Tarski on \(\mathcal{P}(\mathbb{N})\)] Let \(F:\mathcal{P}(\mathbb{N})\to\mathcal{P}(\mathbb{N})\) be monotone, i.e. \(A\subseteq B\subseteq\mathbb{N}\implies F(A)\subseteq F(B)\). Then \(F\) has a least fixed point \(\mu F\) and a greatest fixed point \(\nu F\). Moreover, we have: \(\mu F=\bigcap\{A\subseteq\mathbb{N}:F(A)\subseteq A\}\)
and \(\nu F=\bigcup\{A\subseteq\mathbb{N}:A\subseteq F(A)\}\).
We shall henceforth adopt the notation of the theorem above, writing \(\mu F\) and \(\nu F\) for the least and greatest fixed point of an operator \(F\), when they exist.
In light of Proposition 2.1 we immediately have:
\(I_{\varphi}^{\mathfrak{N}}=\mu\,\varphi^{\mathfrak{N}}\), i.e. \(I_{\varphi}^{\mathfrak{N}}\) is the least fixed point of \(\varphi^{\mathfrak{N}}:\mathcal{P}(\mathbb{N})\to\mathcal{P}(\mathbb{N})\).
[Naturalals, evens and odds: interpretation] Revisiting Example 2.3 we have:
* \(N^{\mathfrak{N}}=\mathbb{N}\)
* \(E^{\mathfrak{N}}=\mathbb{E}:=\{2n:n\in\mathbb{N}\}\)
* \(O^{\mathfrak{N}}=\mathbb{O}:=\{2n+1:n\in\mathbb{N}\}\)
It turns out that also \(M^{\mathfrak{N}}=\mathbb{N}\). While this is readily verifiable with the current definitions, we shall delay a justification of this until we have built up some more technology.
[Greatest fixed points] Thanks to negation, we can also express _greatest_ fixed points (within \(\mathcal{P}(\mathbb{N})\)), thanks to the equality \(\nu F=(\mu\lambda A(F(A^{c})^{c}))^{c}\), here writing \(\cdot^{c}\) for the complement of a set in \(\mathcal{P}(\mathbb{N})\) and \(\lambda\) for abstraction. Syntactically we can write \(J_{\varphi(X,x)}:=\neg I_{\neg\varphi(\neg X,x)}\), denoting the greatest fixed point of the operator \(\varphi^{\mathfrak{N}}\), allowing us to express forms of 'codata' in \(\mathcal{L}_{<\omega}\).
For instance, let us assume a standard pairing bijection \(\mathbb{N}\times\mathbb{N}\to\mathbb{N}\), write \(\mathfrak{p}_{0}\) and \(\mathfrak{p}_{1}\) for the left and right inverses respectively. Write \(\eta(X,x):=E(\mathfrak{p}_{0}x)\wedge X(\mathfrak{p}_{1}x)\). Then the greatest fixed point \(J_{\eta}^{\mathfrak{N}}\) is just the set of finitely supported streams of even numbers.
As another example, given formulas \(\varphi(x,y),\psi(x)\) we can write \([\varphi^{*}]\psi:=J_{\chi(X,x)}\) for \(\chi(X,x):=\psi(x)\wedge\forall y(\varphi(x,y)\to X(y))\). Now, construing \(\varphi^{\mathfrak{N}}\) as a binary relation on natural numbers, we have that \(([\varphi^{*}]\psi)^{\mathfrak{N}}\) consists of all those points from which every (finite) \(\varphi^{\mathfrak{N}}\)-path leads to a point in \(\psi^{\mathfrak{N}}\).
It is well known that least (and greatest) fixed points can be approximated 'from below' (and 'from above', respectively) via the notion of _(ordinal) approximant_. For any \(F:\mathcal{P}(\mathbb{N})\to\mathcal{P}(\mathbb{N})\), let us define by transfinite induction,
\[\begin{array}{rcl}F^{0}(A)&:=&A\\ F^{\alpha+1}(A)&:=&F(F^{\alpha}(A))\\ F^{\lambda}(A)&:=&\bigcup\limits_{\alpha<\lambda}F^{\alpha}(A)&\text{if $\lambda$ is a limit ordinal}\end{array} \tag{1}\]
By appealing to the transfinite pigeonhole principle we have:
[For \(F:\mathcal{P}(\mathbb{N})\to\mathcal{P}(\mathbb{N})\) monotone, there is an ordinal \(\alpha\) s.t. \(\mu F=F^{\alpha}(\varnothing)\).]
Indeed we may assume that such \(\alpha\) is _countable_ and, by the well-ordering principle, there is indeed a _least_ such \(\alpha\) satisfying the proposition above.
[Naturals, evens and odds: closure ordinals] Revisiting Example 2.3 again, it is not hard to see that the approximants of \(n^{\mathfrak{N}},e^{\mathfrak{N}},o^{\mathfrak{N}}\) are respectively:
\[\begin{array}{rclrclrcl}(n^{\mathfrak{N}})^{0}(\varnothing)&=&\varnothing&( e^{\mathfrak{N}})^{0}(\varnothing)&=&\varnothing&(o^{\mathfrak{N}})^{0}( \varnothing)&=&\varnothing\\ (n^{\mathfrak{N}})^{1}(\varnothing)&=&\{0\}&(e^{\mathfrak{N}})^{1}(\varnothing)& =&\{0\}&(o^{\mathfrak{N}})^{1}(\varnothing)&=&\{1\}\\ (n^{\mathfrak{N}})^{2}(\varnothing)&=&\{0,1\}&(e^{\mathfrak{N}})^{2}(\varnothing)& =&\{0,2\}&(o^{\mathfrak{N}})^{2}(\varnothing)&=&\{1,3\}\\ &\vdots&&\vdots&&\vdots&&\vdots\\ (n^{\mathfrak{N}})^{\omega}(\varnothing)&=&\mathbb{N}&(e^{\mathfrak{N}})^{ \omega}(\varnothing)&=&\mathbb{E}&&(o^{\mathfrak{N}})^{\omega}(\varnothing)&=& \mathbb{O}\end{array}\]
Note that for each of these operators we reached the (least) fixed point for the first time at stage \(\omega\). We say that \(\omega\) is the _closure ordinal_ of these operators.
Now, returning to the formula \(m(X,x)\), let us finally compute its least fixed point in \(\mathfrak{N}\) by the method of approximants:
\[\begin{array}{rclrclrcl}(m^{\mathfrak{N}})^{0}(\varnothing)&=&\varnothing&(m^{ \mathfrak{N}})^{\omega}(\varnothing)&=&\mathbb{E}&(m^{\mathfrak{N}})^{\omega 2}( \varnothing)&=&\mathbb{E}\cup\mathbb{O}=\mathbb{N}\\ (m^{\mathfrak{N}})^{1}(\varnothing)&=&\{0\}&(m^{\mathfrak{N}})^{\omega+1}( \varnothing)&=&\mathbb{E}\cup\{1\}\\ (m^{\mathfrak{N}})^{2}(\varnothing)&=&\{0,2\}&(m^{\mathfrak{N}})^{\omega+2}( \varnothing)&=&\mathbb{E}\cup\{1,3\}\\ &\vdots&&\vdots\end{array}\]
Thus indeed \(I_{m}^{\mathfrak{N}}=\mathbb{N}\), but this time with closure ordinal \(\omega 2\).
## 3 Arithmetical theories of inductive definitions
Thusfar we have only considered the language of arithmetic and inductive definitions ('syntax') and structures over these languages ('semantics'). We shall now introduce _theories_ over these languages, in particular setting them up within a _sequent calculus_ system, in order to facilitate the definition of the non-wellfounded and cyclic systems we introduce later.
[Sequent calculus for PA] A sequent is an expression \(\Gamma\Rightarrow\Delta\) where \(\Gamma\) and \(\Delta\) are sets of formulas (sometimes called _cedents).1 The calculus \(\mathsf{LK}_{=}\) for first-order logic with equality and substitution is given in Figure 1.
Footnote 1: The symbol \(\Rightarrow\) is just a syntactic delimiter, but is suggestive of the semantic interpretation of sequents.
_The sequent calculus for PA extends \(\mathsf{LK}_{=}\) by initial sequents for all axioms of Robinson Arithmetic \(\mathsf{Q}\), as well as the induction rule:_
\[\operatorname*{\mathsf{ind}}\frac{\Gamma\Rightarrow\Delta,\varphi(0)\quad \Gamma,\varphi(y)\Rightarrow\Delta,\varphi(\mathfrak{sy})}{\Gamma\Rightarrow \Delta,\varphi(t)}\,y\text{ fresh}\]
We will present some examples of proofs shortly, but first let us develop the implementation of the first-order theories we consider within the sequent calculus.
### Theory of (finitely iterated) inductive definitions
\(\mathsf{ID}_{<\omega}\) is a \(\mathcal{L}_{<\omega}\)-theory that extends PA by (the universal closures of):2
Footnote 2: Formally, we include instances of the induction schema for all formulas \(\varphi\) in the extended language too.
* (Pre-fixed) \(\forall x(\varphi(I_{\varphi},x)\to I_{\varphi}(x))\)
* (Least) \(\forall x(\varphi(\psi,x)\to\psi(x))\to\forall x(I_{\varphi}(x)\to\psi(x))\)
for all formulas \(\varphi(X,x)\) positive in \(X\).
Note that, while the first axiom states that \(I_{\varphi}\) is a _pre-fixed point_ of \(\varphi(-)\), the second axiom (schema) states that \(I_{\varphi}\) is least among the (arithmetically definable) pre-fixed points. As before, we implement this theory within the sequent calculus:
[Sequent calculus for \(\mathsf{ID}_{<\omega}\)] The sequent calculus for \(\mathsf{ID}_{<\omega}\) extends that for PA by the rules:
\[I_{\varphi}\cdot\frac{\Gamma,\varphi(I_{\varphi},t)\Rightarrow\Delta}{\Gamma, I_{\varphi}(t)\Rightarrow\Delta}\qquad I_{\varphi}\cdot\frac{\Gamma \Rightarrow\Delta,\varphi(I_{\varphi},t)}{\Gamma\Rightarrow\Delta,I_{\varphi}( t)} \tag{2}\]
\[\operatorname*{\mathsf{ind}}(\varphi)\frac{\Gamma,\varphi(\psi,y)\Rightarrow \Delta,\psi(y)\quad\Gamma,\psi(t)\Rightarrow\Delta}{\Gamma,I_{\varphi}(t) \Rightarrow\Delta}\,y\text{ fresh} \tag{3}\]
### Examples
In this subsection we consider some examples of sequent proofs for \(\mathsf{ID}_{<\omega}\).
Note that the \(I_{\varphi}\)-\(r\) and \(\mathsf{ind}(\varphi)\) rules correspond respectively to the axioms we gave for \(\mathsf{ID}_{<\omega}\). The \(I_{\varphi}\)-\(l\) rule, morally stating that \(I_{\varphi}\) is a _post-fixed point_ of \(\varphi(-)\), does not correspond to any of the axioms. In fact we may consider it a form of'syntactic sugar' that will be useful for defining our cyclic systems later:
[Post-fixed point] We can derive the \(I_{\varphi}\)-\(l\) rule from the other two as follows:
\[\begin{split}\includegraphics[width=142.26378pt]{images/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/.eps/eps/.eps/.
* If \(\varphi(Y)\) is itself an inductive predicate, say \(I_{\psi(X,Y,x)}(t)\), we construct the following derivation: \[\begin{array}{c}\stackrel{{ IH}}{{\psi(X,Z,x)\cdot r}}\frac{Y(y) \Rightarrow Z(y)}{{I_{\psi(X,Z,x)}}}\\ \stackrel{{ IH}}{{\mathop{\mathtt{ind}}(\psi(-Z,x))}}\frac{\psi(I _{\psi(X,Z,x)},Y,z)\Rightarrow\psi(I_{\psi(X,Z,x)},Z,z)}{{I_{\psi(X,Y,x)}}(t) \Rightarrow I_{\psi(X,Z,x)}(t)}\\ \end{array}\ \
* _the rules_ \(I_{\varphi}\)_-_\(l\) _and_ \(I_{\varphi}\)_-_\(r\) _from (_2_); and,_
* _the following additional rule:_ \(\frac{N}{\Gamma\Rightarrow\Delta,N(t)}\)__
The '\(-\)' superscript in \(\mathsf{LID}^{-}_{<\omega}\) indicates that we do not include the \(\mathsf{ind}(\varphi)\) rules in this system. Note in the definition above that, in light of Example 3.5, we have chosen to simplify our system by omitting an explicit rule for numerical induction and instead simply including a rule that insists that our domain consists only of natural numbers. This streamlines the resulting definition of 'progressing trace':
[Traces and progress] Fix a \(\mathsf{LID}^{-}_{<\omega}\)-preproof \(\pi\) and \((v_{i})_{i\in\omega}\) an infinite branch along \(T_{\pi}\). A _trace_ along \((v_{i})_{i\in\omega}\) is a sequence of formulas \((\varphi_{i})_{i\geq k}\), with each \(\varphi_{i}\) occurring on the LHS of \(\pi(v_{i})\), such that for all \(i\geq k\):
* \(\pi(v_{i})\) is not a substitution step and \(\varphi_{i+1}=\varphi_{i}\); or,
* \(\pi(v_{i})\) is a \(\theta\)-substitution step and \(\theta(\varphi_{i+1})=\varphi_{i}\); or,
* \(\pi(v_{i})\) is a \(=\) -\(l\) step with respect to \(s=t\) and, for some \(\psi(x,y)\), we have \(\varphi_{i+1}=\psi(s,t)\) and \(\varphi_{i}=\psi(t,s)\); or,
* \(\varphi_{i}\) is the principal formula of \(\pi(v_{i})\) and \(\varphi_{i+1}\) is auxiliary.
We say that \(\varphi_{k+1}\) is an _immediate ancestor_ of \(\varphi_{k}\) if they extend to some trace \((\varphi_{i})_{i\geq k}\).
A trace \((\varphi_{i})_{i\geq k}\) is _progressing_ if it is principal infinitely often.
[Non-wellfounded proofs] A (non-wellfounded) \(\mathsf{LID}^{-}_{<\omega}\)-proof is a \(\mathsf{LID}^{-}_{<\omega}\)-preproof \(\pi\) for which each infinite branch has a progressing trace. We also say that \(\pi\) is progressing in this case. If \(\pi\) is regular, we call it a _cyclic proof.
We write \(\mathsf{LID}^{-}_{<\omega}\vdash_{\mathrm{nwt}}\varphi\) or \(\mathsf{LID}^{-}_{<\omega}\vdash_{\mathrm{cyc}}\varphi\) if there is a non-wellfounded or cyclic, respectively, \(\mathsf{LID}^{-}_{<\omega}\)-proof of \(\varphi\). We write \(\mathsf{CID}_{<\omega}\) for the class of cyclic \(\mathsf{LID}^{-}_{<\omega}\)-proofs.
Many of the basic results and features of non-wellfounded and cyclic proofs for arithmetic from [21, 11] are present also in our setting, and we point the reader to those works for several examples further to those we give here.
[Naturals, evens and odds: proving relationships] Let us revisit once more Example 3.3. Several examples about the relationships between \(N,E,O\) for a similar framework of first-order logic with inductive definitions are given in [7, 9, 10], in particular including ones with complex cycle structure. Here we shall instead revisit the relationship between the inductive predicates \(M\) and \(N\).
Recall that we showed in Example 3 that \(N\) and \(M\) compute the same set, namely \(\mathbb{N}\), in the standard model. We can show this formally within \(\mathsf{CID}_{<\omega}\) by means of cyclic proofs. For the direction \(M\subseteq N\):
\[\begin{array}{c}\vdots\\ \frac{I_{m}\cdot l}{M(x)\Rightarrow N(x)}\bullet\\ \frac{N(\mathbf{s})}{M(y)\Rightarrow N(\mathbf{s}\mathbf{y})}\\ \vee\cdot l\,\frac{e(M,x)\Rightarrow N(x)}{I_{m}\cdot l}\frac{m(M,x) \Rightarrow N(x)}{M(x)\Rightarrow N(x)}\end{array}\]
where the derivations marked \(N(0),N(\mathbf{ss}),N(1)\) all have simple finite proofs by unfolding \(N\) on the RHS. Again we indicate by \(\bullet\) roots of identical subproofs, and the only infinite branch, looping on \(\bullet\), has progressing trace in blue.
**Example 4.6** (Deep inference / Functoriality, revisited).: Recalling Example 3.4, we can actually build simpler such (cyclic) derivations in \(\mathsf{LID}^{-}_{<\omega}\), again by structural induction. Again the critical case is when \(\varphi(Y)\) is itself an inductive predicate, say \(I_{\varphi(X,Y,x)}\):
\[\begin{array}{c}\vdots\\ \mbox{\small$II$}\frac{I_{\varphi(X,Y,x)}(t)\Rightarrow I_{\varphi(X,Z,x)}(t) }{I_{\varphi(X,Y,x)}(t)\Rightarrow\varphi(I_{\varphi(X,Z,x)},Z,t)}\\ \mbox{\small$I_{\varphi}$}\frac{\varphi(I_{\varphi(X,Y,x)},Y,t)\Rightarrow I_{ \varphi(X,Z,x)}(t)}{I_{\varphi(X,Y,x)}(t)\Rightarrow I_{\varphi(X,Z,x)}(t)} \bullet\end{array}\]
Here the steps marked \(\bullet\) root identical subproofs (witnessing regularity). The subderivation marked \(\mathit{IH}\) is obtained by the inductive hypothesis for \(\varphi(X,Y,t)\), under substitution of \(I_{\varphi(X,Y,x)}\) for \(Y\) and \(I_{\varphi(X,Z,x)}\) for \(Z\). The progressing trace along the only infinite branch, looping on \(\bullet\), is indicated in blue. Naturally, the fact that this trace remains 'unbroken' during appeal to the inductive hypothesis should, strictly speaking, be verified as an invariant during the remaining (omitted) cases.
### Simulating inductive proofs
Our cyclic system \(\mathsf{CID}_{<\omega}\) subsumes \(\mathsf{ID}_{<\omega}\) by a standard construction:
**Theorem 4.7** (Induction to cycles).: _If \(\mathsf{ID}_{<\omega}\vdash\varphi\) then \(\mathsf{CID}_{<\omega}\vdash\varphi\)._
Proof sketch.: We proceed by induction on the structure of a \(\mathsf{ID}_{<\omega}\) proof. The critical step is \(\mathsf{ind}(\varphi)\), for which we do not have a corresponding rule in \(\mathsf{LID}^{-}_{<\omega}\). We simulate this rule by,
\[\begin{array}{c}\vdots\\ \mbox{\small$\varphi$}\frac{I_{\varphi}\vdash I_{\varphi}(t)\Rightarrow\Delta,\psi(t)}{\Gamma,\varphi(I_{\varphi},t)\Rightarrow\Delta,\varphi(\psi,t)$}\\ \mbox{\small$\Gamma$}\frac{\Gamma,\varphi(I_{\varphi},t)\Rightarrow\Delta, \psi(t)$}{I_{\varphi}\vdash I}\\ \mbox{\small$\mbox{\small$\mathrm{cut}$}\frac{\Gamma,\varphi(I_{ \varphi},t)\Rightarrow\Delta,\psi(t)$}{\Gamma,I_{\varphi}(t)\Rightarrow\Delta,\psi(t)$}$}\\ \mbox{\small$\Gamma$},I_{\varphi}(t)\Rightarrow\Delta\end{array}\]
where \(\bullet\) marks roots of identical subproofs and the derivation marked \(\varphi\) is obtained by induction on the structure of \(\varphi\), see Example 4.6. Any infinite branch is either progressing by the induction hypothesis, or loops infinitely on \(\bullet\) and has the progressing trace coloured in blue.
Of course, the converse result is much harder (and, indeed, implies soundness of cyclic proofs).
### About traces
Our notion of (progressing) trace may seem surprisingly simple to the seasoned cyclic proof theorist, when comparing to analogous conditions in similar logics such as the \(\mu\)-calculus requiring complex'signatures', e.g. [18, 25, 2]. However this simplicity arises naturally from
the way we have formulated our syntax. Let us take some time to detail some of properties of (progressing) traces that will facilitate our soundness argument later.
Write \(\mathcal{I}\) for the set of inductive predicates of \(\mathcal{L}_{<\omega}\) (i.e. the set of symbols \(I_{\varphi}\)). Write \(<\) for the smallest transitive relation on \(\mathcal{I}\) satisfying:
* if \(I_{\varphi}\) occurs in \(\psi(X,x)\) then \(I_{\varphi}<I_{\psi}\).
By the inductive definition of the language \(\mathcal{L}_{<\omega}\), it is immediate that \(<\) is a well-founded relation on \(\mathcal{I}\). In what follows, we shall extend \(<\) arbitrarily to a (total) well-order on \(\mathcal{I}\), so as to freely use of terminology peculiar to linear orders.
[Properties of progressing traces] Let \((\tau_{i})_{i\geq k}\) be a progressing trace. There is a (unique) inductive predicate symbol \(I_{\psi}\) and some \(k^{\prime}\geq k\) such that:
1. \(\tau_{i}\) is of the form \(I_{\psi}(t)\) and principal for infinitely many \(i\geq k\);
2. \(I_{\psi}\) occurs positively in each \(\tau_{i}\), for \(i\geq k^{\prime}\);
3. for any \(j\geq k^{\prime}\) and \(I_{\chi}\) occurring in \(\tau_{j}\), we have \(I_{\chi}\leq I_{\psi}\).
Proof.: Since \((\tau_{i})_{i\geq k}\) is progressing it is infinitely often principal, and so must be infinitely often principal for inductive predicates, i.e. for formulas of the form \(I_{\varphi}(t)\), since a trace through at any other formula, when principal, decreases in size. Furthermore, when \(i\leq j\), we have by induction on \(j-i\) that:
* if \(\tau_{i}=I_{\varphi}(t)\) and \(I_{\chi}\) occurs in \(\tau_{j}\), then \(I_{\chi}\leq I_{\varphi}\) (4)
Thus, in particular, if \(i<j\) and \(\tau_{i}=I_{\varphi}(t)\) and \(\tau_{j}=I_{\chi}(u)\) then \(I_{\chi}\leq I_{\varphi}\), and so by well-foundedness of \(<\) on \(\mathcal{I}\) there is a unique \(I_{\psi}\) satisfying Item 1.
Now, let \((\tau_{i_{j}})_{j<\omega}\) be a subsequence with each \(\tau_{i_{j}}=I_{\psi}(t_{j})\) principal, for some terms \(t_{j}\). Setting \(k^{\prime}=i_{0}\), Item 2 is proved by induction on \(i_{j}-i\) for least \(i_{j}\geq i\).
Finally Item 3 also follows from (4) above, by setting \(i=k^{\prime}=i_{0}\leq j\).
## 5 Soundness of non-wellfounded proofs
The main goal of this section is to prove the following result:
[Soundness] If \(\mathsf{LID}^{-}_{<\omega}\vdash_{\mathrm{nwf}}\varphi\) then \(\mathfrak{N}\vDash\varphi\).
Before proving this, it is convenient to omit consideration of substitutions in preproofs:
[Admissibility of substitution] If there is a (non-wellfounded) \(\mathsf{LID}^{-}_{<\omega}\)-proof of a sequent \(\Gamma\Rightarrow\Delta\), then there is one not using the substitution rule.
Proof sketch.: We proceed by a coinductive argument, replacing each substitution step by a meta-level substitution on the sub-preproof rooted at the premiss. Productivity is guaranteed by progressiveness: each infinite branch has, at least, infinitely many \(I_{\varphi}\)-\(l\) steps.
### Satisfaction with respect to approximants
Before proceeding, let us build up a little more theory about approximants of (least) fixed points. Let us temporarily expand the language \(\mathcal{L}_{<\omega}\) to include, for each inductive predicate symbol \(I_{\varphi}\) and each ordinal \(\alpha\) a symbol \(I^{\alpha}_{\varphi}\). We do not consider these symbols 'inductive predicates', but rather refer to them as _approximant symbols_. In the standard model, using the notation of Section 2, we set \((I^{\alpha}_{\varphi})^{\mathfrak{M}}:=(\varphi^{\mathfrak{M}})^{\alpha}(\varnothing)\).
For a formula \(\varphi\) of \(\mathcal{L}_{<\omega}\) whose \(<\)-greatest inductive predicate in positive position is \(I_{\psi}\), we write \(\varphi^{\alpha}\) for the formula obtained from \(\varphi\) by replacing each positive occurrence of \(I_{\psi}\) by \(I^{\alpha}_{\psi}\). As an immediate consequence of the characterisation of least fixed points by unions of approximants, Proposition 2.8, we have:
**Corollary 5.3** (of Proposition 2.8).: _If \(\mathfrak{N}\vDash\varphi\) then there is an ordinal \(\alpha\) such that \(\mathfrak{N}\vDash\varphi^{\alpha}\)._
Note that, as a consequence of positivity implying monotonicity, we also have:
**Corollary 5.4** (of Proposition 2.1).: _If \(\alpha\leq\beta\) then \(\mathfrak{N}\vDash\varphi^{\alpha}\to\varphi^{\beta}\)._
Finally, let us point out that, by the definition of the inflationary construction in Equation (1), if \(t^{\mathfrak{N}}\in\mu\varphi^{\mathfrak{N}}\), then the least ordinal \(\alpha\) with \(t^{\mathfrak{N}}\in(\varphi^{\mathfrak{N}})^{\alpha}\) must be a successor ordinal. Albeit rather immediate, we better state the following consequence of this reasoning:
**Observation 5.5**.: _If \(\alpha,\beta\) are least s.t. \(\mathfrak{N}\vDash I^{\alpha}_{\varphi}(t)\) and \(\mathfrak{N}\vDash\varphi(I^{\beta}_{\varphi},t)\) respectively, then \(\beta<\alpha\)._
### Building countermodels
An _assignment_ is a (partial) map \(\rho\) from variables to natural numbers.
If \(\varphi\) is a formula and \(\rho:\mathsf{FV}(\varphi)\to\mathbb{N}\), we define \(\mathfrak{N},\rho\vDash\varphi\) (or simply \(\rho\vDash\varphi\)) by simply interpreting free variables under \(\rho\) in \(\mathfrak{N}\). Formally, \(\mathfrak{N},\rho\vDash\varphi\) if \(\mathfrak{N}\vDash\varphi\left[\rho(x)/x\right]_{x\in\mathsf{FV}(\varphi)}\).4
Footnote 4: Note here we are implicitly identifying natural numbers with their corresponding numerals.
As a consequence of local soundness of the rules, as well as preserving truth we have that rules'reflect' falsity. In fact we can say more:
**Lemma 5.6** (Reflecting falsity).: _Fix an inference step:_
\[\begin{array}{l}\underset{t}{\Gamma_{1}\Rightarrow\Delta_{1}}\;\;\;\cdots \;\;\;\Gamma_{n}\Rightarrow\Delta_{n}\\ \Gamma\Rightarrow\Delta\end{array} \tag{5}\]
_If \(\rho\vDash\bigwedge\Gamma\) and \(\rho\nvDash\bigvee\Delta\) then there is an assignment \(\rho^{\prime}\) and premiss \(\Gamma^{\prime}\Rightarrow\Delta^{\prime}\) with:_
1. \(\rho^{\prime}\) _extends_ \(\rho\)_, i.e._ \(\rho^{\prime}(x)=\rho(x)\) _for any_ \(x\) _in the domain of_ \(\rho\)_;_
2. \(\rho^{\prime}\vDash\bigwedge\Gamma^{\prime}\) _and_ \(\rho^{\prime}\nvDash\bigvee\Delta^{\prime}\)_;_
3. _if_ \(\psi\in\Gamma^{\prime}\) _is an immediate ancestor of_ \(\varphi\in\Gamma\) _then either:_
1. \(I,I^{\prime}\) _are the greatest inductive predicates occurring in_ \(\varphi,\psi\) _resp. and_ \(I^{\prime}<I\)_; or,_
2. _For any ordinal_ \(\alpha\)_, we have_ \(\rho\vDash\varphi^{\alpha}\implies\rho^{\prime}\vDash\psi^{\alpha}\)_._
The proof is similar to analogous results in [21, 11], however we must also take care to maintain the invariant Item 3 during the construction. An important distinction here is that, for Item 3b, we must find the least ordinal approximating the principal formula of, say a \(\lor\)-left step, and evaluate auxiliary formulas with respect to this ordinal in order to appropriately choose the correct premiss. The required property then follows by monotonicity, Proposition 2.1, and the fact that approximants form an increasing chain, cf. Equation (1). The necessity of this consideration is similar to (but somewhat simpler than) analogous issues arising in the cyclic proof theory of the modal \(\mu\)-calculus, cf. [18, 25, 2].
### Putting it all together
We are now ready to prove the main result of this section.
Proof of Theorem 5.1.: Let \(\pi\) be a (non-wellfounded) \(\mathsf{LID}^{-}_{<\omega}\) proof of the sequent \(\Rightarrow\varphi\) and suppose, for contradiction, that \(\mathfrak{N}\nvDash\varphi\). We define a branch \((v_{i})_{i<\omega}\) and assignments \((\rho_{i})_{i<\omega}\) by setting:
**=**: \(\rho_{0}:=\varnothing\) and \(v_{0}:=\varepsilon\) (the root of \(\pi\));5
* appealing to Lemma 5.6, if \(\pi(v_{i})\) has form (5), we set \(v_{i+1}\) s.t. \(\pi(v_{i+1})\) has conclusion \(\Gamma^{\prime}\Rightarrow\Delta^{\prime}\) and \(\rho_{i+1}:=\rho_{i}^{\prime}\).
By assumption that \(\pi\) is progressing, let \((\tau_{i})_{i\geq k}\) be a progressing trace along \((v_{i})_{i<\omega}\), and let \(\alpha_{i}\) be the least ordinals such that \(\mathfrak{N}\vdash\tau_{i}^{\alpha_{i}}\) for \(i\geq k\).
Now, let \(k^{\prime}\geq k\) and \(I_{\psi}\) be obtained from \((\tau_{i})_{i\geq k}\) by Proposition 4.8. By Items 2 and 3 of Proposition 4.8 we have that \(I_{\psi}\) is the greatest inductive predicate occurring (positively) in each \(\tau_{i}\), for \(i\geq k^{\prime}\), and so Item 3a of Lemma 5.6 never applies (for \(i\geq k^{\prime}\)). Thus, by Proposition 2.1, we have \(\alpha_{i+1}\leq\alpha_{i}\) for \(i\geq k^{\prime}\).
On the other hand, at any \(I_{\psi}\)-\(l\) step where \(\tau_{i}\) is principal, for \(i\geq k^{\prime}\), we must have that \(\alpha_{i+1}<\alpha_{i}\) by Observation 5.5. Since this happens infinitely often, by Item 1 of Proposition 4.8, we conclude that \((\alpha_{i})_{i\geq k^{\prime}}\) is a monotone non-increasing sequence of ordinals that does not converge, contradicting the well-foundedness of ordinals.
## 6 Inductive definitions and truth in second-order arithmetic
The remainder of this paper is devoted to proving the converse of Theorem 4.7. For this, we are inspired by the ideas of previous work [21, 11], using'second-order' theories to formalise the metatheorems of cyclic systems (namely soundness), and then appealing to conservativity results. However the exposition here is far more involved than the analogous ones for arithmetic.
For this reason, we rather rely on a formalisation of the 'theory of recursive ordinals' (with parameters) in \(\Pi^{1}_{1}\mbox{-}\mathsf{CA}_{0}\), and formalise the soundness argument abstractly in this way.
### Subsystems of second-order arithmetic and inductive definitions
We shall work with common subsystems of second-order arithmetic, as found in textbooks such as [22], and assume basic facts about them.
In particular, recall that \(\mathsf{ACA}_{0}\) is a two-sorted extension of basic arithmetic by:
* _Arithmetical comprehension._\(\exists X\forall x(X(x)\leftrightarrow\varphi(x))\) for each arithmetical formula \(\varphi(x)\).
* _Set induction._\(\forall X(X(0)\to\forall x(X(x)\to X(\mathsf{s}x))\to\forall x(x))\)
From here \(\Pi^{1}_{1}\mbox{-}\mathsf{CA}_{0}\) is the extension of \(\mathsf{ACA}_{0}\) by the comprehension schema for all \(\Pi^{1}_{1}\) formulas. It is well-known that \(\Pi^{1}_{1}\mbox{-}\mathsf{CA}_{0}\) proves also the \(\Sigma^{1}_{1}\)-comprehension scheme, a fact that we shall freely use, along with other established principles, e.g. from [22].
We can interpret \(\mathcal{L}_{<\omega}\) into the language of second-order arithmetic by:
\[I_{\varphi}(t)\quad:=\quad\forall X((\forall x\varphi(X,x)\to X(x))\to X(t)) \tag{6}\]
This interpretation induces a bona fide (and well-known) encoding of \(\mathsf{ID}_{<\omega}\) within \(\Pi^{1}_{1}\mbox{-}\mathsf{CA}_{0}\), and we shall henceforth freely use (arithmetical) inductive predicates when working within \(\Pi^{1}_{1}\mbox{-}\mathsf{CA}_{0}\), always understanding them as abbreviations under (6). In fact, we can make a stronger statement. Not only does \(\Pi^{1}_{1}\mbox{-}\mathsf{CA}_{0}\) extend \(\mathsf{ID}_{<\omega}\) arithmetically, it does so conservatively:
[E.g., [12]] \(\Pi^{1}_{1}\mbox{-}\mathsf{CA}_{0}\) is arithmetically conservative over \(\mathsf{ID}_{<\omega}\).
This is a nontrivial but now well-known result in proof theory whose details we shall not recount. We will use this result as a 'black box' henceforth.
### Satisfaction as an inductive definition
As usual, there is no universal (first-order) truth predicate for a predicate language, for Tarskian reasons. However we may define _partial_ truth predicates for fragments of the language. In a language closed under inductive definitions, this is particularly straightforward since satisfaction itself is inductively defined (at the meta level). In what follows we will employ standard metamathematical notations and conventions for coding, e.g. we write \(\ulcorner E\urcorner\) for the Godel code of an expression \(E\).
Also, when it is not ambiguous, we shall typically use the same notation for meta-level objects/operations and their object-level (manipulations on) codes, as a convenient abuse of notation.
[Formalised relative satisfaction] Let \(\vec{X}=X_{1},\ldots,X_{k}\) be a sequence of set symbols. There is a \(\Pi^{1}_{1}\) formula \(\operatorname{Sat}_{\vec{X}}(\rho,m,\vec{A})\) such that \(\Pi^{1}_{1}\text{-}\mathsf{CA}_{0}\) proves the characterisation in Figure 2
for \(\varphi,\psi\) ranging over arithmetical formulas over \(\vec{X}\).
Proof sketch.: The RHS of the formula displayed induces a positive arithmetical inductive definition of \(\operatorname{Sat}_{\vec{X}}\), whence we conclude by [27].
[Reflection, \(\Pi^{1}_{1}\text{-}\mathsf{CA}_{0}\)] For any arithmetical formula \(\varphi(\vec{X},\vec{x})\) with all free first-order variables displayed, we have \(\operatorname{Sat}_{\vec{X}}(\rho,\ulcorner\varphi(\vec{X},\vec{x})\urcorner, \vec{A})\leftrightarrow\varphi(\vec{A},\rho(\vec{x}))\).
## 7 Approximants and transfinite recursion in second-order arithmetic
Throughout this section we shall fix a list \(\vec{X}\) of set variables that may occur as parameters in all formulas. We shall almost always suppress them. We work within \(\Pi^{1}_{1}\text{-}\mathsf{CA}_{0}\) throughout this section, unless otherwise stated.
### Order theory and transfinite recursion in second-order arithmetic
We assume some basic notions for speaking about (partial) (well-founded) orders in second-order arithmetic, and some well-known facts about them. Definitions and propositions in this section have appeared previously in the literature, e.g., [22].
Figure 2: Inductive characterisation of the satisfaction predicate.
A _(binary) relation_ is a set symbol \(R\), construed as a set of pairs, with _domain_\(|R|:=\{x:R(x,x)\}\). We write simply \(x\leq_{R}y\) for \(x\in|R|\wedge y\in|R|\wedge R(x,y)\) and \(x<_{R}y:=x\leq_{R}y\wedge\neg x=y\).
We write:
* \(\mathrm{LO}(R)\) for an arithmetical formula stating that \(<_{R}\) is a linear order on \(|R|\).
* \(\mathrm{WF}(R)\) for a \(\Pi^{1}_{1}\)-formula stating that \(<_{R}\) is well-founded on \(|R|\).
* \(\mathrm{WO}(R):=\mathrm{LO}(R)\wedge\mathrm{WF}(R)\). ("\(R\) is a well-order")
* \(R<_{\mathrm{WO}}R^{\prime}\) if \(\mathrm{WO}(R),\mathrm{WO}(R^{\prime})\) and there is an order preserving bijection from \(R\) onto a proper initial segment of \(R^{\prime}\).
(\(<_{\mathrm{WO}}\) is provably \(\Delta^{1}_{1}\) within \(\Pi^{1}_{1}\)-\(\mathsf{CA}_{0}\)).
We have, already in \(\mathsf{ACA}_{0}\), transfinite induction (for sets) over any well-order:
\[\forall X,R(\mathrm{WO}(R)\to\forall a\in|R|\,(\forall b<_{R}a\,X(b)\to X(a) )\to\forall a\in|R|\,X(a))\]
More importantly we have that the class of well-orders itself is well-founded under comparison:
[Well-orders are well-ordered, \(\mathsf{ATR}_{0}\)] If \(F:\mathbb{N}\to\mathrm{WO}\) then there is \(n\in\mathbb{N}\) with \(F(n+1)\not<_{\mathrm{WO}}F(n)\)
An important principle within \(\Pi^{1}_{1}\)-\(\mathsf{CA}_{0}\) is _arithmetical transfinite recursion_ (\(\mathsf{ATR}\)). Since we shall need to later bind the well-order over which recursion takes place, we better develop the principle explicitly.
[Approximants] Let \(\varphi(X,x)\) be arithmetical and \(R\) a relation. We define:
\[I^{R}_{\varphi}(a,x):=\exists F\subseteq|R|\times\mathbb{N}\left(\forall b \in|R|\,\forall y\left(F(b,y)\to\exists c<_{R}b\,\varphi(F(c),y)\right)\wedge F (a,x)\right)\]
Intuitively we may see \(I^{R}_{\varphi}(a)\) as the union of a family of sets \(F(b)\), indexed by \(b<_{R}a\), satisfying \(F(b)=\bigcup\limits_{c<_{R}b}\varphi(F(c))\), here construing \(\varphi(-)\) as an operation on sets. The notation we have used is suggestive: the point of this section is to characterise inductive definitions in terms of approximants given by transfinite recursion.
Note that \(I^{R}_{\varphi}\) is a \(\Sigma^{1}_{1}\)-formula. The following is well-known:
[Bounded recursion] Let \(\varphi(X,x)\) be an arithmetical formula and suppose \(\mathrm{WO}(R)\). \(I^{R}_{\varphi}\) is a set (uniquely) satisfying:
\[\forall a\in|R|\,\forall x\left(I^{R}_{\varphi}(a,x)\leftrightarrow\exists b< _{R}a\,\varphi(I^{R}_{\varphi}(b),x)\right)\qquad\quad\left(\text{i.e. }\ I^{R}_{\varphi}(a)=\bigcup\limits_{b<_{R}a}\varphi(I^{R}_{\varphi}(b))\right) \tag{7}\]
As a consequence of transfinite induction, Proposition 7 we have:
Let \(\varphi(X,x)\) be arithmetical and positive in \(X\), and suppose \(\mathrm{WO}(R)\). We have \(\forall a<_{R}b\,\forall x(I^{R}_{\varphi}(a,x)\to I^{R}_{\varphi}(b,x))\).
Intuitively the above statement tells us that \(I^{R}_{\varphi}(-)\) forms an increasing chain along \(R\).
Henceforth we write \(I^{R}_{\varphi}(x):=\exists a\in|R|\,I^{R}_{\varphi}(a,x)\) which, with \(R\) occurring as a parameter, is again a \(\Sigma^{1}_{1}\) formula.
### Formalising recursive ordinals and approximants
\(\Pi^{1}_{1}\)-\(\mathsf{CA}_{0}\) is not strong enough a theory to characterise inductive definitions by limits of approximants, in general. However, when the closure ordinals of inductive definitions are recursive, they may be specified by finite data and duly admit such a characterisation within \(\Pi^{1}_{1}\)-\(\mathsf{CA}_{0}\). This subsection is devoted to a development of this characterisation; the definitions and propositions have appeared previously in the literature, e.g., [12, 13].
Let us fix a recursive enumeration of \(\Sigma^{0}_{1}\)-formulas with free (first-order) variables among \(x,y\), and write \(\alpha,\beta\) etc. to range over their Godel codes. Thanks to a (relativised) universal \(\Sigma^{0}_{1}\)-formula, we can readily evaluate (the codes of) \(\Sigma^{0}_{1}\) formulas already within \(\mathsf{RCA}_{0}\).
In this way we may treat \(\alpha,\beta\) etc. as binary relations, and duly extend the notations of the previous subsections appropriately, e.g. freely writing \(|\alpha|,\leq_{\alpha},<_{\alpha}\operatorname{LO}(\alpha),\operatorname{WF}( \alpha),\operatorname{WO}(\alpha),\alpha<_{\operatorname{WO}}\beta,I^{\alpha}_ {\varphi}\).
[Recursive ordinals] Write \(\mathcal{O}:=\{\alpha:\operatorname{WO}(\alpha)\}\), obtained by \(\Pi^{1}_{1}\)-comprehension, and \(\alpha<_{\mathcal{O}}\beta\) for \(\mathcal{O}(\alpha)\wedge\mathcal{O}(\beta)\wedge\alpha<_{\operatorname{WO}}\beta\).
We also write \(I^{\mathcal{O}}_{\varphi}(x):=\exists\alpha\in\mathcal{O}\,I^{\alpha}_{ \varphi}(x)\).
Of course, well-foundedness of \(\mathcal{O}\) under \(<_{\mathcal{O}}\) is directly inherited from well-foundedness of \(\operatorname{WO}\) under \(<_{\operatorname{WO}}\), Proposition 7. Note that \(I^{\mathcal{O}}_{\varphi}(x)\) is again a \(\Sigma^{1}_{1}\)-formula, and so we have access to \(I^{\mathcal{O}}_{\varphi}\) as a set within \(\Pi^{1}_{1}\)-\(\mathsf{CA}_{0}\). In fact we even have access to the restriction \(I^{\mathcal{O}}_{\varphi}(-)\subseteq\mathcal{O}\times\mathbb{N}\) again by \(\Sigma^{1}_{1}\)-comprehension. As a result we can give a recursive characterisation of \(I^{\mathcal{O}}_{\varphi}\) similar to Proposition 7.1 but at the level of \(\mathcal{O}\):
[Recursion] Let \(\varphi(X,x)\) be arithmetical and positive in \(X\). We have:
\[\forall\alpha\in\mathcal{O}\,\forall x\,(I^{\alpha}_{\varphi}(x)\leftrightarrow \exists\beta<_{\mathcal{O}}\alpha\,\varphi(I^{\beta}_{\varphi},x))\qquad\quad \left(\text{i.e. }\ I^{\alpha}_{\varphi}=\bigcup_{\beta<_{\mathcal{O}}\alpha} \varphi(I^{\beta}_{\varphi})\right) \tag{8}\]
Proof sketch.: First, let us write \(\alpha_{a}\) for the initial segment of \(\alpha\) up to (and including) \(a\). Note that we have \(\alpha_{-}(-)\) as a set by arithmetical comprehension in \(\alpha\). By \(\alpha\)-induction on \(a\in|\alpha|\) we can show \(I^{\alpha}_{\varphi}(a,x)\leftrightarrow I^{\alpha_{a}}_{\varphi}(x)\). From here the equivalence follows directly by reduction to Proposition 7.1.
The following are well-known properties about \(\mathcal{O}\):
[Properties of \(\mathcal{O}\)] We have the following:
1. (Increase) \(\forall\alpha\in\mathcal{O}\,\exists\beta\in\mathcal{O}\,\alpha<_{\mathcal{O }}\beta\).
2. (Collection) \(\forall x\exists\alpha\in\mathcal{O}\,\varphi\to\exists\beta\,\forall x\, \exists\alpha<_{\mathcal{O}}\beta\ \varphi\).
Turning back to positive formulas again, we have the following useful consequence:
Let \(\varphi(X,x)\) and \(\psi(X)\) be arithmetical and positive in \(X\). \(\psi(I^{\mathcal{O}}_{\varphi})\to\exists\alpha\psi(I^{\alpha}_{\varphi})\).
Proof sketch.: We proceed by (meta-level) induction on the structure of \(\psi(X)\), appealing to Proposition 7.1 at a \(\forall\)-quantifier and Corollary 7.1 throughout.
### Characterising inductive definitions as limits of approximants
The main result of this section is:
[Characterisation] \(\forall x(I_{\varphi}(x)\leftrightarrow I^{\mathcal{O}}_{\varphi}(x))\) (i.e. \(I_{\varphi}=I^{\mathcal{O}}_{\varphi}\)).
**Proposition 8.2** (\(\mathsf{RCA}_{0}\), [11]).: \(\pi\) _is progressing._
**Formalised admissibility of substitution.** The admissibility of substitution, Proposition 5.2, is available already in weak theories by a simple inductive construction: from \(\pi\) define \(\pi^{\prime}\) a substitution-free \(\mathsf{LID}^{-}_{<\omega}\) non-wellfounded proof node-wise by simply composing the (finitely many) substitutions up to a node. The progressing criterion means that there are, in particular, infinitely many non-substitution steps along any infinite branch, and so by (weak) Konig's lemma have that the resulting binary tree is well-defined.
We henceforth work with \(\pi^{\prime}\) a substitution-free \(\mathsf{LID}^{-}_{<\omega}\) non-wellfounded proof of \(\Gamma\Rightarrow\Delta\) using only inductive predicates among \(\widetilde{I}=I_{\varphi_{1}},\ldots,I_{\varphi_{n}}\), that we 'know' is progressing.
**Formalising satisfaction with respect to approximants.** We already defined recursive approximants in \(\Pi^{1}_{1}\)-\(\mathsf{CA}_{0}\) in Section 7.2. The formalised version of Corollary 5.3 is given by Corollary 7.9, and the formalised version of Corollary 5.4 is available already in pure logic. The existence of least ordinals satisfying a property is given by well-foundedness of WO under \(<_{\mathrm{WO}}\), Proposition 7.2, and thus Observation 5.5 follows from Equation (7).
**Formalised building countermodels.** To speak about satisfaction and truth of formulas in \(\pi^{\prime}\), we use the formalised notion \(\mathrm{Sat}_{\widetilde{I}}\) in place of the meta-level '\(\vdash\)'. Note that the inductive predicates occurring in \(\pi^{\prime}\) parametrise the satisfiability predicate. From here Lemma 5.6 is formalised by proving soundness of the rules of \(\mathsf{LID}^{-}_{<\omega}\) with respect to \(\mathrm{Sat}_{\widetilde{I}}\), keeping track of immediate ancestry and using the results of the previous subsection. We use the (formalised) notions \(I^{\alpha}_{\varphi}\) as inputs to \(\mathrm{Sat}_{\widetilde{I}}\) in order to evaluate formulas like \(\varphi^{\alpha}\), and we rely on well-foundedness of the class of well-orders, Proposition 7.2, to make the correct decisions cf. Item 2(b). Let us point out that, for a fixed step \(r\), the description of \((\rho^{\prime},S^{\prime})\) from \((\rho,S)\) is arithmetical in \(\widetilde{I},\mathrm{Sat}_{\widetilde{I}}\), \(<_{\mathrm{WO}}\), \(\mathcal{O}\) and \(\widetilde{I}^{-}\), by essentially following the specification in the Lemma statement, relativising 'ordinals' to \(\mathcal{O}\).
**Putting it all together, formally.** Finally let us discuss how the proof of Theorem 5.1 (for \(\pi^{\prime}\)) is formalised. Recall that the infinite 'countermodel branch' \((v_{i})_{i<\omega}\) is recursive in the construction from (formalised) Lemma 5.6. Since that construction was arithmetical (in certain set symbols), we indeed have access to the countermodel branch \((v_{i})_{i<\omega}\) as a set by comprehension. Now, since we know that \(\pi^{\prime}\) is progressing, we can duly take a progressing trace \((\tau_{i})_{i\geq k}\) along it. From here the obtention of the sequence of (now recursive) ordinals \((\alpha_{i})_{i\geq k}\) is obtained by a simple comprehension instance arithmetical in \(\mathrm{Sat}_{\widetilde{I}}\), \(\widetilde{I}^{-}\) and \(<_{\mathrm{WO}}\). The remainder of the argument goes through as written, appealing to formalised versions of auxiliary statements.
From here we may conclude the main result of this section as promised:
Proof sketch of Theorem 8.1.: From \(\mathsf{CID}_{<\omega}\vdash\varphi\), for \(\varphi\) arithmetical, the explanations in this section give us \(\Pi^{1}_{1}\)-\(\mathsf{CA}_{0}\vdash\mathrm{Sat}_{\varnothing}(\varnothing,\ulcorner \varphi\urcorner,\varnothing)\). By reflection, Corollary 6.3, we thus have \(\Pi^{1}_{1}\)-\(\mathsf{CA}_{0}\vdash\varphi\), and so by conservativity, Theorem 6.1, we have \(\mathsf{ID}_{<\omega}\vdash\varphi\), as required.
## 9 Conclusions
We presented a new cyclic system \(\mathsf{CID}_{<\omega}\) formulated over the language \(\mathcal{L}_{<\omega}\) of finitely iterated arithmetical inductive definitions. We showed the arithmetical equivalence of \(\mathsf{CID}_{<\omega}\) and its inductive counterpart \(\mathsf{ID}_{<\omega}\) by nontrivially extending techniques that have recently appeared in the setting of cyclic arithmetic [21, 11]. Among other things, this work serves to further test the metamathematical techniques and methodology now available in cyclic proof theory.
Extensions of predicate logic by 'ordinary' inductive definitions, which are essentially quantifier-free but allow for a form of simultaneous induction, were extensively studied by Brotherston and Simpson, in particular in the setting of cyclic proofs [7, 9, 10]. Indeed
recently Berardi and Tatsuta have shown that cyclic systems for extensions of Peano and Heyting arithmetic by such inductive definitions prove the same theorems as the corresponding inductive systems [4, 5]. As noted by Das in [11] the result of [4] (for Peano arithmetic) is, in a sense, equivalent to Simpson's in [21] since ordinary inductive definitions can be encoded by \(\Sigma_{1}\)-formulas: closure ordinals of ordinary inductive definitions are always bounded above by \(\omega\). Comparing to the current work, recall that the closure ordinals of even a single arithmetical inductive definition exhaust all recursive ordinals.
There are many other possible extensions of the language of arithmetic \(\mathcal{L}_{A}\) by fixed points. One natural avenue for further work would be to consider \(\mathcal{L}_{\alpha}\) for both \(\alpha<\omega\) and \(\alpha\geq\omega\). Again the corresponding finitary systems \(\mathsf{ID}_{\alpha}\) play a crucial role in the ordinal analysis of stronger impredicative subsystems of second-order arithmetic (see, e.g., [19]). However what may be more interesting in the context of cyclic proof theory is the extension of \(\mathcal{L}_{A}\) (and \(\mathcal{L}_{<\omega}\)) by so-called 'general' inductive definitions, as in [16, 17]. These essentially extend the syntax of \(\mathcal{L}_{A}\) in the same way that fixed points of the modal \(\mu\)-calculus extend the language of modal logic, in particular allowing set parameters within inductive definitions. Such a setting necessarily exhibits more complicated metatheory, but is a natural target in light of the origins of cyclic proof theory based in the \(\mu\)-calculus and first-order logic with inductive definitions. To this end, let us point out that cyclic systems for the 'first-order \(\mu\)-calculus' have already appeared [24, 23, 1], and so could form the basis of such investigation.
|
2304.06361 | Application of fusion technique to the solution of Harrington problem
and its generalizations to Baire functions, part I | In this paper we provide solutions of the Harrington problem (along with a
few generalizations) proposed in a book Analytic Sets. The original problem
asks if for arbitrary sequence of continuous functions from \( \R^\omega \) to
a fixed compact interval we can find a subsequence point-wise convergent on
some product of perfect subsets of \( \R \). We reduce aforementioned problem
to functions from \( C^\omega \) to \(C\), where \(C\) is a standard Cantor set
as well as also provide solution to the problem with Baire functions in place
of continuous ones. Our main focus is on showing applications of the fusion
lemma - a result about perfect trees used among others to prove minimality of
Sack's forcing - to the problem at hand. | Sławomir Kusiński | 2023-04-13T09:34:32Z | http://arxiv.org/abs/2304.06361v2 | Application of fusion technique to the solution of Harrington problem _and its generalizations to Baire functions_, **part I**
###### Abstract
In this paper we provide solutions of the Harrington problem (along with a few generalizations) proposed in a book _Analytic Sets_. The original problem asks if for arbitrary sequence of continuous functions from \(\mathbb{R}^{\omega}\) to a fixed compact interval we can find a subsequence point-wise convergent on some product of perfect subsets of \(\mathbb{R}\). We reduce aforementioned problem to functions from \(C^{\omega}\) to \(C\), where \(C\) is a standard Cantor set as well as also provide solution to the problem with Baire functions in place of continuous ones. Our main focus is on showing applications of the fusion lemma - a result about perfect trees used among others to prove minimality of Sack's forcing - to the problem at hand.
## 1 Preliminary notions
By a perfect set we mean a subset of a topological space that is compact and has no isolated points. It is worth noting that some authors use a slightly weaker definition of a perfect set, namely they require it to be only closed instead of compact. By 2 we mean a set \(\{0,1\}\) with discrete topology. By \(C\subseteq\mathbb{R}\) we mean a standard Cantor set. It is well known that \(C\) is homeomorphic to the space \(2^{\omega}\) with product topology where \(\omega\) denotes the set of all natural numbers.
In [1] in the problems section L Harrington published a following problem (as a possible weakening of a problem of Halpern).
**Problem 1**.: _Given continuous functions \(f_{n}\colon\mathbb{R}^{\omega}\to[0;1]\), do there exist a set \(N=\{n_{i}\colon i\in\omega\}\in[\omega]^{\omega}\) and nonempty perfect sets \(P_{j}\subseteq[0;1]\) for \(j\in\omega\) such that the subsequence \((f_{n_{i}})_{i\in\omega}\) is pointwise convergent on the product \(\prod\limits_{j\in\omega}P_{j}\)?_
We have acquired information that the one dimensional version of such problem has been solved in 1920s by S Mazurkiewicz, but unfortunately we were not able to trace it back to the original paper. In [12] Laver showed amongst others that we get an equivalent problem if we substitute continuous for measurable functions or functions with the Baire property. This prompted to consider variants of the problem with
different notions of measurability and different topologies (most notably with Ellentuck topology [4]).
We will restrict our attention to the functions with domain \(C^{\omega}\) and codomain \(C\). Note that restriction of domain is not a weakening of the statement and as we will show such a restriction of codomain leads in fact to an equivalent problem, ie by solving such a variant we will also solve the original problem in all its generality. Apart from the classical variant for continuous function we will also consider Baire functions and measurable functions. Our main tool at work will be the fusion lemma - a result about perfect trees, that originated in forcing theory. In its original form it was used to prove the minimality of Sack's forcing. We will use a slightly more complex, topological variant of it.
**Definition 1**.: _Let \(T\subseteq 2^{<\omega}=\bigcup\limits_{n\in\omega}2^{n}\)._
_We will say that \(T\) is a tree if for any \(s\in T\) and \(t\in 2^{<\omega}\) if \(t\subseteq s\) then \(t\in T\), ie it is closed under taking the initial segment._
_We will further say that \(T\) is a perfect tree if for any \(s\in T\) there exist \(t_{1},t_{2}\in T\) with \(t_{1}\neq t_{2}\) such that \(s\subset t_{1}\) and \(s\subset t_{2}\)._
**Definition 2**.: _Let \(T\) be a perfect tree and for each \(s\in T\) let there be a perfect set \(U_{s}\subseteq X\), where \(X\) is a metrizable space. Then \((U_{s})_{s\in T}\) is called a fusion sequence if_
* \(U_{s_{1}}\supseteq U_{s_{2}}\) _for_ \(s_{1},s_{2}\in T\) _and_ \(s_{1}\subseteq s_{2}\)_,_
* \(U_{s^{\sim}0}\cap U_{s^{\sim}1}=\emptyset\) _for_ \(s\in T\)_._
We have a following fundamental property of fusion sequences.
**Theorem 1**.: _(Fusion Lemma) Let \(T\) be a perfect tree and let \(\tilde{T}=\{f\in 2^{\omega}\colon\forall_{n\in\omega}f|_{n}\in T\}\). Let \((U_{s})_{s\in T}\) be a fusion sequence. If the diameter of \(U_{s}\) tends to \(0\) with increasing length of \(s\) then the set_
\[P=\bigcap\limits_{n\in\omega}\bigcup\limits_{s\in T}U_{s}=\bigcup\limits_{f \in\tilde{T}}\bigcap\limits_{n\in\omega}U_{f|n}\]
_is a perfect set and it is homeomorphic to the Cantor set. [9][8]_
If we are considering exclusively subsets of the Cantor set (or for that matter a space homeomorphic to it) the restriction on diameter can be dropped up to extent.
**Theorem 2**.: _(Fusion Lemma) Let \(T\) be a perfect tree and let \(\tilde{T}=\{f\in 2^{\omega}\colon\forall_{n\in\omega}f|_{n}\in T\}\). Let \((U_{s})_{s\in T}\) be a fusion sequence of subsets of \(C\). Then the set_
\[Q=\bigcap\limits_{n\in\omega}\bigcup\limits_{s\in T}U_{s}=\bigcup\limits_{f \in\tilde{T}}\bigcap\limits_{n\in\omega}U_{f|n}\]
_contains a non-empty perfect subset. If we further assume that the sets \(U_{s}\) are all basic clopen subsets of \(C\) (with respect to the product topology on \(2^{\omega}\)) then \(Q\) itself is perfect._
Cantor set, perfect sets and perfect trees
It worth noting that the Cantor set, perfect sets and perfect trees are very closely connected. In this section we will outline those properties of them that will be useful in proving our main result. For any non-empty set \(A\subseteq 2^{\omega}=C\) let
\[T_{A}=\{x|_{n}\colon x\in A,n\in\omega\}.\]
\(T_{A}\) is a perfect tree if and only if the set \(A\) has no isolated points. Moreover the set
\[\tilde{T_{A}}=\{x\in 2^{\omega}\colon\forall_{n\in\omega}x|_{n}\in T_{A}\}= \operatorname{Cl}(A),\]
ie \(A=\tilde{T_{A}}\) if and only if \(A\) is perfect. On the other hand if \(T\) is any perfect tree the fusion lemma automatically gives us that \(\tilde{T}\) is a perfect set. Of course \(T_{\tilde{T}}=T\).
**Corrolary 1**.: _Let \(P\subseteq C\) be perfect and non-empty. Then \(P\) is homeomorphic to \(C\)._
**Proposition 1**.: _Let \(X\) be a metric space and \(A\subseteq X\) be a non-empty perfect set. Then there exist a subset of \(A\) homeomorphic to the Cantor set._
_Proof:_ As \(A\) is perfect and non-empty it has infinitely many elements. Let \(x_{0},x_{1}\in A\) be distinct. There has to exist \(r_{0}>0\) such that \(B(x_{0},r_{0})\cap B(x_{1},r_{0})=\emptyset\). Let
\[P_{0} =A\cap\bar{B}(x_{0},\frac{r_{0}}{2}),\] \[P_{1} =A\cap\bar{B}(x_{1},\frac{r_{0}}{2}).\]
Observe that as \(A\) is perfect so are the sets defined above. Now inductively for every \(i_{0},\ldots,i_{n}\in 2\) there exist distinct \(x_{i_{0},\ldots,i_{n},0},x_{i_{0},\ldots,i_{n},1}\in P_{i_{0},\ldots,i_{n}}\) and there exists \(r_{n+1}>0\) (common for all the sequences of length \(n+1\)) such that \(B(x_{i_{0},\ldots,i_{n},0},r_{n+1})\cap B(x_{i_{0},\ldots,i_{n},1},r_{n+1})=\emptyset\). We can thus define
\[P_{i_{0},\ldots,i_{n},0} =A\cap\bar{B}(x_{i_{0},\ldots,i_{n},0},\frac{r_{n+1}}{2}),\] \[P_{i_{0},\ldots,i_{n},1} =A\cap\bar{B}(x_{i_{0},\ldots,i_{n},1},\frac{r_{n+1}}{2}).\]
It is clear from the construction that \(\lim\limits_{n\rightarrow+\infty}r_{n}=0\). From the compactness it follow that for any \(y\in 2^{\omega}\) we have \(|\bigcap\limits_{n\in\omega}P_{y|_{n}}|=1\). Thus the set
\[P=\bigcup\limits_{y\in 2^{\omega}}\bigcap\limits_{n\in\omega}P_{y|_{n}}\]
is homeomorphic to the Cantor set.
**QED**
**Corrolary 2**.: _Let \(f\colon\mathbb{R}^{\alpha}\to\mathbb{R}\), where \(\alpha<\omega_{1}\), be continuous and let \(A_{n}\subseteq\mathbb{R}\) for \(n\in\alpha\) be perfect. If \(f(\prod\limits_{n\in\alpha}A_{n})\) contains a perfect subset then it contains a subset homeomorphic to the Cantor Set._
Topological spaces that do not have a non-empty dense-in-itself subset are called scattered. It is a well know property that second countable scattered spaces are countable.
**Proposition 2**.: _Let \(X\) be a compact metric space and \(f\colon X\to\mathbb{R}\) be continuous. If there exists \(B\subseteq f(X)\) homeomorphic to the Cantor set then there exists \(A\subseteq X\) also homeomorphic to the Cantor set and such that \(f(A)\subseteq B\)._
_Proof:_ As \(f^{-1}(B)\) is compact, if it didn't have any perfect subsets then it would be in fact scattered as closure operation preserves isolated points. On the other hand \(f^{-1}(B)\) cannot be scattered as it is uncountable. It means that there exist a non-empty perfect subset \(P\subseteq f^{-1}(B)\) and from the corollary above we get that \(P\) has to have a subset homeomorphic to the Cantor set.
\(\mathbf{QED}\)
The next two propositions will be vital in some of our later arguments.
**Proposition 3**.: _Let \(A\subseteq C\) be a dense \(G_{\delta}\) set. There exists a non-empty perfect set \(P\subseteq A\)._
_Proof:_ Let \(A=\bigcap\limits_{i\in\omega}U_{i}\), where the sets \(U_{i}\) are open and dense. Let \(W_{(0)},W_{(1)}\subseteq U_{0}\) be disjoint basic clopen sets. Now with \(W_{s}\) defined for \(s\in 2^{i}\) let \(W_{s^{\sim}0},W_{s^{\sim}1}\subseteq W_{s}\cap U_{i+1}\) be disjoint basic clopen sets. From the properties of fusion we obtain that the set
\[P=\bigcup\limits_{x\in 2^{\omega}}\bigcap\limits_{i\in\omega}W_{x|_{i}}=\bigcap \limits_{i\in\omega}\bigcup\limits_{s\in 2^{i}}W_{s}\]
is perfect. Clearly \(P\subseteq\bigcap\limits_{i\in\omega}U_{i}=A\).
\(\mathbf{QED}\)
It is worth noting that the multidimensional version of this proposition utilises basically the same proof idea, but it needs to be adjusted to the product topology.
**Proposition 4**.: _Let \(A\subseteq C^{\omega}\) be a dense \(G_{\delta}\) set. There exist non-empty perfect sets \(P_{k}\subseteq C\) for \(k\in\omega\) such that \(\prod\limits_{k\in\omega}P_{k}\subseteq A\)._
_Proof:_ Let \(A=\bigcap\limits_{i\in\omega}U_{i}\), where the sets \(U_{i}\) are open and dense. Let
\[V_{0}=\prod\limits_{k\in\omega}V_{0,k}\subseteq U_{0}\]
be a basic clopen set. Note that \(V_{0,k}\) is a proper subset of \(C\) for only a finite amount of \(k\in\omega\). We can represent \(V_{0,0}\) as a sum of two disjoint basic clopen sets \(W_{(0),0},W_{(1),0}\subseteq C\). Now suppose that we have a clopen set
\[V_{i}=\prod_{k\in\omega}V_{i,k}\subseteq U_{0}\cap\ldots\cap U_{i}\]
such that for \(k\leq i\) the sets \(V_{i,k}\) are represented as a disjoint union
\[V_{i,k}=\bigcup_{s\in 2^{i+1}}W_{s,k}\]
of basic clopen subsets of \(C\), ie
\[V_{i}=\bigcup_{s\in(2^{i+1})^{i}}W_{s(0),0}\times\ldots W_{s(i),i}\times\prod _{k>i}V_{i,k}.\]
Note that the sum above is finite. By taking intersections of \(U_{i+1}\) with all those sets one after another we can find the basic clopen sets \(W_{s,k}^{*}\subseteq W_{s,k}\) and for \(k>i\) the basic clopen sets \(V_{i+1,k}\subseteq V_{i,k}\) (of which only finite amount are proper subsets) such that the sum
\[\bigcup_{s\in(2^{i})^{i}}W_{s(0),0}^{*}\times\ldots W_{s(i),i}^{*}\times\prod _{k>i}V_{i+1,k}\subseteq\subseteq U_{0}\cap\ldots\cap U_{i+1}.\]
We can represent each set \(W_{s,k}^{*}\) as a disjoint union of basic clopen sets \(W_{s^{-}0,k},W_{s^{-}1}\subseteq C\) and the set \(V_{i+1,i+1}\) as the sum of \(2^{i+2}\) disjoint basic clopen sets \(W_{s,i+1}\) for \(s\in 2^{i+2}\). For \(k\leq i\) we can take
\[V_{i+1,k}=\bigcup_{s\in 2^{i+2}}W_{s,k}.\]
We consider the fusion sequence on each coordinate separately. From the properties of fusion we obtain that all the sets
\[P_{k}=\bigcup_{x\in 2^{\omega}}\bigcap_{i\geq k}W_{x|i,k}=\bigcap_{i>k}\bigcup _{s\in 2^{i}}W_{s,k}\]
are perfect. Clearly we have \(\prod\limits_{k\in\omega}P_{k}\subseteq\bigcap\limits_{i\in\omega}U_{i}=A\).
**QED**
We will now proceed with proving that we can substitute in the original problem functions from \(\mathbb{R}^{\omega}\) to \([0;1]\) by functions from \(C^{\omega}\) to \(C\).
**Lemma 1**.: _Let \(P_{k}\subseteq C\) be perfect sets and let \(f\colon\prod\limits_{k\in\omega}P_{k}\to[0;1]\) be a continuous function. If the image of the function \(f\) is perfect then there exist perfect sets \(Q_{k}\subseteq P_{k}\), each homeomorphic to the Cantor set, such that \(f(\prod\limits_{k\in\omega}P_{k})\) is either not perfect or is homeomorphic to the Cantor set._
_Proof:_ Let \(a_{0,0}\) and \(a_{0,1}\) be the smallest and largest elements of the image of \(f\) respectively. We will define a sequence of refining partitions of \([a_{0,0};a_{0,1}]\) into interval in a following way. As the image of \(f\) is perfect clearly \(a_{0,0}<a_{0,1}\) and with \(a_{n,i}\) defined for \(i\leq 2^{n}\) there have to exist \(a_{n+1,i}\in[0;1]\) for \(i\leq 2^{n+1}\) such that
* \(a_{n+1,2i}=a_{n,i}\) for \(i\leq 2^{n}\)
* \(a_{n+1,i}<a_{n+1,i+1}\) for \(i<2^{n+1}\)
* \(f^{-1}(a_{n+1,i})\) is nowhere dense for \(0<i<2^{n+1}\)
* \(f^{-1}([a_{n+1,0};a_{n+1,1})),f^{-1}((a_{n+1,1};a_{n+1,2})),\ldots,\)
* \(f^{-1}((a_{n+1,2^{n+1}-2};a_{n+1,2^{n+1}-1})),f^{-1}((a_{n+1,2^{n+1}-1};a_{n+ 1,2^{n+1}}])\) are not nowhere dense
* the diameter of partitions defined in such a manner tends to \(0\) with increasing \(n\)
Let
\[A_{n}=\{a_{n,i}\colon i\leq 2^{n}\}.\]
Then we have \(A_{n}\subseteq A_{n+1}\). The set \(A=\bigcup_{n\in\omega}A_{n}\) is countable and its preimage \(f^{-1}(A)\) is a meager \(F_{\sigma}\) set, ie \(\prod\limits_{k\in\omega}P_{k}\setminus f^{-1}(A)\) is a dense \(G_{\delta}\) set a thus contains a product \(\prod\limits_{k\in\omega}Q_{k}\) where all the sets \(Q_{k}\) are perfect and non-empty. The image \(f(\prod\limits_{k\in\omega}Q_{k})\) is clearly zero-dimensional and thus if it is perfect it has to be homeomorphic to the Cantor set.
**QED**
**Theorem 3**.: _Let the following property hold._
* _For any continuous functions_ \(f_{n}\colon C^{\omega}\to C\) _there exist a set_ \(N=\{n_{i}\colon i\in\omega\}\in[\omega]^{\omega}\) _and non-empty perfect sets_ \(P_{k}\subseteq C\) _for_ \(k\in\omega\) _such that the subsequence_ \((f_{n_{i}})_{i\in\omega}\) _is pointwise convergent on the product_ \(\prod\limits_{k\in\omega}P_{k}\)_._
_Then the answer to the Harrington problem is positive._
_Proof:_ Let \(P_{k}\subseteq\mathbb{R}\) be perfect for \(k\in\omega\). Observe that if for an infinite amount of \(n\in\omega\) the set \(f_{n}(\prod\limits_{k\in\omega}P_{k})\) is not perfect then we obtain the result right away as we can pick an isolated point in each of such sets and then find a convergent subsequence. Thus we can at any further point in the proof assume that all images \(f_{n}(\prod\limits_{i\in\omega}P_{i})\) are perfect.
We will begin with restricting all the functions to the set \(C^{\omega}\). By the lemma above there are perfect sets \(P_{0,k}\subseteq C\) for \(k\in\omega\) such that \(C_{0}=f_{0}(\prod\limits_{k\in\omega}P_{0,k})\) is homeomorphic to the Cantor set. Inductively we can define the sets \(P_{n+1,k}\subseteq P_{n,k}\) such that \(C_{n+1}=f_{n+1}(\prod\limits_{k\in\omega}P_{n+1,k})\) is homeomorphic to the Cantor set. Moreover they can be chosen in such a way that
\[P_{n,k}=\bigcup_{s\in 2^{n+1}}Q_{s,k}\]
for \(k\leq n\), where all the sets \(Q_{s,k}\) are pairwise disjoint and they are intersections of \(P_{n,k}\) with basic clopen subsets in \(C\)
Any finite sum of the sets \(C_{i}\) is perfect and zero-dimensional and thus homeomorphic to the Cantor set. Any infinite sum will be dense in itself, but it may not be closed. Let us pick such \(N\in[\omega]^{<\omega}\) that for any \(a,b\in[0;1]\) with \(a<b\) the set \(\bigcup\limits_{n\in N}C_{n}\cap(a;b)\) is not dense in \((a;b)\). Then the closure \(D=\mathrm{Cl}(\bigcup\limits_{n\in N}\mathrm{C_{n}})\) is in fact homeomorphic to the Cantor set. From the properties of fusion we obtain that each set
\[Q_{k}=\bigcup\limits_{x\in 2^{\omega}}\bigcap\limits_{i\geq k}W_{x|_{i},k}= \bigcap\limits_{i>k}\bigcup\limits_{s\in 2^{i}}P_{s,k}\]
contains a perfect subset \(P_{k}\). It follows that
\[f_{n}|_{\prod\limits_{i\in\omega}}P_{i}\colon\prod\limits_{i\in\omega}P_{i}\to D\]
and our assumption gives the desired result.
**QED**
## 3 Baire property and measurability
As one of our variants we consider Baire functions instead of continuous ones.
**Definition 3**.: _We will say that a subset \(A\) of a topological space \(X\) has Baire property if it can be represented as a symmetric difference \(U\triangle m\) of an open set \(U\) and a meager set \(m\)._
_We will say that a function \(f\colon X\to Y\) - for topological spaces \(X\) and \(Y\) is Baire if preimage \(f^{-1}(U)\) of any open subset \(U\) of \(Y\) has Baire property._
A natural question arises if Baire functions are always continuous apart from some meager set. In [5], [7] and [6] one can find following partial answers to that question, which will be of importance to our considerations.
**Theorem 4**.: _Let \(X\) and \(Y\) be metric spaces. The following statements are equivalent:_
* _For every Baire function from_ \(X\) _to_ \(Y\) _there exists a meager set_ \(m\subseteq X\) _such that_ \(f|_{X\setminus m}\) _is continuous._
* _There does not exist a partition (called_ \(K\)_-partition)_ \(\mathcal{F}\) _of_ \(X\) _into meager subsets such that for any_ \(\mathcal{F}^{\prime}\subseteq\mathcal{F}\) _the sum_ \(\bigcup\mathcal{F}^{\prime}\) _has Baire property._
**Theorem 5**.: _Let \(f\colon X\to Y\) be Baire and \(Y\) be a separable metrizable space. Then there exists a meager set \(m\subseteq X\) such that \(f|_{X\setminus m}\) is continuous._
**Theorem 6**.: _Let \(f\colon X\to Y\) be Baire and \(X\) be a completely metrizable space of weight at most \(\mathfrak{c}\) and \(Y\) be a metrizable space. Then there exists a meager set \(m\subseteq X\) such that \(f|_{X\setminus m}\) is continuous._
As another one of our variants considers measurable functions the following variant of the well know Luzin's theorem, which can be found in [10], will be vital in our reasonings.
**Theorem 7**.: _(Luzin) Let \(E\subseteq\mathbb{R}\) be Lebesgue measurable and \(f\colon E\to R\). The function \(f\) is measurable iff for any \(\varepsilon>0\) there exists a closed set \(F\) such that \(f|_{F}\) is continuous and \(|E\setminus F|<\varepsilon\)._
## 4 Main result
We will apply fusion lemma to our problem.
**Theorem 8**.: _Let \(f_{n}\colon C\to 2\) be continuous functions. Then there exist \(N=\{n_{i}\colon i\in\omega\}\in[\omega]^{\omega}\) and a non-empty perfect set \(P\subseteq C\) such that the subsequence \((f_{n_{i}})_{i\in\omega}\) is pointwise convergent on \(P\)._
_Proof:_ Let \(U_{0}^{n}=f_{n}^{-1}(0)\) and \(U_{1}^{n}=f_{n}^{-1}(1)\). Observe that those are disjoint clopen sets and their sum is whole \(C\). If infinitely many sets \(U_{j}^{n}\) are empty then the result follows in a straightforward way. Let \(n_{0}\in\omega\) be such that both \(U_{0}^{n_{0}}\) and \(U_{1}^{n_{0}}\) are non-empty. There exist \(t_{(0)},t_{(1)}\in 2^{<\omega}\) such that
\[U_{(0)}=C_{t_{(0)}}\subseteq U_{0}^{0}\]
and
\[U_{(1)}=C_{t_{(1)}}\subseteq U_{1}^{0}.\]
Assume that all \(U_{s}\) and \(n_{i}\) are defined for \(s\in 2^{m+1}\) and \(i\leq m\). If there exists \(s\in 2^{m+1}\) such that \(U_{0}^{n}\cap U_{s}=\emptyset\) or \(U_{1}^{n}\cap U_{s}=\emptyset\) for infinitely many \(n>n_{m}\) then once again the result follows in a straightforward way. Otherwise let \(n_{m+1}\in\omega\) be such that \(U_{j}^{n_{m+1}}\cap U_{s}\neq\emptyset\) for all \(s\in 2^{m+1}\) and \(j\in 2\). It follows that there exist \(t_{s^{\frown}0},t_{s^{\frown}1}\supset t_{s}\) such that
\[U_{s^{\frown}0}=C_{t_{s^{\frown}0}}\subseteq U_{0}^{n_{m+1}}\cap U_{s}\]
and
\[U_{s^{\frown}1}=C_{t_{s^{\frown}1}}\subseteq U_{1}^{n_{m+1}}\cap U_{s}.\]
Now consider a set
\[S=\{x\in 2^{\omega}\colon x(2m)=0\text{ for }m\in\omega\}\]
it is clearly uncountable. By the properties of fusion we obtain that the set
\[Q=\bigcup_{x\in S}\bigcap_{i\in\omega}U_{x|_{i}}=\bigcap_{m\in\omega}\bigcup _{s\in 2^{2m+1}}U_{s^{\frown}0}\]
is compact as well as uncountable and thus contains a perfect subset \(P\). It is easy to see that the sequence \((f_{n_{2i}})_{i\in\omega}\) is convergent to \(0\) on \(P\).
**Theorem 9**.: _Let \(f_{n}\colon C^{\omega}\to 2\) be continuous functions. Then there exist \(N=\{n_{i}\colon i\in\omega\}\in[\omega]^{\omega}\) and non-empty perfect sets \(P_{k}\subseteq C\) for \(k\in\omega\) such that the subsequence \((f_{n_{i}})_{i\in\omega}\) is pointwise convergent on the product \(\prod\limits_{k\in\omega}P_{k}\). What is more we can assume that \(P_{k}=C\) for \(i>0\)._
_Proof:_ For any fixed \(x\in C^{\omega}\) let us define functions \(g_{x,n}\colon C\to 2\) in a following way
\[g_{x,n}(y)=f_{n}(y,x_{0},x_{1},\ldots).\]
Let \(A=\{a_{m}\colon m\in\omega\}\) be a countable, dense subset of \(C^{\omega}\). From the theorem above there exists a non-empty perfect set \(Q_{\emptyset}\subseteq C\) and \(N_{0}\in[\omega]^{\omega}\) such that the subsequence \((g_{a_{0},n})_{n\in N_{0}}\) is pointwise convergent on \(Q_{\emptyset}\). There exist disjoint clopen sets \(U_{(0)},U_{(1)}\subseteq C\) such that
\[P_{(0)}=U_{(0)}\cap Q_{\emptyset}\neq\emptyset\]
and
\[P_{(1)}=U_{(1)}\cap Q_{\emptyset}\neq\emptyset.\]
The sets \(P_{(0)}\) and \(P_{(1)}\) are perfect and thus homeomorphic to \(C\) itself.
Assume that all \(P_{s}\) and \(N_{i}\) are defined for \(s\in 2^{m+1}\) and \(i\leq m\). As the set \(2^{m+1}\) is finite from the theorem above we get that there exist the perfect sets \(Q_{s}\subseteq P_{s}\) for \(s\in 2^{m+1}\) and \(N_{m+1}\in[N_{m}]^{\omega}\) such that the sequence \((g_{a_{m+1},n})_{n\in N_{m+1}}\) is pointwise convergent on all the sets \(Q_{s}\). For each such set there exist disjoint clopen sets \(U_{s^{\neg}0},U_{s^{\neg}1}\subseteq C\) such that
\[P_{s^{\neg}0}=U_{s^{\neg}0}\cap Q_{s}\neq\emptyset\]
and
\[P_{s^{\neg}1}=U_{s^{\neg}1}\cap Q_{s}\neq\emptyset.\]
Clearly all the sets \(P_{s^{\neg}j}\) are homeomorphic to \(C\).
Let us define the set \(N=\{n_{m}\colon m\in\omega\}\) in a following way.
\[n_{0}=\min(N_{0})\]
and
\[n_{m+1}=\min(N_{m+1}\setminus\{n_{0},\ldots,n_{m}\}).\]
Clearly \((g_{a_{m},n})_{n\in N}\) is convergent on all \(P_{s}\) for \(m\in\omega\) and \(s\in 2^{<\omega}\). By the properties of fusion we obtain that the set
\[Q=\bigcup_{x\in 2^{\omega}}\bigcap_{i\in\omega}P_{x|_{i}}=\bigcap_{m\in\omega }\bigcup_{s\in 2^{m}}P_{s}\]
is compact as well as uncountable and thus contains a perfect subset \(P\). We get that \((g_{a_{m},n})_{n\in N}\) is convergent on \(P\) for any \(m\in\omega\). Thus it follows from density of
and continuity of the functions \(f_{n}\) we obtain that the sequence \((f_{n})_{n\in N}\) is pointwise convergent on the product \(P\times\prod\limits_{k>0}C\).
**QED**
**Corrolary 3**.: _Let \(f_{n}\colon C^{\omega}\to C\) be continuous functions. Then there exists \(N=\{n_{i}\colon i\in\omega\}\in[\omega]^{\omega}\) and non-empty perfect sets \(P_{k}\subseteq C\) for \(k\in\omega\) such that the subsequence \((f_{n_{i}})_{i\in\omega}\) is pointwise convergent on the product \(\prod\limits_{k\in\omega}P_{k}\). What is more we can assume that \(P_{k}=C\) for \(i>0\)._
_Proof:_ For \(x=(x_{0},x_{1},\ldots)\in 2^{\omega}=C\) define
\[\pi_{l}(x)=x_{l}\text{ for }l\in\omega.\]
The projections \(pi_{n}\) are clearly continuous. Let \(g_{l,n}=\pi_{l}\circ f_{n}\). We can apply the theorem above to the functions \(g_{0,n}\) and get the non-empty perfect set \(Q_{\emptyset}\subseteq C\) and \(N_{0}\in[\omega]^{\omega}\) such that \((g_{0,n})_{n\in N}\) is pointwise convergent on \(Q_{\emptyset}\times\prod\limits_{k>0}C\). There exist disjoint clopen sets \(U_{(0)},U_{(1)}\) such that
\[P_{(0)}=U_{(0)}\cap Q_{\emptyset}\neq\emptyset\]
and
\[P_{(1)}=U_{(1)}\cap Q_{\emptyset}\neq\emptyset.\]
The sets \(P_{(0)}\) and \(P_{(1)}\) are perfect and thus homeomorphic to \(C\) itself.
Assume that all \(P_{s}\) and \(N_{i}\) are defined for \(s\in 2^{l+1}\) and \(i\leq l\). As the set \(2^{l+1}\) is finite from the theorem above we get that there exist the perfect sets \(Q_{s}\subseteq P_{s}\) for \(s\in 2^{l+1}\) and \(N_{l+1}\in[N_{l}]^{\omega}\) such that the sequence \((g_{l+1,n})_{n\in N_{l+1}}\) is pointwise convergent on the product \((\bigcup\limits_{s\in 2^{l+1}}Q_{s})\times\prod\limits_{i>0}C\). For each such set there exist disjoint clopen sets \(U_{s^{\frown}0},U_{s^{\frown}1}\subseteq C\) such that
\[P_{s^{\frown}0}=U_{s^{\frown}0}\cap Q_{s}\neq\emptyset\]
and
\[P_{s^{\frown}1}=U_{s^{\frown}1}\cap Q_{s}\neq\emptyset.\]
Clearly all the sets \(P_{s^{\frown}j}\) are homeomorphic to \(C\).
Let us define the set \(N=\{n_{m}\colon m\in\omega\}\) in a following way.
\[n_{0}=\min(N_{0})\]
and
\[n_{m+1}=\min(N_{m+1}\setminus\{n_{0},\ldots,n_{m}\}).\]
By the properties of fusion we obtain that the set
\[Q=\bigcup\limits_{x\in 2^{\omega}}\bigcap\limits_{i\in\omega}P_{x|_{i}}= \bigcap\limits_{l\in\omega}\bigcup\limits_{s\in 2^{l}}P_{s}\]
is compact as well as uncountable and thus contains a perfect subset \(P\). We get that \((g_{l,n})_{n\in N}\) is convergent on \(P\times\prod\limits_{i>0}C\) for any \(l\in\omega\).
**QED**
Applying our earlier codomain reduction argument we obtain.
**Corrolary 4**.: _Let \(X_{k}\) be metric spaces and \(Q_{k}\subseteq X_{k}\) be perfect for \(k\in\omega\). For any continuous functions \(f_{n}\colon X_{k}\to[0;1]\) there exist \(N=\{n_{i}\colon i\in\omega\}\in[\omega]^{\omega}\) and non-empty perfect sets \(P_{k}\subseteq Q_{k}\) for \(k\in\omega\) such that the subsequence \((f_{n_{i}})_{i\in\omega}\) is pointwise convergent on the product \(\prod\limits_{k\in\omega}P_{k}\)._
Amongst other this result provides a positive answer to the original Harrington problem. The generalization to the Baire functions follows in a straightforward way.
**Theorem 10**.: _Let \(f_{n}\colon C^{\omega}\to[0;1]\) be Baire functions. Then there exists \(N=\{n_{i}\colon i\in\omega\}\in[\omega]^{\omega}\) and non-empty perfect sets \(P_{k}\subseteq C\) for \(k\in\omega\) such that the subsequence \((f_{n_{i}})_{i\in\omega}\) is pointwise convergent on the product \(\prod\limits_{k\in\omega}P_{k}\)._
_Proof:_ As weight of \(C^{\omega}\) is equal to \(\omega\) it follows that each \(f_{n}\) is continuous apart from a meager set, ie it is continuous on the intersection
\[G_{n}=\bigcap\limits_{i\in\omega}U_{n,i}\]
of open and dense subsets of \(C^{\omega}\). As \(C^{\omega}\) is a Baire space the set
\[G=\bigcap\limits_{n\in\omega}G_{n}=\bigcap\limits_{(n,i)\in\omega^{2}}U_{n,i} =\bigcap\limits_{j\in\omega}U_{j}\]
is a dense \(G_{\delta}\) set and all of the functions \(f_{n}\) are continuous on \(G\). We obtain \(P=\prod\limits_{k\in\omega}P_{k}\subseteq G\) such that all \(P_{k}\) are homeomorphic to the Cantor set. As all the functions \(f_{n}\) are continuous on \(G\) they are also continuous on \(P\) and the result follows directly from the theorems above.
**QED**
## 5 Further developments
In the next part we will generalize our results to the wider variety of functions as well as topological spaces, including
1. measurable functions
2. functions with \((s)\)-property which are modelled after Sack's forcing [2]
3. functions with an analog of \((s)\)-property for Silver's forcing
4. completely Ramsey function on the space \([\omega]^{<\omega}\) with the Ellentuck topology [4] (which could be thought of as a topological representation of Mathias forcing)
Applying fusion technique to those cases will remain our main focus. Some of those variants might require using fusion technique for different forcing notions and the generalizations of fusion such as Axiom A. [3]
|
2305.06048 | Toward Open Integrated Access and Backhaul with O-RAN | Millimeter wave (mmWave) communications has been recently standardized for
use in the fifth generation (5G) of cellular networks, fulfilling the promise
of multi-gigabit mobile throughput of current and future mobile radio network
generations. In this context, the network densification required to overcome
the difficult mmWave propagation will result in increased deployment costs.
Integrated Access and Backhaul (IAB) has been proposed as an effective mean of
reducing densification costs by deploying a wireless mesh network of base
stations, where backhaul and access transmissions share the same radio
technology. However, IAB requires sophisticated control mechanisms to operate
efficiently and address the increased complexity. The Open Radio Access Network
(RAN) paradigm represents the ideal enabler of RAN intelligent control, but its
current specifications are not compatible with IAB. In this work, we discuss
the challenges of integrating IAB into the Open RAN ecosystem, detailing the
required architectural extensions that will enable dynamic control of 5G IAB
networks. We implement the proposed integrated architecture into the first
publicly-available Open-RAN-enabled experimental framework, which allows
prototyping and testing Open-RAN-based solutions over end-to-end 5G IAB
networks. Finally, we validate the framework with both ideal and realistic
deployment scenarios exploiting the large-scale testing capabilities of
publicly available experimental platforms | Eugenio Moro, Gabriele Gemmi, Michele Polese, Leonardo Maccari, Antonio Capone, Tommaso Melodia | 2023-05-10T10:58:19Z | http://arxiv.org/abs/2305.06048v1 | # Toward Open Integrated Access and Backhaul with O-RAN
###### Abstract
Millimeter wave (mmWave) communications has been recently standardized for use in the fifth generation (5G) of cellular networks, fulfilling the promise of multi-gigabit mobile throughput of current and future mobile radio network generations. In this context, the network densification required to overcome the difficult mmWave propagation will result in increased deployment costs. Integrated Access and Backhaul (IAB) has been proposed as an effective mean of reducing densification costs by deploying a wireless mesh network of base stations, where backhaul and access transmissions share the same radio technology. However, IAB requires sophisticated control mechanisms to operate efficiently and address the increased complexity. The Open Radio Access Network (RAN) paradigm represents the ideal enabler of RAN intelligent control, but its current specifications are not compatible with IAB. In this work, we discuss the challenges of integrating IAB into the Open RAN ecosystem, detailing the required architectural extensions that will enable dynamic control of 5G IAB networks. We implement the proposed integrated architecture into the first publicly-available Open-RAN-enabled experimental framework, which allows prototyping and testing Open-RAN-based solutions over end-to-end 5G IAB networks. Finally, we validate the framework with both ideal and realistic deployment scenarios exploiting the large-scale testing capabilities of publicly available experimental platforms.
IAB, O-RAN, 5G, Colosseum
## I Introduction
Radio Access Network (RAN) densification is a key technique to boost the coverage and performance metrics of current and future generations of mobile radio networks [1]. However, these ultra-dense deployments come with increased costs and complexity for provisioning wired backhaul to each base station [2]. To address this, the 3rd Generation Partnership Project (3GPP) has introduced Integrated Access and Backhaul (IAB) in its Release 16 for NR [3]. With IAB, the backhaul traffic is multiplexed on the air interface together with regular User Equipments (UEs) access traffic. This effectively creates a wireless mesh network of Base Stations (BSs) where only a few require an expensive wired connection to the Core Network (CN) (i.e., the IAB-Donors). Hence the cost-reduction potential through wireless relays (i.e., the IAB-Nodes) [4]. Additionally, IAB is especially relevant for millimeter wave (mmWave)-based radio access, where inexpensive network densification is a fundamental necessity [5].
While the standardization process has reached a sufficient maturity level, the open challenges brought about by integrating access and backhaul are still open. Consequently, IAB offers optimization opportunities at all layers of communication abstraction. At the lowest levels, specialized IAB-aware techniques are required to ensure a fair and effective resource allocation among UEs and Mobile Terminations (MTs) [6, 7]. At the same time, backhaul and access transmission multiplexing must be managed to minimize interference [8]. Furthermore, adaptive topology reconfiguration mechanisms must be provisioned to maintain resiliency against link failures, traffic unbalances and anomalous user distribution [9]. Overall, these sophisticated management procedures require control primitives that go beyond what has been specified by 3GPP.
The unprecedented paradigm shift brought about by the O-RAN architecture, developed by the O-RAN Alliance, promises to enable programmatic control of RAN components through open interfaces and centralized control loops [10]. As such, it is the ideal candidate to unlock the potential optimization and management gains awaiting in IAB. However, the current O-RAN architecture is tailored to traditional RAN deployments, and an extension to enable IAB control
is required. The first contribution of this work resides in a discussion on how the O-RAN architecture, interfaces, and control loops can be extended to IAB scenarios, with the ultimate goal of allowing large-scale, data-driven control and management of 5th generation (5G) IAB networks.
Additionally, to foster prototyping and testing with IAB and O-RAN, we propose a comprehensive framework where researchers can easily deploy an end-to-end O-RAN-enabled IAB network with Over-The-Air (OTA) and hardware-in-the-loop emulation capabilities. In line with O-RAN core concepts, our framework is designed to be open, accessible and flexible by leveraging on open-source software and Commercial Off-the-Shelf (COTS) hardware. The framework builds on IABEST, the first large-scale accessible and open IAB testbed presented in [11]. This testbed has been enriched to produce a complete O-RAN IAB experimental solution, effectively replicating the proposed O-RAN-IAB integrated architecture. In particular, IAB-Donors and IAB-Nodes have been equipped with custom-developed agents for the so-called E2 and O1 standard interfaces. These additions enable the controllers introduced by the O-RAN architecture to manage IAB-Nodes, effectively representing the first publicly available O-RAN-enabled IAB prototyping and testing solution.
To further facilitate experimental research activities, we have packaged and integrated the entire framework into OpenRAN Gym, a publicly-available research platform for data-driven O-RAN experimentation at scale [12]. Through OpenRAN Gym, researchers can swiftly deploy and test the proposed framework over large-scale and publicly available hardware experimental platforms, such as the PAWR testbeds and Colosseum [13, 14]. Notably, we showcase how Colosseum can be leveraged for large-scale IAB testing through hardware-in-the-loop channel emulation to create sophisticated deployment scenarios. A tutorial on how to deploy an O-RAN-driven IAB network, together with the source code of all the framework, is available on the OpenRAN Gym website.1 Finally, we use Colosseum to validate the proposed framework numerically. In particular, we test the attainable performance in a controlled radio scenario and in a more realistic deployment in which we reconstruct a part of Florence, Italy.
Footnote 1: [https://openrangym.com/tutorials/iab-tutorial](https://openrangym.com/tutorials/iab-tutorial)
The remainder of this paper is organized as follows. Section II analyses the challenges of extending O-RAN to 5G IAB networks. Section III contains a description of the proposed frameworks, focusing on the O-RAN extensions that have been included in [11]. Section IV contains the results of the experiments we performed to validate our framework by exploiting the large-scale testing capabilities of Colosseum. Finally, Section V concludes the paper and discusses future extensions.
## II Integrating IAB in Open RAN
As discussed in Section I, IAB represents a scalable solution to the need for backhaul in ultra-dense 5G and 6G deployments. At the same time, however, the wireless backhaul introduces additional complexity to the network deployments: new parameters and configurations that need to be tuned--and possibly, adapted dynamically--to get the best performance out of the network and to seamlessly adjust to updated conditions in the scenario and in the equipment status. For example, it is possible to optimize the IAB network performance by properly selecting the connectivity of IAB-Nodes to their parents [9], or by appropriately allocating resources to backhaul and access flows sharing the same air interface [6].
As for traditional RAN deployments with fiber-based backhaul [15], there is a case to be made for providing IAB RAN equipment with primitives for flexible, dynamic, data-driven programmatic control. This requires providing endpoints to expose telemetry, measurements, and analytics from IAB-Nodes, as well as parameters and control knobs to enable the optimization. So far, the Open RAN paradigm has been successfully applied to non-IAB networks to achieve the same goals, thanks to interfaces that give access to 3GPP Key Performance Measurements (KPMs) and control parameters in the RAN nodes [16, 17]. The Open RAN vision, which is being developed into technical specifications by the O-RAN Alliance, includes controllers that run custom control loops, i.e., the RAN Intelligent Controllers (RICs). The O-RAN Alliance has defined control loops and related RICs that can operate at a time scale of 10 ms to 1 s (i.e., _near-real-time_) or more than 1 s (i.e., _non-real-time_) [18]. The near-real-time, or near-RT, RIC is connected to the RAN nodes through the E2 interface, while the non-real-time RIC, which is part of the network Service Management and Orchestration (SMO), interacts with the RAN through the O1 interface, as shown in the left part of Figure 1. Other interfaces from the non-RT RIC/SMO include A1 to the near-RT RIC, for policy guidance and Artificial Intelligence (AI)/Machine Learning (ML) model management, and the O2 interface to the O-Cloud, which is an abstraction of the virtualization infrastructure that can support the deployment of O-RAN functions. The use of standard interfaces makes it possible to run even third-party applications in the controllers, the so-called _xApps_ and _rApps_ for the Near Real-time RAN Intelligent Controller (near-RT RIC) and Non-Real-Time Ran Intelligent Controller (non-RT RIC), respectively.
The 3GPP already provides control and adaptation capabilities through the IAB Backhaul Adaptation Protocol (BAP) layer, the F1 interface, and the Radio Resource Control (RRC) layer across the IAB-Donor Central Unit (CU) and the IAB-Node Distributed Unit (DU). How and when control and adaptation of such configurations could be performed, however, is left to the vendor implementation. This is where an extension of the O-RAN architecture to IAB networks can play a role, exposing IAB-Donor and IAB-Node functions to the RICs. These can leverage a centralized point of view on the RAN and a wealth of analytics and information usually unavailable in the individual IAB-Donors and Nodes. For IAB, this could translate into effective multi-donor coordination with reduced interference and agile topology adaptation across different IAB-Donor domains, and dynamic resource allocation with
for example--data-driven proactive congestion identification and resolution across access and backhaul links.
### _Extensions to Open RAN_
Extending the O-RAN architecture and interfaces to IAB deployments, however, presents some design and architectural challenges. Primarily, supporting O-RAN interfaces in IAB-Nodes means either (i) terminating the interfaces at the IAB-Donor; or (ii) transporting their data over the wireless backhaul. The first option is simpler, does not require architectural updates, but at the same time limits the control and reconfiguration to what is available in the IAB-Donor, without insight on the IAB-Nodes. The second option, instead, provides more granular access at the cost of additional complexity and tunneling of data over the wireless backhaul.
The 3GPP already foresees performing SMO-like operations through the wireless backhaul interface [19]. Therefore, in this paper and in the architecture described in Figure 1 we consider the second option, which would provide tighter and more effective integration between O-RAN and IAB deployments. In general, the tunneling can be performed by encapsulating the O-RAN interfaces payloads into dedicated bearers. Note that this requires some interaction between functions of the control plane of the network and the transport in the user plane, e.g., through a dedicated Packet Data Unit (PDU) session between a local User Plane Function (UPF) in the IAB-Donor and in the IAB-Node MT. Then, a local interface termination can be installed in the IAB-Node, as it would in a traditional, fiber-equipped RAN node. The O-RAN traffic, in this case, would be multiplexed with user data on the wireless backhaul resources, and it needs to be properly prioritized to achieve the control goals while not harming users' performance or creating congestion.
**E2 extension for IAB.** The extension of the E2 interface likely requires one or multiple new, dedicated E2 Service Models (E2SMs). The E2SM represents the semantic of the E2 interface, i.e., the RAN function with which an xApp in the near-RT RIC interacts. For IAB, an extension of E2SM KPM [20] can be used to expose performance metrics related to the MT, besides the DU. Another near-real-time control target over E2 can include, for example, resource partitioning between backhaul and access traffic, or dynamic Time Division Duplexing (TDD) slot configuration to adapt to varying traffic on the access and backhaul.
**O1 extension for IAB.** The O1 interface would connect the SMO to the IAB-Node, e.g., to perform maintenance and updates of the components (MT and DU) of the IAB-Node. Compared to E2 near-real-time control, the O1 interface would run control loops at 1 s or more. Thus its traffic can be transported with lower priority than the E2 traffic. This makes a case for dedicated bearers and tunnels on the backhaul interface for _each_ of the O-RAN interfaces.
**O2 extension for IAB.** This interface can be used to integrate the IAB-Nodes as resources in the O-Cloud. Compared to traditional virtualization infrastructure for the O-Cloud, the IAB-Nodes are available--and reachable over O2--only when a session is established from one IAB-Donor to the IAB-Node itself.
## III An Experimental Framework for IAB and O-RAN
Our proposed experimental framework packages the entire software chain required to run the O-RAN-enabled IAB network described in Section II in a multi-layer architecture. At the hardware level, our framework does not present any specific requirement. Indeed, every software component can run on COTS hardware like generic x86 machines and USRP Software-defined Radio (SDR). On the other hand, some software components are customized or designed from scratch to reproduce and support a 5G IAB network. In particular, we have adapted OpenAirInterface (OAI), an open source 5G RAN framework [21], to implement IAB-Donors, IAB-Nodes, and IAB-capable core functions. Additionally, we have integrated agents for the E2 and O1 interfaces in the IAB-Donor and IAB-Node, effectively implementing the architectural integration proposed in Section II. These interfaces are used by the non-real-time and real-time RICs packaged in our framework to control all the components of the deployed IAB
Fig. 1: IAB and O-RAN integrated architectures.
network. We now describe the aforementioned components, separating them into the RAN and O-RAN domains.
### _RAN and Core Network Components_
Figure 2 represents an overview of the radio access functional components that enable end-to-end communication in our framework. In particular, we provide the following: a minimal yet functional deployment of 5G CN functions, software-defined IAB-Nodes and IAB-Donors and software-defined UEs.
**IAB-Nodes and IAB-Donors.** According to 3GPP specifications [4], an IAB-Donor hosts a CU and multiple DUs. Similarly, IAB-Node is split into a DU and an MT. Functionally, these have the task of enabling downstream and upstream connectivity, respectively. At the time of writing, OAI's main branch implementation of the CU/DU functional split does not support multiple DUs connected to a single CU [22]. This limitation is incompatible with the IAB architecture. Consequently, we employ a full OAI Next Generation Node Base (gNB) in place of both CU and DU. In other words, the IAB-Nodes and IAB-Donors in our framework do not follow 3GPP split 2. Instead, these components are deployed as monolithic gNBs. As for the MT, an open-source implementation is currently unavailable. However, this component is functionally equivalent to a UE, as it connects to upstream nodes using the same resources and protocols. Consequently, we have selected OAI's software-defined UE to act as MTs in the proposed framework. This results in a system where a single IAB-Node is made up of two concurrently running instances: an OAI gNB--acting as a DU--and an OAI UE--acting as a MT. In the resulting architecture, IAB-Nodes are naturally deployed over two separate machines, hosting the gNB and the UE, and connected out-of-band as it is shown in Figure 2. Alternatively, the two software components can run on a single x86 machine, provided that sufficient computing power is available. While this architecture does not require any particular modification to OAI's implementations, we have added a signaling functionality through which the IAB-Nodes or IAB-Donors can discern connected MTs from connected UEs. This has been achieved through proper manipulation of the UE Capability messages. Such information can be exploited to enable IAB-aware optimization solutions in the gNB.
**Core Network Functions.** A minimal set of 5G CN functions have been included in our framework: Network Repository Function (NRF), Access and Mobility Management Function (AMF), Slicing Magangement Framework (SMF) and User Plane Function (UPF), all based on the OAI 5G core implementation. All these functions run as containers on a single x86 machine, as shown in Figure 2. Due to the selected IAB system design, the UPF required modifications to enable IAB operations. As previously mentioned, UEs acts as MTs in IAB-Nodes, connecting to upstream nodes. The established GPRS Tunneling Protocol (GTP) tunnels are then used to provide direct connectivity between the DU of the node and the CN functions. In other words, MT-acting UEs relay the backhaul traffic of the IAB-Nodes. However, OAI's UPF implementation lacks support for the required forwarding capability,2 as any packet whose destination is not a UE is dropped. Therefore, we have implemented a minimal version of framed routing [23] in OAI UPF, enabling UEs to act as intermediate nodes.
Footnote 2: To the best of the authors’ knowledge, there is no available open source implementation that supports this operating mode.
**User Equipment** From the perspective of the UE, an IAB network deployed using the components described above is entirely standard-compliant. As such, both software-defined UEs (as shown in Figure 2) and COTS UEs can be used in the proposed framework.
### _O-RAN Components_
As mentioned in Section II, O-RAN defines a set of standardized and open interfaces with which the RAN exposes data collection and control primitives to the RICs. In the proposed framework, we have enabled IAB-Nodes and IAB-Donors to be O-RAN-compatible by integrating software agents for the
Fig. 2: Overview of the RAN architecture deployed over white-box hardware.
E2 and O1 interfaces into the codebase of OAI. Furthermore, our framework comprises a near-RT RIC and a non-RT RIC.
**E2 interface integration.** The E2 interface is functionally split into two protocols: E2AP--tasked with establishing a connection with the near-RT RIC--and E2SM--which implements specific monitoring and control functionalities, namely Service Models (SMs), as discussed in Section II. In the software implementation we provide, E2AP has been adapted from O-RAN Alliance Software Community reference implementation and, as such, it is entirely compliant with O-RAN. On the other hand, the SMs provided by the O-RAN alliance are defined using ASN.1: a powerful production-ready abstract description language which is, however, cumbersome and challenging to use in the fast-paced research and development environments targeted by our framework. In light of this, we employ custom SM that are defined through Protocol Buffers (protobuf)--an abstract definition language that is easier to handle and allows for fast prototyping and testing, facilitating the development of IAB-aware control solutions. Since the E2 interface is such that the E2SM messages are encoded and decoded only in the RAN and xApp, the custom SM definitions are transparent to the RIC, allowing our proposed solution to retain generic O-RAN compliance. At the time of this writing, we have implemented a set of protobuf messages that can be used to reproduce both the KPM and RAN Control (RC) SMs [10]. These can be used to develop data collection and control xApps, respectively.
**O1 interface integration.** In order to properly manage all the different aspects of networked elements, the O1 interface defines various Management Services (MnS), which can be used either from the managed entities (the gNBs) to report information back to the RIC or from the managing entity (the SMO and the rApps running on it) to deploy configurations changes, transfer files or update the software on the managed entities [10, 24]. Among all the different MnS, we have focused our contribution on implementing the Heartbeat MnS, which periodically transmits heartbeats; the Fault Supervision MnS, which reports errors and events; and the Performance Assurance MnS, which streams performance data. Those MnS have been integrated into the OAI codebase by implementing a scheduler that, running on a dedicated thread, periodically sends Virtual Network Function (VNF) Event Stream (VES) notifications in JSON format over HTTP. This format and communication protocol has been chosen among the different options defined in the standard, as it is widely known and easily extendable by other researchers. As of now, our implementation reports performance metrics, such as the throughput and information on the channel quality between IAB-Nodes, and failure events, such as RRC or Uplink Shared Channel (UL-SCH) failures, which can be used in rApps to monitor and optimize the backhaul network. Provisioning MnS, which can be used by the rApps to deploy configuration changes (e.g., topology optimizations), have not been implemented by following the O1 specifications, as it would have needed major reworks in the OAI codebase. Instead, we have taken advantage of _IAB-Manager_, a software component we developed to orchestrate IAB experiments, as discussed next.
**IAB-Manager.** IAB networks are generally expected to include several IAB-Nodes, and the proposed framework can scale to such numbers. However, managing experiments with tens or more RAN components can take time and effort. Indeed, each component is potentially hosted by a dedicated machine, and setting up an IAB deployment requires each one to be activated and configured according to a sequence that starts from the CN functions and ends with the terminal IAB-Nodes. To facilitate experimenting at such a large scale, we have developed _IAB-Manager_[11]: a software component that can automate the IAB network deployment and testing through a command line interface and an Application Programming Interface (API). In particular, _IAB-Manager_ is a single entrypoint for controlling the entire experiment: network components and radio environment setup (in case of wireless channel emulation), topology and routing management and reconfiguration, automated testing, and result collection. From a functional perspective, the manager connects to the machines involved in the experimentation and configures them according to the assigned roles. In particular, once the user specifies the roles, the manager sequentially activates each network component until the final deployment is ready for experimentation, greatly simplifying the setup phase. Additionally, as previously mentioned, _IAB-Manager_ executes the network configuration changes mandated by the rApps.
**RAN Intelligent Controllers.** The proposed framework packages a near-RT RIC and a non-RT RIC. Both are compliant with the standard and based on O-RAN Software Community reference implementations.
## IV Validation and Results
This section focuses on validating our proposed framework from an experimental perspective. In particular, we are interested in giving an initial characterization of some fundamental Key Performance Indicators (KPIs) of the deployments allowed by our IAB framework while validating its correct functioning.
While the openness and flexibility of the software components are such that the framework can run on generic hardware, we chose to base our validation campaign on Colosseum [14]. The Colosseum testbed is a publicly available large-scale testing platform with hardware-in-the-loop capabilities. It
\begin{table}
\begin{tabular}{l l} \hline \hline Parameter & Value \\ \hline Area Size for realistic deployment & 0.627 km\({}^{2}\) \\ gNB Density & 45 \#NB/km\({}^{2}\) \\ IAB-donors/ IAB-nodes ratio & 1/10 \\ Emulated center frequency & 28 GHz \\ Bandwidth & 40 MHz \\ Scheduler & 7 2 1 \\ Subcarrier Spacing & 30khz \\ Colosseum Base loss & 50 dB \\
3GPP Channel Model & Urban Micro \\ MIMO layers & 1 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Table of System Settings
comprises 128 Standard Radio Nodes (SRNs), each composed of a powerful x86 computing node and an USRP X310 SDR. All the components in the proposed framework can be easily deployed on SRNs. Every SRN radio is interconnected by an FPGA mesh that emulates arbitrary radio channels defined through tapered delay models. With the capability of emulating complex scenarios of tens of entities, Colosseum makes it possible to deploy large IAB networks over complex propagation scenarios. As such, it represents an ideal validation platform for our framework. Furthermore, Colosseum is open to the research community, and the validation tools are made available, allowing interested parties to start experimenting with a minimal initial effort.
### _Experiments with a linear chain_
We start by evaluating the performance of an IAB network deployed in a tightly controlled scenario. To this end, we consider a 5-hop linear topology, as shown in Figure 2(a). As detailed in Section II, each IAB-Node comprises an MT and a DU, bringing this experiment's overall radio node count to 10. In order to characterize the upper-bound performance of the proposed framework, we employ an ideal propagation scenario. Through properly manipulating Colosseum's channel emulator, a 0 dB pathloss model is selected for nodes connected in the linear topology, and an infinite pathloss is set for all the other channels, effectively suppressing any possible interference. In other words, this radio scenario is equivalent to connecting the SDRs with coaxial cables.3 Transmissions occur on band n78 with 106 Physical Resource Blocks (PRBs) available, for a total of 40MHz bandwidth.
Footnote 3: [https://www.cs.ucsd.edu/~david/](https://www.cs.ucsd.edu/~david/)
Figure 2(b) shows the downlink and uplink TCP throughput against the number of hops, as measured between the core network and the specific MT/UE. The first-hop values of 47 Mbps in DL and 21 Mbps in UL represent the maximum throughput attainable in the testing settings. This upper bound is far from the theoretical maximum allowed by the available bandwidth. It is limited by several factors that depend on the experimental platform, OAI software implementation, and system design. Most notably, the strongest detractor to the final throughput performance is given by the OAI implementation of the software-defined UE, which is employed to build the MT. In particular, the OAI UE is inefficient in reception and transmission, thus becoming a bottleneck for the entire communication chain. Efforts are ongoing to improve the performance and stability of this software framework. Furthermore, the frameworks' system design is such that each IP packet is encapsulated into as many GTP packets as the number of hops. This increased overhead can cause packet fragmentation with a further negative impact on the overall performance. Furthermore, even if the emulated channel is set to a 0 dB pathloss, Colosseum's architecture includes an unavoidable base loss of 50 dB [25] due to characteristics of the hardware architecture. This, together with the aforementioned inefficiencies, make such that packet drops and subsequent retransmissions happen also in this ideal scenario.
As the number of hops increase, the downlink throughput experiences a sharp decrease before stabilizing on a per-hop loss of around 6 Mbps. The notable throughput loss experienced at the second hop can be explained by observing the standard deviation of the throughput, represented by the whiskers in Figure 2(b). This value is at its maximum for the first hop, suggesting that the first radio link is unstable due to the RX pipeline of the MT being overwhelmed. This substantial variability is caused by packet loss and retransmissions and internal buffer overflow, which negatively affect the performance of the second hop, as it is noticeable in the numerical results. At the same time, the second hop's throughput standard deviation is lower, as the decreased traffic volume causes less drops in the involved MTs. This stabilizing effect propagates down the topology, as both the decreasing standard deviation and the linear per-hop loss testify. On the other hand, the uplink throughput is relatively stable and close to the upper bound, even at the fourth hop. This is because the limited OAI UE performance and BS scheduling process limits the uplink traffic volume, and the gNBs are far from being overwhelmed. On the other hand, since the uplink throughput does not significantly decrease from the maximum, the UE's congestion level remains relatively stable and high, as proven by the constant standard deviation values.
RTT is measured when the network is unloaded, that is
Fig. 3: Topology and results for the linear chain.
when there is no traffic flowing through the IAB network. As shown in Figure 2(c), the first hop latency is around 11 ms. This value represents the base processing delay plus a small fixed propagation delay that is, however, the same for each hop. As the number of hops increases, the RTT experiences a linear increase comparable with the first hop latency, as expected. This shows how the system does not introduce any spurious latency when the network is unloaded. Finally, the relatively higher RTT standard deviation of the last hop (as represented by the whiskers in Figure 2(c)) suggests that multiple packet retransmissions are required.
### _Validation over realistic RF scenarios_
After having validated the system performance in a controlled environment, we move to more realistic urban scenarios, representing the typical deployment environment of an IAB network. We start by selecting a densely populated area of a city, from which we extract a precise 3D model of the environment. On top of this, we apply a coverage planning heuristic to find the optimal locations for the IAB-Nodes [26]. We then take advantage of a viewshed algorithm implemented on GPU and the above-mentioned 3D models to evaluate the Line of Sight (LoS) between each pair of locations and to produce the so-called visibility graph [27]. Then, we characterize the propagation according to the 3GPP Urban Micro parametric model [28], and we produce a tapered-delay representation of the communication channels, which Colosseum can then emulate. This process has been carried out for several European cities and four different scenarios are made available.4
Footnote 4: [https://colosseumeu.freshdesk.com/support/solutions/articles/61000303373-integrated-access-and-backhaul-scenarios](https://colosseumeu.freshdesk.com/support/solutions/articles/61000303373-integrated-access-and-backhaul-scenarios)
Motivated by the fact that IAB is unanimously considered as a key enabler of mmWave RAN [29], we are interested in providing an experimental solution that enables testing in such conditions. While Colosseum is not directly capable of operating at frequencies higher than 6 GHz, we can approximate these radio scenarios by reproducing the most relevant propagation characteristics of mmWaves, namely the extremely directive transmissions through beamforming and the increased pathloss [30]. In particular, the pathloss between nodes that are not directly connected in the provided topologies has been set to infinite. The resulting suppression of inter-node interference might appear too ideal at first. However, this is compatible with the highly directive transmissions typical of mmWave, where interference in static conditions (i.e., as in a backhaul IAB topology) can be practically neglected [31]. A more refined mmWave channel emulation will be subject of future extensions. In addition, since Colosseum's channel emulation happens in base-band, we can apply arbitrary pathloss independently of the radio frequency employed during the experiments. Thanks to this flexibility, we could compute pathloss for a carrier frequency of 28 GHz and apply them to LoS links. Nonetheless, the scenarios made available to the Colosseum community are available for both 3.6 GHz and 28 GHz, both with and without inter-node interference suppression.
For the experimental evaluation presented in this work, we
Fig. 4: Realistic deployment scenario in Florence, Italy. Donors are represented in red, while IAB-Nodes are represented in yellow.
Fig. 5: Measurements for the realistic scenario.
have selected a scenario based on the city of Florence, Italy. Figure 4 shows both the urban layout and the IAB deployment, which is extended over \(0.7\) km\({}^{2}\) and comprises \(21\) nodes (3 of which are IAB-Donors). To determine which nodes are going to become IAB-Donors, we have applied the group closeness centrality metric [32] to the visibility graph. This centrality metric selects \(k\) nodes such that their distance to all the other nodes is minimized. Then, we have determined the IAB topology as a Shortest-Path Forest computed over the visibility graph of the area with the well-known Dijktra's Algorithm. Similar to what has been done for the previous analysis, we characterize the throughput and latency at each hop in the network. In this case, however, the different link lengths cause performance variations in the per-hop throughput and latency. As such, we employ box plots to synthetically describe the network performance statistics in Figure 5. In particular, consider Figure 5a. Here the bottom and top edges of each box represent the first and third quartile of the downlink throughput measurements taken at all the different hops in the scenario. Similarly, the central marks indicate the median, and the whiskers represent the extreme data points. The plotted values indicate how the realistic pathloss introduced in the study scenario causes lower performance than the ideal case previously analyzed, independently of the considered hop. The same can be noted for the uplink throughput, as shown in Figure 5b. In both cases, the decreasing per-hop throughput trend is conserved. However, the throughput variability is the same for the two transmission directions. This is because, as opposed to the ideal scenario, the link length now represents the main performance-determining factor. This is testified by the significant distance between the first and third quartile of the first hop in both downlink and uplink throughput, which is consistent with the high variance of the first hop length in the topology of study. As for the second and third hop, the relatively closer quartiles are motivated by lower link length variations for these hops in the considered topology. Finally, the upper whiskers represent the performance of the shortest links, giving a further characterization of the system performance in this realistic scenario.
Figure 5c shows the RTT statistic through the same plotting technique. Differently from the throughput, the latency is not affected by the link length variations in the considered scenario for the first two hops. Additionally, the RTT increase at hops 1 and 2 is consistent with the one experienced in the controlled scenario. On the other hand, the high RTT variance of the third and last hop suggests a high probability of requiring retransmissions along the IAB path.
## V Conclusions
In this work, we have discussed the motivations and challenges of integrating IAB with O-RAN. On this matter, we have proposed possible architecture extensions that enable dynamic control and data collection over 5G IAB networks through O-RAN intelligent controllers. We have implemented the proposed integrated architecture and packaged it into the first publicly available experimental framework enabling at-scale testing and prototyping of O-RAN-based solutions applied to IAB networks. The system comprises all the software components required to establish end-to-end connectivity, plus custom-developed E2 and O1 agents that allow software-defined IAB-Nodes to be O-RAN-compliant. The framework is designed to be open and accessible and can be deployed over COTS hardware. We numerically validated the framework exploiting the large-scale testing capabilities of Colosseum, showing the system's performance over both an ideal linear topology and more sophisticated realistic deployments. Finally, the framework has been packaged and released into OpenRAN Gym and is available to the research community.
|
2302.04820 | High-fidelity Interpretable Inverse Rig: An Accurate and Sparse Solution
Optimizing the Quartic Blendshape Model | We propose a method to fit arbitrarily accurate blendshape rig models by
solving the inverse rig problem in realistic human face animation. The method
considers blendshape models with different levels of added corrections and
solves the regularized least-squares problem using coordinate descent, i.e.,
iteratively estimating blendshape weights. Besides making the optimization
easier to solve, this approach ensures that mutually exclusive controllers will
not be activated simultaneously and improves the goodness of fit after each
iteration. We show experimentally that the proposed method yields solutions
with mesh error comparable to or lower than the state-of-the-art approaches
while significantly reducing the cardinality of the weight vector (over 20
percent), hence giving a high-fidelity reconstruction of the reference
expression that is easier to manipulate in the post-production manually. Python
scripts for the algorithm will be publicly available upon acceptance of the
paper. | Stevo Racković, Cláudia Soares, Dušan Jakovetić, Zoranka Desnica | 2023-02-09T18:15:08Z | http://arxiv.org/abs/2302.04820v2 | High-fidelity Interpretable Inverse Rig: An Accurate and Sparse Solution Optimizing the Quartic Blendshape Model
###### Abstract
We propose a method to fit arbitrarily accurate blendshape rig models by solving the inverse rig problem in realistic human face animation. The method considers blendshape models with different levels of added corrections and solves the regularized least-squares problem using coordinate descent, i.e., iteratively estimating blendshape weights. Besides making the optimization easier to solve, this approach ensures that mutually exclusive controllers will not be activated simultaneously and improves the goodness of fit after each iteration. We show experimentally that the proposed method yields solutions with mesh error comparable to or lower than the state-of-the-art approaches while significantly reducing the cardinality of the weight vector (over \(20\%\)), hence giving a high-fidelity reconstruction of the reference expression that is easier to manipulate in the post-production manually. Python scripts for the algorithm will be publicly available upon acceptance of the paper.
Inverse Rig Complex Blendshape Model High-fidelity Coordinate Descent
## 1 Introduction
The human face has always occupied a central place in the animation industry due to its role in nonverbal communication and our high sensitivity to subtle expression changes. With the advances in movie and video game production, the models for representing a face are increasingly more complex and demand algorithms with ever-increasing accuracy and attention to detail in order to provide a high-fidelity appearance of the avatars. One of the most popular approaches for animating the face is the blendshape model Lewis et al. (2014). The main building blocks in the blendshape model are a neutral mesh, represented in a vector form as \(\mathbf{b}_{0}\in\mathbb{R}^{3n}\), where \(n\) is the total number of vertices in the face, and \(m\) blendshape vectors \(\mathbf{b}_{1},...,\mathbf{b}_{m}\in\mathbb{R}^{3n}\) that represent meshes corresponding to the atomic deformations of the face, and are later combined to build more complex expressions. In the case of the _delta_ blendshape model, these vectors are not actual meshes, but their offsets from the neutral, and they are added on top of the neutral face with corresponding activation weights \(w_{1},...,w_{m}\). A linear delta blendshape model is given by
\[f_{L}(\mathbf{w})=\mathbf{b}_{0}+\sum_{i=1}^{m}w_{i}\mathbf{b}_{i}=\mathbf{b}_ {0}+\mathbf{B}\mathbf{w}, \tag{1}\]
where the subscript \(L\) stands for _linear_, in order to distinguish it from a more complex rig function that we will introduce later in this section. The matrix \(\mathbf{B}\in\mathbb{R}^{3n\times m}\) is a blendshape matrix whose columns are the above introduced blendshape vectors, and \(\mathbf{w}\in\mathbb{R}^{m}\) is a vector of activation weights \(\mathbf{w}=[w_{1},...,w_{m}]^{T}\).
The problem that is of interest to us is the inversion of the rig to obtain activations \(\mathbf{w}\) from an acquired mesh, i.e., the inverse rig problem. It takes a given face mesh \(\widehat{\mathbf{b}}\in\mathbb{R}^{3n}\) as input and estimates a weight vector \(\mathbf{w}\) that would produce a good approximation to the mesh, i.e., such that \(f_{L}(\mathbf{w})\approx\widehat{\mathbf{b}}\). This is often formulated as a least-squares fitting problem. In Joshi et al. (2006), the authors formulate the corresponding optimization problem as
\[\operatorname*{minimize}_{\mathbf{w}}\frac{1}{2}\|f_{L}(\mathbf{w})-\widehat{ \mathbf{b}}\|^{2}, \tag{2}\]
where \(\|\cdot\|\) denotes the \(l_{2}\) norm. A solution of (2) can be found in a closed form as
\[\mathbf{w}=(\mathbf{B}^{T}\mathbf{B})^{-1}\mathbf{B}^{T}\widehat{\mathbf{b}} =\mathbf{B}^{\dagger}\widehat{\mathbf{b}}, \tag{3}\]
where \(\mathbf{B}^{\dagger}\) represents the pseudoinverse of the matrix \(\mathbf{B}\), and throughout the paper, we refer to this approach as _Joshi_. This solution might be undesirable as it activates all the controllers, or a matrix \(\mathbf{B}^{T}\mathbf{B}\) might be singular if the number of vertices is too few relative to the number of model weights Lewis and Anjyo (2010); hence, more recent papers include additional regularization terms to problem (2). Another issue is that _Joshi_ does not incorporate constraints to the controllers -- for practical applications, weights are restricted to be non-negative since negative weights defy the intended semantics of the blendshapes and break the intuition for manual corrections Lewis et al. (2014). Similarly, weights should not exceed a value of 1, although some authors allow for this, justifying it as a way to produce exaggerated cartoonish expressions Choe and Ko (2006). To satisfy the constraints, the solution vector of _Joshi_ can be clipped to a feasible set, although this was not done in the original paper. Despite the above issues, the solution of Joshi et al. (2006) is often encountered in the literature because of its simplicity. There is a range of variants of this approach -- versions in Choe and Ko (2006); Liu et al. (2010) include constraints \(\mathbf{0}\leq\mathbf{w}\leq\mathbf{1}\); in Lewis and Anjyo (2010); Cetinaslan (2016); Cetinaslan and Orvalho (2020a) the weights are regularized by a squared \(l2\) norm to impose the stability of the solution, while the \(l1\) norm was used in Ribera et al. (2017) to stress on the sparsity. It is common for all these models to estimate all the weights jointly. While this can sometimes lead to a more precise mesh fit, such an approach does not allow so much flexibility in manually adjusting the solution weights afterward.
Figure 1: _Ada_, an example frame with predictions using different methods. The top row shows a mesh reconstruction, and regions of higher mesh error are highlighted in red, according to the color bar on the right. The bottom row shows corresponding blendshape weights activation, with summarized root mean squared error and cardinality of each approach. The method of _Seol_Seol et al. (2011) gives the sparsest vector of weights, yet the error is considerably higher than for the other approaches, and the resulting expression is wrong. Our approach with linear rig approximation (_Linear_) corrects some of these artifacts, yet the lips still do not fit well enough, similar to _Joshi_Joshi et al. (2006) and _Cetinaslan_Cetinaslan and Orvalho (2020a). The solution of _LMMM_Rackovic et al. (2022) leads to the lowest mesh error, but it activates the highest number of blendshapes. Only our approach with quartic corrective terms (_Quartic_) gives a good trade-off between the accurate mesh fit and a low number of activated weights.
A solution proposed in Seol et al. (2011) follows a different logic, relying on a two-step approach, motivated by the way artists animate manually. The weights are visited and estimated sequentially, updating the residuals after each controller. Initially, residual vector \(\mathbf{r}\in\mathbb{R}^{3n}\) is set to be equal to a given mesh \(\mathbf{r}^{(1)}:=\widehat{\mathbf{b}}\). Then each of the \(m\) controllers is visited once, solving the following two-step problem:
\[\begin{split}\text{For }i=1,...,m:\\ \text{step 1: }w_{i}\leftarrow\operatorname*{argmin}_{0\leq w_{i}} \|\mathbf{r}^{(i)}-\mathbf{b}_{i}w_{i}\|^{2},\\ \text{step 2: }\mathbf{r}^{(i+1)}\leftarrow\mathbf{r}^{(i)}- \mathbf{b}_{i}w_{i}.\end{split} \tag{4}\]
After each iteration \(i\), _step 1_ finds an optimal weight for the blendshape \(\mathbf{b}_{i}\), and _step 2_ removes the corresponding estimated effect, producing a new residual vector \(\mathbf{r}^{(i+1)}\). The output is a vector of weights \(\mathbf{w}\), estimated in a greedy search manner to fit the original target mesh \(\widehat{\mathbf{b}}\). Throughout the paper, we refer to this approach as _Seol_.
An important advantage of _Seol_ is that it empirically avoids simultaneous activation of mutually exclusive blendshapes, which is one of the leading causes of instability in the solution. As confirmed experimentally in Seol et al. (2011), this also leads to a sparser solution, and hence it is easier to manipulate the animation later if needed. In this approach, the order in which controllers are visited plays an important role. The authors suggest ordering them by the magnitude of change each blendshape produces when activated, that is, by its squared norm \(\|\mathbf{b}_{i}\|^{2}\). A recent reference Hyde et al. (2021) explores a coordinate-wise approach with a more thorough discussion of the coordinate order; they apply matching pursuit with pruning to estimate a sparse set of weights, yet without the advantage of a known structure of the blendshape rig function, leading to a computationally intensive method.
### Contributions
This paper follows a similar direction as Seol et al. (2011) but addresses several main issues. In the first place, we target a more complex blendshape model used in modern production for highly-realistic human faces. Besides the linear terms in (1), corrective terms for some pairs or tuples of blendshapes are included Seo et al. (2011); Wu et al. (2018); Rackovic et al. (2022). A corrective term for the pair of blendshapes \(\mathbf{b}_{i}\) and \(\mathbf{b}_{j}\) is denoted as \(\mathbf{b}^{(ij)}\in\mathbb{R}^{3n}\), and its activation weight is set to the product \(w_{i}w_{j}\). These vectors are called first-level corrections, and in a similar manner, one can include corrections of higher levels for tuples of three or four blendshapes. In our experiments, we will assume blendshape models with three levels of corrections, and hence the blendshape function (here with a subscript \(Q\) for _quartic_) is given by
\[\begin{split} f_{Q}(\mathbf{w})=&\mathbf{b}_{0}+ \mathbf{B}\mathbf{w}+\sum_{\{i,j\}\in\mathcal{P}}w_{i}w_{j}\mathbf{b}^{\{ij \}}+\sum_{\{i,j,k\}\in\mathcal{T}}w_{i}w_{j}w_{k}\mathbf{b}^{\{ijk\}}\\ &+\sum_{\{i,j,k,l\}\in\mathcal{Q}}w_{i}w_{j}w_{k}w_{l}\mathbf{b}^ {\{ijkl\}}\end{split} \tag{5}\]
where \(\mathcal{P},\mathcal{T},\) and \(\mathcal{Q}\) are sets of pairs, triplets, and quadruplets of blendshapes that involve a corresponding corrective term, respectively.
Fitting an arbitrarily accurate blendshape rigCompared with other methods that involve corrective terms Rackovic et al. (2022), the approach proposed here leads to computationally efficient solutions under the corrections of any order. Our method is general enough to work with any level of corrections. To illustrate this, we will also include a case when a linear approximation of the rig is used in the experiments. This is in contrast with Rackovic et al. (2022) that can only handle first-order corrections. The reason why we restrict our experiments to the third level of corrective terms is that the animated models at our disposal do not have additional levels of corrections.
A solution with low cardinalityAdditionally, we add an \(l_{1}\) norm regularizer to penalize the solution cardinality further. Finally, we also address the fact that Seol et al. (2011) performs only a single pass over the weights. This can be seen as one step of a coordinate descent minimization algorithm. Even though a single pass of the algorithm gives a relatively good estimate of the solution, we will show in our experiments that adding several algorithm steps to our data fitting procedure can provide a better mesh fit while keeping the cardinality similar.
The numerical results in Section 4.3 show that our method outperforms the state-of-the-art approaches, giving an accurate mesh reconstruction with a mesh error lower or comparable to the best-performing methods and simultaneously reducing the cardinality of the weight vector over \(20\%\). It further allows one to work with an arbitrary number of blendshape corrections, while the previous papers consider either a linear blendshape model or only a single correction
level. Finally, the proposed method gives smooth temporal transitions, as shown for several animated sequences in the supplementary materials. This is also confirmed by the roughness metric (see Section 4.3), as our method shows more than double of a reduction compared to the benchmarks of a similar mesh error level.
### Notation
Throughout this paper, scalar values will be denoted with lowercase Latin \(a\), \(b\), \(c\), or lowercase Greek \(\alpha,\beta,\gamma\) letters. Vectors are denoted with bold lowercase letters, e.g., **a**, **b**, **c** and are indexed using a subscript, i.e., the \(i^{th}\) element of a vector **a** is \(a_{i}\). If there is a subscript and the letter is still in bold, it is not indexing -- we will use this to distinguish blendshape vectors (\(\textbf{b}_{0},\textbf{b}_{1},...,\textbf{b}_{m}\)) as they have similar properties. We use **0** and **1** to denote vectors of all zeros and all ones, respectively. When we use order relations (\(\geq,\leq,=\)) between two vectors, they are assumed component-wise. Matrices are written in bold capital letters, e.g., **A**, **B**, **C**. Functions are given using lowercase letters, but with their arguments enclosed in parenthesis, e.g., \(f(\cdot),g(\cdot)\). The Euclidean norm is denoted by \(\|\cdot\|\).
## 2 Related Work
Blendshape animation is an attractive research topic because of its high relevance in our perception of the human face and the need for more realistic face representation. While anatomically-based face models might produce greater fidelity in realistic deformations and perception Sifakis et al. (2005), Ichim et al. (2017), they are usually much harder to animate and adjust manually and lack interpretability. Blendshape models have been studied in the literature as early as the end of the last century Pighin et al. (1998), Choe and Ko (2006), Choe et al. (2001), and several main research directions have been established. The first challenge in the model is creating the blendshape basis since it can take from a few dozen up to several hundred blendshapes, so it is a time and labor-intensive task. Several papers propose automated solutions for producing the basis. In Deng et al. (2006), Bouaziz et al. (2013), the authors apply principal component analysis (PCA) over a dense motion capture; however, while PCA-based meshes are well suited for automated animation Moser et al. (2021), they lack explainability, making them undesirable to artists. A different approach studied by Li et al. (2010, 2013), Ribera et al. (2017) considers a pre-sculpted generic blendshape basis, that is used to create a semantically equivalent basis for a custom character applying a deformation transfer. Using a similar approach, Chaudhuri et al. (2020) trains a deep learning method to estimate the person-specific blendshapes from video. Our paper does not provide contributions in this aspect and in the method we assume that a blendshape basis is given apriori, and that it closely resembles the actor/user.
The next challenge is adjusting controller weights to produce an animation. This step can be automated if there is a reference motion in the form of a 4D scan or motion capture of markers on the actor's face. This problem is called the inverse rig problem or automatic keyframe animation, and there are two main approaches to solving it: model-based and data-based methods. Model-based solutions rely on optimization techniques for model fitting and demand a precise definition of a rig function, while data is used for fitting model parameters. The problem is usually formalized as a least-squares problem, and regularization is often added to enhance the desired behavior, like stability Cetinaslan (2016), Cetinaslan and Orvalho (2020a,b), sparsity Bouaziz et al. (2013), Neumann et al. (2013), Rackovic et al. (2022), or temporal smoothness Tena et al. (2011), Seol et al. (2012). On the other side, data-based solutions can work with an arbitrary rig function Song et al. (2020) but demand vast amounts of data for training a good predictor. Common models here are Radial Basis Function-based regressors Song et al. (2011), Seol and Lewis (2014), Gaussian Processes Regression Holden et al. (2015), Reverdy et al. (2015) and Neural Networks Bailey et al. (2020), Chaudhuri et al. (2020). As a final step, the meshes obtained as a solution to the inverse rig problem are combined with albedo maps Feng et al. (2021), lighting conditions, and material parameters to render the person-specific skin details and produce a life-like output image Laine et al. (2020), Lombardi et al. (2018), Thies et al. (2018). The problem of rig inversion is the main focus of our paper, and we propose a model-based approach that solves a constrained \(l_{1}\) norm regularized non-linear least squares. The algorithm takes into account complex corrective blendshape terms, hence allowing high accuracy of the mesh fit, and yet, due to the coordinate descent approach and sparsity regularization, the obtained weight vectors have low cardinality making posterior manual adjustments possible.
Similar to the problem of the inverse rig is that of direct manipulation. However, it demands a real-time solution, since it assumes a deformation is propagated while the user is dragging the vertices of a character to adjust a given expression. To avoid artifacts that appear when all the non-selected markers in the face are kept fixed, Lewis and Anjyo (2010) proposes a general model where controllers are fitted taking into account an arbitrary number of selected manipulators. This idea is further developed in Seo et al. (2011), paying special attention to local influences of blendshapes. Later, Cetinaslan and Orvalho (2020a) develops a sketch-based method that leads to more intuitive manipulation, and the authors further improve the method in Cetinaslan and Orvalho (2020b).
Another direction of interest in blendshape animation is the segmentation of the face, and there is a number of approaches based on the final intention for the obtained segments. The main categories here are a localized or distributed approach to solving the inverse rig Joshi et al. (2006), Tena et al. (2011), Kei and Tomoyuki (2012), Reverdy et al. (2015), Fratarcangeli et al. (2020), Bailey et al. (2020), Rackovic et al. (2021), where the mesh segments are in general relatively big, and adding secondary motion effects Zoss et al. (2020) to increase the plausibility of already animated characters Neumann et al. (2013), Wu et al. (2016), Romeo and Schwartzman (2020), where, in general, one produces a large number of very small segments.
This paper focuses on a model-based approach to solving the inverse rig problem. The animated avatars are highly realistic pre-sculpted blendshape models with additional corrections levels. Reference frames are given in the form of 3D face scans, and an imperative is on the high-accuracy expression reconstruction.
## 3 Proposed Method
This section introduces our algorithm for an accurate solution to the inverse rig problem for complex blendshape models. It is desirable for a solution vector \(\mathbf{w}\) to be sparse while producing an accurate reconstruction of a given mesh \(\widehat{\mathbf{b}}\). Additionally, weights must stay within a \([0,1]\) interval to respect the construction of animated characters. We pose the optimization problem as
\[\operatorname*{minimize}_{\boldsymbol{\theta}\geq\mathbf{w}\leq \mathbf{1}}\frac{1}{2}\|f(\mathbf{w})-\widehat{\mathbf{b}}\|^{2}+\alpha\mathbf{ 1}^{T}\mathbf{w}, \tag{6}\]
with \(\alpha\geq 0\) being a regularization parameter. Note that, due to non-negativity constraints on \(\mathbf{w}\), the regularization term is equal to the \(l_{1}\) norm, which is known as a sparsity-enhancing regularizer. A rig function \(f(\mathbf{w})\) is given without a subscript because we want a solution that would work with arbitrary complexity of the rig, for example, linear or quartic. As the animated characters at our disposal consist of up to three levels of correction, the highest accuracy is achieved with a quartic rig defined in (5), hence we give a derivation (and experimental results) for \(f(\mathbf{w}):=f_{Q}(\mathbf{w})\). Nevertheless, we also include the case when \(f(\mathbf{w}):=f_{L}(\mathbf{w})\), since the linear blendshape model is the one used often in the literature. We will refer to the two approaches as _Quartic_ and _Linear_, respectively.
For the _Quartic_ case, a non-linearity of the rig makes the problem (6) non-convex and hard to solve if weights \(\mathbf{w}\) are to be estimated jointly. However, we can approach the problem component-wise, and then have a sequence of quadratic programs instead. That is, for a controller \(i\in\{1,...,m\}\) we assume that all the weights \(w_{j}\), for \(j\neq i\), are fixed (to initial or previously estimated value), while we only need to estimate \(w_{i}\), by solving
\[w_{i}\leftarrow\operatorname*{argmin}_{0\leq w\leq 1}\frac{1}{2}\|w\boldsymbol{ \phi}_{i}+\boldsymbol{\psi}_{i}\|^{2}+\alpha w. \tag{7}\]
Here \(\boldsymbol{\phi}_{i}\in\mathbb{R}^{3n}\) is a vector that contains all the blendshape components (i.e., blendshape vector and corrective terms) that participate in the product with \(w_{i}\):
\[\begin{split}\boldsymbol{\phi}_{i}&=\mathbf{b}_{i} +\sum_{\{i,j\in\mathcal{P}\}}w_{j}\mathbf{b}^{\{ij\}}+\sum_{\{i,j,k\}\in \mathcal{T}}w_{j}w_{k}\mathbf{b}^{\{ijk\}}+\\ &\sum_{\{i,j,k,l\}\in\mathcal{Q}}w_{j}w_{k}w_{l}\mathbf{b}^{\{ ijkl\}};\end{split} \tag{8}\]
and \(\boldsymbol{\psi}_{i}\in\mathbb{R}^{3n}\) contains all the other components, together with a given target mesh \(\widehat{\mathbf{b}}\):
\[\begin{split}\boldsymbol{\psi}_{i}&=\sum_{j\neq i }w_{j}\mathbf{b}_{j}+\sum_{\begin{subarray}{c}\{j,k\}\in\mathcal{P}\\ j,k\neq i\end{subarray}}w_{j}w_{k}\mathbf{b}^{\{ijk\}}+\sum_{\begin{subarray}{ c}\{j,k,l\}\in\mathcal{T}\\ j,k,l\neq i\end{subarray}}w_{j}w_{k}w_{l}\mathbf{b}^{\{jkl\}}+\\ &\sum_{\begin{subarray}{c}\{j,k,l,h\}\in\mathcal{Q}\\ j,k,l,h\neq i\end{subarray}}w_{j}w_{k}w_{l}w_{h}\mathbf{b}^{\{jklh\}}- \widehat{\mathbf{b}}.\end{split} \tag{9}\]
The global solution to (7) is found in closed form, by setting the derivative with respect to \(w_{i}\) to zero, and projecting to the feasible set:
\[w_{i}\gets P_{[0,1]}\left(\frac{\boldsymbol{\phi}_{i}^{T}\boldsymbol{\psi }_{i}-\alpha}{\|\boldsymbol{\phi}_{i}\|^{2}}\right), \tag{10}\]
where the projection operator is defined as
\[P_{[0,1]}(x)=\begin{cases}0,&\text{if }x<0,\\ 1,&\text{if }x>1,\\ x,&\text{otherwise}.\end{cases} \tag{11}\]
For the _Linear_ case, a coordinate optimization problem analogue to (7) is
\[w_{i}\leftarrow\operatorname*{argmin}_{0\leq w\leq 1}\frac{1}{2}\|w\mathbf{b}_ {i}+\sum_{j\neq i}w_{j}\mathbf{b}_{j}-\widehat{\mathbf{b}}\|^{2}+\alpha w, \tag{12}\]
with the solution
\[w_{i}\gets P_{[0,1]}\left(\frac{\mathbf{b}_{i}^{T}(\sum_{j\neq i}w_{j} \mathbf{b}_{j}-\widehat{\mathbf{b}})-\alpha}{\|\mathbf{b}_{i}\|^{2}}\right). \tag{13}\]
Notice that (10) and (13) have the same structure and that by setting the corrective blendshape terms of (8) and (9) to zero, solution (10) simplifies to (13). This confirms that our approach is general enough to work with an arbitrary number of correction levels.
When the optimal component-wise weights are estimated for all \(m\) controllers, the process is repeated until some stopping criterion is satisfied. This type of iteration is known as coordinate descent, and it is guaranteed to produce monotonically non-increasing costs Luo and Tseng (1992), Wright (2015). In the case when a model is defined via a linear rig function (1), the objective (6) consists of a smooth convex function \(\|\mathbf{Bw}-\widehat{\mathbf{b}}\|^{2}\) and a convex and separable regularization term \(\mathbf{1}^{T}\mathbf{w}\), hence we can claim that our method converges to the optimal solution of (6), as proved in Wright (2015). The pseudo-code of the proposed method is given in Algorithm 1.
```
0: A mesh vector \(\widehat{\mathbf{b}}\in\mathbb{R}^{3n}\), a set of blendshape vectors \(\mathbf{b}_{1},...,\mathbf{b}_{m}\in\mathbb{R}^{3n}\) ordered as \(\|\mathbf{b}_{1}\|^{2}\geq\|\mathbf{b}_{2}\|^{2}\geq\cdots\geq\|\mathbf{b}_{ m}\|^{2}\) (we implicitly order blendshape vectors by norm magnitude without loss of generality), regularization parameter \(\alpha\geq 0\), and the number of iterations \(T\in\mathbb{N}\). If a considered rig function is quartic, include also corrective terms \(\mathbf{b}^{(ij)}\) for \(\{i,j\}\in\mathcal{P}\), \(\mathbf{b}^{(ijk)}\) for \(\{i,j,k\}\in\mathcal{T}\), \(\mathbf{b}^{(ijkl)}\) for \(\{i,j,k,l\}\in\mathcal{Q}\).
0: Optimal weight vector \(\mathbf{w}\in\mathbb{R}^{m}\) such that \(\mathbf{0}\leq\mathbf{w}\leq\mathbf{1}\). Initialize weight vector by \(\mathbf{w}=\mathbf{0}\) for t=1,...,T do for i=1,...,m do if Rig function is linear then \(w_{i}\gets P_{[0,1]}\left(\frac{\mathbf{b}_{i}^{T}(\sum_{j\neq i}w_{j} \mathbf{b}_{j}-\widehat{\mathbf{b}})-\alpha}{\|\mathbf{b}_{i}\|^{2}}\right)\) elseif Rig function is quartic then Compute \(\boldsymbol{\phi}_{i}\) from (8) and \(\boldsymbol{\psi}_{i}\) from (9) \(w_{i}\gets P_{[0,1]}\left(\frac{\boldsymbol{\phi}_{i}^{T}\boldsymbol{\psi}_ {i}-\alpha}{\|\boldsymbol{\phi}_{i}\|^{2}}\right)\) endif endfor endfor
```
**Algorithm 1**
Notice that our approach has some similarities with _Seol_. If one sets \(\alpha=0\) and removes the upper constraint \(w\leq 1\), equation (13) becomes equivalent to the _Seol_ update rule from (4). For this reason, we adopt some of the tactics from Seol et al. (2011).
Any feasible vector \(\mathbf{w}\in\mathbb{R}^{m}\) can be used for the initialization; however, we stress that by initializing with a non-zero weight vector, the method cannot guarantee that mutually exclusive controllers will not get activated simultaneously, as explained in Seol et al. (2011).
In a sequential mesh fitting, the order in which blendshapes are visited plays an important role, so we adapt the strategy from Seol et al. (2011) to order them by the overall displacement magnitude:
\[\|\mathbf{b}_{1}\|^{2}\geq\|\mathbf{b}_{2}\|^{2}\geq\cdots\geq\|\mathbf{b}_ {m}\|^{2}. \tag{14}\]
This choice is inspired by the intuition behind the manual process, where an artist tends to first set the weights of the controllers of more drastic motions (like mouth opening or nose squeezing) before visiting more subtle ones. This has
some similarities with a sparse solution of the matching pursuit problem Mallat and Zhang (1993), yet we will discuss alternative strategies for choosing the optimization order in Section 4.4.
One of the relevant points not discussed in Seol et al. (2011) was the possibility of multiple algorithm passes -- the authors terminated the process after each controller was visited and updated once. We conclude from the theory on coordinate descent and from our experiments that increasing the number of passes leads to a significant reduction in mesh error at the cost of a slight increase of the solution cardinality.
In the next section, we show the performance of our algorithm on a set of animated human characters, and benchmark against state-of-the-art methods for solving the inverse rig problem. We also give an extensive results analysis and discuss the main improvements compared to the baseline _Seol_ approach and alternative strategies for some aspects of the algorithm.
## 4 Evaluation
This section compares the solutions of different approaches on several data sets and gives an extensive discussion of the results.
### Data
Experiments are performed over five animated avatars, four of which are freely available on the MetaHuman Creator1 -- _Ada_, _Jesse_, _Vivian_ and _Omar_, and shown in Figure 2. The fifth character is a proprietary model provided by 3lateral studio 2, for the purpose of this research, and we refer to it as _Char 5_. The size of the head is comparable over the characters, with a width (distance from left to right ear) of around \(18cm\). However, the number of vertices and controllers differ, and we give it summarized in Table 1.
Footnote 1: [https://www.unrealengine.com/en-US/metahuman](https://www.unrealengine.com/en-US/metahuman)
Footnote 2: [https://www.3lateral.com](https://www.3lateral.com)
Although it would be ideal to perform experiments on the actual 3D facial scans matching the avatars, such high-fidelity data is, in general, costly and only produced for profit applications, and it is not available for researchers. For this reason, we will have to restrict ourselves to working with synthetic data, yet we will include noise in our fitting data in order to mimic the meshes acquired with a 3D laser scanner or photogrammetry Vasiljevic et al. (2021); Cui et al. (2010). In Figure 4, we show a close shoot of _Ada_ with varying amounts of added Gaussian noise Wand et al. (2007); Sun et al. (2008). The top left subfigure is a clean mesh, followed by the meshes with the increasing variance of the added noise. We notice that with low values, like \(\sigma^{2}=0.01\), the mesh is almost indistinguishable from the clean one,
\begin{table}
\begin{tabular}{c|c c c c c} & Ada & Jesse & Vivian & Omar & Char 5 \\ \hline \(m\) & \(102\) & \(102\) & \(102\) & \(130\) & \(147\) \\ \(n\) & \(10000\) & \(10000\) & \(10000\) & \(3746\) & \(2511\) \\ \(N\) & \(600\) & \(600\) & \(600\) & \(600\) & \(600\) \\ \(|\mathcal{P}|\) & \(185\) & \(185\) & \(185\) & \(187\) & \(160\) \\ \(|\mathcal{T}|\) & \(130\) & \(130\) & \(130\) & \(130\) & \(68\) \\ \(|\mathcal{Q}|\) & \(50\) & \(50\) & \(50\) & \(50\) & \(12\) \\ \end{tabular}
\end{table}
Table 1: Dimensions of the four animated characters, where \(m\) is the number of blendshapes in the basis, \(n\) is the number of vertices in face mesh, \(N\) is the number of test frames, and \(|\mathcal{P}|\), \(|\mathcal{T}|\) and \(|\mathcal{Q}|\) are the numbers of corrective combinations of first, second and third order, respectively.
Figure 2: Head models available at MetaHuman creator.
while one order higher value gives a mesh that is too corrupted. We chose to work with \(\sigma^{2}=0.03\), as it produces a smoothness effect similar to that obtained by modern 3D scanners, yet later in this section, we will also see how varying noise levels affect the results of different methods. This noise is only added on the target meshes \(\widehat{\mathbf{b}}\) in the process of fitting, i.e., solving the inverse rig, while the error for the reconstructed meshes is computed with respect to the original, noise-free data.
In addition to Figure 3, Figure 4 compares the results over training frames for _Ada_ under different noise levels, as indicated by the color of the dots, for the proposed method and for two benchmark methods, _Seol_ and _Cetinaslan_. (See Section 4.3 for details on benchmark methods, and Section 4.2 for evaluation metrics.) The results are shown for each of 100 data points, and additionally, since _Quartic_ and _Cetinaslan_ demand a value of the regularization parameter \(\alpha\), we include choices of \(\alpha\in\{0,0.1,1\}\), as indicated by the size and opacity of the dots. One can notice that _Cetinaslan_ is more affected by the increase of the noise levels -- for low values of \(\sigma^{2}\), it gives the lowest mesh error, while with the increase, the error is drastically higher. On the other side, the proposed method is quite robust in this sense, and additionally, with an increased value of regularizer parameter \(\alpha\), it leads to a significant reduction of the cardinality, without visibly affecting the mesh error.
Figure 4: Point-wise predictions for three methods (_Seol_ as proposed in Seol et al. (2011), _Quartic_ proposed in this paper, and _Cetinaslan_ from Cetinaslan and Orvalho (2020a)) with respect to different noise levels. Dot colors correspond to the standard deviation \(\sigma^{2}\in\{0,0.01,0.1\}\) of the added noise, while the dot sizes in _Quartic_ and _Cetinaslan_ are proportional to the regularization value \(\alpha\in\{0,0.1,1\}\).
Figure 3: Face mesh of _Ada_ corrupted with different noise levels. The upper left is the original clean mesh, and each other represents a mesh with added Gaussian noise with the standard deviation \(\sigma^{2}\) corresponding to the value below the figure.
### Metrics
The mesh error is an important metric of interest in a realistic face reconstruction. To measure this, we use the root mean squared error (RMSE) given by
\[\text{RMSE}(\mathbf{w})=\sqrt{\frac{\|f_{Q}(\mathbf{w})-\tilde{\mathbf{b}}\|^{2} }{n}}, \tag{15}\]
where \(f_{Q}(\mathbf{w})\) is the reconstructed mesh vector and \(\tilde{\mathbf{b}}\) is a noise-free target mesh. We remind the reader that as an input to the algorithms in our experiments, we used meshes corrupted by Gaussian noise, \(\mathbf{\tilde{b}}=\tilde{\mathbf{b}}+\epsilon\), such that \(\epsilon\sim\mathcal{N}(0,\sigma^{2})\), and evaluated the results on the clean meshes \(\tilde{\mathbf{b}}\). Recall that we work with meshes with thousands of vertices, hence the average of the error over all of them might not be able to indicate if the obtained expression resembles the original well or not. It is important whether the error is visible, i.e., perceivable by a human, in a reconstructed face, hence it bears more significance if a small number of vertices gives a large offset than if each vertex in the face is only slightly off. To account for this, besides the mean over all the \(n\) vertices, we will also show the \(95^{th}\) percentile of the error over the mesh vertices. Besides the mesh error, we are also interested in the cardinality of the solution, i.e., the number of non-zero weights, since the frames with a large number of activated weights might be unstable and hard to adjust by an artist. We will complement this with the \(l_{1}\) norm as well since sometimes it is also used for approximately measuring the sparsity of the solution. Finally, since in the animation temporal continuity or smoothness is an important concept, we include another metric that measures the temporal roughness of the weight curves. The roughness is inversely proportional to the smoothness, and higher values of this metric indicate that the results are less smooth. The metric is based on second-order differences Marquis Bolduc and Phan (2022)
\[\text{Roughness}(w_{i})=\sum_{t=2}^{T-1}\left(w_{i}^{(t-1)}-2w_{i}^{(t)}+w_{i}^ {(t+1)}\right)^{2}, \tag{16}\]
where the score of zero is obtained for a constant vector and increases the more the consecutive entries differ. Notice that, while the other metrics introduced so far are computed for a single frame (and over the weights or vertices), _Roughness_ is computed for a single weight over the frames.
### Numerical Results
All the characters in our experiments have corrective terms and a rig function in the form (5), hence it is reasonable to apply our method under the quartic rig function. Nevertheless, we will also include the case when a linear rig approximation is used, to show that our method may outperform the others even in the simplified case, denoting these two approaches _Quartic_ and _Linear_ respectively. As a first benchmark method, we will use _Seol_, introduced earlier in Section 1. Since there are similarities between the proposed method and _Seol_, let us first examine how the results of these two relate. These results are shown in Figure 5. Our method with a linear rig approximation and a single
Figure 5: Trade-off between the mesh error and cardinality / L1 norm of the weight vector. A blue star (_Seol-1_) is the solution proposed by Seol et al. (2011). Adding an \(l_{1}\) regularization term to _Seol-1_ leads to the results presented as blue dots (_Linear-1_), where dot sizes correspond to a value of regularization parameter \(\alpha\). Further adjustment is adding multiple iterations of the algorithm (5 in this case), which is shown in purple, where again a star represents a non-regularized case (_Seol-5_) and dot sizes correspond to the value of \(\alpha\) (_Linear-5_). Analog to _Linear-1_ and _Linear-5_ are _Quartic-1_ (orange) and _Quartic-5_ (red) respectively, where the linear rig function (1) is substituted by a quartic (5). Gray vertical lines represent the values of the ground-truth data.
iteration over the weights is denoted with _Linear-1_ (similarly, a linear model with 5 iterations is denoted _Linear-5_, and in the same analogy we have _Quartic-1_ and _Quartic-5_). In the case where the regularization term is set to \(\alpha=0\), it simplifies to _Seol_, as indicated by a blue star, while for the higher values of \(\alpha\in\{0.1,0.2,0.5,1,2,5,10\}\), results are presented in blue dots of the corresponding sizes. We see that for low to medium values of \(\alpha\), our method gives a visible reduction in cardinality (and \(l_{1}\) norm) compared to _Seol_, without affecting the mesh error, while for the higher values, a trade-off is made between accuracy and sparsity. In this case of a single iteration, the inclusion of corrective terms does not seem to have significant effects, as the curve of _Quartic-1_ closely follows _Linear-1_. On the other side, with the increased number of iterations, we see that _Quartic-5_ gives a lower mesh error than _Linear-5_ for the same cardinality value. However, both these curves are under those of a single iteration, showing a large margin of improvement with respect to both axes. A purple star denoted _Seol-5_ indicates a modification of the solution from Seol et al. (2011), where the method is iterating through weights five times (as opposed to _Seol-1_, which is the solution as proposed in Seol et al. (2011)). _Seol-5_ exhibits more precise mesh fit than _Seol-1_, yet at the cost of considerable increase in cardinality.
As opposed to these sequential approaches, in Section 1, we also mentioned a solution proposed by Joshi et al. (2006) (_Joshi_), that solves for all the blendshape weights jointly. While the approach of _Joshi_ is simple and often satisfactory for general purposes, we will consider two other methods from the same group that are more recent and might provide a better solution. The first one we refer to as _Cetinaslan_, was proposed in Cetinaslan and Orvalho (2020a) as a generalization of the _Joshi_ solution. The method includes an \(l_{2}\) squared regularization to problem (2), so a solution is obtained by solving a set of linear equations
\[(\mathbf{B}^{T}\mathbf{B}+2\alpha\mathbf{I})\mathbf{w}=\mathbf{B}^{T}\widehat {\mathbf{b}}. \tag{17}\]
The weights are afterward clipped to \([0,1]\) interval in order to satisfy the constraints. The other approach was proposed in Rackovic et al. (2022), and to the best of our knowledge, it is the only model-based approach that includes corrective terms in the blendshape model when solving the inverse rig. It is, however, restricted to working with only the first-level corrections (as opposed to our proposed algorithm, which can take any number of corrective levels). The method considers the objective function
\[\operatorname*{minimize}_{\mathbf{0}\leq\mathbf{w}\leq\mathbf{1}}\|\mathbf{B} \mathbf{w}+\sum_{\{i,j\}\in\mathcal{P}}w_{i}w_{j}\mathbf{b}^{(ij)}-\widehat{ \mathbf{b}}\|^{2}+\alpha\mathbf{I}^{T}\mathbf{w}, \tag{18}\]
Figure 6: Trade-off between the mesh error and cardinality / L1 norm of the weight vector for _Ada_. The top row shows the mean mesh error, while the bottom corresponds to the \(95^{th}\) percentile of the error. Dot sizes are proportional to the size of the regularization parameter \(\alpha\) ranging from \(0\) to \(10\), as indicated on the right. The proposed solution with the quartic rig function is shown in red (_Quartic_), and the one with a linear approximation is in blue (_Linear_). The solution proposed by Cetinaslan and Orvalho (2020a) is represented by gray dots (_Cetinaslan_), where a star (_Joshi_) corresponds to a special case whit no regularization Joshi et al. (2006). The approach of Rackovic et al. (2022) is shown in green (_LMMM_). Gray vertical lines represent the values of the ground-truth data.
and applies Levenberg-Marquard Levenberg (1944); Marquardt (1963) and Majorization-Minimization Hunter and Lange (2004) to solve the problem iteratively, hence we refer to this approach as _LMMM_.
The first three benchmark approaches listed above, _Seol_, _Joshi_ and _Cetinaslan_, assume a linear rig function (1) when estimating the weights, while _LMMM_ assumes quadratic. Our method is tested with both linear and quartic rig (_Linear_ and _Quartic_ respectively). Nevertheless, once the weights are estimated, we use the full quartic rig to reconstruct the meshes with all the methods so that the evaluation results are fair.
Figure 8: Trade-off between the mesh error and cardinality / L1 norm of the weight vector for _Vivian_. The top row shows the mean mesh error, while the bottom corresponds to the \(95^{th}\) percentile of the error. Dot sizes are proportional to the size of regularization parameter \(\alpha\) ranging from \(0\) to \(10\), as indicated on the right.
Figure 7: Trade-off between the mesh error and cardinality / L1 norm of the weight vector for _Jesse_. The top row shows the mean mesh error, while the bottom corresponds to the \(95^{th}\) percentile of the error. Dot sizes are proportional to the size of regularization parameter \(\alpha\) ranging from \(0\) to \(10\), as indicated on the right.
In Figure 6, we show results for different methods over the training data for _Ada_. Except for the _Seol_ and _Joshi_, all the methods include a regularization parameter, hence we need to see how the results behave with varying values of \(\alpha\), as indicated by the corresponding sizes of the dots. The top row of the figure shows the mean mesh error, while the bottom gives the \(95^{th}\) percentile of the error. An important aspect to notice is that, even though the proposed method (_Quartic_ and _Linear_) does not reach as low mesh error as it is possible with _LMMM_ or _Cetinaslan_, it does have a favorable shape of the curve -- it offers a nice trade-off between accuracy and cardinality. Imposing higher \(\alpha\) values in _Cetinaslan_ and _LMMM_ increases the mesh error but does not reduce cardinality as much. We can notice a similar behavior for all the other animated characters, in Figures 7-10.
The general conclusions are as follows. _Seol_ produces reasonably low cardinality of \(\mathbf{w}\), which is around the same value as in artist-crafted reference animation, but relatively high error. _LMMM_ can achieve the lowest mesh error of the fit, yet both _LMMM_ and _Cetinaslan_ have higher cardinality, which cannot be significantly reduced even with relatively high regularization values. The proposed method is the only one that has a good trade-off between mesh error and sparsity of the results, which is tuned by choosing the right \(\alpha\) value. We further pick the optimal values of \(\alpha\) for each method and proceed to analyze the results in more detail over the test data. Chosen values for each character/method combination are given in Table 2.
Further, we proceed with the corresponding values and evaluate all the methods over the test data. For the test case, we consider animation sequences (see supplementary video materials), which allows us also to estimate the temporal smoothness of the results, as explained in Section 4.2. The resulting metric values for _Ada_ are presented in Figure 11, and accompanied by Table 3. Figure 11 shows a visible separation between the coordinate descent-based methods (_Quartic_, _Linear_ and _Seol_) that lead to low cardinality and smooth results, versus methods that estimate the weights jointly (_Joshi_, _Cetinaslan_ and _LMMM_), which lead to low RMSE but a more dense vector of weights. Notice that our method _Quartic_ gives relatively low RMSE, comparable to those of _LMMM_ and _Cetinaslan_, when observing both the mean error and the \(95^{th}\) percentile. At the same time, it gives sparse results, comparable with a less accurate method of _Seol_, and smooth frame-to-frame transitions. The only weak point of our method is the execution time, yet it is still
\begin{table}
\begin{tabular}{c|c c c c c} & _Ada_ & _Jesse_ & _Vivian_ & _Omar_ & _Char 5_ \\ \hline _Quartic_ & 0.5 & 1 & 0.5 & 0.5 & 0.5 \\ _Linear_ & 0.5 & 1 & 0.5 & 0.5 & 0.5 \\ _Cetinaslan_ & 0.2 & 0.5 & 0.5 & 0.5 & 0.5 \\ _LMMM_ & 0.2 & 0.5 & 0.5 & 0.5 & 0.5 \\ \end{tabular}
\end{table}
Table 2: The selected values of the regularization parameter \(\alpha\) for each method and each character.
Figure 9: Trade-off between the mesh error and cardinality / L1 norm of the weight vector for _Omar_. The top row shows the mean mesh error, while the bottom corresponds to the \(95^{th}\) percentile of the error. Dot sizes are proportional to the size of regularization parameter \(\alpha\) ranging from \(0\) to \(10\), as indicated on the right.
only a third of the time needed for the _LMMM_ solution. In other words, we can cope with more complex models like _LMMM_ but with an approximately three times faster solution.
Figure 1 gives an example frame for _Ada_, comparing the six methods. The top row shows reconstructed meshes, where red tones indicate the regions of higher error. The bottom row shows the corresponding blendshape activation weights (sorted by the weights of the reference frame). One can notice that _Seol_ gives a higher reconstruction error, producing a completely wrong facial expression. _Linear_, _Joshi_ and _Cetinaslan_ are better, yet the lower lip is slightly off. _Quartic_ and _LMMM_ give very accurate mesh reconstruction, but our method produces a considerably sparser vector of weights than _LMMM_.
More visual examples are given in Figure 16. For visualization purposes, we excluded _Linear_ and _Joshi_, as less significant, and we zoomed in on the mouth regions since that is where most of the visible error is. All the examples confirm the previously stated conclusions that _Quartic_ and _LMMM_ lead to an accurate mesh reconstruction, while only our method gives an optimal trade-off between the mesh error and the sparsity of the weight vector. At the same time, the execution time of the proposed method is three times lower than that of _LMMM_ (the only benchmark without visible mesh artifacts), and the value of _Roughness_ metric is drastically lower compared to any of the benchmarks that solve the weights jointly.
In the supplemental video materials, a reader can better grasp the visual differences since the animated sequence is represented side-by-side with a reconstruction of each method. The proposed method with quartic rig function, and _LMMM_, give almost flawless reconstructions, while _Seol_ is visibly worse than any of the other methods. An interesting aspect to notice in the case of _Joshi_ and _Cetinaslan_ is that they produce a shivering-like effect throughout the entire sequence (especially visible in the lips). This aligns with the exhibited high value for roughness in Figure 11, i.e., even
Figure 10: Trade-off between the mesh error and cardinality / L1 norm of the weight vector for _Char 5_. The top row shows the mean mesh error, while the bottom corresponds to the \(95^{th}\) percentile of the error. Dot sizes are proportional to the size of regularization parameter \(\alpha\) ranging from \(0\) to \(10\), as indicated on the right.
\begin{table}
\begin{tabular}{c|c c c c c c} & RMSE & RMSE & \multirow{2}{*}{Card.} & L1 & \multirow{2}{*}{Rough.} & \multirow{2}{*}{Time} \\ & mean & \(95^{th}\) & & norm & & \\ \hline Quartic & 0.015 & 0.050 & 59.3 & 7.68 & 0.108 & 5.848 \\ Linear & 0.019 & 0.061 & 58.1 & **7.37** & 0.108 & 0.361 \\ Seol & 0.041 & 0.120 & **55.9** & 7.79 & **0.092** & 0.091 \\ Joshi & 0.009 & 0.034 & 74.7 & 11.3 & 2.838 & **0.004** \\ Cetinaslan & 0.012 & 0.044 & 74.0 & 9.56 & 0.583 & 0.004 \\ LMMM & **0.007** & **0.021** & 80.2 & 9.57 & 0.567 & 14.09 \\ \end{tabular}
\end{table}
Table 3: _Ada_. Average values for each metric and each method, corresponding to Figure 11. The worst score for each column is shaded, while the best is highlighted and bold.
though individual frames give a relatively good mesh reconstruction, weight activations will differ more significantly between the consecutive frames. This artifact can also be noticed in _LMMM_, although it is very subtle and easy to oversee. On the other side, our method gives both a good reconstruction in the individual frames and temporally smooth animation.
The other four datasets lead to similar patterns of the results, further confirming the above conclusions. Numerical results for _Jesse_ are given in Figure 12 and Table 4, while example frames are in Figures 17 and 18. For _Vvian_, results are in Figure 13 and Table 5, and example frames are in Figures 19 and 20. For _Omar_, the results are in Figure 14 and Table 6, and example frames are in Figures 21 and 22, while for _Char 5_ the results are in Figure 15 and Table 7. We ask readers to look at the supplementary video materials in order to get a better idea of the quality of reconstruction and smoothness of the animated sequences.
Figure 12: Results statistics for _Jesse_, over the test animation. Horizontal gray lines indicate the average value of the corresponding metric in the ground truth data. Execution times are presented in the log scale. For the exact numerical values, consult Table 4.
Figure 13: Results statistics for _Vivian_, over the test animation. Horizontal gray lines indicate the average value of the corresponding metric in the ground truth data. Execution times are presented in the log scale. For the exact numerical values, consult Table 5.
\begin{table}
\begin{tabular}{c|c c c c c c} & RMSE & RMSE & \multicolumn{2}{c}{L1} & \multicolumn{2}{c}{} \\ & mean & \(95^{th}\) & Card. & norm & Rough. & Time \\ \hline Quartic & 0.023 & 0.065 & **66.8** & 8.68 & 0.432 & 7.714 \\ Linear & 0.035 & 0.116 & 67.9 & **8.22** & 0.395 & 0.404 \\ Seol & 0.054 & 0.160 & 74.2 & 10.1 & **0.384** & 0.074 \\ Joshi & 0.028 & 0.113 & 87.6 & 11.1 & 4.128 & **0.004** \\ Cetinaslan & 0.027 & 0.103 & 88.7 & 8.97 & 0.513 & 0.005 \\ LMMM & **0.013** & **0.040** & 100.0 & 9.16 & 0.509 & 15.94 \\ \end{tabular}
\end{table}
Table 6: _Omar_. Average values for each metric and each method, corresponding to Figure 14. The worst score for each column is shaded, while the best is highlighted and bold.
\begin{table}
\begin{tabular}{c|c c c c c c} & RMSE & RMSE & \multicolumn{2}{c}{} \\ & mean & \(95^{th}\) & Card. & \multicolumn{2}{c}{} \\ & 0.027 & 0.082 & **53.6** & 7.85 & 0.260 & 1.413 \\ Linear & 0.031 & 0.096 & 53.8 & **7.59** & **0.228** & 0.072 \\ Seol & 0.040 & 0.110 & 89.1 & 15.1 & 1.536 & 0.011 \\ Joshi & 0.015 & 0.065 & 108. & 22.0 & 15.40 & 0.004 \\ Cetinaslan & 0.023 & 0.097 & 107. & 11.5 & 0.564 & **0.003** \\ LMMM & **0.017** & **0.057** & 108. & 11.6 & 0.546 & **4.041** \\ \end{tabular}
\end{table}
Table 7: _Char 5_. Average values for each metric and each method, corresponding to Figure 15. The worst score for each column is shaded, while the best is highlighted and bold.
Figure 14: Results statistics for _Omar_, over the test animation. Horizontal gray lines indicate the average value of the corresponding metric in the ground truth data. Execution times are presented in the log scale. For the exact numerical values, consult Table 6.
\begin{table}
\begin{tabular}{c|c c c c c c} & RMSE & RMSE & \multicolumn{2}{c}{} \\ & mean & \(95^{th}\) & Card. & \multicolumn{2}{c}{} \\ & mean & \(95^{th}\) & Card. & norm & Rough. & Time \\ \hline Quartic & 0.017 & 0.053 & 60.6 & **10.1** & 0.158 & 5.865 \\ Linear & 0.024 & 0.071 & 60.3 & 10.2 & 0.161 & 0.379 \\ Seol & 0.050 & 0.129 & **54.9** & 11.2 & **0.107** & 0.063 \\ Joshi & 0.019 & 0.067 & 73.5 & 14.4 & 2.033 & **0.004** \\ Cetinaslan & 0.022 & 0.074 & 73.4 & 11.8 & 0.334 & 0.006 \\ LMMM & **0.013** & **0.039** & 79.5 & 11.9 & 0.324 & 14.09 \\ \end{tabular}
\end{table}
Table 5: _Vivian_. Average values for each metric and each method, corresponding to Figure 12. The worst score for each column is shaded, while the best is highlighted and bold.
Figure 16: _Ada_, example frames with predictions using different methods. The odd rows show mesh reconstructions, and regions of higher mesh error are highlighted in red, according to the color bar on the right. The even rows show corresponding blendshape weights activations, with summarized root mean squared error and cardinality of each approach.
Figure 15: Results statistics for _Char 5_, over the test animation. Horizontal gray lines indicate the average value of the corresponding metric in the ground truth data. Execution times are presented in the log scale. For the exact numerical values, consult Table 7.
Figure 17: _Jesse_, an example frame with predictions using different methods. The top row shows a mesh reconstruction, and regions of higher mesh error are highlighted in red, according to the color bar on the right. The bottom row shows corresponding blendshape weights activation, with summarized root mean squared error and cardinality of each approach.
Figure 18: _Jesse_, example frames with predictions using different methods. The odd rows show mesh reconstructions, and regions of higher mesh error are highlighted in red, according to the color bar on the right. The even rows show corresponding blendshape weights activations, with summarized root mean squared error and cardinality of each approach.
Figure 19: _Vivian_, an example frame with predictions using different methods. The top row shows a mesh reconstruction, and regions of higher mesh error are highlighted in red, according to the color bar on the right. The bottom row shows corresponding blendshape weights activation, with summarized root mean squared error and cardinality of each approach.
Figure 20: _Vivian_, example frames with predictions using different methods. The odd rows show mesh reconstructions, and regions of higher mesh error are highlighted in red, according to the color bar on the right. The even rows show corresponding blendshape weights activations, with summarized root mean squared error and cardinality of each approach.
Figure 21: _Omar_, an example frame with predictions using different methods. The top row shows a mesh reconstruction, and regions of higher mesh error are highlighted in red, according to the color bar on the right. The bottom row shows corresponding blendshape weights activation, with summarized root mean squared error and cardinality of each approach.
Figure 22: _Omar_, example frames with predictions using different methods. The odd rows show mesh reconstructions, and regions of higher mesh error are highlighted in red, according to the color bar on the right. The even rows show corresponding blendshape weights activations, with summarized root mean squared error and cardinality of each approach.
to poor results -- we denote this _Increasing magnitude_. Finally, another simple solution is to order the deformers randomly (_Random ordering_).
We can also resort to the other tactics that incur a bit higher computational cost (since the order vector needs to be estimated for each frame independently) but might be expected to give a more favorable solution in terms of the objective function. At first, we consider the strategy of Matching Pursuit Mallat and Zhang (1993), where the components are ranked based on the correlation with the target vector. In this sense, we can compute the correlation between each blendshape with the target mesh \(\widehat{\mathbf{b}}\), and then proceed following that order when fitting the corresponding frame. We call this _Frame correlation_. The other approach is to recompute correlations after each update, that is, when we choose the blendshape with the highest correlation with the target, we estimate its activation weight using (10), and then recompute the correlations, repeating the process using the updated residual instead of the target mesh. We denote this _Iteration correlation_, and notice that this takes \(m\) times more correlation computations per iteration than _Frame correlation_. Another possibility is to use _Gauss-Southwell_ update rule Nutini et al. (2015, 2017), which chooses the coordinate whose derivative at the given point has the largest magnitude. Since we are solving a constrained problem, we need to exclude the candidate weights \(w_{i}\) that have values 0 or 1, such that the gradient indicates unfeasible directions. Finally, we can choose the next component based on the reduction in the objective function, and we will denote it _Maximum improvement_. The idea is to solve (10) for each weight \(w_{i}\) independently and estimate what would be the value of the cost function (6) for each of the updates. Then, we keep only the weight that leads to the biggest decrease of (6) and discard all the others. After updating the corresponding weight component, we repeat the process until convergence. The last two approaches might be better than the previous ones in terms of solution fit and sparsity, but they are wasteful in terms of computational costs.
The trade-off curves for six variants for _Ada_ are presented in Figure 23. The proposed method of _Decreasing magnitude_ has a similar trade-off curve as that of _Frame correlation_, while _Increasing magnitude_ and _Random ordering_ show both higher error and higher cardinality. _Iteration correlation_ leads to slightly lower cardinality at the cost of increased mesh error, while it seems that the _Maximum improvement_ approach outperforms the others, while _Gauss-Southwell_ partly overlaps with it; but without offering as low RMSE in case of lower regularization. We choose the same value of a regularization parameter \(\alpha=0.5\) for all six cases and proceed with the test data. The results are given in Figure 24 and Table 8. _Maximum improvement_ shows great results in terms of both mesh error and cardinality, however, it is drastically slower than the other methods. The wasteful computations and function evaluations of this approach lead to execution time that is 15 times longer than most other approaches, which makes it unsuitable for practical use. This aspect is similar for _Gauss-Southwell_, while it also yields a relatively high mesh error. The proposed approach of _Decreasing magnitude_ is performing relatively well in each aspect, which confirms that it is a good heuristic for this problem. _Frame correlation_ behaves similarly to _Decreasing magnitude_ -- the main difference is in terms of temporal smoothness, where _Decreasing magnitude_ performs better. _Random ordering_ is much worse in terms of temporal smoothness than any other approach. This comes as no surprise since, at each frame, the algorithm follows an arbitrary ranking in the fitting phase, hence, even if the individual meshes fit relatively accurately, the consecutive frames will be semantically different. _Increasing magnitude_ leads to a bit higher error, but it still gives smooth results, since the controllers are always visited in the same order. Finally, _Iteration correlation_ performs poorly in both mesh error and temporal smoothness, even though the produced weight vectors are sparse.
We can also confirm these conclusions by visually inspecting an example frame in Figure 25. _Iteration correlation_ gives the highest mesh error, with red tones all around the surface, and visible misfits in the mouth region. _Random ordering_ and _Increasing magnitude_ are slightly better, but the shape of the lips is still different from the reference frame. The
Figure 23: Trade-off between the mesh error and cardinality / L1 norm of the weight vector for different ordering of the updates for _Quadratic_ method (_Ada_).
## Appendix A Appendix
Figure 24: Results statistics for _Ada_ for different ordering approaches, over the test animation. Horizontal gray lines indicate the average value of the corresponding metric in the ground truth data. Execution times are presented in the log scale. For the exact numerical values, consult Table 8.
\begin{table}
\begin{tabular}{c|c c c c c c} & RMSE mean & RMSE \(95^{th}\) & Card. & L1 norm & Rough. & Time \\ \hline Decreasing magn. & 0.015 & 0.050 & 59.3 & 7.68 & **0.108** & 5.847 \\ Random ord. & 0.018 & 0.058 & 57.3 & 9.40 & 22.01 & 5.867 \\ Increasing magn. & 0.019 & 0.061 & 58.2 & 11.0 & 0.160 & 5.839 \\ Frame corr. & **0.014** & 0.043 & 55.0 & 7.48 & 0.456 & 5.439 \\ Iteration corr. & 0.026 & 0.072 & 27.3 & 6.84 & 4.930 & **2.654** \\ Maximum Improvement Gauss Southwell & 0.014 & **0.041** & 32.9 & 6.65 & 1.848 & 91.241 \\ \end{tabular}
\end{table}
Table 8: _Ada_. Average values for each metric for different ordering rules, corresponding to Figure 24. The worst score for each column is shaded, while the best is highlighted and bold.
other four approaches give good reconstruction, with _Maximum improvement_ and _Gauss-Southwell_ having far sparser weight vectors than the rest.
## 5 Conclusion
The method proposed in this paper addresses the inverse rig problem in a coordinate descent manner, using complex nonlinear blendshape models. Our coordinate descent-based method is general enough to work with different levels of corrections or even with a linear rig function. Numerical experiments performed over several datasets show that the proposed method outperforms the state-of-the-art model-based solutions based on the trade-off between mesh error and cardinality of the weight vector, even in the case when using a linear approximation of the rig. Visual inspection further confirms that our method produces a higher-fidelity reconstruction of the original mesh and that it produces a correct facial expression even when the other methods fail. The sequential update rule that is applied implies that our algorithm will not activate mutually exclusive controllers and hence avoid one of the main causes of the instability of a solution. In this respect, the proposed method is somewhat similar to _Seol_Seol et al. (2011), yet it is superior in the accuracy of the reconstructed meshes. On the other side, _LMMM_Rackovic et al. (2022) gives a high-fidelity mesh reconstruction that sometimes outperforms that of our algorithm, yet it suffers from a high cardinality, similar to _Joshi_Joshi et al. (2006) and _Cetinaslan_Cetinaslan and Orvalho (2020a). For this reason, it is often susceptible to artifacts in the reconstructed meshes and might be hard or impossible to manually adjust later, as opposed to our method. Besides the optimal tradeoff between the mesh error and sparsity of the weights produced by our method, we have seen that our method is three times faster than _LMMM_ (the only baseline with comparable mesh reconstruction accuracy) and the resulting animation sequences are visibly more smooth. For all these reasons, our method is a favorable approach in the production of close-shot facial animation, where high accuracy of expression cloning is crucial.
**Supplemental Materials**
Supplementary video materials are available at [https://youtu.be/Kw4wV24-04c](https://youtu.be/Kw4wV24-04c) for _Ada_, [https://youtu.be/GLQ11LJVI_Q](https://youtu.be/GLQ11LJVI_Q) for _Jesse_, [https://youtu.be/lmUGL8MoN4U](https://youtu.be/lmUGL8MoN4U) for _Omar_, and [https://youtu.be/Lo-enHOrfyQ](https://youtu.be/Lo-enHOrfyQ) for _Vivian_. For each character we present a reference animation on the left, and a mirrored estimate by different method. Further, the gray bar-plots on the left correspond to weight activations in the reference animation, and the green ones on the right to the estimated solutions.
**Funding**
This work has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 812912, from FCT IP strategic project NOVA LINCS (FCT UIDB/04516/2020) and project DSAIPA/AI/0087/2018. The work has also been supported in part by the Ministry of Education, Science and Technological Development of the Republic of Serbia (Grant No. 451-03-9/2021-14/200125).
Figure 25: _Ada_, an example frame with predictions using different ordering techniques. The top row shows a mesh reconstruction, and regions of higher mesh error are highlighted in red, according to the color bar on the right. The bottom row shows corresponding blendshape weights activation, with summarized root mean squared error and cardinality of each approach. |
2306.07606 | Combined two-loop self-energy corrections at finite and zero
temperatures | In this paper we investigate higher-order corrections to the energies of
bound states in hydrogen subjected to the external blackbody radiation field.
In particular, within the framework of thermal quantum electrodynamics and
$S$-matrix approach we analyze combined type of two-loop self-energy
corrections, including one zero-vacuum and one loop at finite temperature. By
utilizing the method of dimensional regularization, we derive closed analytical
expressions for the energy shifts of atomic levels. Our numerical calculations
demonstrate that even at room temperature these corrections can be significant
for excited states, reaching the magnitude of the thermal induced Stark
contribution. | T. Zalialiutdinov, D. Solovyev | 2023-06-13T08:03:23Z | http://arxiv.org/abs/2306.07606v2 | # Combined two-loop self-energy corrections at finite and zero temperatures
###### Abstract
In this paper we investigate higher-order corrections to the energies of bound states in hydrogen subjected to the external blackbody radiation field. In particular, within the framework of thermal quantum electrodynamics and \(S\)-matrix approach we analyze combined type of two-loop self-energy corrections, including one zero-vacuum and one loop at finite temperature. By utilizing the method of dimensional regularization, we derive closed analytical expressions for the energy shifts of atomic levels. Our numerical calculations demonstrate that even at room temperature these corrections can be significant for excited states, reaching the magnitude of the thermal induced Stark contribution.
## I Introduction
The interaction between blackbody radiation (BBR) and atomic systems has been a subject of interest for many years [1; 2; 3; 4; 5]. Recent advancements in atomic physics have revealed the significance of BBR-stimulated effects in both fundamental and applied sciences [6]. In the pursuit of greater precision in the measurement of atomic transition energies and the development of frequency standards, the impact of uncertainty caused by BBR cannot be ignored, as evidenced by the most accurate clock experiments [7; 8; 9] and frequency measurements [6; 10; 11; 12; 13; 14]. This challenge has prompted extensive research on the calculation of BBR-induced shift in clock systems [15; 16], with the potential to revolutionize the field of high-precision metrology [17].
While early research on BBR-induced effects focused primarily on Rydberg atoms [18], the emergence of high-precision spectroscopy and frequency metrology has expanded the study of thermal induced effects to low-lying energy levels [6; 19; 20]. This development offers a promising path towards understanding fundamental physical constants, including the Rydberg constant, \(R_{\infty}\), and fine-structure constant, \(\alpha\). As the effects are not very pronounced, consideration of the finite temperature impact on atomic systems is typically limited to lower-order corrections within the framework of quantum mechanical (QM) perturbation theory.
Previously, we developed a method for calculating higher-order corrections in the framework of the \(S\)-matrix line profile approach [21; 22] and quantum electrodynamics of bound states at finite temperatures (TQED), which makes it possible to carry out calculations in complete analogy with ordinary quantum electrodynamics at zero temperature. In particular, thermal one-loop corrections to hyperfine splitting, \(g\)-factor, recombination cross sections, and probabilities of one- and two-photon transitions were calculated [23; 24; 25; 26; 27; 28].
In this study, we extend the application of these methods to calculate combined two-loop self-energy radiative corrections with one ordinary and one thermal loop to the energy levels of a hydrogen-like atom. Our approach for evaluating the relevant equations incorporates the bound-state \(S\)-matrix formalism, finite-temperature quantum field theory, and non-relativistic quantum electrodynamics (NRQED), together with a technique known as dimensional regularization. The investigation of these higher-order corrections can be essential for achieving more accurate determinations of the blackbody radiation (BBR) shift in various physical systems. Our results pave the way for further progress in this field.
The paper is structured as follows. In Section II, we consider the derivation of the leading order one-loop self-energy contribution to the energy shift by applying the dimensional regularization approach. Section III is dedicated to the evaluation of the finite temperature one-loop contribution and its renormalization. In Section IV, we apply these approaches to evaluate the combined two-loop problem. We present the numerical results and discussion in Section V. Throughout the paper, we use relativistic units in which the reduced Planck constant \(\hbar\), the speed of light \(c\), and the vacuum permittivity \(\varepsilon_{0}\) are set to unity (\(\hbar=c=\varepsilon_{0}=1\)). The fine structure constant \(\alpha\) is given in these units as \(\alpha=e^{2}/(4\pi)\), where \(e\) is the electron charge.
## II One-loop electron self-energy in the dimensional regularization
In this section we briefly remind the derivation of leading order \(\alpha(\alpha Z)^{4}\) (\(Z\) is the nuclear charge) self-energy correction to the atomic energy level within the nonrelativistic approach applying dimensional regularization.
As is customary in dimensionally regularized quantum electrodynamics, we assume that the dimension of the space-time is \(D=4-2\varepsilon\), and that of space \(d=3-2\varepsilon\). The parameter \(\varepsilon\) is considered as small, but only on the level of matrix elements, where an analytic continuation to a noninteger spatial dimension is allowed. At first we briefly discuss the extension of the basic formulas of NRQED to the case of an arbitrary number of dimensions. The application of dimensional-regularized NRQED approach to estimate the Lamb shift in hydrogen can be found in [29].
The energy shift of the state \(a\) corresponding to the one-loop Feynman diagram depicted in Fig. 1 is given by the following expression [30]:
\[\Delta E_{a}=-{\rm i}e^{2}\int\frac{d^{D}K}{(2\pi)^{D}}D_{\mu\nu}(K) \tag{1}\] \[\times\langle\overline{\psi}_{a}|\gamma^{\mu}\frac{1}{\not{p}- \not{K}-m-\gamma_{0}V}\gamma^{\nu}|\psi_{a}\rangle-\delta m,\]
where
\[D_{\mu\nu}(K)=\frac{g_{\mu\nu}}{K^{2}} \tag{2}\]
is the photon propagator in the Feynman gauge (\(\mu\), \(\nu=0,\,1,\,2,\,3\)), \(K=(k_{0},\mathbf{k})\) and \(p=(p_{0},\mathbf{p})\) are the the 4-vectors of photon and electron momenta, respectively, \(e\) is the electron charge, \(g_{\mu\nu}\) is the metric tensor, \(\delta m\) is the one-loop mass counter term, \(\psi_{a}\) is the solution of Dirac equation for the hydrogen atom and \(\overline{\psi}_{a}=\gamma_{0}\psi_{a}^{\dagger}\) its Dirac conjugation. The Coulomb potential \(V\) in the denominator of Eq. (1) is given by
\[V(\mathbf{q})=-\frac{Ze^{2}}{q^{2}}, \tag{3}\]
where \(q=+\sqrt{\mathbf{q}^{2}}\). The Fourier transform of Eq. (3) can be written as follows
\[V(\mathbf{r})=-Ze^{2}\int\frac{d^{d}q}{(2\pi)^{d}}\frac{e^{{\rm i}\mathbf{q}\mathbf{r}}}{q ^{2}}=-\frac{Z_{e}e^{2}}{4\pi r}=-\frac{Z_{e}\alpha}{r}. \tag{4}\]
The latter representation provides an implicit definition of \(Z_{\varepsilon}\)[31]. The integration over \(k_{0}\) in Eq. (1) is taken along the standard Feynman contour \(C\)[32].
To pass to the nonrelativistic limit, we first transform the matrix element in the integrand of Eq. (1) [33]:
\[\langle\overline{\psi}_{a}|\gamma^{\mu}\frac{1}{\not{p}-\not{K}-m -\gamma_{0}V}\gamma^{\nu}|\psi_{a}\rangle \tag{5}\] \[=\langle\overline{\psi}_{a}|\gamma^{\mu}e^{{\rm i}\mathbf{k}\mathbf{r}} \frac{1}{\not{p}-\gamma_{0}k_{0}-\gamma_{0}V-m}\gamma^{\nu}e^{-{\rm i}\mathbf{k} \mathbf{r}}|\psi_{a}\rangle\] \[=\langle\psi_{a}^{\dagger}|\alpha^{\mu}e^{{\rm i}\mathbf{k}\mathbf{r}} \frac{1}{p_{0}-k_{0}-\mathbf{\alpha}\mathbf{p}-V-\gamma_{0}m}\alpha^{\nu}e^{-{\rm i} \mathbf{k}\mathbf{r}}|\psi_{a}\rangle\] \[=\langle\psi_{a}^{\dagger}|\alpha^{\mu}e^{{\rm i}\mathbf{k}\mathbf{r}} \frac{1}{E_{a}-k_{0}-H_{D}}\alpha^{\nu}e^{-{\rm i}\mathbf{k}\mathbf{r}}|\psi_{a}\rangle,\]
where in the last line we took into account that \(p_{0}=E_{a}\) is the Dirac energy of the bound state \(a\) and \(H_{D}=\mathbf{\alpha}\mathbf{p}-V-\gamma_{0}m\) is the Dirac Hamiltonian.
In the Coulomb gauge the photon propagator is
\[D_{00}=\frac{1}{\mathbf{k}^{2}}, \tag{6}\]
\[D_{ij}=\frac{1}{K^{2}}\left(\delta_{ij}-\frac{k_{i}k_{j}}{\mathbf{k}^{2}}\right) \tag{7}\]
The integration over \(k_{0}\) in Eq. (1) along the contour \(C\) can be reduced to two scales in the self-energy problem: the atomic energy scale \(m(\alpha Z)^{2}\) (low-energy part, \(\Delta E_{a}^{\rm L}\)) and the relativistic electron mass scale (high-energy part, \(\Delta E_{a}^{\rm H}\)). Then the dimensional regularization can be applied to both using only one regularization parameter: the dimension of the coordinate space, given by \(d=3-2\varepsilon\). This leads to a straightforward derivation of radiative corrections in terms of the expectation values of effective operators and the Bethe logarithm. Following the work [34], we represent total one-loop self-energy correction to the atomic energy of the state \(a\) as a sum of two contributions \(\Delta E_{a}=\Delta E_{a}^{\rm L}+\Delta E_{a}^{\rm H}\).
The leading nonrelativistic low-energy contribution of Eq. (1) comes from the dipole approximation of matrix element given by Eq. (5). Performing integration over \(k_{0}\) in Eq. (1) and taking into account that poles of electron propagator do not contribute in the low-energy limit [29; 34], we find
\[\Delta E_{a}^{\rm L} = e^{2}\int\frac{d^{d}k}{(2\pi)^{d}}\frac{1}{2k}\left(\delta_{ij} -\frac{k_{i}k_{j}}{\mathbf{k}^{2}}\right)\] \[\times\langle\phi_{a}|p^{i}\frac{1}{E_{a}-H_{S}-k}p^{j}|\phi_{a}\rangle,\]
where \(k=+\sqrt{\mathbf{k}^{2}}\equiv\omega\) and \(H_{S}\) denotes the nonrelativistic Hamiltonian in \(d\) dimensions. The wave function \(\phi\), in contrast to \(\psi\) in Eq. (5), corresponds to the nonrelativistic Schrodinger wave function.
The angular integration in Eq. (8) can be performed by noting that [35]
\[\int d^{d}k\left(\delta_{ij}-\frac{k_{i}k_{j}}{\mathbf{k}^{2}}\right) = \int d\Omega_{d}k^{d-1}dk\left(\delta_{ij}-\frac{k_{i}k_{j}}{\bm {k}^{2}}\right)\] \[= \frac{2\pi^{d/2}}{\Gamma(d/2)}\frac{d-1}{d}\delta_{ij}\int\limits _{0}^{\infty}k^{d-1}dk.\]
Figure 1: The Feynman graphs representing the lowest-order self-energy QED correction to the atomic energy level. The double solid line denotes the electron in the external Coulomb potential \(V\) (the Furry picture), the wavy line denotes the virtual photon.
Then, using Eq. (9) and expanding the result into the Taylor series up to the terms \(\sim O(\varepsilon)\) (details are given in Appendix A), we obtain
\[\Delta E_{a}^{\rm L}=\frac{2\alpha}{3\pi}\langle\phi_{a}|p_{i}(H_{S }-E_{a})\left[\frac{1}{2\varepsilon}+\frac{5}{6}-\frac{\gamma_{E}}{2}\right. \tag{10}\] \[\left.-\log[2(H_{S}-E_{a})]+\frac{1}{2}\log 4\pi\right]p_{i}|\phi_{a}\rangle.\]
The expression (10) can be simplified by noting that
\[\langle\phi_{a}|p_{i}(H_{S}-E_{a})p^{i}|\phi_{a}\rangle \tag{11}\] \[=\frac{1}{2}\langle\phi_{a}|[p_{i},[H_{S},p^{i}]]+p^{2}H_{S}+H_{S }p^{2}|\phi_{a}\rangle\] \[-E_{a}\langle\phi_{a}|p^{2}|\phi_{a}\rangle=\frac{1}{2}\langle \phi_{a}|[p_{i},[H_{S},p^{i}]]|\phi_{a}\rangle\] \[=\frac{1}{2}\langle\phi_{a}|\Delta V|\phi_{a}\rangle.\]
Then it is easy to show that the term in Eq. (11), which is singular in the limit \(\varepsilon\to 0\), is \(+\frac{\alpha}{6\pi\varepsilon}\langle\phi_{a}|\Delta V|\phi_{a}\rangle\). Below we will see that the same divergence, but with the opposite sign, occurs in the high-energy part and is eventually compensated in the total shift.
Taking into account that potential \(V\) satisfies the \(d\)-dimensional Poisson equation
\[\Delta V(\mathbf{r})=4\pi Z\alpha\delta^{(d)}(\mathbf{r}), \tag{12}\]
we arrive at
\[\Delta E_{a}^{\rm L}=\frac{2\alpha}{3\pi}\left[\langle\phi_{a}|2 \pi Z\alpha\delta^{d}(\mathbf{r})|\phi_{a}\rangle\left(\frac{5}{6}-\log(\alpha Z)^ {2}\right)\right. \tag{13}\] \[\left.-\langle\phi_{a}|p_{i}(H_{S}-E_{a})\log\frac{2(H_{S}-E_{a}) }{(\alpha Z)^{2}}p_{i}|\phi_{a}\rangle\right].\]
Here the constants \(-\frac{1}{2}\log 4\pi\) and \(-\frac{\gamma_{E}}{2}\) are removed by corresponding contribution in mass counter term \(\delta m\)[36]. The matrix elements of operators arising in Eq. (13) are
\[\langle\phi_{a}|2\pi Z\alpha\delta^{d}(\mathbf{r})|\phi_{a}\rangle= \frac{2(Z\alpha)^{4}}{n_{a}^{3}}\delta_{l0}, \tag{14}\]
\[\langle\phi_{a}|p_{i}(H_{S}-E_{a})\log\frac{2(H_{S}-E_{a})}{( \alpha Z)^{2}}p^{i}|\phi_{a}\rangle \tag{15}\] \[=\frac{2(\alpha Z)^{4}}{n_{a}^{3}}\log\beta_{a},\]
where \(\log\beta_{a}\) is the Bethe logarithm. The latter can be conveniently calculated in the acceleration gauge, see [37; 38], as follows:
\[\log\beta_{a}=\frac{B_{a}}{C_{a}}, \tag{16}\]
where
\[B_{a}=\sum_{n^{\prime}l^{\prime}}\frac{\left|\langle n_{a}l_{a}|\frac{\mathbf{r}} {r^{3}}|n^{\prime}l^{\prime}\rangle\right|^{2}\log\frac{2|E_{a^{\prime}}-E_{a }|}{(\alpha Z)^{2}}}{E_{n^{\prime}}-E_{a}} \tag{17}\]
\[C_{a}=\sum_{n^{\prime}l^{\prime}}\frac{\left|\langle n_{a}l_{a}|\frac{\mathbf{r}} {r^{3}}|n^{\prime}l^{\prime}\rangle\right|^{2}}{E_{n^{\prime}}-E_{a}}. \tag{18}\]
Here the summation runs over the entire spectrum including discrete and continuum states. Equation (16) has a numerical advantage for the evaluation of Bethe logarithm since it accelerates the convergence of sums in Eqs. (17), (18). Using the B-spline approach [39] for the solution of Schrodinger equation for hydrogen atom the infinite sums over entire spectrum can be converted to a finite sums over the pseudo states. Our numerical calculations of Bethe logarithm for \(1s\) and \(2s\) states in hydrogen leads to the values \(2.984129\) and \(2.811770\), respectively, which are consistent with [40].
Finally, substituting Eqs. (14) and (15) into Eq. (13) we arrive at
\[\Delta E_{a}^{\rm L}=\frac{2\alpha}{3\pi}\left[\frac{2(Z\alpha)^{ 4}}{n_{a}^{3}}\delta_{l0}\left(\frac{5}{6}-\log(\alpha Z)^{2}\right)\right. \tag{19}\] \[\left.-\frac{2(\alpha Z)^{4}}{n_{a}^{3}}\log\beta_{a}\right],\]
The next step in the consideration of one-loop self energy correction is the evaluation of high-energy part of Eq. (1). For the photon energies of the order of electron rest mass the electron propagator in Eq. (1) can be expanded in a series in powers of interaction with the Coulomb field of nucleus:
\[\frac{1}{\not{p}-\not{K}-m-\gamma_{0}V}=\frac{1}{\not{p}-\not{K} -m} \tag{20}\] \[+\frac{1}{\not{p}-\not{K}-m}\gamma_{0}V\frac{1}{\not{p}-\not{K} -m}+\ldots\]
Substituting Eq. (20) into Eq. (1) the high-energy part in the leading order can be written as a sum of zero and one potential terms (the first and second graphs in the right-hand side of the equation in Fig. 2):
\[\Delta E_{a}^{\rm H}=\langle\overline{\psi}_{a}|\Sigma(p)|\psi_{a}\rangle+ \langle\overline{\psi}_{a}|\Gamma_{0}(p,p)V|\psi_{a}\rangle-\delta m, \tag{21}\]
Figure 2: Potential expansion for the electron self-energy radiative correction. The ordinary solid line corresponds to the free electron, the dashed line with the cross at the end denotes the external potential.
where \(\Gamma_{0}(p^{\prime},p)\) is the vertex-function [32]
\[\Gamma_{\delta}(p^{\prime},p)=-{\rm i}e^{2}\int\frac{d^{D}K}{(2\pi)^ {D}}\frac{g_{\mu\nu}}{K^{2}} \tag{22}\] \[\times\left(\gamma^{\mu}\frac{1}{\not{p}^{\prime}-\not{K}-m} \gamma\frac{1}{\not{p}-\not{K}-m}\gamma^{\nu}\right),\]
and \(\Sigma(p)\) is the free-electron self-energy
\[\Sigma(p)=-{\rm i}e^{2}\int\frac{d^{D}K}{(2\pi)^{D}}\frac{g_{\mu\nu}}{K^{2}} \gamma^{\mu}\frac{1}{\not{p}-\not{K}-m}\gamma^{\nu}. \tag{23}\]
Note that higher order terms of decomposition (20) do not contribute at \(\alpha(\alpha Z)^{4}\) level.
Now we decompose \(\Gamma_{\mu}(p^{\prime},p)\) into a sum of the limit for zero momentum transfer \(q=p^{\prime}-p=0\) ("forward scattering") and the remainder [32]:
\[\Gamma_{\mu}(p^{\prime},p)=\Gamma_{\mu}(p,p)+\Gamma_{\mu}^{\rm R}(p^{\prime}, p). \tag{24}\]
The dimensionally regularized vertex-function \(\Gamma_{\mu}^{\rm R}(p^{\prime},p)\) is given by the electron form factors \(F_{1}(q^{2})\) and \(F_{2}(q^{2})\):
\[\Gamma_{\mu}^{\rm R}(p^{\prime},p)=F_{1}(q^{2})\gamma_{\mu}+\frac{{\rm i}}{2}F _{2}(q^{2})\sigma_{\mu\nu}q^{\nu}, \tag{25}\]
where
\[F_{1}(q^{2})=\frac{\alpha}{2\pi}\left[-\frac{1}{3\varepsilon}-\frac{1}{4}- \varepsilon\right]q^{2}+O(q^{4}), \tag{26}\]
\[F_{2}(q^{2})=\frac{\alpha}{2\pi}\left[(1+4\varepsilon)+\left(\frac{1}{6}+ \frac{5}{6}\varepsilon\right)q^{2}\right]+O(q^{4}), \tag{27}\]
and
\[\sigma_{\mu\nu}=\frac{{\rm i}}{2}[\gamma_{\mu},\gamma_{\nu}]. \tag{28}\]
After the cancellation of singular part of vertex-function and free-electron self-energy with mass counter term \(\delta m\) the leading order of high-energy part is
\[\Delta E_{a}^{\rm H}=\langle\overline{\psi}_{a}|\Gamma_{0}^{\rm R }(p,p)V|\psi_{a}\rangle \tag{29}\] \[=\langle\overline{\psi}_{a}|F_{1}(q^{2})\gamma_{0}V+\frac{{\rm i} }{2}F_{2}(q^{2})\sigma_{0\nu}q^{\nu}V|\psi_{a}\rangle,\]
where the regularized vertex-function \(\Gamma_{0}^{\rm R}\) contains only infrared divergence in parameter \(\varepsilon\). For further evaluation it is convenient to rewrite Eq. (29) in the coordinate representation:
\[\Delta E_{a}^{\rm H}=\langle\overline{\psi}_{a}|\frac{\alpha}{2 \pi}\left(-\frac{1}{3\varepsilon}-\frac{1}{4}\right)\gamma_{0}\Delta V \tag{30}\] \[+\frac{{\rm i}\alpha}{4\pi}(\mathbf{\alpha}\mathbf{\nabla}V)|\psi_{a}\rangle.\]
Here we used the fact that potential \(V\) does not depend on time and substituted \(q^{2}\to-(\partial_{t}^{2}-\Delta)\to\Delta\) together with the anti-commutation relation \(\{\gamma_{\mu},\gamma_{\nu}\}=2g_{\mu\nu}\). Passing to the nonrelativistic limit in Eq. (30) and using the Foldy-Wouthuysen transformation [41], we find
\[\Delta E_{a}^{\rm H}=\langle\phi_{a}|\left[\frac{\alpha}{2\pi} \left(-\frac{1}{3\varepsilon}-\frac{1}{4}\right)\Delta V\right. \tag{31}\] \[\left.+\frac{\alpha}{8\pi}\Delta V-\frac{\alpha}{2\pi}\frac{1}{r} \frac{dV}{dr}(\mathbf{l}\cdot\mathbf{s})\right]|\phi_{a}\rangle\] \[=\langle\phi_{a}|\left[-\frac{\alpha}{6\pi\varepsilon}\Delta V- \frac{\alpha}{2\pi}\frac{1}{r}\frac{dV}{dr}(\mathbf{l}\cdot\mathbf{s})\right]|\phi_{a }\rangle.\]
Note, that the divergent term in Eq. (31) is \(-\frac{\alpha}{6\pi\varepsilon}\langle\phi_{a}|\Delta V|\phi_{a}\rangle\) and exactly cancels the same contribution in the low energy part, see Eq. (10).
For our further purposes it is also convenient to give an expression for the remaining regular part of matrix element in Eq. (31) which only contributes to the high-energy part in the nonrelativistic limit:
\[\langle\phi_{n^{\prime}}|\Gamma_{0}^{\rm R}(p,p)V|\phi_{n}\rangle_{ \rm reg}=-\frac{\alpha}{2\pi}\langle\phi_{n^{\prime}}|\frac{1}{r}\frac{dV}{dr }(\mathbf{l}\cdot\mathbf{s})|\phi_{n}\rangle \tag{32}\] \[=-\frac{\alpha(Z\alpha)}{2\pi}\langle\phi_{n^{\prime}}|r^{-3}(\bm {l}\cdot\mathbf{s})|\phi_{n}\rangle.\]
Performing the angular reduction of matrix element we have for \(l_{a}\neq 0\):
\[\Delta E_{a}^{\rm H}=-\frac{\alpha(Z\alpha)^{4}}{2\pi n_{a}^{3}}\frac{\left(j_ {a}(j_{a}+1)-l_{a}(l_{a}+1)-\frac{3}{4}\right)}{l_{a}(l_{a}+1)(2l_{a}+1)} \tag{33}\]
Assembling the low-energy part (19) and the high-energy part (33), the finite result of the lowest order for one-loop self-energy is
\[\Delta E_{a}^{\rm SE}=\Delta E_{a}^{\rm L}+\Delta E_{a}^{\rm H} \tag{34}\] \[=\frac{4\alpha(Z\alpha)^{4}}{3\pi n_{a}^{3}}\left[\left(\frac{5}{ 6}-2\log(\alpha Z)\right)\delta_{l_{a}0}-\log\beta_{a}\right]\] \[-\frac{\alpha}{2\pi}\frac{(Z\alpha)^{4}}{n_{a}^{3}}\frac{\left(j_ {a}(j_{a}+1)-l_{a}(l_{a}+1)-3/4\right)}{l_{a}(l_{a}+1)(2l_{a}+1)}.\]
This expression coincides with the well-known expression for the leading order of one-loop Lamb shift derived within Pauli-Villars regularization scheme [42]. For the \(1s\) and \(2s\) states we find 8344.5 MHz and 1066.4 MHz, respectively.
## III Thermal one-loop self-energy
In this section we briefly remind the derivation of thermal one-loop self-energy correction to the arbitrary atomic energy level \(a\). In the leading order it is given by the well-known AC-Stark shift induced by the equilibrium radiation field with Planckian spectrum. The corresponding correction can be obtained by replacing ordinary photon propagator \(D_{\mu\nu}\) in Eq. (1) by thermal one \(D_{\mu\nu}^{\beta}\)[43]. In [44] it was found that the thermal part
of photon propagator \(D^{\beta}_{\mu\nu}\)[43] admits a different (equivalent) form which implies the integration over \(k_{0}\) along the contour \(C_{1}\), see Fig. 3.
In the space representation the finite temperature part of photon propagator reads
\[D^{\beta}_{\mu\nu}(x,x^{\prime})=g_{\mu\nu}\int\limits_{C_{1}}\frac{d^{4}K}{(2 \pi)^{4}}\frac{e^{{\rm i}K(x-x^{\prime})}}{K^{2}}n_{\beta}(E_{k}). \tag{35}\]
Here \(n_{\beta}(E_{k})=(\exp(E_{k}\beta)-1)^{-1}\) is the photon occupation number (Planck's distribution function), \(E_{k}=|\mathbf{k}|\), \(\beta=1/(k_{B}T)\), \(k_{B}\) is the Boltzmann constant and \(T\) is the radiation temperature in Kelvin. The equivalence of both forms, given by Eq. (35) and in [43], was demonstrated in [44]. Then the correction corresponding to the graph in Fig. 1 can be expressed as
\[\Delta E_{a}=-{\rm i}e^{2}\int\limits_{C_{1}}\frac{d^{4}K}{(2\pi )^{4}}D^{\beta}_{\mu\nu}(K) \tag{36}\] \[\times\langle\overline{\psi}_{a}|\gamma^{\mu}\frac{1}{\not{p}- \not{K}-m-\gamma_{0}V}\gamma^{\nu}|\psi_{a}\rangle-\delta m^{\beta},\]
where \(\delta m^{\beta}\) is the thermal mass counter term. As in the case of ordinary QED at \(T=0\), it can be obtained through the diagram of the free thermal self-energy on the mass shell. An accurate evaluation of the thermal mass counter term was performed in [43], resulting in \(\delta m^{\beta}=\pi\alpha(k_{B}T)^{2}/3\) (see Eq. (35) there).
It should be noted that in contrast to the ordinary one-loop correction Eq. (1) the ultraviolet divergence is absent due to the factor \(n_{\beta}\) in the integrand of Eq. (35). Hence, the contribution of the high-energy part, when \(k_{0}\) is of the order of the electron rest-mass, is strongly suppressed and only the low-energy contribution can be considered. This also implies that the temperatures are sufficiently low, i.e \(k_{B}T\) is of the order of binding energy \(m(\alpha Z)^{2}\) or less.
As for an ordinary self-energy loop, we pass to the Coulomb gauge and nonrelativistic limit for the operators and wave functions [31]. Then integration over \(k_{0}\) along the contour \(C_{1}\) leads to
\[\Delta E^{\beta}_{a}= e^{2}\sum_{\pm}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{n_{\beta}(k)}{2 k}\left(\delta_{ij}-\frac{k_{i}k_{j}}{\mathbf{k}^{2}}\right) \tag{37}\] \[\times\langle\phi_{a}|p^{i}\frac{1}{E_{a}-H_{S}\pm k}p^{j}|\phi_ {a}\rangle-\delta m^{\beta}.\]
Here \(\sum_{\pm}\) denotes the sum of two contributions with \(+k\) and \(-k\) in the denominator. Performing in Eq. (37) integration over the angles, Eq. (9), we arrive at
\[\Delta E^{\beta}_{a}=\frac{2\alpha}{3\pi}\sum_{\pm}\int\limits^{ \infty}_{0}dk\,k\,n_{\beta}(k) \tag{38}\] \[\times\langle\phi_{a}|p^{i}\frac{1}{E_{a}-H_{S}\pm k}p^{i}|\phi_{ a}\rangle-\delta m^{\beta}.\]
For numerical calculations it is convenient to rewrite Eq. (38) in the basis set representation and with matrix elements in the length form. The former can be done with the use of equality
\[\frac{1}{E_{a}-H_{S}-k}=\sum_{n}\frac{|\phi_{n}\rangle\langle\phi_{n}|}{E_{a} -E_{n}-k}, \tag{39}\]
where the sum over \(n\) runs over entire spectrum of Schrodinger equation including continuum. The transition to the length form is made using the expression (14) given in Appendix B. The result is
\[\Delta E^{\beta}_{a}=\frac{2\alpha}{3\pi}\sum_{\pm}\sum_{n}\int \limits^{\infty}_{0}dk\,k^{3}\,n_{\beta}(E_{k}) \tag{40}\] \[\times\frac{\langle\phi_{a}|r^{i}|\phi_{n}\rangle\langle\phi_{n}| r^{i}|\phi_{a}\rangle}{E_{a}-E_{n}\pm k}+\frac{2\alpha}{\pi}\int\limits^{\infty}_{0} dk\,k\,n_{\beta}(k)-\delta m^{\beta}.\]
The first term in Eq. (40) represents ordinary AC-Stark shift induced by the blackbody radiation field [5]. The second term is state independent and can be evaluated analytically, leading to
\[\frac{2\alpha}{\pi}\int\limits^{\infty}_{0}dk\,k\,n_{\beta}(E_{k})=\frac{\pi \alpha}{3}(k_{B}T)^{2}. \tag{41}\]
After substituting Eq. (41) into Eq. (40), the state-independent contribution is cancelled by the mass-counter term, \(\delta m^{\beta}\). Then the AC-Stark shift is
\[\Delta E^{\beta}_{a}=\frac{2\alpha}{3\pi}\sum_{\pm}\sum_{n}\int \limits^{\infty}_{0}dk\,k^{3}\,n_{\beta}(E_{k}) \tag{42}\] \[\times\frac{\langle\phi_{a}|r^{i}|\phi_{n}\rangle\langle\phi_{n}| r^{i}|\phi_{a}\rangle}{E_{a}-E_{n}\pm k}.\]
This equation coincides with a well-known quantum mechanical result [5]. Taking into account that in relativistic units \(k\sim m(\alpha Z)^{2}\), \(r\sim(m\alpha Z)^{-1}\) and \(\int^{\infty}_{0}k^{3}n_{\beta}(k)\sim\)
Figure 3: Integration contour \(C_{1}\) in \(k_{0}\) plane. Arrows on the contour define the pole-bypass rule. The poles \(\pm|\mathbf{k}|\) are indicated by oblique crosses.
\((k_{B}T)^{4}\) we find that for the ground state \(\Delta E_{a}^{\beta}\sim\frac{(k_{B}T)^{4}}{m^{3}\alpha_{0}^{3}Z^{4}}\) r.u., which is in agreement with estimations given in [20]. At the same time, for states with \(n\geq 2\) in the denominator of Eq. (42), energy differences corresponding to the Lamb shift can arise, see [20; 45]. In this case, the temperature parameter comes at a lower power, which increases the magnitude of the correction. At \(T=300\) K the AC-Stark shifts are \(-0.039\) Hz and \(-1.04\) Hz for the \(1s\) and \(2s\) states in the hydrogen atom, respectively. As the temperature increases to \(T=1000\) K, the corresponding shifts become \(-4.79\) Hz and \(-132.2\) Hz. The accurate numerical evaluation of Eq. (42) and the corresponding analysis can be found in [5; 20; 45; 46].
It should be noted that in higher orders \(\delta m^{\beta}\), the thermal mass counter term has not yet been calculated, and the explicit separation of thermal mass in two-loop diagrams with a bound electron in the same manner is rather difficult. However, in the higher orders of perturbation theory, when passing from the velocity form to the length form, all terms proportional to \((k_{B}T)^{2}\) vanish in the sum of contributions from all diagrams depicted in Fig. 4. In particular, previous studies [27; 28] have demonstrated the cancellation of such terms when calculating the thermal loop corrections to the bound electron \(g\)-factor or hyperfine splitting. Thus, the renormalization procedure of the thermal loop in the nonrelativistic limit can be validly reduced to a calculation in the length form, while the procedure of renormalization of zero-temperature (ordinary) loop remains the same.
## IV Two loop self-energy with one thermal loop
When dealing with two-loop combined-type diagrams, the calculation procedure can be separated into two contributions, similar to the one-loop case. It is important to note that in this scenario, the photon momenta in the thermal loop (denoted as \(k_{1}\)) are always in the low-energy region, since the temperatures under consideration are much lower than the electron rest mass. As a result, the high-energy contribution arises from the photon momenta of the ordinary loop (denoted as \(k_{2}\)). Consequently, the regularization process involves eliminating ultraviolet divergences in the integrals over the momentum \(k_{2}\) of an ordinary photon. Meanwhile, the integral over the momenta of a thermal photon converges due to the presence of a Planckian distribution in the integrand.
Building upon the results of previous sections, we begin by considering the low-energy part of the two-loop problem. Specifically, we will extend the findings of the zero-temperature case discussed in [34] to the case of finite temperature. To eliminate singular terms in this part the dimensional regularization is used. Following this technique, the same singular terms with opposite signs will arise in the high-energy regime when decomposing the denominators containing momentum \(k_{2}\) by the powers of the Coulomb potential interaction in the limit of large photon momenta.
### Low-energy limit of two-loop contribution
The low-energy contribution \(\Delta E_{a}^{\rm L}\), which has been redefined for the two-loop problem, arises from two photon momenta that are of the order of \(m\alpha^{2}\). Following the method described in [34], and taking into account Eq. (35) for the thermal photon propagator in the Coulomb gauge, \(\Delta E_{a}\) the length gauge is expressed by
\[\Delta E_{a}^{\rm L}=\left[e^{2}\int\frac{d^{d}k_{1}k_{1}^{2}}{( 2\pi)^{d}2k_{1}}\frac{d-1}{d}n_{\beta}(k_{1})\right] \tag{43}\] \[\times\left[e^{2}\int\frac{d^{d}k_{2}k_{2}^{2}}{(2\pi)^{d}2k_{2}} \frac{d-1}{d}\right]P(k_{1},k_{2}),\]
the thermal loop (denoted as \(k_{1}\)) are always in the low-energy region, since the temperatures under consideration are much lower than the electron rest mass. As a result, the high-energy contribution arises from the photon momenta of the ordinary loop (denoted as \(k_{2}\)). Consequently, the regularization process involves eliminating ultraviolet divergences in the integrals over the momentum \(k_{2}\) of an ordinary photon. Meanwhile, the integral over the momenta of a thermal photon converges due to the presence of a Planckian distribution in the integrand.
Building upon the results of previous sections, we begin by considering the low-energy part of the two-loop problem. Specifically, we will extend the findings of the zero-temperature case discussed in [34] to the case of finite temperature. To eliminate singular terms in this part the dimensional regularization is used. Following this technique, the same singular terms with opposite signs will arise in the high-energy regime when decomposing the denominators containing momentum \(k_{2}\) by the powers of the Coulomb potential interaction in the limit of large photon momenta.
### Low-energy limit of two-loop contribution
The low-energy contribution \(\Delta E_{a}^{\rm L}\), which has been redefined for the two-loop problem, arises from two photon momenta that are of the order of \(m\alpha^{2}\). Following the method described in [34], and taking into account Eq. (35) for the thermal photon propagator in the Coulomb gauge, \(\Delta E_{a}\) the length gauge is expressed by
\[\Delta E_{a}^{\rm L}=\left[e^{2}\int\frac{d^{d}k_{1}k_{1}^{2}}{( 2\pi)^{d}2k_{1}}\frac{d-1}{d}n_{\beta}(k_{1})\right] \tag{44}\] \[\times\left[e^{2}\int\frac{d^{d}k_{2}k_{2}^{2}}{(2\pi)^{d}2k_{2}} \frac{d-1}{d}\right]P(k_{1},k_{2}),\]
where \(P(k_{1},k_{2})\) is the momentum of the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{2}\)-th particle in the \(k_{1}\)-th particle in the \(k_{2}\)-th particle in the
where \(d=3-2\varepsilon\) and \(d^{d}k=k^{d-1}dkd\Omega_{d}\).
The terms in Eq. (44) can be associated with different loop diagrams: the first term corresponds to the crossed loop diagrams (CL) depicted in Fig. 4 (e) and (f), the second term corresponds to the vacuum loop inside the thermal loop (ViT) diagram shown in Fig. 4 (b); the third term represents the contribution of the vacuum loop over the thermal loop (VoT) diagram depicted in Fig. 4 (a); while the last three summands corresponds to the irreducible and reducible parts of the loop after the loop (LaL), see Fig. 4 (c) and (d).
Taking into account that in relativistic units \(k\sim m(\alpha Z)^{2}\), \(dk\sim m(\alpha Z)^{2}\), \(r\sim(m\alpha Z)^{-1}\) and \(\int_{0}^{\infty}k^{3}n_{\beta}(k)\sim(k_{B}T)^{4}\) we find that \(\Delta E_{a}^{\beta}\sim\frac{(k_{B}T)^{4}}{m^{3}Z^{2}}\) r.u. At the same time, when integrating over \(k_{2}\) or when summing over the spectrum, situations arise when one of the denominators reduces the power of \(k_{1}\) in the numerator and, consequently, reduces power of the temperature in the corresponding parametric estimate thereby increasing the magnitude of the effect.
The integration over the \(k_{2}\) can be done analytically using the dimensional regularization technique, while the integration over the \(k_{1}\) will be done numerically in the last step of the calculation. All integrals necessary for these calculations are evaluated in the Appendix A.
#### iii.2.1 Crossed loops
In the Coulomb gauge and nonrelativistic limit the corresponding contribution is given by the first term in Eq. (44) and can be written as follows
\[\Delta E_{a}^{\text{L, CL}}=\sum_{\pm}\left[e^{2}\int\frac{d^{d}k _{1}\,k_{1}^{2}}{(2\pi)^{d}2k_{1}}\frac{d-1}{d}n_{\beta}(k_{1})\right] \tag{45}\] \[\times\left[e^{2}\int\frac{d^{d}k_{2}\,k_{2}^{2}}{(2\pi)^{d}2k_{2 }}\frac{d-1}{d}\right]\left\langle\phi_{a}\left|r_{i}\frac{1}{E_{a}-H_{S}\pm k _{1}}\right.\right.\] \[\times\left.\left.r_{j}\frac{1}{E_{a}-H_{S}\pm k_{1}-k_{2}}r^{i} \frac{1}{E_{a}-H_{S}-k_{2}}r^{j}\right|\phi_{a}\right\rangle.\]
Before proceeding to isolating the finite part of the low-energy contribution, it is convenient to pass to the basis set representation, see equality (39). Then, performing \(d\)-dimensional integration over \(k_{2}\) with the use of Eq. (101), and angular integration over the \(k_{1}\), the finite part of low-energy contribution of CL diagram is
\[\Delta E_{a}^{\text{L, CL}}=\frac{2\alpha}{3\pi}\sum_{\pm}\sum_{n _{1}n_{2}n_{3}}\int\limits_{0}^{\infty}dk_{1}k_{1}^{3}n_{\beta}(k_{1}) \tag{46}\] \[\times\frac{\langle\phi_{a}|r_{i}|\phi_{n_{1}}\rangle\langle\phi_ {n_{1}}|r_{j}|\phi_{n_{2}}\rangle\langle\phi_{n_{2}}|r^{i}|\phi_{n_{3}}\rangle \langle\phi_{n_{3}}|r^{j}|\phi_{a}\rangle}{E_{a}-E_{n_{1}}\pm k_{1}}\] \[\times\frac{2\alpha}{3\pi}\left\{\frac{5}{6}\frac{\left((E_{a}-E_ {n_{2}}\pm k_{1})^{3}-(E_{a}-E_{n_{3}})^{3}\right)}{E_{n_{3}}-E_{n_{2}}\pm k_{ 1}}\right.\] \[-\left.\frac{(E_{a}-E_{n_{2}}\pm k_{1})^{3}\log[2|E_{a}-E_{n_{2}} \pm k_{1}|]}{E_{n_{3}}-E_{n_{2}}\pm k_{1}}\right.\] \[\left.+\frac{(E_{a}-E_{n_{3}})^{3}\log[2|E_{a}-E_{n_{3}}|]}{E_{n_{ 3}}-E_{n_{2}}\pm k_{1}}\right\},\]
where summation over \(n_{1},\,n_{2},\,n_{3}\) extends over entire spectrum of Schrodinger equation for hydrogen atom including continuum states.
The part with infrared divergence arising in the dimensional integration, see Eq. (101), can be transformed with the use of commutation relation (100) and resonance condition \(k_{1}=\pm(E_{a}-E_{n_{1}})\) to the form:
\[\frac{2\alpha}{3\pi}\sum_{\pm}\int\limits_{0}^{\infty}k_{1}^{3}n _{\beta}(k_{1})\left(\frac{\alpha}{6\pi\varepsilon}\right) \tag{47}\] \[\times\sum_{n_{1}}\frac{\langle\phi_{a}|r^{i}|\phi_{n_{1}} \rangle\langle\phi_{n_{1}}|r_{i}|\phi_{n_{2}}\rangle\langle\phi_{n_{2}}|\Delta| \phi_{n_{a}}\rangle}{E_{a}-E_{n_{1}}\pm k_{1}}.\]
Below we show that it identically compensates for the similar contribution of the high-energy part, see Eq. (60). Thus only finite result given by Eq. (46) remains.
#### iii.2.2 Vacuum loop inside thermal loop
The corresponding contribution to the energy shift Eq. (43) is given by the second term of Eq. (44), which
Figure 4: The Feynman graphs representing the combined two-loop self-energy QED corrections to the atomic energy level. Various contributions are indicated using the following notations: (a) - vacuum loop over thermal loop (VoT), (b) - vacuum loop inside thermal loop (ViT), (c) and (d) - loop- after-loop (LaL), (e) and (f) - crossed loops (CL). The double solid line denotes the electron in the external Coulomb potential \(V\) (the Furry picture), the tiny wavy line denotes the zero-temperature virtual photon, while the bold one corresponds to the finite temperatures.
can be written as follows
\[\Delta E_{a}^{\text{L, ViT}}=\sum_{\pm}\left[e^{2}\int\frac{d^{d}k_{1 }\,k_{1}^{2}}{(2\pi)^{d}2k_{1}}\frac{d-1}{d}n_{\beta}(k_{1})\right]\times \tag{48}\] \[\left[e^{2}\int\frac{d^{d}k_{2}\,k_{2}^{2}}{(2\pi)^{d}2k_{2}}\frac {d-1}{d}\right]\left\langle\phi_{a}\left|r_{i}\frac{1}{E_{a}-H_{S}\pm k_{1}}\times\right.\right.\] \[\left.\left.r_{j}\frac{1}{E_{a}-H_{S}\pm k_{1}-k_{2}}r^{j}\frac{1 }{E_{a}-H_{S}\pm k_{1}}r^{i}\right|\phi_{a}\right\rangle.\]
Integrating over \(k_{2}\), see Eq. (16), then over \(k_{1}\) angles, and going to the basis set representation, we obtain
\[\Delta E_{a}^{\text{L, ViT}}=\frac{2\alpha}{3\pi}\sum_{\pm}\sum_{ n_{1}n_{2}}\int\limits_{0}^{\infty}dk_{1}k_{1}^{3}n_{\beta}(k_{1})\times \tag{49}\] \[\frac{\langle\phi_{a}|r_{i}|\phi_{n_{1}}\rangle\langle\phi_{n_{1 }}|r_{j}|\phi_{n_{2}}\rangle\langle\phi_{n_{2}}|r_{j}|\phi_{n_{3}}\rangle \langle\phi_{n_{3}}|r^{i}|\phi_{a}\rangle}{(E_{a}-E_{n_{1}}\pm k_{1})(E_{a}-E_ {n_{3}}\pm k_{1})}\times\] \[\left\{\frac{2\alpha}{3\pi}(E_{n_{2}}-E_{a}\pm k_{1})^{3}\left( \frac{5}{6}-\log[2|E_{n_{2}}-E_{a}\pm k_{1}|]\right)\right\}.\]
The singular term discarded above, after conversion Eq. (17) and application of the resonance condition \(k_{1}=\pm(E_{a}-E_{n_{1}})=\pm(E_{a}-E_{n_{3}})\), can be written as
\[\frac{2\alpha}{3\pi}\int\limits_{0}^{\infty}dk_{1}k_{1}^{3}n_{ \beta}(k_{1})\sum_{\pm}\sum_{n_{1}n_{2}}\left(+\frac{\alpha}{6\pi\varepsilon} \right)\times \tag{50}\] \[\frac{\langle\phi_{a}|r_{i}|\phi_{n_{1}}\rangle\langle\phi_{n_{1 }}|\Delta V|\phi_{n_{3}}\rangle\langle\phi_{n_{3}}|r_{i}|\phi_{a}\rangle}{(E_ {a}-E_{n_{1}}\pm k_{1})(E_{a}-E_{n_{3}}\pm k_{1})}.\]
Again, see below, the high-energy part of the ViT diagram represents the same contribution, but of the opposite sign. Thus, in the aggregate of the low-energy and high-energy parts, all infrared divergences arising in the renormalization procedure of this diagram disappear.
#### iii.2.3 Vacuum loop over thermal loop
Within the nonrelativistic limit the contribution represented by the vacuum loop over the thermal loop, see the third term in Eq. (44), is given by
\[\Delta E_{a}^{\text{L, VoT}}=\sum_{\pm}\left[e^{2}\int\frac{d^{d}k _{1}\,k_{1}^{2}}{(2\pi)^{d}2k_{1}}\frac{d-1}{d}n_{\beta}(k_{1})\right] \tag{51}\] \[\times\left.r_{j}\frac{1}{E_{a}-H_{S}\pm k_{1}-k_{2}}r^{j}\frac{1 }{E_{a}-H_{S}-k_{2}}r^{i}\right|\phi_{a}\right\rangle.\]
The integral over \(k_{2}\) in Eq. (51) diverges and within the dimensional regularization can be evaluated using Eq. (10). Then the finite part is given by
\[\Delta E_{a}^{\text{L, VoT}}=\frac{2\alpha}{3\pi}\int\limits_{0}^{ \infty}dk_{1}k_{1}^{3}n_{\beta}(k_{1})\left\{\frac{2\alpha}{3\pi}\langle\phi_{ a}|r_{i}|\phi_{n_{1}}\rangle\langle\phi_{n_{1}}|r_{j}|\phi_{n_{2}}\rangle \langle\phi_{n_{2}}|r_{j}|\phi_{n_{3}}\rangle\langle\phi_{n_{3}}|r^{i}|\phi_{a}\right\rangle \tag{52}\] \[\times(3E_{a}-E_{n_{1}}-E_{n_{2}}-E_{n_{3}}\pm k_{1})\left(\frac{ (E_{a}-E_{n_{1}})^{3}\log[2|E_{a}-E_{n_{1}}|]-(E_{a}-E_{n_{2}}\pm k_{1})^{3} \log[2|E_{a}-E_{n_{2}}\pm k_{1}|]}{(E_{n_{1}}-E_{n_{2}}\pm k_{1})(-E_{n_{2}}+E _{n_{3}}\pm k_{1})(3E_{a}-E_{n_{1}}-E_{n_{2}}-E_{n_{3}}\pm k_{1})}\right.\] \[\left.-\frac{(E_{a}-E_{n_{1}})^{3}\log[2|E_{a}-E_{n_{1}}|]-(E_{a}- E_{n_{3}})^{3}\log[2|E_{a}-E_{n_{3}}|]}{(E_{n_{1}}-E_{n_{3}})(-E_{n_{2}}+E_{n_{3}}\pm k_{1})(3E _{a}-E_{n_{1}}-E_{n_{2}}-E_{n_{3}}\pm k_{1})}+\frac{5}{6}\right)\right\}\]
#### iii.2.4 Loop-after-loop
The last contribution to be considered is a loop-after-loop diagram. It can can be splitted into irreducible and reducible parts (reference state part). The latter can be represented as a product of the matrix elements of the diagonal one-loop matrix elements of the ordinary and thermal electron self-energy operators and the corresponding first derivatives over energy \(E_{a}\) (last two terms in Eq. (44)). Therefore, the evaluation of reducible loop-after-loop part is identical to the previously considered one-loop contribution and in this section we will focus on the low-energy limit of irreducible part only. This is
\[\Delta E_{a}^{\text{L, LaL}}(\text{irr})=\sum_{\pm}\left[e^{2}\int \frac{d^{d}k_{1}\,k_{1}^{2}}{(2\pi)^{d}2k_{1}}\frac{d-1}{d}n_{\beta}(k_{1})\right]\times \tag{53}\] \[\left[e^{2}\int\frac{d^{d}k_{2}\,k_{2}^{2}}{(2\pi)^{d}2k_{2}}\frac {d-1}{d}\right]\left\langle\phi_{a}\left|r_{i}\frac{1}{E_{a}-(H_{S}\pm k_{1})}r^{i }\times\right.\] \[\left.\frac{1}{(E_{a}-H_{S})^{{}^{\prime}}r_{j}}\frac{1}{E_{a}-(H_ {S}+k_{2})}r^{j}\right|\phi_{a}\right\rangle.\]
Performing the same steps as in previous sections, the finite part in the basis set representation is reduced to
\[\Delta E_{a}^{\text{L,\,LaL}}(\text{irr})=\frac{2\alpha}{3\pi}\sum_{ \pm}\int dk_{1}k_{1}^{3}n_{\beta}(k_{1})\left[\frac{2\alpha}{3\pi}\sum_{ \begin{subarray}{c}n_{1}n_{3}\\ n_{2}\neq a\end{subarray}}\frac{\langle\phi_{a}|r_{i}|\phi_{n_{1}}\rangle \langle\phi_{n_{1}}|r^{i}|\phi_{n_{2}}\rangle\langle\phi_{n_{2}}|r_{j}|\phi_ {n_{3}}\rangle\langle\phi_{n_{3}}|r^{j}|\phi_{a}\rangle}{(E_{a}-E_{n_{1}}\pm k _{1})(E_{a}-E_{n_{2}})}\times\right. \tag{54}\] \[\left.\left(\frac{5}{6}(E_{n_{3}}-E_{a})^{3}-(E_{n_{3}}-E_{a})^{3} \log[2|E_{n_{3}}-E_{a}|]\right)\right].\]
As before, the divergent contribution arising in the irreducible part is exactly compensated by a similar term in the irreducible high-energy part of the loop-after-loop diagram.
### High-energy limit of two-loop contribution
In the two-loop combined diagrams we assume that the thermal photon energies are much smaller than the electron's rest mass, and the integrals over the closed thermal photon loop are suppressed by the Planck distribution. This means that the high-energy part is related only to the zero-temperature loop. Following the consideration of one-loop Lamb shift, we consider the two-loop high energy part in a similar way by denoting the finite and zero-temperature photon 4-momenta as \(K_{1}=(k_{10},\mathbf{k}_{1})\) and \(K_{2}=(k_{20},\mathbf{k}_{2})\), respectively.
To renormalize the thermal contribution in the non-relativistic limit and the dipole approximation, we pass to the length form. After this step, we only need to renormalize the zero-temperature (ordinary) QED loop's contribution. We can achieve this by expanding all electron propagators that contain momenta \(K_{2}\) in powers of the Coulomb interaction potential.
Similar to the one-loop case, we consider only the leading-order contributions in \(\alpha Z\) and apply dimensional regularization. Doing this, we can extract an ultraviolet finite contribution after subtracting the mass counter-term that contains only the infrared divergence in the limit \(\varepsilon\to 0\). This divergence is compensated completely by the similar contribution of the low-energy part when the two energy scales are stitched together.
#### iii.2.1 Crossed loops
It is convenient to consider the high-energy part of the crossed loops relying on fully relativistic two-loop equations for energy shifts given in the Feynman gauge. Taking into account that ultraviolet divergences arise only when integrating over the photon momentum in a zero-temperature loop, the finite part can be obtained as in the one-loop case, i.e. expanding the denominators of the corresponding electron propagators by powers of the Coulomb interaction.
Then, performing \(D=4-2\varepsilon\) dimensional integration over the photon loop and subtracting the corresponding mass counter-term, the final result can be expressed via the regularized vertex function \(\Gamma_{\nu}^{\text{R}}\) (see Eq. (25)) with the remaining regular temperature dependent part. Thus, the infrared divergence is contained in \(\Gamma_{\nu}^{\text{R}}\).
Following the above, see also [47; 48], we write
\[\Delta E_{a}^{\text{H,\,CL}}=-e^{4}\left[\int_{C_{1}}\frac{d^{D}K _{1}}{(2\pi)^{D}}\frac{g_{\mu\nu}}{K_{1}^{2}}n_{\beta}(E_{k_{1}})\right]\left[ \int\frac{d^{D}K_{2}}{(2\pi)^{D}}\frac{g_{\lambda\sigma}}{K_{2}^{2}}\right]\times \tag{55}\] \[\left\langle\overline{\psi}_{a}\left|\gamma^{\mu}\frac{1}{\not{p} -\not{K}_{1}-m-\gamma_{0}V}\gamma^{\lambda}\frac{1}{\not{p}-\not{K}_{1}-\not{K} _{2}-m-\gamma_{0}V}\gamma^{\nu}\frac{1}{\not{p}-\not{K}_{2}-m-\gamma_{0}V} \gamma^{\sigma}\right|\psi_{a}\right\rangle-\delta m^{\text{CL}}.\]
Decomposition of the second and third electron propagators containing the momentum of the high-energy photon \(K_{2}\) and the accounting of the counter-term \(\delta m^{\text{CL}}\), see [49], yield
\[\Delta E_{a}^{\text{H,\,CL}}=\left[-\text{i}e^{2}\int_{C_{1}} \frac{d^{D}K_{1}}{(2\pi)^{D}}\frac{g_{\mu\nu}}{K_{1}^{2}}n_{\beta}(E_{k_{1}})\right] \tag{56}\] \[\times\left\langle\overline{\psi}_{a}\left|\gamma^{\mu}\frac{1}{ \not{p}-\not{K}_{1}-m-\gamma_{0}V}\Gamma_{\nu}^{\text{R}}(p-K_{1},p)\right| \psi_{a}\right\rangle.\]
According to the definition of form-factor Eq. (22), it is easy to find that the square of the transferred momentum \(q^{2}\) in the regularised vertex function is \(q^{2}=(p-K_{1}-p)^{2}=K_{1}^{2}\). As in the case of one-loop self-energy the arguments in the electronic form-factors \(F_{1,\,2}\) included in the regularized vertex function can still be represented as an expansions in powers of \(q^{2}\) given by Eqs. (26) and (27). Taking this into account, passing to the Coulomb gauge and coordinate representation, within the nonrelativistic limit in Eq. (56), we arrive at
\[\Delta E_{a}^{\rm H,\,CL}=e^{2}\sum_{\pm}\int\frac{d^{d}k_{1}}{(2\pi)^{d}2k_{1}} \frac{d-1}{d}n_{\beta}(k_{1})\left\langle\phi_{a}\left|p^{i}\frac{1}{E_{a}-H_{S} \pm k_{1}}\left[p_{i}\left(-\frac{\alpha}{6\pi\varepsilon}\Delta\right)+\frac{ \alpha}{8\pi}\gamma_{0}\sigma_{ij}\nabla^{j}\right]\right|\phi_{a}\right\rangle. \tag{57}\]
Here we used that the operator \(q^{2}\) in the form-factors \(F_{1,2}\) acts on a time-independent wave function in the coordinate representation and \(q_{\mu}\rightarrow-{\rm i}\partial_{\mu}\).
Then, for \(\sigma_{ij}=\varepsilon_{ijk}\Sigma^{k}\)[32], where \(\varepsilon_{ijk}\) is the Levi-Civita antisymmetric tensor and \(\Sigma^{k}\) is a component of the Dirac matrix \(\mathbf{\Sigma}\), by introducing the spin operator \(\mathbf{s}=\frac{\mathbf{\sigma}}{2}\), one can obtain
\[\Delta E_{a}^{\rm H,\,CL}=\frac{2\alpha}{3\pi}\sum_{\pm}\int \limits_{0}^{\infty}dk_{1}k_{1}n_{\beta}(k_{1})\times \tag{58}\] \[\left\langle\phi_{a}\left|p^{i}\frac{1}{E_{a}-H_{S}\pm k_{1}} \left[p_{i}\left(-\frac{\alpha\Delta}{6\pi\varepsilon}\right)-\frac{\alpha}{ 4\pi}(\mathbf{s}\times\mathbf{p})_{i}\right]\right|\phi_{a}\right\rangle.\]
Going to the basis set representation and length form in the matrix elements, the finite high-energy part of CL contribution can be reduced to
\[\Delta E_{a}^{\rm H,\,CL}=\frac{2\alpha}{3\pi}\sum_{\pm}\int \limits_{0}^{\infty}k_{1}^{3}n_{\beta}(k_{1})\left(\frac{\alpha}{4\pi}\right) \tag{59}\] \[\times\sum_{n_{1}}\frac{\langle\phi_{a}|r^{i}|\phi_{n_{1}} \rangle\langle\phi_{n_{1}}|(\mathbf{r}\times\mathbf{s})_{i}|\phi_{a}\rangle}{E_{a}-E_{ n_{1}}\pm k_{1}}.\]
The divergent term in Eq. (58) transforms to
\[\frac{2\alpha}{3\pi}\sum_{\pm}\int\limits_{0}^{\infty}k_{1}^{3}n _{\beta}(k_{1})\left(-\frac{\alpha}{6\pi\varepsilon}\right) \tag{60}\] \[\times\sum_{n_{1}}\frac{\langle\phi_{a}|r^{i}|\phi_{n_{1}} \rangle\langle\phi_{n_{1}}|r_{i}|\phi_{n_{2}}\rangle\langle\phi_{n_{2}}| \Delta|\phi_{n_{a}}\rangle}{E_{a}-E_{n_{1}}\pm k_{1}}.\]
This term compensates exactly Eq. (47).
#### iii.2.2 Vacuum loop inside thermal loop
In this section, we consider the two-loop diagram illustrated in Fig. 4 (b), which represents the case of the ordinary self-energy loop inside the thermal one. The high-energy contribution to the energy shift can be obtained as in [47]. For this purpose, we start with the original expression for the energy shift, and use the dimensional regularization.
In the Feynman gauge we have
\[\Delta E_{a}^{\rm H,\,ViT}=-e^{4}\left[\int_{C_{1}}\frac{d^{D}k_{1 }}{(2\pi)^{D}}\frac{g_{\mu\nu}}{K_{1}^{2}}n_{\beta}(E_{k_{1}})\right]\left[ \int\frac{d^{D}K_{2}}{(2\pi)^{D}}\frac{g_{\lambda\sigma}}{K_{2}^{2}}\right] \tag{61}\] \[\times\left\langle\overline{\psi}_{a}\left|\gamma^{\mu}\frac{1}{ \not{p}-\not{K}_{1}-m-\gamma_{0}V}\gamma^{\lambda}\frac{1}{\not{p}-\not{K}_{1 }-\not{K}_{2}-m-\gamma_{0}V}\gamma^{\nu}\frac{1}{\not{p}-\not{K}_{1}-m-\gamma_ {0}V}\gamma^{\sigma}\right|\psi_{a}\right\rangle-\delta m^{\rm ViT},\]
where \(\delta m^{\rm ViT}\) gives the mass counter-term.
The decomposition of the middle electron propagator in Eq. (61), containing the momentum of the high-energy photon \(K_{2}\), by powers of interaction with the external Coulomb field, leads to
\[\Delta E_{a}^{\rm H,\,ViT}=\left[-{\rm i}e^{2}\int_{C_{1}}\frac{d ^{D}K_{1}}{(2\pi)^{D}}\frac{g_{\mu\nu}}{K_{1}^{2}}n_{\beta}(E_{k_{1}})\right] \left[\left\langle\overline{\psi}_{a}\left|\gamma^{\mu}\frac{1}{\not{p}-\not{K}_ {1}-m-\gamma_{0}V}\left[\Sigma(p^{\prime})\right]\frac{1}{\not{p}-\not{K}_{1}-m -\gamma_{0}V}\gamma^{\nu}\right|\psi_{a}\right\rangle\right] \tag{62}\] \[+\left[-{\rm i}e^{2}\int_{C_{1}}\frac{d^{D}K_{1}}{(2\pi)^{D}} \frac{g_{\mu\nu}}{K_{1}^{2}}n_{\beta}(E_{k_{1}})\right]\left\langle\overline{ \psi}_{a}\left|\gamma^{\mu}\frac{1}{\not{p}-\not{K}_{1}-m-\gamma_{0}V}\left[ \Gamma_{0}(p^{\prime},p^{\prime})V\right]\frac{1}{\not{p}-\not{K}_{1}-m-\gamma_ {0}V}\gamma^{\nu}\right|\psi_{a}\right\rangle-\delta m^{\rm ViT},\]
where \(p^{\prime}=p-K_{1}\) and \(\Gamma_{\mu}(p^{\prime},p)\) is given by Eq. (22). Then, applying the mass renormalization procedure,
\[\Delta E_{a}^{\rm H,\,ViT}=-{\rm i}e^{2}\int_{C_{1}}\frac{d^{D}K_{1}}{(2\pi)^{D}} \frac{g_{\mu\nu}}{K_{1}^{2}}n_{\beta}(E_{k_{1}})\left\langle\overline{\psi}_{a} \left|\gamma^{\mu}\frac{1}{\not{p}-\not{K}_{1}-m-\gamma_{0}V}\right.\left.\left[ \Gamma_{0}^{\rm R}(p^{\prime},p^{\prime})V\right]\right.\left.\frac{1}{\not{p} -\not{K}_{1}-m-\gamma_{0}V}\gamma^{\nu}\right|\psi_{a}\right\rangle. \tag{63}\]
To identify the divergent contribution with similar one in the low energy part we turn to the Coulomb gauge and nonrelativistic limit in the remaining matrix elements of Eq. (63). As in the case of the one thermal loop, the integration over \(k_{10}\) does not involve the poles of the electron propagators. Then,
\[\Delta E_{a}^{\rm H,\,ViT}=e^{2}\sum_{\pm}\int\frac{d^{d}k_{1}}{( 2\pi)^{d}2k_{1}}\frac{d-1}{d}n_{\beta}(k_{1}) \tag{64}\] \[\times\left\langle\phi_{a}\left|p_{i}\frac{1}{E_{a}-H_{S}\pm k_{ 1}}\right.\left.\left[\Gamma_{0}^{\rm R}V\right]\right.\frac{1}{E_{a}-H_{S} \pm k_{1}}p_{i}\right|\phi_{a}\right\rangle.\]
In the basis set representation and length form:
\[\Delta E_{a}^{\rm H,\,ViT}=\frac{2\alpha}{3\pi}\int\limits_{0}^{ \infty}dk_{1}k_{1}^{3}n_{\beta}(k_{1})\sum_{\pm}\sum_{n_{1}n_{2}} \tag{65}\] \[\times\frac{\langle\phi_{a}|r_{i}|\phi_{n_{1}}\rangle\langle\phi_ {n_{1}}|\Gamma_{0}^{\rm R}V|\phi_{n_{3}}\rangle_{\rm reg}\langle\phi_{n_{3}}|r _{i}|\phi_{a}\rangle}{(E_{a}-E_{n_{1}}\pm k_{1})(E_{a}-E_{n_{3}}\pm k_{1})},\]
where the matrix element of the operator \(\Gamma_{0}^{\rm R}V\) is given by Eq. (32). The singular term omitted in Eq. (65) is
\[\frac{2\alpha}{3\pi}\int\limits_{0}^{\infty}dk_{1}k_{1}^{3}n_{ \beta}(k_{1})\sum_{\pm}\sum_{n_{1}n_{2}}\left(-\frac{\alpha}{6\pi\varepsilon}\right) \tag{66}\] \[\times\frac{\langle\phi_{a}|r_{i}|\phi_{n_{1}}\rangle\langle\phi_ {n_{1}}|\Delta V|\phi_{n_{3}}\rangle\langle\phi_{n_{3}}|r_{i}|\phi_{a}\rangle} {(E_{a}-E_{n_{1}}\pm k_{1})(E_{a}-E_{n_{3}}\pm k_{1})},\]
and is exactly reduced by the divergent term in the low-energy part of the ViT contribution.
#### iii.2.3 Vacuum loop over thermal loop
In a similar way, for the VoT contribution in the Feynman gauge, one can write
\[\Delta E_{a}^{\rm H,\,VoT}=-e^{4}\left[\int_{C_{1}}\frac{d^{D}k_{ 1}}{(2\pi)^{D}}\frac{g_{\mu\nu}}{K_{1}^{2}}n_{\beta}(E_{k_{1}})\right]\left[ \int\frac{d^{D}K_{2}}{(2\pi)^{D}}\frac{g_{\lambda\sigma}}{K_{2}^{2}}\right] \tag{67}\] \[\times\left\langle\overline{\psi}_{a}\left|\gamma^{\mu}\frac{1}{ \not{p}-\not{K}_{2}-m-\gamma_{0}V}\gamma^{\lambda}\frac{1}{\not{p}-\not{K}_{ 1}-\not{K}_{2}-m-\gamma_{0}V}\gamma^{\sigma}\frac{1}{\not{p}-\not{K}_{2}-m- \gamma_{0}V}\gamma^{\sigma}\right|\psi_{a}\right\rangle-\delta m^{\rm VoT}\]
with \(\delta m^{\rm VoT}\) representing the mass counter-term.
#### iii.2.4 Loop after loop
As mentioned above, the contribution of two-loop LaL diagrams (see graphs (c) and (d) in Fig. 4) can be partitioned into irreducible and reducible parts:
\[\Delta E_{a}^{\rm H,\,LaL}=\Delta E_{a,{\rm irr}}^{\rm H,\,LaL}+\Delta E_{a,{ \rm red}}^{\rm H,\,LaL}-\delta m^{\rm LaL}, \tag{68}\]
\[\Delta E_{a,{\rm irr}}^{\rm LaL}=\left[e^{2}\int_{C_{1}}\frac{d^{ D}K_{1}}{(2\pi)^{D}}\frac{g_{\mu\lambda}}{K_{1}^{2}}n_{\beta}(E_{k_{1}})\right]\times \tag{69}\] \[\left[e^{2}\int\frac{d^{D}K_{2}}{(2\pi)^{D}}\frac{g_{\nu\sigma}}{ K_{2}^{2}}\right]\left\langle\overline{\psi}_{a}\left|\gamma^{\mu}\frac{1}{ \not{p}-\not{K}_{1}-m-\gamma_{0}V}\gamma^{\lambda}\times\right.\right.\] \[\left.\left.\frac{1}{(\not{p}-m-\gamma_{0}V)}\gamma^{\nu}\frac{1}{ \not{p}-\not{K}_{2}-m-\gamma_{0}V}\gamma^{\sigma}\right|\psi_{a}\right\rangle,\]
\[\Delta E^{\text{LaL}}_{a,\text{red}}\equiv\Delta E^{\text{H, LaL}}_{a,\text{red}\,1}+\Delta E^{\text{H, LaL}}_{a,\text{red}\,2}= \tag{70}\] \[-\frac{1}{2}\left[e^{2}\int_{C_{1}}\frac{d^{D}K_{1}}{(2\pi)^{D}} \frac{g_{\mu\lambda}}{K_{1}^{2}}n_{\beta}(E_{k_{1}})\right]\left[e^{2}\int\frac {d^{D}K_{2}}{(2\pi)^{D}}\frac{g_{\nu\sigma}}{K_{2}^{2}}\right]\] \[\qquad\qquad\times\left(\left\langle\overline{\psi}_{a}\left| \gamma^{\mu}\frac{1}{\not{p}-\not{K}_{1}-m-\gamma_{0}V}\gamma^{\lambda}\right| \psi_{a}\right\rangle\right.\] \[\qquad\qquad\left.\times\frac{\partial}{\partial E_{a}}\left\langle \overline{\psi}_{a}\left|\gamma^{\nu}\frac{1}{\not{p}-\not{K}_{2}-m-\gamma_{0 }V}\gamma^{\sigma}\right|\psi_{a}\right\rangle\right.\] \[\qquad\qquad\left.+\left\langle\overline{\psi}_{a}\left|\gamma^{ \nu}\frac{1}{\not{p}-\not{K}_{2}-m-\gamma_{0}V}\gamma^{\sigma}\right|\psi_{a }\right\rangle\right.\] \[\times\left.\left.\frac{\partial}{\partial E_{a}}\left\langle \overline{\psi}_{a}\left|\gamma^{\mu}\frac{1}{\not{p}-\not{K}_{1}-m-\gamma_{0 }V}\gamma^{\lambda}\right|\psi_{a}\right\rangle\right).\]
In complete analogy with the evaluation of CL, ViT and VoT, the finite part of the irreducible high energy LaL contribution reduces to
\[\Delta E^{\text{H, LaL}}_{a,\text{irr}}=\frac{2\alpha}{3\pi}\int \limits_{0}^{\infty}dk_{1}k_{1}^{3}n_{\beta}(k_{1})\sum_{\pm}\sum_{n_{1}\atop n _{2}\not{\equiv}a} \tag{71}\] \[\times\frac{\langle\phi_{a}|r_{i}|n_{1}\rangle\langle\phi_{n_{1} }|r_{i}|\phi_{n_{2}}\rangle\langle\phi_{n_{2}}|\Gamma^{\text{R}}_{0}V|\phi_{a }\rangle_{\text{reg}}}{(E_{a}-E_{n_{1}}\pm k_{1})(E_{a}-E_{n_{2}})}.\]
As can be seen from Eq. (70), the reducible part of loop-after-loop diagrams is represented by the product of two diagonal matrix elements of finite temperature SE and energy derivative of ordinary SE operator and vice versa. Therefore, to calculate this contribution, one can directly use the regularized one-loop results for low and high energy parts \(\Delta E^{\text{L+H, LaL}}_{a}\). Then, according to sections II and III, the reducible part of loop-after-loop diagrams is given by
\[\Delta E^{\text{L+H, LaL}}_{a,\text{red}\,1}=\frac{1}{2}\left[\frac{2\alpha}{3 \pi}\sum_{\pm}\sum_{n_{1}}\int\limits_{0}^{\infty}dk_{1}\,k_{1}^{3}\,n_{\beta}( k_{1})\frac{\langle\phi_{a}|r_{i}|\phi_{n_{1}}\rangle\langle\phi_{n_{1}}|r^{i}| \phi_{a}\rangle}{E_{a}-E_{n_{1}}\pm k_{1}}\right]\frac{\partial}{\partial E_{a }}\left[\frac{4\alpha(\alpha Z)^{4}}{3\pi n_{a}^{3}}\log\beta_{a}\right], \tag{72}\]
\[\Delta E^{\text{L+H, LaL}}_{a,\text{red}\,2}=-\frac{1}{2}\left[ \frac{4\alpha(\alpha Z)^{4}}{3\pi n_{a}^{3}}\left[\left(\frac{5}{6}-2\log( \alpha Z)\right)\delta_{l_{a}0}-\log\beta_{a}\right]-\frac{\alpha(Z\alpha)^{4} }{2\pi n_{a}^{3}}\frac{\left(j_{a}(j_{a}+1)-l_{a}(l_{a}+1)-\frac{3}{4}\right) }{l_{a}(l_{a}+1)(2l_{a}+1)}\right] \tag{73}\] \[\times\left[\frac{2\alpha}{3\pi}\sum_{\pm}\sum_{n_{3}}\int \limits_{0}^{\infty}dk_{1}\,k_{1}^{3}\,n_{\beta}(k_{1})\frac{\langle\phi_{a}|r_ {i}|\phi_{n_{3}}\rangle\langle\phi_{n_{3}}|r^{i}|\phi_{a}\rangle}{(E_{a}-E_{n _{3}}\pm k_{1})^{2}}\right].\]
The final formulas obtained for the combined two-loop diagrams are given by the expressions Eqs. (46), (49), (54), (59), (64) and (71)-(73), which can be further evaluated numerically. To perform a triple summation over the entire spectrum of intermediate states, we further use the B-splines method [39], which restores solutions of the Schrodinger equation by replacing the entire spectrum of states onto a finite number of pseudo-states.
## V Results and conclusions
In this work, we focus exclusively on calculating the combined two-loop self-energy corrections for the low-lying \(s\)-states in the hydrogen atom. The combination of two-loop diagrams is represented by one thermal and one loop pertaining to the zero temperature case, see Fig. 4. The final results are given by the expressions in Eqs. (46), (49), (54), (59), (64) and (71)-(73). As it was found, the high-energy contribution vanishes upon integrating over the angles in the relevant matrix elements, just as it does in the case of one-loop self-energy correction. Hence, for \(s\)-states, all computations reduce to a numerical calculation of the low-energy part only.
The primary challenge in performing numerical calculations of higher-order QED corrections lies in computing sums over intermediate states. We utilize B-splines method to approximate the solutions of the Schrodinger equation for the hydrogen atom and generate an appropriate basis set. This enables us to replace the sums over the intermediate states, which include the continuum spectrum, with sums over the discrete set of pseudo-states. We are interested in calculating two-loop radiative corrections for the ground \(1s\) and metastable \(2s\) states in the hydrogen atom due to their particular importance in modern spectroscopic experiments, see, e.g., [50; 51; 52]. Thus, we generated basis sets on \(ns\), \(np\), and \(nd\) intermediate states, each of which is \(n\leq 40\) long, letting us verify convergence and guarantee the digits presented in all obtained results.
Table 1 provides the numerical values of the combined two-loop self-energy corrections for two different temperatures, namely \(T=300\) and \(T=1000\) K. It is worth noting that the magnitude of the calculated corrections aligns with the rough estimates presented in the recent study [53]. These corrections are also comparable to the
thermal corrections to the Breit interactions discussed earlier in [23].
As can be seen from Table 1, the largest contribution comes from the reducible part of the loop-after-loop diagram, it is the correction defined by Eq. (73). Fortunately, the latter is the easiest expression to calculate among all those we have considered in this work. The value of \(\Delta E_{\text{a,red}\,2}^{\text{LaL}}\), which reaches \(10^{-3}\) level in order of magnitude at \(T=300\) K, demonstrates the role of the effect considered here. It can be straightforwardly compared with the well-known Stark shift caused by the blackbody radiation [5]. For the hydrogen atom in the BBR environment at 300 K, the AC-Stark shift is about 0.04 Hz and 1 Hz for \(1s\) and \(2s\) states, respectively, see [5; 20; 45]. The currently achieved accuracy of 10 Hz [52] of the \(1s-2s\) transition frequency measurement permits us to expect experimental observation of such thermal effects in the near future.
Table 1 also shows the values of two-loop self-energy corrections calculated at a temperature of 1000 K. A direct comparison of numerical results for two different temperatures leads to the conclusion that the parametric estimates on temperature given, for example, in [53], are hardly applicable to the two-loop contributions. In contrast to the one-loop contribution, this follows from the integration over the frequencies involving the Planck distribution function. For example, the correction \(\Delta E_{a,\text{red}\,2}^{\text{LaL}}\) does not scale as \(T^{4}\) or similar. Thus, numerical calculations should be performed to obtain an accurate value for each specific temperature.
Along with the fundamental generalization of the TQED theory for bound states to the two-loop level presented in this paper, one can draw attention to the following. The corrections given by expressions Eqs. (46), (49), (54), (59), (64) and (71)-(73) can be considered in the context of precision experiments carried out with systems relating to atomic clocks [54; 55]. Roughly estimating the two-loop shift in such systems, as in the hydrogen atom, one finds that it contributes at the level of dynamical corrections to the AC-Stark effect, i.e., it produces an additional frequency shift at the level of \(10^{-18}\) of relative magnitude. Thus, although representing a separate problem, these effects are of particular importance for atomic clocks, being at least within (or close to) the experimental error.
## VI Acknowledgements
This work was supported by grant MK-4796.2022.1.2. Evaluation of high energy contributions (section IV.2) was supported by the Russian Science Foundation under grant No. 22-12-00043.
## Appendix A Dimensional regularization of loop integrals
This Appendix contains the results of evaluating \(d\)-dimensional integrals over photon momentum in one- and two-loop contributions. The first step involves performing spatial integration. In \(d=3-2\varepsilon\) dimensions this can be accomplished using Eq. (9) provided in the main text. The remaining integral along the real half-axis can be calculated by applying Cauchy theorem, and then the result can be expanded into a Taylor series of \(\varepsilon\) powers.
The following integral is involved in calculating the one-loop self-energy correction:
\[I_{1} =e^{2}\int\frac{d^{d}k}{(2\pi)^{d}2k}\left(\delta_{ij}-\frac{k_{ i}k_{j}}{\mathbf{k}^{2}}\right)p^{i}\frac{1}{E_{a}-H_{S}-k_{2}}p^{j} \tag{101}\] \[=e^{2}\frac{2\pi^{d/2}}{\Gamma(d/2)}\frac{d-1}{d}\frac{1}{2(2\pi )^{d}}\int\limits_{0}^{\infty}k^{d-2}dk\,p_{i}\frac{1}{E_{a}-H_{S}-k}p^{i}\] \[\approx\frac{2\alpha}{3\pi}p_{i}(H_{S}-E_{a})\left(\frac{1}{2 \varepsilon}-\log[2(H_{S}-E_{a})]+\frac{5}{6}-\frac{\gamma_{\text{E}}}{2}\right.\] \[\left.+\frac{1}{2}\log 4\pi\right)p^{i}.\]
As can be seen from this expression, all divergences arising in the loop diagrams are explicitly concatenated with the division by the parameter \(\varepsilon\to 0\). In the spectral representation of electron propagator this integral can be written as
\[I_{1} =\frac{2\alpha}{3\pi}\sum_{n}p_{i}|n\rangle\langle n|p^{i}(E_{n}-E _{a}) \tag{102}\] \[\times\left(\frac{1}{2\varepsilon}-\log[2|E_{n}-E_{a}|]+\frac{5} {6}-\frac{\gamma_{\text{E}}}{2}+\frac{1}{2}\log 4\pi\right).\]
A similar integral can be taken in the length gauge, which gives an additional \(k^{2}\) factor in the numerator of
\begin{table}
\begin{tabular}{l c c} \hline \hline & \multicolumn{2}{c}{\(T=300\) K} \\ \hline & \multicolumn{2}{c}{\(1s\)} & \(2s\) \\ \hline \(\Delta E_{a,\text{red}\,2}^{\text{CL}}\) & \(-1.59\times 10^{-7}\) & \(-2.13\times 10^{-6}\) \\ \(\Delta E_{a}^{\text{V},\text{T}}\) & \(1.57\times 10^{-7}\) & \(3.00\times 10^{-5}\) \\ \(\Delta E_{a,\text{T}}^{\text{V},\text{T}}\) & \(-1.28\times 10^{-4}\) & \(-9.69\times 10^{-5}\) \\ \(\Delta E_{a,\text{red}\,2}^{\text{LaL}}\) & \(-1.37\times 10^{-7}\) & \(-4.11\times 10^{-6}\) \\ \(\Delta E_{a,\text{red}\,1}^{\text{LaL}}\) & \(3.49\times 10^{-8}\) & \(3.50\times 10^{-7}\) \\ \(\Delta E_{a}^{\text{LaL},\text{red}\,2}\) & \(-5.87\times 10^{-8}\) & \(-3.53\times 10^{-3}\) \\ \(\Delta E_{a}^{\text{total}}\) & \(\mathbf{1.28\times 10^{-4}}\) & \(\mathbf{3.53\times 10^{-3}}\) \\ \hline \hline & \multicolumn{2}{c}{\(T=1000\) K} \\ \hline & \multicolumn{2}{c}{\(1s\)} & \(2s\) \\ \hline \(\Delta E_{a}^{\text{CL}}\) & \(-1.97\times 10^{-5}\) & \(-2.64\times 10^{-4}\) \\ \(\Delta E_{a}^{\text{V},\text{T}}\) & \(1.96\times 10^{-5}\) & \(7.46\times 10^{-4}\) \\ \(\Delta E_{a,\text{L}}^{\text{VoT}}\) & \(-1.17\times 10^{-2}\) & \(-9.13\times 10^{-3}\) \\ \(\Delta E_{a,\text{lr}\,1}^{\text{LaL}}\) & \(-1.69\times 10^{-5}\) & \(-5.26\times 10^{-4}\) \\ \(\Delta E_{a,\text{lr}\,1}^{\text{LaL},1}\) & \(4.31\times 10^{-6}\) & \(4.46\times 10^{-5}\) \\ \(\Delta E_{a,\text{red}\,2}^{\text{LaL},1}\) & \(-7.27\times 10^{-6}\) & \(-3.93\times 10^{-2}\) \\ \(\Delta E_{a}^{\text{total}}\) & \(\mathbf{-1.17\times 10^{-2}}\) & \(\mathbf{-4.84\times 10^{-2}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Combined two-loop self-energy corrections for the \(1s\) and \(2s\) states of hydrogen atom at different temperatures (in Kelvin). All values are in Hz.
the integrand in the Eq. (16):
\[I_{1}=e^{2}\int\frac{d^{d}k\,k^{2}}{(2\pi)^{d}2k}\left(\delta_{ij}- \frac{k_{i}k_{j}}{\mathbf{k}^{2}}\right)r^{i}\frac{1}{E_{a}-H_{S}-k_{2}}r^{j} \tag{17}\] \[=e^{2}\frac{2\pi^{d/2}}{\Gamma(d/2)}\frac{d-1}{d}\frac{1}{2(2\pi)^ {d}}\int\limits_{0}^{\infty}k^{d}dk\,r_{i}\frac{1}{E_{a}-H_{S}-k_{2}}r^{i}\] \[\approx\frac{2\alpha}{3\pi}r_{i}(H_{S}-E_{a})^{3}\left(\frac{1}{2 \varepsilon}-\log[2(H_{S}-E_{a})]+\frac{5}{6}\right.\] \[\left.-\frac{\gamma_{\rm E}}{2}+\frac{1}{2}\log 4\pi\right)r^{i}.\]
In the spectral representation of electron propagator one can obtain
\[I_{1}=\frac{2\alpha}{3\pi}\sum_{n}r_{i}|n\rangle\langle n|r^{i}( E_{n}-E_{a})^{3} \tag{18}\] \[\times\left(\frac{1}{2\varepsilon}-\log[2|E_{n}-E_{a}|]+\frac{5} {6}-\frac{\gamma_{\rm E}}{2}+\frac{1}{2}\log 4\pi\right).\]
The integral arising in the reducible part of loop-after-loop diagrams can be easily evaluated as
\[I_{2}=-\frac{\partial}{\partial E_{a}}I_{1}=e^{2}\int\frac{d^{d}k \,k^{2}}{(2\pi)^{d}2k}\left(\delta_{ij}-\frac{k_{i}k_{j}}{\mathbf{k}^{2}}\right) \tag{19}\] \[\times r^{i}\frac{1}{(E_{a}-H_{S}-k)^{2}}r^{j}\approx\frac{2 \alpha}{3\pi}r_{i}(3(H_{S}-E_{a})^{2})\] \[\left(\frac{1}{2\varepsilon}-\log[2(H_{S}-E_{a})]+\frac{1}{2}- \frac{\gamma_{\rm E}}{2}+\frac{1}{2}\log 4\pi\right)r^{i},\]
and similarly for spectral representation
\[I_{2}=\frac{2\alpha}{3\pi}\sum_{n}r_{i}|n\rangle\langle n|r^{i}( 3(E_{n}-E_{a})^{2}) \tag{20}\] \[\times\left(\frac{1}{2\varepsilon}-\log[2|E_{n}-E_{a}|]+\frac{1} {2}-\frac{\gamma_{\rm E}}{2}+\frac{1}{2}\log 4\pi\right).\]
The corresponding integral arising for crossed loops diagrams in the spectral representation of electron propagators can be calculated found in the form:
\[I_{3}=e^{2}\int\frac{d^{d}k_{2}\,k_{2}^{2}}{(2\pi)^{d}2k_{2}} \left(\delta_{ij}-\frac{k_{i}k_{j}}{\mathbf{k}^{2}}\right)r^{i}\frac{1}{(E_{a}-H_ {S}\pm k_{1}-k_{2})}r^{k}\frac{1}{(E_{a}-H_{S}-k_{2})}r^{j}= \tag{21}\] \[\frac{2\alpha}{3\pi}\sum_{\pm}\sum_{n_{2}n_{3}}r_{i}|\phi_{n_{2} }\rangle\langle\phi_{n_{2}}|r^{k}|\phi_{n_{3}}\rangle\langle\phi_{n_{3}}|r_{i} \left[\left((E_{a}-E_{n_{2}}\pm k_{1})^{2}+(E_{a}-E_{n_{2}}\pm k_{1})(E_{a}-E_ {n_{3}})+(E_{a}-E_{n_{3}})^{2}\right)\times\right.\] \[\left.\left(\frac{1}{2\varepsilon}+\frac{5}{6}-\frac{\gamma_{\rm E }}{2}+\frac{1}{2}\log 4\pi\right)-\frac{(E_{a}-E_{n_{2}}\pm k_{1})^{3}\log[2|E_{a}-E_ {n_{3}}|]}{E_{n_{3}}-E_{n_{2}}\pm k_{1}}+\frac{(E_{a}-E_{n_{3}})^{3}\log[2|E_{ a}-E_{n_{3}}|]}{E_{n_{3}}-E_{n_{2}}\pm k_{1}}\right].\]
\[I_{4}=e^{2}\int\frac{d^{d}k_{2}\,k_{2}^{2}}{(2\pi)^{d}2k_{2}} \left(\delta_{ij}-\frac{k_{i}k_{j}}{\mathbf{k}^{2}}\right)r^{i}\frac{1}{(E_{a}-H_ {S}-k_{2})}r^{k}\frac{1}{(E_{a}-H_{S}\pm k_{1}-k_{2})}r^{l}\frac{1}{(E_{a}-H_{S }-k_{2})}r^{j} \tag{22}\] \[=\frac{2\alpha}{3\pi}\sum_{\pm}\sum_{n_{1}n_{2}n_{3}}r_{i}|\phi_ {n_{1}}\rangle\langle\phi_{n_{2}}|r^{k}|\phi_{n_{2}}\rangle\langle\phi_{n_{2}}| r^{l}|\phi_{n_{3}}\rangle\langle\phi_{n_{3}}|r_{i}\] \[\times\left[(3E_{a}-E_{n_{1}}-E_{n_{2}}-E_{n_{3}}\pm k_{1})\left( \frac{(E_{a}-E_{n_{1}})^{3}\log[2|E_{a}-E_{n_{1}}|]-(E_{a}-E_{n_{2}}\pm k_{1})^ {3}\log[2|E_{a}-E_{n_{2}}\pm k_{1}]}{(E_{n_{1}}-E_{n_{2}}\pm k_{1})(-E_{n_{2}}+E _{n_{3}}\pm k_{1})(3E_{a}-E_{n_{1}}-E_{n_{2}}-E_{n_{3}}\pm k_{1})}\right.\] \[\left.-\frac{(E_{a}-E_{n_{1}})^{3}\log[2|E_{a}-E_{n_{1}}|]-(E_{a}- E_{n_{3}})^{3}\log[2|E_{a}-E_{n_{3}}|]}{(E_{n_{1}}-E_{n_{3}})(-E_{n_{2}}+E_{n_{3}}\pm k_{1})(3E _{a}-E_{n_{1}}-E_{n_{2}}-E_{n_{3}}\pm k_{1})}+\frac{1}{2\varepsilon}-\frac{ \gamma_{\rm E}}{2}+\frac{5}{6}+\frac{1}{2}\log(4\pi)\right)\right].\]
## Appendix B Relations between matrix elements in velocity and length form
In order to represent the arising expressions in a form convenient for numerical calculations, the transformation of matrix elements in Eq. (38) to the length gauge should be performed. With the use of commutation relations
\[p_{i}={\rm i}[H_{S},r_{i}]={\rm i}[H_{S}-E+k,r_{i}], \tag{23}\]
it is easy to show that the following equality is valid
\[\left\langle\phi_{n^{\prime}}\left|p_{i}\frac{1}{E_{n^{\prime}}-H_{S }-k}p_{j}\right|\phi_{n}\right\rangle= \tag{100}\] \[-k(E_{n^{\prime}}-E_{n}-k)\left\langle\phi_{n^{\prime}}\left|r_{i} \frac{1}{E_{n^{\prime}}-H_{S}-k}r_{j}\right|\phi_{n}\right\rangle\] \[+(k-\frac{1}{2}(E_{n^{\prime}}-E_{n}))\left\langle\phi_{n^{\prime }}\left|r_{i}r_{j}\right|\phi_{n}\right\rangle+\frac{3}{2}\delta_{n^{\prime}n}.\]
In the case of \(n=n^{\prime}=a\), Eq. (100) turns to
\[\sum_{\pm}\left\langle\phi_{a}\left|p_{i}\frac{1}{E_{a}-H_{S}\pm k }p_{i}\right|\phi_{a}\right\rangle \tag{101}\] \[=k^{2}\sum_{\pm}\left\langle\phi_{a}\left|r_{i}\frac{1}{E_{a}-H_ {S}\pm k}r_{i}\right|\phi_{a}\right\rangle+3.\]
## Appendix C Evaluation of matrix elements and commutation relations
Cyclic component of vector product of position and spin operators in the matrix elements of Eq. (59) can be written in terms of tensor product as follows
\[(\mathbf{r}\times\mathbf{s})_{1q}=-\mathrm{i}\sqrt{2}\{s_{1}\otimes r_{1}\}_{1q} \tag{102}\]
Then with the use of standard technique for the evaluation of matrix elements we find [56]
\[\langle n^{\prime}l^{\prime}s^{\prime}j^{\prime}m^{\prime}|(\mathbf{r }\times\mathbf{s})_{1q}|nlsjm\rangle= \tag{103}\] \[-\mathrm{i}\sqrt{6}(-1)^{j^{\prime}-m^{\prime}}\sqrt{(2j^{\prime }+1)(2j+1)}\times\] \[\begin{pmatrix}j^{\prime}&1&j\\ -m^{\prime}&q&m\end{pmatrix}\begin{\{}1&1&1\\ l^{\prime}&s^{\prime}&j^{\prime}\\ l&s&j\end{pmatrix}\langle n^{\prime}l^{\prime}||r||nl\rangle\langle s^{\prime} ||s||s\rangle,\]
where the reduced matrix elements of coordinate and spin operators are given by the equations
\[\langle n^{\prime}l^{\prime}||r||nl\rangle=(-1)^{l^{\prime}}\sqrt{ (2l^{\prime}+1)(2l+1)} \tag{104}\] \[\times\begin{pmatrix}l^{\prime}&1&l\\ 0&0&0\end{pmatrix}\int\limits_{0}^{\infty}drr^{3}R_{n^{\prime}l^{\prime}}(r)R _{nl}(r),\]
\[\langle s^{\prime}||s||s\rangle=\delta_{s^{\prime}s}\sqrt{s(s+1)(2s+1)}, \tag{105}\]
respectively, and \(R_{nl}(r)\) is a solution of radial part of Schrodinger equation for the hydrogen-like atom.
The matrix element of spin-orbit interaction can be evaluated in a similar manner as follows
\[\langle n^{\prime}l^{\prime}s^{\prime}j^{\prime}m^{\prime}|r^{-3}( \mathbf{l}\times\mathbf{s})_{1q}|nlsjm\rangle= \tag{106}\] \[-\mathrm{i}\sqrt{6}(-1)^{j^{\prime}-m^{\prime}}\sqrt{(2j^{\prime}+ 1)(2j+1)}\times\] \[\begin{pmatrix}j^{\prime}&1&j\\ -m^{\prime}&q&m\end{pmatrix}\begin{\{}1&1&1\\ l^{\prime}&s^{\prime}&j^{\prime}\\ l&s&j\end{\}}\langle n^{\prime}l^{\prime}||r^{-3}||nl\rangle\langle s^{\prime} ||s||s\rangle,\]
where
\[\langle n^{\prime}l^{\prime}||r^{-3}||nl\rangle=\delta_{l^{\prime} l}\sqrt{l(l+1)(2l+1)} \tag{107}\] \[\times\int\limits_{0}^{\infty}drr^{2}R_{n^{\prime}l^{\prime}}(r) \left(\frac{1}{r^{3}}\right)R_{nl}(r)\]
Likewise, we find
\[\langle n^{\prime}l^{\prime}s^{\prime}j^{\prime}m_{j^{\prime}}|r^{- 3}(\mathbf{l}\cdot\mathbf{s})|nlsjm_{j}\rangle=\delta_{l^{\prime}l}\delta_{s^{\prime}s }\delta_{j^{\prime}j} \tag{108}\] \[\times\delta_{m_{j^{\prime}}m_{j}}(-1)^{j+l+s^{\prime}}\begin{ \{}l^{\prime}&l&1\\ s&s^{\prime}&j\end{\}}\sqrt{l(l+1)(2l+1)}\] \[\times\sqrt{s(s+1)(2s+1)}\int\limits_{0}^{\infty}drr^{2}R_{n^{ \prime}l^{\prime}}(r)\left(\frac{1}{r^{3}}\right)R_{nl}(r).\]
The matrix element of components of position operator can be evaluated as follows
\[\langle n^{\prime}l^{\prime}s^{\prime}j^{\prime}m^{\prime}|r_{1q}| nlsjm\rangle= \tag{109}\] \[\delta_{s^{\prime}s}(-1)^{j+l^{\prime}+s-1}(-1)^{j^{\prime}-m^{ \prime}}\sqrt{(2j^{\prime}+1)(2j+1)}\] \[\times\begin{pmatrix}j^{\prime}&1&j\\ -m^{\prime}&q&m\end{pmatrix}\begin{\{}l&s&j\\ j^{\prime}&1&j\end{\}}\langle n^{\prime}l^{\prime}||r||nl\rangle.\]
For the analytical calculation of the Lamb shift, we also need the commutation relation
\[[p_{i},H]=-\mathrm{i}\nabla_{i}V, \tag{110}\]
which is used in the proof of equality
\[\langle\phi_{n^{\prime}}|p_{i}(H_{S}-E_{a}\pm k)p^{i}|\phi_{n}\rangle \tag{111}\] \[=\frac{1}{2}\langle\phi_{n^{\prime}}|\Delta V|\phi_{n}\rangle+ \left(\frac{1}{2}(E_{n^{\prime}}+E_{n})-E_{a}\pm k\right)\] \[\times\langle\phi_{n^{\prime}}|p^{2}|\phi_{n}\rangle.\]
For the case when \(n^{\prime}=n=a\) and \(k=0\), Eq. (111) turns to Eq. (11) in the main text.
In order to align the algebraic expressions of the divergent contributions in both the low-energy and high-energy regions, we also employ the following commutation relations:
\[[r_{i},p_{j}]=\mathrm{i}\delta_{ij}, \tag{112}\]
\[\langle n^{\prime}|r_{i}|n\rangle\langle E_{n^{\prime}}-E_{n} \rangle^{2}=\langle n^{\prime}|[H,[H,r_{i}]]|n\rangle. \tag{113}\] |
2310.14822 | pyCOFBuilder: A python package for automated creation of Covalent
Organic Framework models based on the reticular approach | Covalent Organic Frameworks (COFs) have gained significant popularity in
recent years due to their unique ability to provide a high surface area and
customizable pore geometry and chemistry. These traits make COFs a highly
promising choice for a range of applications. However, with their vast
potential structures, exploring COFs experimentally can be challenging and
time-consuming, yet it remains an attractive avenue for computational
high-throughput studies. However, generating COF structures can be a
time-consuming and challenging task. To address this challenge, here we
introduce the pyCOFBuilder, an open-source Python package designed to
facilitate the generation of COF structures for computational studies. The
pyCOFBuilder software provides an easy-to-use set of functionalities to
generate COF structures following the reticular approach. In this paper, we
describe the implementation, main features, and capabilities of the
pyCOFBuilder demonstrating its utility for generating COF structures with
varying topologies and chemical properties. pyCOFBuilder is freely available on
GitHub at https://github.com/lipelopesoliveira/pyCOFBuilder. | Felipe Lopes Oliveira, Pierre Mothé Esteves | 2023-10-23T11:41:14Z | http://arxiv.org/abs/2310.14822v2 | # pyCOFBuilder: A python package for automated assembly of Covalent Organic Framework structures
###### Abstract
Covalent Organic Frameworks (COFs) have gained significant popularity in recent years due to their unique ability to provide a high surface area and customizable pore geometry and chemistry. These traits make COFs a highly promising choice for a range of applications. However, with their vast potential structures, exploring COFs experimentally can be challenging and time-consuming, yet it remains an attractive avenue for computational high-throughput studies. However, generating COF structures can be a time-consuming and challenging task. To address this challenge, here we introduce the pyCOFBuilder, an open-source Python package designed to facilitate the generation of COF structures for computational studies. The pyCOFBuilder software provides an easy-to-use set of functionalities to generate COF structures following the reticular approach. In this paper, we describe the implementation, main features, and capabilities of the pyCOFBuilder demonstrating its utility for generating COF structures with varying topologies and chemical properties. pyCOFBuilder is freely available on GitHub at [https://github.com/lipelopesoliveria/pyCOFBuilder](https://github.com/lipelopesoliveria/pyCOFBuilder).
Covalent Organic Framework ; Reticular Chemistry ; CO\({}_{2}\) capture ; Adsorption
## 1 Introduction
The quest for understanding the properties and discovery of new materials has been the subject of intense interest to the scientific community for many years. Although most discoveries have been made through trial and error combined with serendipity, the design of materials at the atomic level and the control over their properties at the nanometric scale has been the quintessential objective of materials scientists.
A class of materials that have attracted significant interest over the past years is the Covalent Organic Frameworks (COFs). COFs are materials with well-defined nanoporous architectures designed in a bottom-up approach by the covalent bonding of one or more organic building blocks by strong covalent bonds, thus forming an extended nanoporous crystalline structure.[1, 2, 3]
The general process for generating a structural model for a COF structure is commonly referred to as the "reticular approach", as depicted on Fig. 1.[4, 5, 6, 7] This method starts by selecting a suitable network topology and breaking it down into its building units. The geometric constraints of the underlying network determine the building block geometry (linear, triangular, squared, etc.) and connectivity (number of points used to build the extended structure). The building blocks are created based on an organic core that defines its size and chemical properties. Next, a suitable set of reacting groups is selected to create the covalent connections between the core units to form the extended reticular structure. Finally, functional groups can be introduced into the organic core, either _pre- or_ _post-_ synthetically, to control the physical-chemical characteristics of the pore surface.
The use of the reticular approach to construct crystalline organic structures provides five different dimensions for the design of materials, i.e. topology, organic core, connection group, functionalization, and supramolecular arrangement, thus allowing the control at the atomic level over the material properties. This makes the COFs attractive for a wide range of applications that include gas capture, separation, and storage,[8, 9, 10] heterogeneous catalysis[11, 12, 13], energy storage and production,[14, 15] chemo-sensing,[10] organic semiconductors,[17] supercapacitors,[18] and many others.[19, 20]
Currently, a considerable amount of different COFs have been successfully synthesized and characterized, with these structures compiled in several databases.[21, 22] However, the number of reported structures barely scratches the surface of the chemical diversity that could be generated. The combination of structural diversity presented by different topologies with the extensive library of organic molecules available for the construction of COFs combined with the additional variability introduced by different connection groups and functionalities makes the chemical space of available structures formidably large.[23, 6]
Much of the research done until recently is profoundly based on the trial-and-error approach, where a combination of basic knowledge and chemical intuition is applied to propose new materials or modifications to existing ones, with a great research effort dedicated to synthesizing and characterizing the desired structures. However, due to the high financial cost, enormous human effort, and time required for this strategy there has been a growing interest in the development of alternative methodologies for materials discovery.[24, 25, 6, 26]
Recent advancements in computational chemistry, coupled with the increasing processing power of modern computational facilities, have enabled the use of high-throughput computation-based and machine-learning approaches.[27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 82, 84, 87, 88, 89, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 213, 214, 215, 216, 217, 218, 22, 23, 24, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 72, 73, 74, 75, 76, 77, 78, 79, 81, 82, 83, 84, 85, 86, 87, 88, 89, 92, 93, 94, 95, 96, 97, 98, 99, 101, 102, 103, 104, 105, 106, 107, 108, 109, 111, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 131, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 151, 162, 174, 149, 152, 153, 154, 155, 156, 157, 158, 159, 161, 175, 162, 163, 164, 165, 166, 167, 168, 176, 177, 188, 189, 190, 191, 193, 194, 195, 196, 197, 200, 211, 22, 223, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 71, 72, 73, 74, 75, 76, 78, 79, 80, 82, 83, 84, 85, 86, 87, 88, 89, 93, 94, 95, 96, 97, 98, 99, 100, 103, 104, 105, 106, 107, 108, 109, 119, 117, 118, 120, 121, 123, 124, 125, 126, 127, 128, 129, 130, 131, 140, 143, 144, 145, 146, 147, 148, 149, 150, 151, 162, 152, 153, 154, 155, 156, 157, 158, 159, 163, 179, 180, 194, 195, 196, 197, 200, 212, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 56, 57, 59, 61, 62, 63, 64, 65, 66, 66, 67, 68, 69, 70, 73, 74, 75, 76, 77, 78, 79, 82, 83, 84, 85, 86, 87, 89, 93, 94, 95, 96, 97, 101, 102, 103, 104, 105, 106, 107, 108, 109, 119, 119, 120, 121, 132, 14, 143, 144, 145, 146, 147
related to the applications of COFs helps to streamline the materials discovery process, allowing researchers to more efficiently explore, identify, and improve new useful materials. However, generating COF structures for computational studies can be a challenging and time-consuming task, requiring a high level of expertise in both molecular modeling and computer programming.
Currently, the options for creating computational models for reticular materials are notably constrained being limited to some alternatives developed to generically generate several different materials or specifically for Metal-Organic Framework structures.[29, 30, 31, 32, 33, 34] Besides, there have been some studies that employed different methodologies for generating COF structures. In 2018, Mercado _et al.[35]_ developed a database containing 69,840 structures created from 666 distinct building blocks and utilized GCMC calculations to assess the most suitable candidates for methane capture. In 2018, Lan _et al.[36]_ introduced a database comprising approximately 470,000 materials. In 2023, De Vos _et al.[37]_ presented a database named ReDDOFFEFE, which encompassed 268,687 COF structures along with system-specific force fields.
However, in all cases, the software employed for constructing the structures was not open-source. This implies that any modifications to existing structures or the creation of new ones had to be carried out manually. Moreover, the incapacity to generate new structures obstructs the application of generative model approaches for inventive framework designs.
To advance the computational study of COF structures here we introduce pyCOBuilder, an open-source Python package that automates the creation of computational models for COF structures based on the reticular approach. The current version of pyCOBuilder offers a user-friendly platform with a broad range of features, including the implementation of main 2D and 3D networks, various stacking patterns (for 2D structures) or interpenetration classes (for 3D structures), a vast library of building blocks with dozens of possible functionalizations. With these features combined, pyCOBuilder can potentially generate billions of unique COF structures, thus offering a robust path for implementing diverse computational and machine-learning techniques in developing new COF materials.
To illustrate the power of the reticular approach unlocked by the use of pyCOBuilder, we generated a diverse set of structures exploring the four dimensions of the reticular approach and investigated their impact on the CO2 capture capability of these materials. The results demonstrate that it is not only possible to generate materials with higher capture capacity than those currently synthesized but also to utilize machine learning techniques to dissect the contribution of each dimension of the reticular design in this property, thereby enabling a deeper understanding of the CO2 capture phenomenon by COFs.
## 2 Computational Methods
To evaluate the quality of the structures generated by pyCOF-Builder the obtained structures were fully optimized at the DFT PBE-D3(BJ) level and using two different tight-binding approaches: the density functional tight binding (DFTB)[38] with mio set of parameters[39] and the extended Tight Binding (xTB)[40] with the GFN1-xTB method, both implemented on CP2K.
The DFT calculations were performed using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional[41] with DFT-D3(BJ) dispersion corrections.[42, 43] The Quickstep code of the CP2K v2023.1 package was used,[44, 45] employing GTH pseudopotentials,[46, 47] TZV2P-MOLOPT contracted Gaussian basis sets, and an auxiliary plane wave basis set. The plane wave cutoff is set to 600 Ry cutoff mapped on a 5-level multigrid. The orbital transformation (OT)[48] method was used for all the structures with band gap \(>\) 0.5 eV. For the structures with a band gap \(<\) 0.5 eV, the Broyden diagonalization and Thomas-Fermi smearing with 300 K for electronic temperature.
For the geometry optimization, the atomic positions and cell parameters were fully optimized until the convergence with total forces below 1.0 millihartree/bohr (a root-mean-squared value below 0.7) and the total pressure below 100 bar, using the Limited-memory BFGS (L-BFGS) minimizer. The unit cell was optimized, imposing the constraint that the original symmetry of the generated structure, i.e., the type of Bravais lattice, is maintained to allow a direct comparison with experimental data. Tests have demonstrated that in the majority of cases, the symmetry remains intact even without the need for this constraint during optimization. In the rare instances where there is some variation, it is typically small and does not affect any of the conclusions drawn.
With the obtained structure the textural properties, namely largest included sphere (D\({}_{i}\)), largest free sphere (D\({}_{i}\)), largest included sphere along free path (D\({}_{i}\)), crystal density \(\rho\), vol
Figure 1: **Reticular approach for building a COF material. This process involves 1. selecting an underlying topology and a net that represents it, 2. decomposing a given net into nodes and linker, 3. selecting connecting chemistry and functionalization to form the building block molecule, and 4. create covalent bonds between these building block molecules (reticulation) according to the geometry dictated by the net to form an extended structure represented by a unit cell.**
umetric and gravimetric surface area and pore volume were calculated using the Zo++ software v0.3 using a probe with radius of 1.86 A[9, 50].
To simulate the gas uptake by the studied structures force field-based Grand-Canonical Monte Carlo (GCMC) simulations were performed using the RASPA2[51, 52] package. For all GCMC simulations, 50,000 cycles were employed and the equilibrium uptake average was calculated only on the equilibrated part using the pMySER package. The simulations were performed in a supercell large enough that each cell vector is larger than the cutoff radius for short-range interaction (12.8 A). For the long-range interactions, the Ewald sum technique was used with a relative precision of 10\({}^{\text{-6}}\). A Lennard-Jones potential with parameters taken from the TraPPE[53] force field was used to treat the van der Waals interactions of adsorbed molecules (N\({}_{2}\) and CO\({}_{2}\)) and from DREIDING[54] to treat the framework atoms. LJ parameters between atoms of different types were calculated using the Lorentz-Berthelot mixing rules and were shifted to zero at the short-range cutoff.
The electrostatic interactions were calculated using partial charges centered in the atoms, calculated based on the Density-derived electrostatic charges (DDEC) approach within the DDEC6 method[55, 56, 57, 58] as implemented in the Chargemol software. The charges were derived based on the electron density of the optimized structures computed by CP2K on the DFT PBE-D3(BJ) level.
## 3 Results and Discussion
### pyCOFBuilder general algorithm.
String-based representation for reticular structuresThe primary aim of pyCOFBuilder is to provide a comprehensive solution for the generation of COF structures using the reticular approach. This approach involves the connection of organic building blocks by covalent bonds to create a crystalline structure with the geometry defined by an underlying net. The resulting structures can exhibit a variety of functional groups and intermolecular degrees of freedom, _i. e._, distinct stacking pattern for 2D structures or interpenetrating class for 3D structures.
Due to the complexity of COF structures and the absence of a standardized nomenclature or string representation for crystalline structures, here we propose a simple yet effective string-based representation for COF structures. This representation is designed to encapsulate the key characteristics of a reticular structure, including the building blocks (BB), underlying net (NET), and stacking/interpenetration (ST) in the form of BB1-BB2-NET-ST.
The building blocks (BB) are represented by its four crucial properties: the geometry and connectivity number (SC), the type of organic core (CORE), the connection group (CONECTOR), and any present functional groups (R\({}_{x}\)). This properties are represented by descriptors connected by underscores to form a unique string in the format SC_CORE_CONECTOR_R\({}_{1}\)_R\({}_{2}\)_R\({}_{3}\)... Here the underscores were chosen to make it easy to differentiate the descriptors of the structure, which is connected by dashes, from the descriptors of the building blocks. The symmetry and connectivity number (SC) are encoded by a letter that represents the geometry of the building block followed by its connectivity number, as shown in Table 1, using a modified version of the symbols used to represent vertex-transitive polyhedra[39]. Some examples of common organic cores, radical groups, and connection groups are presented in Fig. 2.
To illustrate this representation, let us consider the 2,4,6-triformyl phoroglucinol molecule which is a commonly used building block for COF synthesis. In the proposed notation, this molecule is denoted as T3_BEBIZ_CHO_DH. Here, T3 refers to the geometric shape and the number of connections of a tritopic building block, BE2N2 is a four-letter encoding of the benzenoid shape of the organic core, CHO represents the aldehyde groups that undergo a reaction with other building blocks to form the
\begin{table}
\begin{tabular}{l c c} \hline \hline Symbol & Connectivity & Geometric Figure \\ \hline L & 2 & Line \\ T & 3 & Triangle \\ S & 4 & Square \\ R & 4 & Rectangle \\ T & 4 & Tetrahedron \\ O & 6 & Octahedron \\ P & 6 & Trigonal prism \\ H & 6 & Hexagon \\ C & 8 & Cube \\ A & 8 & Square antiprism \\ E & 8 & Octagon \\ B & 12 & Cuboctahedron \\ I & 12 & Icosahedron \\ U & 12 & Truncated tetrahedron \\ X & 12 & Hexagonal prism \\ \hline \hline \end{tabular}
\end{table}
Table 1: Symbols, connectivity numbers, and geometric figures used to represent the building blocks.
Figure 2: Example of organic cores, connection groups, and functional groups available to generate COF structures. The organic cores are encoded on the COF string representation using the four-letter code presented below the structures. Although arbitrarily defined, the letters are chosen to approximate an abbreviation of the IUPAC name of the molecule from which the organic core is derived. Q marks the position of the connection groups and R\({}_{x}\) marks the position of the functional groups. Both the connection group and functional groups are encoded using the common chemistry abbreviation.
extended structure, and OH indicates the presence of hydroxyl groups in the 1, 3, and 5 positions.
To encode the reticular net a three-letter code adapted from the Reticular Chemistry Structure Resource (RCSR) nets is used.[60, 61] The current version of pyCOFBuilder implements several different nets, as illustrated in Supporting Information Figure S2. To represent the stacking pattern or interpenetrating class desired, the encoding string will depend on the selected net. For 2D nets, available stacking patterns include AA, AA1, AA2, AA1, and AA4, among others depending on the topology. In the case of 3D nets, the encoding string is determined by the number of interpenetrating structures.
The pyCOFBuilder nomenclature proposed here is, in a way, inspired by the MOFid[62], RFcode[63], and the nomenclature proposed by Yaghi et al.[2] to represent MOFs and COFs. However, the pyCOFBuilder approach promotes a more simple alternative for the representation of COF structures, in addition to facilitating the incorporation of crucial information about the structure such as the differentiation of connection and radical groups and the information about the stacking/interpenetration profile of the final structure. This method can be readily expanded to represent Metal-Organic Frameworks using the secondary building unit (SBU) approach.[64]
The Supplementary Information provides the complete list of currently available organic cores, connection groups, functional groups, nets, and stacking/interpenetration options, along with examples for naming some well-known COFs.
Structure building algorithmAfter defining the string representation for the desired structure, pyCOFBuilder performs initial checks to ensure the construction is feasible. Specifically, it determines if the building blocks are compatible with the desired net and if the connection groups can be linked to each other. If the compatibility requirements are met, pyCOFBuilder proceeds to generate the crystalline structure.
The building block's structure for COFs is created by attaching the selected connection groups and functional groups to a chosen organic core, identified by a four-letter code. The organic core structures contain ghost atoms labeled as Q for connection groups and R for functional groups. These ghost atoms act as positioning vectors directing the attachment of the selected groups on the organic core structure.
Each building block molecule is created as a BuildingBlock python object, that can be created and manipulated independently of the COF building process. The atomic positions of the building block structure can also be stored in \(xyz\) format along with the structure during the creation of the COF, thus allowing independent calculations of these molecules independently of the COF structure.
Once the building blocks molecules are created, the selected net is used as a topological blueprint for the final structure. First, the size of the building blocks is used to calculate the cell vectors (\(a\), \(b\), and \(c\)). Next, the building blocks are rotated and translated to occupy the node and edge positions defined by the selected net. After the building blocks have been arranged in the correct positions, the psurtagen package is utilized to create a "Structure" object. This object is then subjected to symmetry analysis to calculate the space group and generate the crystal structure.
The stacking pattern can be specified using a two-letter code, such as AA, AA1, or AB1, while the number of interpenetrating structures for 3D nets is used to define the interpenetration class. To generate a COF structure with the desired stacking pattern or interpenetration class, the atoms in the unit cell are duplicated, rotated, and translated accordingly. Then, another symmetry analysis is performed on the structure to ensure that the correct unit cell is used to represent the new structure.
Software outline and general usageThe pyCOFBuilder software was implemented using object-oriented Python and utilizes several scientific programming libraries, such as Numpy5 and pymtgen[66] to execute mathematical operations and symmetry analyses. The GitHub repository contains detailed instructions about the manual and automatic installation. We recommend the user set up a Python environment using the environment.yml file provided therein.
The code is designed with a focus on two primary objects, namely BuildingBlock and Framework. The BuildingBlock object is intended to deal with the molecules that will form the COF and can be independently used. The Framework object represents the COF itself, which is constructed by linking the BuildingBlock molecules. The software is distributed under the MIT license and is provided with unit tests and comprehensive online documentation.
The COF structure, referred to in the code as Framework, can be created directly by their string-encoding as in the example shown on Fig. 3. For example, the COF referred to in the literature as TpPa-1\({}^{\#}\) or DAB-TFP[68] can be translated to the unique representation T3_BENZ_CHO_OH_-L2_BENZ_N2_H_H_-HCB_A-AA. The resulting COF structure generated by pyCOFBuilder can be saved in several file formats including CIF, XZZ, PDB, PQR, PQSCAR, QuantumNSPRESSO input, among others, allowing a simple integration on a wide variety of high-throughput workflows.
To assess the quality of the models generated by pyCOFBuilder, we selected a set of 33 structures from the literature and conducted a comparison of the experimental cell parameters with those generated by pyCOFBuilder, as well as those obtained after full geometry optimization (cell parameters and atomic positions) at DFT-PBE-D3(BJ) level and two TightBinding approaches (xTB-GNF1 and DFTB). The complete list of materials with their respective cell parameters and references are provided in _Supporting information_.
The results presented in Fig. 4 reveal that all tested methods exhibit good agreement with the experimental values for the lattice parameter a, showing root-mean-squared error (RMSE) values of 1.15, 1.01, 0.94, and 1.06 A for pyCOFBuilder, DFTB, xTB, and PBE-D3(BJ), respectively. pyCOFBuilder generated structures with cell parameters close to the experimentally determined values even without undergoing any geometry optimization process, suggesting that the generated structures can be used effectively in high-throughput studies.
The crystallographic parameter \(c\), on the other hand, exhibits a notable deviation from all the simulation methods applied to the experimental data. Whereas the experimental measurements exhibit values within the range between 3.1 and 4.1 A, the simulated values tend to concentrate around certain specific values, which are 3.6, 3.1, 3.4, and 4.0 for pyCOFBuilder, DFTB, xTB, and PBE-D3(BJ), respectively. This discrepancy is attributed to the challenges in adequately treating the dispersion interactions between the COF sheets, which continues to pose a significant obstacle in accurately modeling 2D materials.[69, 70] Thus, a more comprehensive analysis is required to better understand the underlying factors responsible for these variations and their implications on the properties obtained through different approaches.
### Property-Structure Relationships
The application of the reticular approach to design new materials targeting specific applications offers a significant advantage through its ability to exert atomic-level control over the structure of the material, which in turn enables precise manipulation of the material's properties. This control is achieved by exploring the five key dimensions of the reticular design: topology, organic core, connection group, functionalization, and supramolecular
lar arrangement. By varying these dimensions, a wide range of materials with diverse chemical and geometrical attributes can be generated, ultimately leading to the development of materials with distinct and unique characteristics.
### Impact of the topology on textural properties
To exemplify the potential impact of the underlying topology on the textural characteristics of 2D COFs Fig. 5 presents the distribution of three crucial properties associated with the porosity of the material for six different 2D topologies. Notably, topologies such as KGD and HXI_A tend to decrease both pore size and pore volume, while concurrently increasing the specific surface area. This observation leads us to anticipate that materials possessing such distinct geometric arrangements might be particularly well-suited for applications demanding extensive surface areas, such as H\({}_{2}\) and CH\({}_{4}\) storage.
The augmented versions of the HCB and SQL nets present a clear tendency to increase both in pore size and specific area when compared to their non-augmented versions. Therefore, using these augmented networks can be a simple strategy to generate materials with larger pores.
### Impact of functional groups
To provide a concrete example of how pyCOFBuilder can expedite COF research, we investigate the potential use of several COFs for capturing CO\({}_{2}\). We selected two types of building blocks: tritopic blocks (BENZ and TBPZ) that contain aldehyde connector groups, and ditopic blocks (BENZ, NAPT, BPNY, DHSI, PYRN, ANTR, TPNY, DPEY, DPEL, DPSY, BPYB) that contain amine connector groups. Here, iming condensation was used to connect these building blocks as it is a widely used method for producing COFs with high crystallinity. The resulting COF structures presented an HCB-A topology. To introduce chemical diversity, we decorate the ditopic building blocks with 12 different functional groups: _H_-OH, -CH\({}_{3}\), -OMe, -OEt, -NH\({}_{2}\), -NO\({}_{2}\), -CN, -F, -COOH, -CHO, and -tBu.
The structures were built using pyCOFBuilder and optimized following the three-step procedure proposed by Ongari _et al._[22] at the PBE-D3 level with CP2K. The partial charges for the atoms were calculated based on the electronic density of the optimized structure using the DDEC approach implemented on Chargemol. To evaluate the CO\({}_{2}\) adsorption capacity of the structures force-field-based Grand-Canonical Monte Carlo simulations were performed using the Dreiding force field to treat the framework atoms and the TraPPE force field for the adsorbed molecules. Simulations were performed at 298k with pressures ranging from 0.001 to 10 bar. For details on the simulation methodology see the Methods section. To evaluate the performance of porous materials for CO\({}_{2}\) capture applications the working capacity and the CO\({}_{2}\)/N\({}_{2}\) selectivity were used as performance metrics.[71]
The working capacity was calculated as the difference in the equilibrium amount adsorbed between the adsorption and desorption cycle design to model a simple PSA (_pressure swing adsorption_) cycle of CO\({}_{2}\) capture at 298K, with adsorption at 2 bar and desorption at 0.1 bar. The selectivity of CO\({}_{2}\) over N\({}_{2}\), \(\alpha_{CO_{2}/N_{2}}\), is defined as
\[\alpha_{CO_{2}/N_{2}}=\frac{x_{CO_{2}}/x_{N_{2}}}{y_{CO_{2}}y_{N_{2}}} \tag{1}\]
where \(x_{i}\) and \(y_{i}\) are the adsorbed and gas phase mole fractions of species \(i\), respectively
Fig. 7 shows the dependence of working capacity and selectivity on the textural properties of the selected COFs. Specifically, this dependence has been assessed concerning the COFs' specific area, pore volume, pore size, and void fraction. AL
Figure 4. Comparison between cell parameters of pyCOFBuilder generated structures, Tight-binding, and DFT-based geometry optimization with experimental data. a) Lattice parameter a, b) Lattice parameter c. All materials possess hexagonal unit cells and thus \(a=b\). It is noteworthy that the level of agreement between simulated and experimental values is substantially higher for parameter a compared to parameter c.
Figure 5. Violin plot showing the influence of the topology and stacking pattern on three selected textual properties for 2D COFs. a) Pore size, b) Specific area, c) Pore Volume.
Figure 3. Code snippet showing how to generate a clf file of a COF structure. The Framework class creates the structure based on a given name. The save method saves the created structure with the desired file format and size of supercell.
though some trends are apparent regarding the metrics of performance and their dependence on these textural properties, the factor that appears to exert the most significant influence on the overall COF performance is the type of functional group. Indeed, by evaluating the isotherm of adsorption of some selected COFs (Fig. S8) it can be observed that the type of functional group is capable of producing variations on CO\({}_{2}\) uptake values ranging from 2.1 mol/kg (R = OEt) to 8.3 mol/kg (R=CHO) at 10 bar, despite the relatively small variation in pore size caused by the insertion of these functional groups. The impact of functional group choice is also reflected in the enthalpy of adsorption, which varies from 12 and 22 kJ/mol for these functional groups.
To quantify the relative importance of the textural properties and functional group on the performance metrics, we combined the XGBoost machine learning algorithm with the SHAP (Shapley Additive exPlanations) approach. The XGBoost model was trained to predict the working capacity and CO\({}_{2}\)/N\({}_{2}\) selectivity of a COF based on its textural properties and the type of functional group. The results, presented on Fig. 7, show that for the prediction of the working capacity, the functionalization has the higher global relative importance presenting a mean SHAP value of 0.16, followed by density, pore size, void fraction, specific area and pore volume with 0.14, 0.09, 0.09, 0.08, and 0.05 respectively. For the selectivity, the values followed the same trend with functionalization presenting a mean SHAP value of 0.61, followed by density, void fraction, pore size, specific area, and pore volume presenting values of 0.41, 0.38, 0.32, 0.23, and 0.18 respectively.
These results highlight the importance that the chemical characteristics present inside the pores can have in the design of new materials for CO\({}_{2}\) capture, indicating that this type of approach can be a viable path for developing new materials with high CO\({}_{2}\) capture capacity.
## 4 Conclusions
Here we have introduced pyCOFBuilder as a tool for the generation of COF structures for computational studies based on the reticular approach. This Python tool enables the generation of COF structures using a novel string-based representation proposed in this work.
The implementation details of the software have been presented in detail, including the object-oriented design and the scientific programming libraries utilized. Additionally, the generated structures demonstrated good agreement with experimental data and simulations with different levels of theory, highlighting the accuracy and reliability of the software.
We also presented an exploration of the property-structure relationships for the impact of the topology on the textural characteristics of 2D COFs and the working capacity for CO\({}_{2}\) capture and CO\({}_{2}\)/N\({}_{2}\), two wildly used figures or merit for performance evaluation on carbon capture applications, highlighting the potential of pyCOFBuilder for practical use.
Overall, the presented results illustrate the utility of pyCOFBuilder for the switch production of a wide array of COF structures. This process not only facilitates the identification of the most promising candidates for a particular application but also enables the scientific investigation of the underlying principles behind their performance.
We anticipate that by simplifying the process of generating
Figure 6: Impact of building block functionalization on CO\({}_{2}\) capture for different COFs. The choice of the functional group presents a greater influence than the textural properties, such as specific area and pore volume, on the CO\({}_{2}\) working capacity (a-d) and CO\({}_{2}\)/N\({}_{2}\) selectivity (e-h) for COFs.
Figure 7: Global SHAP importance of each feature on the prediction of working capacity (a) and selectivity (b) by the trained XGBoost model. The functionalization type presents the higher means SHAP value for both properties, indicating its dominance over the textural properties.
COF structures, pyCOFBuilder can accelerate the discovery and optimization of new materials with desirable properties, ultimately facilitating progress and innovation in a diverse range of fields.
## Acknowledgments
We acknowledge financial support from CAPES (Project 001), CNPq, and FAPERJ. The authors would like to thank the Nucleo Avancado de Computacao de Alto Desempenho (NACAD) of COPPE/UFRJ for the computational facility.
## Contributions
F. L. O. developed the Python code, executed the simulations, compiled and analyzed the data, and wrote the manuscript. P.M.E. wrote the manuscript. All authors contributed to the discussion and approved the final version of the manuscript.
## Conflicts of interest
There are no conflicts of interest to declare.
## Data Availability
The code presented in this manuscript is available under the MIT license at the GitHub repository [https://github.com/lipelopesoliveira/pyCOFBuilder](https://github.com/lipelopesoliveira/pyCOFBuilder).
## Supporting Information
The Supporting Information is available free of charge on the publisher's website.
|
2305.01727 | Neutron decay into a Dark Sector via Leptoquarks | In this paper, we extend the Standard Model (SM) scalar sector with scalar
leptoquarks (LQ) as a portal to the dark sector to resolve some observational
anomalies simultaneously. We introduce LQ coupling to scalar dark matter (DM)
to suggest an exotic decay channel for the neutron into scalar DM and an SM
anti-neutrino. If the branching ratio of this new neutron decay channel is
$1\%$, a long-standing discrepancy in the measured neutron lifetime between two
different experimental methods, bottle and beam experiments, can be solved. The
mass of the scalar DM produced from neutron decay should be in a narrow range
and as a result, its production in the early universe is challenging. We
discuss that the freeze-in mechanism can produce this scalar DM in the early
universe with the correct relic abundance. Then we show that the model can
explain other SM anomalies like the muon $(g-2)$, and $R_{D^{(*)}}$ anomaly
simultaneously. | Sara Khatibi | 2023-05-02T18:52:43Z | http://arxiv.org/abs/2305.01727v2 | # Neutron decay into a Dark Sector via Leptoquarks
###### Abstract
In this paper, we extend the Standard Model (SM) scalar sector with scalar leptoquarks (LQ) as a portal to the dark sector to resolve some observational anomalies simultaneously. We introduce LQ coupling to scalar dark matter (DM) to suggest an exotic decay channel for the neutron into scalar DM and an SM anti-neutrino. If the branching ratio of this new neutron decay channel is 1%, a long-standing discrepancy in the measured neutron lifetime between two different experimental methods, bottle and beam experiments, can be solved. The mass of the scalar DM produced from neutron decay should be in a narrow range and as a result, its production in the early universe is challenging. We discuss that the freeze-in mechanism can produce this scalar DM in the early universe with the correct relic abundance. Then we show that the model can explain other SM anomalies like the muon \((g-2)\), \(R_{D^{(*)}}\) anomaly, and the tiny neutrino mass simultaneously.
Introduction
The Standard Model (SM) of particle physics is one of the most successful theories and almost all of its predictions are consistent with experimental results. However, there are some observations that the SM failed to explain. One of the intriguing challenges in particle physics is the neutron lifetime anomaly. It is well-known that the neutron dominantly decays to a proton, an electron, and an anti-electron-neutrino (\(\beta\) decay) in the SM framework. The neutron lifetime has been measured by two different methods in experiment, bottle and beam experiments. In the bottle experiment, the ultra-cold neutrons are kept in a container for a time longer than the neutron lifetime, then the remaining neutrons are counted and the neutron lifetime is extracted. The average neutron lifetime from five bottle experiments is [1; 2; 3; 4; 5],
\[\tau_{n}^{\rm bottle}\,=879.6\pm 0.6\ \rm{s}. \tag{1}\]
In the second method, the beam experiment, the numbers of the produced protons from the neutron decay are counted and then the neutron lifetime is measured. The average neutron lifetime from two beam experiments is longer than those from the previous method [6; 7],
\[\tau_{n}^{\rm beam}\,=888.0\pm 2.0\ \rm{s}. \tag{2}\]
There is a \(4\sigma\) discrepancy in neutron lifetime measurements. This discrepancy can be solved if the neutron partially decays to the invisible, for instance, particles in the dark sector (with a branching ratio around 1%) [8; 9; 10; 11; 12; 13].
On the other hand, numerous observations from galactic to cosmic scales indicate the existence of dark matter (DM) that corresponds to approximately 25% of the energy budget of the Universe. Understanding the nature of DM is one of the longstanding problems in particle physics. Although lots of efforts have been down to unveil the DM nature, its properties are still unknown. For example, we don't know if the DM is a fermion or a boson, how it interacts (non-gravitationally) with the SM particles, how it was produced in the early universe and the DM mass value (since the wide range of mass is still valid for the DM particle).
It is well-known that the leptoquark (LQ) models are the economical method to address most of the SM anomalies. The LQs can be a scalar or vector and can simultaneously couple to a quark and a lepton. In this paper, the scalar sector of the SM is extended by two scalar LQs (\(S_{1}^{\alpha}\) and \(S_{1}^{\beta}\)) where both have the same quantum number under the SM gauge group but have different baryon and lepton numbers. Also, we add a dark scalar (\(\phi\)) which is a singlet under the SM. It is shown
that by introducing a new coupling between the LQs and the dark scalar, which is a portal between the SM and the dark sectors, the neutron can decay to a dark scalar and an SM anti-neutrino through the scalar LQ mediators. Then it is indicated that the neutron lifetime anomaly can be evaded in a suitable parameter space region.
Since there are severe constraints on the baryon number violation process, we consider the new exotic neutron decay channel with respect to the baryon number [14; 15; 16; 17]. So, the new dark scalar carries the baryon and lepton numbers. Furthermore, we show that the dark scalar can be a good DM candidate since it is the lightest particle with the baryon number in the model. On the other hand, the exotic neutron decay channel should be kinematically allowed and at the same time the proton decay should be prevented, so the mass of the scalar DM should be in the narrow range. As a result, the production of such a scalar DM is challenging. However, we show that through the freeze-in scenario, it can be produced in the early universe and its relic density is compatible with the observed abundance of the DM.
In the rest of the paper, we examine other SM anomalies that can be addressed by our model. One of the established anomalies in the SM is related to the high-precision measurement of the magnetic moment of the muon. The SM prediction for the magnetic moment of the muon has around \(4.2\sigma\) deviation from the combined result measurement from Brookhaven National Laboratory (BNL) and Fermi National Accelerator Laboratory (FNAL) [18; 19; 20; 21]. To alleviate this problem we need new physics with extra particles. Furthermore, the semi-leptonic decays of B-mesons are sensitive to new physics. For example, the BaBar [22; 23], Belle [24; 25; 26; 27], and LHCb [28; 29; 30] experiments have measured the \(R_{D}\) and \(R_{D^{*}}\) observables where they have shown that their result has a deviation from the SM prediction. Although the current uncertainties should be understood better, one can study the new physics effects on these anomalies. Moreover, from the neutrino oscillation experiments, we know that the neutrinos have a non-zero small mass which is in contrast with the current SM framework. So, to explain this small neutrino mass we need new physics beyond the SM. Then, we show that our model can solve these SM anomalies simultaneously.
The organization of the paper is as follows. In Section II, we explain the model in detail. In section III the different phenomenological aspects of the model are discussed. Finally, section IV summarises the paper.
## II The model
We extend the SM scalar sector by three new particles. The \(S_{1}^{\alpha}\) and \(S_{1}^{\beta}\) are the LQ scalars and have the same quantum number under the SM gauge group \((\overline{\bf 3},{\bf 1},1/3)\), however, they have different baryon and lepton numbers. The third scalar \(\phi\) is singlet under the SM gauge symmetries but it carries the baryon and lepton numbers. Table 1 represents all the new scalars with their quantum numbers. The Lagrangian of the model contains all the particle interactions, and can be written as follows,
\[{\cal L}={\cal L}_{\rm SM}+{\cal L}_{\rm LQ,\;int}+{\cal L}_{\rm Scalars}, \tag{3}\]
where \({\cal L}_{\rm SM}\) is the SM Lagrangian and \({\cal L}_{\rm Scalars}\) contains all kinetic, mass and interaction terms for scalars. \({\cal L}_{\rm LQ,int}\) indicates the scalar LQ interactions with the SM fermion fields and has the following form [31],
\[{\cal L}_{\rm LQ,\;int}=y_{ij}^{LL}\bar{Q}_{L}^{Ci,a}S_{1}^{\alpha}\epsilon^{ ab}L_{L}^{j,b}+z_{ij}^{LL}\bar{Q}_{L}^{Ci,a}{S_{1}^{\beta}}^{*}\epsilon^{ab}Q_{L }^{j,b}+y_{ij}^{RR}\bar{u}_{R}^{Ci}S_{1}^{\alpha}e_{R}^{j}+\mbox{ h.c. }, \tag{4}\]
where \(Q_{L}\) (\(L_{L}\)) indicate the left-handed quark (lepton) doublet and \(u_{R}\) (\(e_{R}\)) show the right-handed up-type quark (charged lepton). The flavor (\(SU(2)\)) indices are shown by \(i,j=1,2,3\) (\(a,b=1,2\)), and \(\epsilon^{ab}=(i\sigma^{2})^{ab}\) that \(\sigma^{2}\) is the second Pauli matrix. For the fermion \(\psi\), we use the following notation, \(\bar{\psi}=\psi^{\dagger}\gamma^{0}\) and \(\psi^{C}=C\bar{\psi}^{T}\), where \(C=i\gamma^{2}\gamma^{0}\) is the charge conjugation operator. The \(y^{LL}\) and \(y^{RR}\) are completely arbitrary \(3\times 3\) matrices but \(z^{LL}\) is a symmetric matrix in flavor space (\(z_{ij}^{LL}=z_{ji}^{LL}\)). After the contraction in the \(SU(2)\) space, we have the following interaction terms for the LQ scalars with the SM fermions,
\[{\cal L}_{\rm LQ,\;int} = - \tag{5}\] \[- \left(z^{LL}V^{\dagger}\right)_{ij}\bar{d}_{L}^{Ci}{S_{1}^{ \beta}}^{*}u_{L}^{j}+y_{ij}^{RR}\bar{u}_{R}^{Ci}S_{1}^{\alpha}e_{R}^{j}+\mbox{ h.c. },\]
\begin{table}
\begin{tabular}{|c c c c c c c|} \hline Particles & B & L & \(SU(3)_{C}\times SU(2)_{{}_{L}}\times U(1)_{Y}\) \\ \hline \hline \(S_{1}^{\alpha}\) & \(-1/3\) & \(-1\) & \((\overline{\bf 3}\) &, & 1 &, 1/3) \\ \(S_{1}^{\beta}\) & \(2/3\) & \(0\) & \((\overline{\bf 3}\) &, & 1 &, 1/3) \\ \(\phi\) & \(1\) & \(1\) & \((1\) &, & 1 &, 0) \\ \hline \end{tabular}
\end{table}
Table 1: The quantum numbers of the new scalars. The second and third columns show the baryon and lepton numbers, respectively. The last column presents the quantum numbers under the SM gauge groups.
where \(U\) is a Pontecorvo-Maki-Nakagawa-Sakata (PMNS) unitary mixing matrix and \(V\) is a Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix. All fields in the above equation are in the mass eigenstate basis. Moreover, The scalar Lagrangian for a SM singlet scalar \(\phi\) and two other scalars \(S_{1}^{\alpha}\) and \(S_{2}^{\alpha}\) is given by,
\[\mathcal{L}_{\text{\tiny Scalars}} = \left|D_{\mu}\phi\right|^{2}-m_{\phi}^{2}\left|\phi\right|^{2}+ \left|D_{\mu}S_{1}^{\alpha}\right|^{2}-m_{S_{1}^{\alpha}}^{2}\left|S_{1}^{ \alpha}\right|^{2}+\left|D_{\mu}S_{1}^{\beta}\right|^{2}-m_{S_{1}^{\beta}}^{2} \left|S_{1}^{\beta}\right|^{2}-\lambda_{1}\left|S_{1}^{\alpha}\right|^{4}- \lambda_{2}\left|S_{1}^{\beta}\right|^{4}\] \[- \lambda_{3}\left|\phi\right|^{4}-\lambda_{4}|H|^{2}\left|S_{1}^{ \alpha}\right|^{2}-\lambda_{5}|H|^{2}\left|S_{1}^{\beta}\right|^{2}-\lambda_{ 6}|H|^{2}\left|\phi\right|^{2}-(\mu\ S_{1}^{\alpha}(S_{1}^{\beta})^{*}\phi+ \ \text{h.c.}),\]
where \(H\) is the SM Higgs doublet. The \(\lambda_{i}\)s are the dimensionless couplings whereas the \(\mu\) has a dimension of mass. The last term in the above Lagrangian plays a crucial role in the model because it is a portal between the SM and the dark sectors. As we will explain in the next section the \(\phi\) is the lightest particle with the baryon number in our model, as a result, it is stable and can be a good DM candidate.
It is worth mentioning that, in the rest of the paper, for sake of simplicity and in order to resolve some anomalies simultaneously, we consider the following economical flavor ansatz,
\[y_{12}^{LL}\neq 0,\qquad y_{33}^{LL}\neq 0,\qquad y_{32}^{RR}\neq 0,\qquad z_{11} ^{LL}\neq 0, \tag{6}\]
and other couplings in the LQ Lagrangian (Eq. 5) are considered to be zero.
## III Phenomenology
In this section, we study different phenomenological aspects of our model. In the first subsection III.1, we show how the neutron can decay to a scalar DM and an anti-neutrino via the scalar LQs. We find an appropriate benchmark in our model in which the neutron decay anomaly resolve. The DM production in the early universe and calculating DM relic abundance are presented in the subsection III.2. Then in the subsection III.3, we explain how our setup can eliminate the muon anomalous magnetic moment. In subsection III.4, we indicate that the \(R_{D^{(*)}}\) anomaly can be alleviated through our model with the proper parameter space. And finally in subsection III.5, we show that the SM neutrino can have radiative mass by a two-loop Feynman diagram if we add a new right-handed neutrino to the model and consider its interaction term with the scalar LQs.
### Neutron Decay Anomaly
As we mentioned before, one of the intriguing challenges in particle physics is the neutron lifetime anomaly. In order to evade such an impasse the neutron can partially decay into the dark sector. In
our model, the neutron can decay into a scalar DM and an SM anti-neutrino (\(n\to\phi\bar{\nu}\)). According to the Lagrangian in Eq. 3 and considering the flavor ansatz in Eq. 6, the following terms have contribution to the exotic neutron decay,
\[\mathcal{L}_{n\to\phi\bar{\nu}^{i}}=-y_{12}^{LL}U_{2i}\bar{d}_{L}^{C}\nu_{L}^{i}S _{1}^{\alpha}+z_{11}^{LL}V_{11}\bar{u}_{L}^{C}d_{L}(S_{1}^{\beta})^{*}-\mu\ S_{1}^{ \alpha}(S_{1}^{\beta})^{*}\phi+\text{h.c.}, \tag{7}\]
where \(\nu_{L}^{i}\) can be any mass eigenstate of the SM neutrino. The corresponding Feynman diagram for neutron decay to a scalar DM and an anti-neutrino is shown in Fig. 0(a).
It is worth mentioning that, there are some constraints on the scalar DM mass (\(m_{{}_{\phi}}\)). First, for the exotic neutron decay channel to be kinematically allowed, the \(\phi\) should be lighter than the neutron. The other bound comes from proton decay. In our model, the proton also can decay to a scalar DM and an anti-muon (according to our flavor ansatz). The Feynman diagram for proton decay is illustrated in Fig. 0(b), and the following Lagrangian terms give rise to proton decays,
\[\mathcal{L}_{p\to\phi\bar{\mu}}=+y_{12}^{LL}V_{11}\bar{u}_{L}^{C}\mu_{L}S_{1}^ {\alpha}+z_{11}^{LL}V_{11}\bar{u}_{L}^{C}d_{L}(S_{1}^{\beta})^{*}-\mu\ S_{1}^{ \alpha}(S_{1}^{\beta})^{*}\phi+\text{h.c.}. \tag{8}\]
To prevent proton decay, the scalar DM mass should be in the following range,
\[m_{p}-m_{\mu}<m_{\phi}<m_{n}\hskip 28.452756pt\to\hskip 28.452756pt832.71\ \text{MeV}<m_{\phi}<939.565\ \text{MeV}.\]
Moreover, nuclear physics puts other bounds on the \(m_{{}_{\phi}}\). The most stringent constraint is required to prevent nuclear decay of \({}^{9}\text{Be}\)[8],
\[937.900\ \text{MeV}<m_{\phi}<939.565\ \text{MeV}.\]
According to the aforementioned limits, we chose \(m_{\phi}=938\ \text{MeV}\) as our benchmark. As a result, the \(\phi\) is stable since it is the lightest particle with baryon number.
There are some constraints on the (first and second generation) scalar LQs mass from the LHC experiments, where their mass should be larger than 1.28 TeV [32; 33]. So, the scalar LQs are
Figure 1: The Feynman diagrams contributing to neutron and proton decay.
heavier than other particles in the model and they can be integrated out from the Lagrangian. As a result, the effective Lagrangian contributing to the exotic neutron decay is given by,
\[\mathcal{L}^{\rm eff}_{n\rightarrow\phi\bar{\nu}^{i}}=\kappa^{i}\bar{n}^{C}_{L} \nu^{i}_{L}\phi^{*}+{\rm h.c.}, \tag{9}\]
where
\[\kappa_{i}=\frac{\mu~{}\beta(y^{LL}_{12}U_{2i})(z^{LL}_{11}V_{11})}{m^{2}_{{}_ {S^{a}_{1}}}m^{2}_{{}_{S^{b}_{1}}}}, \tag{10}\]
that \(\beta\cong 0.014~{}{\rm GeV}^{3}\) form lattice QCD [34]. According to the above effective Lagrangian, the exotic neutron decay width to a scalar DM and an anti-neutrino can be calculated as follows,
\[\Delta\Gamma(n\rightarrow\phi\bar{\nu})=\sum_{i}|\kappa_{i}|^{2}\frac{1}{16 \pi m^{3}_{n}}(m^{2}_{n}-m^{2}_{{}_{\phi}})^{2}, \tag{11}\]
where
\[\sum_{i}|\kappa_{i}|^{2}=|\frac{\mu~{}\beta(y^{LL}_{12})(z^{LL}_{11}V_{11})}{ m^{2}_{{}_{S^{a}_{1}}}m^{2}_{{}_{S^{b}_{1}}}}|^{2}, \tag{12}\]
and the unitary condition of the PMNS matrix (\(\sum_{i}|U_{2i}|^{2}=1\)) is used. To resolve the neutron decay anomaly, the exotic decay width should have the following value [8],
\[\Delta\Gamma(n\rightarrow\phi\bar{\nu})=\Gamma^{\rm bottle}_{n}~{}-\Gamma^{\rm beam }_{n}~{}\simeq 7.1\times 10^{-30}~{}{\rm GeV}. \tag{13}\]
Therefore, the following limit is imposed on the combination of the model parameters,
\[|\frac{\mu~{}y^{LL}_{12}z^{LL}_{11}}{m^{2}_{{}_{S^{a}_{1}}}m^{2}_{{}_{S^{b}_{1} }}}|^{2}\simeq 1.8\times 10^{-19}~{}{\rm GeV}^{-6}. \tag{14}\]
In Fig. 2, we show the accepted values for the LQ mass and LQ coupling with the SM fermion according to the above limit. For simplicity, we assume that \(m_{{}_{S^{a}_{1}}}=m_{{}_{S^{b}_{1}}}\) and \(y^{LL}_{12}=z^{LL}_{11}\). The Left panel shows the LQ coupling as a function of the LQ mass for different values of \(\mu\), and the right panel shows the dimensionful coupling \(\mu\) as a function of the LQ mass for different values of LQ coupling. According to the figures, the LQ mass around 1.3 TeV and \(y^{LL}_{12}=z^{LL}_{11}=0.8\) are allowed.
It is worth mentioning that the neutron star (NS) can put constraints on the models that suggest a new neutron decay channel. The Tolman-Oppenheimer-Volko (TOV) equations determine the NS structure [35; 36]. If we integrate the TOV equations from the center of the NS, where the pressure is a constant, to the radius of the NS with zero pressure, we can find the mass of NS as a function of its radius. Also, we can predict the maximum possible mass of the NS. However, to do the above procedure we need to have the Equation of State (EOS) of the NS. The EOS gives the relation
between the energy density and the pressure for the NS. The new neutron dark decay channel causes the DM to be thermalized inside the NS and as a result, the EOS would be changed. Ref. [37] showed that for a non-interacting DM with mass below the neutron mass (to have a kinematically allowed neutron decay channel), the DM produces more energy than the pressure and the EOS of the NS becomes softer. As a result, these models predict maximum mass for neutron stars below \(0.7M_{\odot}\), which is in contradiction with observation. According to the data the current maximum mass for the NS is about \(2M_{\odot}\)[38; 39].
However, Ref. [37; 40] showed that the DM model with mass greater than 1.2 GeV or the repulsive self-interacting DM model can escape from these constraints. For instance, if the DM is charged under a new gauge symmetry (U(1) or SU(2)), NS limits can be evaded for a suitable value of dark gauge mediator mass and gauge coupling [12; 13]. In our model, the neutron decays to a dark scalar and an anti-neutrino (\(n\to\phi\bar{\nu}\)). According to the Lagrangian in Eq. 6, the scalar DM (\(\phi\)) has a repulsive self-interaction term \(\lambda_{3}\left|\phi\right|^{4}\) for the \(\lambda_{3}>0\). As a result, we can evade NS constraints.
### DM Production
In our model, the DM can be produced through the freeze-in mechanism in early universe [41; 42]. In this mechanism, the DM has negligible abundance at the early time, however, some interaction with bath particles can produce the DM. In our case, after the QCD phase transition, the neutron and anti-neutron can decay into \(\phi\) and contribute to the DM relic density. Although this contri
Figure 2: The left panel shows the LQ coupling as a function of the LQ mass for \(\mu\) = 1,1.5, and 2 TeV. The right panel shows the \(\mu\) coupling as a function of the LQ mass for \(y_{12}^{LL}=z_{11}^{LL}\) = 0.4,0.6, and 0.8.
bution is negligible since the obtained relic density for \(\phi\) from the neutron decay is four orders of magnitude less than the observed cosmological DM relic [43]. Another type of interaction that can contribute to \(\phi\) relic abundance is \(n\pi^{0}\rightarrow\phi\bar{\nu}\) scattering. The number density of the DM (\(n_{\phi}\)) can be calculated by the Boltzmann equation in the freeze-in scenario,
\[\dot{n}_{\phi}+3n_{\phi}H\approx\int d\Pi_{n}d\Pi_{\pi}d\Pi_{\bar{\nu}}d\Pi_{ \phi}(2\pi)^{4}\delta^{4}\left(p_{n}+p_{\pi}-p_{\bar{\nu}}-p_{\phi}\right)|M|^{ 2}_{n\pi\rightarrow\bar{\nu}\phi}f_{n}f_{\pi}, \tag{15}\]
where the \(H\) is the Hubble parameter, \(d\Pi_{i}=d^{3}p_{i}/(2\pi)^{3}2E_{i}\) are phase space elements and \(f_{i}=\left(e^{E_{i}/T}\pm 1\right)^{-1}\) are phase space densities. Assuming the initial particles are in thermal equilibrium we can consider \(f_{i}\approx e^{-E_{i}/T}\), and the Boltzmann equation can have the following form [44],
\[\dot{n}_{\phi}+3n_{\phi}H\approx\frac{T}{512\pi^{6}}\int_{(m_{n}+m_{\pi})^{2} }^{\infty}ds\ d\Omega\ P_{B_{1}B_{2}}\ P_{B_{3}\phi}\ |M|^{2}_{n\pi\rightarrow\bar{\nu}\phi}K_{1}(\sqrt{s}/T)/\sqrt{s}, \tag{16}\]
where the \(s\) and \(T\) are the center of mass energy of the interaction and temperature, respectively. The \(K_{1}\) is the first modified Bessel Function of the 2nd kind, and
\[P_{ij}\equiv\frac{\left[s-\left(m_{i}+m_{j}\right)^{2}\right]^{1/2}\left[s- \left(m_{i}-m_{j}\right)^{2}\right]^{1/2}}{2\sqrt{s}}. \tag{17}\]
The angular integration over the squared amplitude for \(n\pi\rightarrow\bar{\nu}\phi\) interaction is as follows,
\[\int d\Omega\ |M|^{2}_{n\pi\rightarrow\bar{\nu}\phi}=4\pi\lambda^{2}\frac{(s-m _{\phi}^{2})(s+m_{n}^{2}-m_{\pi}^{2})}{2s}, \tag{18}\]
where \(\lambda^{2}=|\frac{\mu\ \beta\ y_{\rm s}^{LL}z_{\rm s}^{LL}g_{\rm s}^{2}}{m_{S_{1} }^{2}m_{S_{2}}^{2}}|^{2}\) is the effective coupling and \(g_{\rm s}\) is the strong coupling constant. If we use the yield definition, \(Y_{\phi}\equiv n_{\phi}/S\) where \(S\) is the entropy density, and consider \(\dot{T}=-HT\) the left part of Eq. 16 becomes,
\[\dot{n}_{\phi}+3n_{\phi}H=-SHT\frac{dY_{\phi}}{dT}, \tag{19}\]
where \(S=2\pi^{2}g_{*}^{S}T^{3}/45\), \(H=1.66\sqrt{g_{*}^{g}}T^{2}/M_{Pl}\), and \(M_{Pl}\) is the non-reduced Planck mass. The \(g_{*}^{S,\rho}\) are the effective numbers of degrees of freedom in the bath at the freeze-in temperature for the entropy and energy density, respectively. And finally, the variation of yield is given by,
\[\frac{dY_{\phi}}{dT}\approx\frac{-1}{SHT}\frac{4\pi\lambda^{2}T}{512\pi^{6}} \int_{(m_{n}+m_{\pi})^{2}}^{\infty}\frac{\sqrt{s-(m_{n}+m_{\pi})^{2}}}{2\sqrt {s}}\frac{s-m_{\phi}^{2}}{2\sqrt{s}}\frac{(s-m_{\phi}^{2})(s+m_{n}^{2}-m_{\pi }^{2})}{2s}\frac{K_{1}(\sqrt{s}/T)}{\sqrt{s}}\ ds. \tag{20}\]
By doing the temperature integral with \(T_{\rm min}=T_{\rm BBN}=1\) MeV and \(T_{\rm max}=\Lambda_{\rm QCD}=180\) MeV, we can obtain the yield of the DM at present (\(Y_{\phi}^{0}\)). In this temperature range, the \(g_{*}^{S,\rho}\) is 17.25. Then the DM relic density can be calculated by the following formula,
\[\Omega_{\phi}h^{2}=\frac{m_{\phi}Y_{\phi}^{0}S_{0}}{\rho_{c}/h^{2}}, \tag{21}\]
where \(S_{0}=2890/\text{cm}^{3}\) is the entropy density at the present time and \(\rho_{c}/h^{2}=1.05\times 10^{-5}\text{GeV}/\text{cm}^{3}\) that \(\rho_{c}\) is the critical density. As we mentioned before, the model parameters are constrained by the neutron lifetime anomaly (Eq. 14), so the value for \(\lambda^{2}=|\frac{\mu~{}\beta~{}y_{12}^{LL}x_{s}^{LL}g_{s}^{2}}{m_{S_{1}}^{2 }m_{S_{1}}^{2}}|^{2}\) is fixed. Therefore, the DM relic density in the model is given by,
\[\Omega_{\phi}h^{2}\approx 0.12(\frac{\lambda}{7.38\times 10^{-11}})^{2}, \tag{22}\]
which is consistent with the Planck collaboration report (\(\Omega_{\text{DM}}h^{2}=0.12\)) [45]. It is also noteworthy to mention that, we do a naive calculation here. For more accuracy, someone should consider the following points:
* Some other similar processes can contribute to the \(\phi\) relic density such as \(p\pi^{0}\rightarrow\phi\bar{\mu}\). However, their contributions to the DM abundance should be in the same order as the \(n\pi^{0}\rightarrow\phi\bar{\nu}\) process.
* As reviewed above, the DM yield is strongly dependent on the QCD confinement scale (\(T_{\text{max}}=\Lambda_{\text{QCD}}\)) and the value of the strong coupling constant (\(g_{s}\)). Since there is a wide range for \(\Lambda_{\text{QCD}}\) from 100 MeV to 1 GeV, the value of the \(Y_{\phi}^{0}\) is changing dramatically in this range.
* The effect of the pion structure should also be considered in the effective coupling mentioned above (\(\lambda\)). Therefore, an extra factor (for example, the pion form factor) should multiply in the \(\lambda\), which can reduce the value of the effective coupling by one or two orders of magnitude and change the DM relic density.
However, by considering all of the above-mentioned points, in the worst-case scenario, the \(\phi\) scalar can at least contribute to the 10% of the total DM abundance.
### Muon \(g-2\)
Another long-standing challenge in the particle physics is the anomalous magnetic moment of muon. The combined result from Brookhaven National Laboratory [19] and Fermi National Accelerator Laboratory [20; 21] for \(a_{\mu}=(g-2)_{\mu}/2\) has \(4.2\sigma\) deviation from its SM prediction [18],
\[\delta a_{\mu}=a_{\mu}^{\text{Exp}}-a_{\mu}^{\text{SM}}=(2.7\pm 0.8)\times 10^{-9}. \tag{23}\]
It is well-known that the scalar LQ can explain this anomaly [46; 47; 48; 49; 50; 31]. In our setup, the scalar \(S_{1}^{\alpha}\) can contribute to the magnetic moment of the muon and its relevant terms from the Lagrangian
Eq. 5 are as follows,
\[\mathcal{L}\supset\left(V^{T}y^{LL}\right)_{ij}\bar{u}_{L}^{Ci}S_{1}^{\alpha}e_{L }^{j}+y_{ij}^{RR}\bar{u}_{R}^{Ci}S_{1}^{\alpha}e_{R}^{j}+\text{h.c.}, \tag{24}\]
where the \(u^{i}\) is the up-type quark \((u,c,t)\) and \(e^{j}\) is the charged lepton. According to our economical ansatz (Eq. 6) and because of the large mass of the top quark the following terms involving the top quark and muon have important effects on the \(a_{\mu}\),
\[\mathcal{L}\supset y_{12}^{LL}V_{13}\bar{t}_{L}^{C}\mu_{L}S_{1}^{\alpha}+y_{3 2}^{RR}\bar{t}_{R}^{C}\mu_{R}S_{1}^{\alpha}+\text{h.c.}. \tag{25}\]
The contribution of the above terms to the anomalous magnetic moment of the muon is given by [46],
\[\delta a_{\mu}=-\frac{N_{c}m_{\mu}}{8\pi^{2}m_{S_{1}^{\alpha}}^{2}}[m_{\mu}(|y _{12}^{LL}V_{13}|^{2}+|y_{32}^{RR}|^{2})\mathcal{F}(x_{t})+m_{t}\operatorname {Re}[(y_{32}^{RR})^{*}(y_{12}^{LL}V_{13})]\mathcal{G}(x_{t})], \tag{26}\]
where \(m_{\mu}\) and \(m_{t}\) indicate the muon and top quark masses, respectively, \(x_{t}=m_{t}^{2}/m_{S_{1}^{\alpha}}^{2}\), and \(N_{c}=3\) is the number of the QCD colors. The definition of \(\mathcal{F}(x)\) and \(\mathcal{G}(x)\) functions are,
\[\mathcal{F}(x) =\tfrac{1}{3}f_{S}(x)-f_{F}(x,)\] \[\mathcal{G}(x) =\tfrac{1}{3}g_{S}(x)-g_{F}(x), \tag{27}\]
where
\[f_{S}(x)=\tfrac{x+1}{4(1-x)^{2}}+\tfrac{x\log x}{2(1-x)^{3}}, \qquad g_{S}(x)=\frac{1}{x-1}-\frac{\log x}{(x-1)^{2}},\] \[f_{F}(x)=\tfrac{x^{2}-5x-2}{12(x-1)^{3}}+\tfrac{x\log x}{2(x-1)^ {4}},\qquad g_{F}(x)=\frac{x-3}{2(x-1)^{2}}+\frac{\log x}{(x-1)^{3}}.\]
As we can see, the first term in Eq. 26 is suppressed by muon mass. The scalar LQ (\(S_{1}^{\alpha}\)) should have both left-handed and right-handed couplings to generate the second term, which is proportional to top quark mass. As a result of this chirality-enhanced effect and top quark mass, the significant contribution to the \(a_{\mu}\) is as follows [48],
\[\delta a_{\mu}\approx-\frac{N_{c}}{48\pi^{2}m_{S_{1}^{\alpha}}^{2}}m_{\mu}m_{ t}\operatorname{Re}\left[(y_{32}^{RR})^{*}(y_{12}^{LL}V_{13})\right]\left(7+4 \log\left(\frac{m_{t}^{2}}{m_{S_{1}^{\alpha}}^{2}}\right)\right). \tag{28}\]
It can be seen that the muon \((g-2)\) anomaly can be explained if the mass of the scalar LQ is 1.3 TeV and \(y_{12}^{LL}=0.8\) (inspired by constraints from the neutron lifetime anomaly) and \(y_{32}^{RR}=1.4\).
### \(R_{D^{(*)}}\) Anomaly
The semi-leptonic decays of B-mesons are sensitive to new physics. The BaBar [22; 23], Belle [24; 25; 26; 27], and LHCb [28; 29; 30] experiments have measured the \(R_{D}\) and \(R_{D^{*}}\) observables where they have shown that their result has a deviation from the SM prediction. Although the current uncertainties should be understood better, one can study the new physics effects on these anomalies. The definition of two anomalous observables are as follows,
\[R_{D} = \frac{\text{BR}(\text{B}\to\text{D}\tau\bar{\nu})}{\text{BR}( \text{B}\to\text{D}\ell\bar{\nu})},\] \[R_{D^{*}} = \frac{\text{BR}(\text{B}\to\text{D}^{*}\tau\bar{\nu})}{\text{BR}( \text{B}\to\text{D}^{*}\ell\bar{\nu})}, \tag{29}\]
where \(\ell=e,\mu\) for BaBar and Bell and \(\ell=\mu\) for LHCb. The experimental world averages reporting by Heavy Flavor Averaging Group are [51],
\[R_{D}^{\text{exp}} = 0.339\pm 0.026\pm 0.014,\] \[R_{D^{*}}^{\text{exp}} = 0.295\pm 0.010\pm 0.010. \tag{30}\]
While the SM predictions for these observables are [52],
\[R_{D}^{\text{SM}} = 0.299\pm 0.003,\] \[R_{D^{*}}^{\text{SM}} = 0.258\pm 0.005. \tag{31}\]
The combination of the experimental result for \(R_{D}\) and \(R_{D^{*}}\) has a deviation from the SM prediction by about \(4\sigma\). The effective Lagrangian for \(b\to c\tau\nu^{i}\) is as follows,
\[\mathcal{L}_{b\to c\tau\nu^{i}}^{\text{eff}}=-\frac{4G_{F}}{\sqrt{2}}V_{cb}C_{ cb}[(\bar{c}_{L}\gamma^{\mu}b_{L})(\bar{\tau}_{L}\gamma_{\mu}\nu_{L}^{i})]+\text{h.c.}, \tag{32}\]
where \(G_{F}\) is the Fermi constant and \(C_{cb}=1\) in the SM. The new physics can contribute in the above effective operator. To explain the \(R_{D^{(*)}}\) anomaly the effective coupling from the new physics should be \(C_{cb}^{\text{\tiny{BSM}}}=0.07\).
Figure 3: The Feynman diagram contributing to the \(b\to c\tau\nu^{i}\) process and the \(R_{D^{(*)}}\) Anomaly.
The LQs are good candidates in order to explain this anomaly. In literature, the different effects of LQs on the B-meson anomalies have been studied extensively [53; 54; 55; 56; 57; 48; 50; 53; 54; 55; 56; 57]. From the Lagrangian in Eq. 5, the terms relevant to the \(R_{D^{(*)}}\) anomaly are,
\[\mathcal{L}\supset-\left(y^{LL}U\right)_{ij}\bar{d}_{L}^{Ci}S_{1}^{\alpha}\nu_ {L}^{j}+\left(V^{T}y^{LL}\right)_{ij}\bar{u}_{L}^{Ci}S_{1}^{\alpha}e_{L}^{j}+ \text{h.c.}, \tag{33}\]
that the \(S_{1}^{\alpha}\) LQ can contribute to the \(b\to c\tau\nu^{i}\) process. The relevant Feynman diagram is shown in Fig. 3. According to the economical flavor ansatz (Eq. 6) and after integrating out the scalar LQ, the effective Lagrangian relevant for \(b\to c\tau\nu^{i}\) is given by [31],
\[\mathcal{L}^{\text{eff}}_{b\to c\tau\nu^{i}}=-\frac{4G_{F}}{\sqrt{2}}V_{cb}C^ {\text{\tiny{BSM}}}_{cb}[(\bar{c}_{L}\gamma^{\mu}b_{L})(\bar{\tau}_{L}\gamma_ {\mu}\nu_{L}^{i})]+\text{h.c.}, \tag{34}\]
where \(C^{\text{\tiny{BSM}}}_{cb}=\frac{v^{2}(y_{33}^{LL}U_{33})(V_{23}^{T}y_{33}^{LL} )^{*}}{4\ m_{S_{1}^{\alpha}}^{2}}\) that \(v\) is the SM Higgs expectation value. For the scalar LQ mass around 1.3 TeV and \(y_{33}^{LL}=2.8\), the \(R_{D^{(*)}}\) anomaly can be solved.
### Neutrino Mass
Due to the neutrino experiments, well-established that the neutrino has a small mass. One well-known explanation for tiny neutrino mass is the seesaw mechanism where the new heavy right-handed neutrino which is singlet under the SM gauge group is introduced [58]. According to the experimental upper limits on the SM neutrino mass, the mass of the right-handed neutrino should be around \(10^{14}\) GeV which is far beyond the boundaries of the current and future experiments. Another acceptable way to generate the tiny neutrino mass is considering the radiative correction where arise the small neutrino mass due to the loop suppression factor [59; 60; 61; 62].
In our model, by introducing a new right-handed neutrino, the tiny neutrino mass arises from a two-loop radiative correction. For this new right-handed neutrino, one can write an interaction
Figure 4: The two-loop Feynman diagram contribution to the neutrino mass.
term with the scalar LQ and a Majorana mass term where both respect all gauge symmetries. The contributing Lagrangian terms in the two-loop neutrino mass are given by,
\[\mathcal{L}\supset-\left(y^{LL}U\right)_{ij}\bar{d}_{L}^{Ci}S_{1}^{\alpha}\nu_{L }^{j}+y^{\overline{RR}}\bar{d}_{R}^{C}S_{1}^{\alpha}\nu_{R}+\ \text{h.c.}\, \tag{35}\]
where the \(\nu_{R}\) shows the right-handed neutrino, \(y^{\overline{RR}}\) is the arbitrary coupling, and \(M\) is the Majorana mass for the right-handed neutrino. The Feynman diagram for this two-loop contribution to the neutrino mass is shown in Fig. 4. After integrating-out the scalar LQ, the estimation for the neutrino masses is as follows [63],
\[m_{\nu^{i}}\sim\frac{(y_{12}^{LL}U_{2i})^{2}(y^{\overline{RR}})^{2}}{(16\pi^{2 })^{2}}\frac{m_{d}^{2}M^{3}}{m_{S_{1}^{\alpha}}^{4}}. \tag{36}\]
According to the values of LQ mass and coupling (\(y_{12}^{LL}\)) implies by the neutron lifetime anomaly (Eq. 14), the sum of the three neutrino mass becomes as follows,
\[\sum m_{\nu^{i}}\sim 0.13\ \text{[eV]}\ (\frac{y_{12}^{LL}\times y^{\overline{RR} }}{0.8\times 0.8})^{2}(\frac{M}{10\ \text{TeV}})^{3}(\frac{1.3\ \text{TeV}}{m_{S_{1}^{ \alpha}}})^{4}, \tag{37}\]
which is consistent with the current upper bound on the sum of the three active neutrino masses (\(\sum m_{\nu}<0.17\) [eV]) [64; 65]. In the above estimation, the Majorana mass of the right-handed neutrino is considered to be 10 TeV, and also \(y_{12}^{LL}=y^{\overline{RR}}\) is assumed.
## IV Summary
In this paper, we introduce a new portal between the standard model (SM) and the dark sectors by scalar leptoquarks (LQ) to resolve some long-standing anomalies simultaneously. The SM predicts the branching ratio of the neutron decay to proton, electron, and anti-electron-neutrino is 100%, however, there is an anomaly in the neutron decay width measurements. In the bottle experiments, where the number of the remaining neutrons is counted, the measured neutron lifetime is shorter than the one measured in beam experiments, where the number of the produced protons is counted. This anomaly can be solved, if the neutron decays to invisible (for example, particles in the dark sector) with a branching ratio around 1%. We suggest that the neutron decays into a dark scalar (\(\phi\)) and an SM anti-neutrino by these scalar LQ mediators. The dark scalar is singlet under the SM gauge symmetries but it carries the baryon and lepton numbers since there are severe constraints on the baryon and lepton numbers violation processes. The mass of the \(\phi\) should be in the narrow range between 937.9 and 939.565 to satisfy all the current bounds.
The \(\phi\) with the aforementioned properties can be a good dark matter (DM) candidate. We showed that the freeze-in mechanism can produce the dark scalar in the early universe and its
relic abundance is compatible with the DM relic density measured by the Planck collaboration. Furthermore, we discussed that this model in good parameter space region can explain other SM observational anomalies simultaneously. For instance, the anomalous magnetic moment of the muon, the \(R_{D^{(*)}}\) anomaly, and the tiny neutrino mass can be explained through our model at the same time.
###### Acknowledgements.
We gratefully thank Fatemeh Elahi for the fruitful discussion and her comments that greatly improved the manuscript. Also, we are thankful to the CERN theory division for their hospitality.
|
2307.00240 | VesselMorph: Domain-Generalized Retinal Vessel Segmentation via
Shape-Aware Representation | Due to the absence of a single standardized imaging protocol, domain shift
between data acquired from different sites is an inherent property of medical
images and has become a major obstacle for large-scale deployment of
learning-based algorithms. For retinal vessel images, domain shift usually
presents as the variation of intensity, contrast and resolution, while the
basic tubular shape of vessels remains unaffected. Thus, taking advantage of
such domain-invariant morphological features can greatly improve the
generalizability of deep models. In this study, we propose a method named
VesselMorph which generalizes the 2D retinal vessel segmentation task by
synthesizing a shape-aware representation. Inspired by the traditional Frangi
filter and the diffusion tensor imaging literature, we introduce a
Hessian-based bipolar tensor field to depict the morphology of the vessels so
that the shape information is taken into account. We map the intensity image
and the tensor field to a latent space for feature extraction. Then we fuse the
two latent representations via a weight-balancing trick and feed the result to
a segmentation network. We evaluate on six public datasets of fundus and OCT
angiography images from diverse patient populations. VesselMorph achieves
superior generalization performance compared with competing methods in
different domain shift scenarios. | Dewei Hu, Hao Li, Han Liu, Xing Yao, Jiacheng Wang, Ipek Oguz | 2023-07-01T06:02:22Z | http://arxiv.org/abs/2307.00240v2 | # VesselMorph: Domain-Generalized Retinal Vessel Segmentation via Shape-Aware Representation
###### Abstract
Due to the absence of a single standardized imaging protocol, domain shift between data acquired from different sites is an inherent property of medical images and has become a major obstacle for large-scale deployment of learning-based algorithms. For retinal vessel images, domain shift usually presents as the variation of intensity, contrast and resolution, while the basic tubular shape of vessels remains unaffected. Thus, taking advantage of such domain-invariant morphological features can greatly improve the generalizability of deep models. In this study, we propose a method named _VesselMorph_ which generalizes the 2D retinal vessel segmentation task by synthesizing a shape-aware representation. Inspired by the traditional Frangi filter and the diffusion tensor imaging literature, we introduce a Hessian-based bipolar tensor field to depict the morphology of the vessels so that the shape information is taken into account. We map the intensity image and the tensor field to a latent space for feature extraction. Then we fuse the two latent representations via a weight-balancing trick and feed the result to a segmentation network. We evaluate on six public datasets of fundus and OCT angiography images from diverse patient populations. VesselMorph achieves superior generalization performance compared with competing methods in different domain shift scenarios.
Keywords:domain generalization vessel segmentation tensor field shape representation retina
## 1 Introduction
Medical images suffer from the distribution shift caused by the discrepancy in imaging acquisition protocols. Images can appear in different contrast, resolution and range of intensity values, even within the same modality. A set of examples is shown in Fig. 1. This obstacle severely impedes the learning-based algorithms reaching clinical adoption. Therefore, much effort has been spent on solving the domain generalization (DG) problem so that the deep models can robustly work on out-of-distribution (OOD) data. There are three major types of solutions: data augmentation [23, 18], meta-learning [6, 14] and domain alignment [24]. The first two strategies aim to improve the model's generalizability by either augmenting the source domain with additional data or replicating the exposure to
OOD data during training. In contrast, the domain alignment strives to align the distribution of the target domains in either image [8] or feature space [1, 15].
We propose a novel method, _VesselMorph_, to improve the DG performance by providing an explicit description of the domain-agnostic shape features as auxiliary training material. Even though traditional algorithms are outperformed by their learning-based counterparts in many aspects, they can typically better generalize to any dataset, regardless of distribution shifts. Specifically for vessel segmentation, Frangi et al. [7] proposed a Hessian-based model to express the tubular shape of vessels which can be regarded as a domain-invariant feature. Merging the Hessian-based shape description [12] with the principles of diffusion tensor imaging (DTI) [13], we introduce a bipolar tensor field (BTF) to explicitly represent the vessel shape by a tensor at each pixel. To effectively merge the features in the intensity image and the shape descriptor BTF, we employ a full-resolution feature extraction network to obtain an interpretable representation in the latent space from both inputs. This technique is broadly used in unsupervised segmentation [17, 11] and representation disentanglement [3, 21].
As shown in Fig. 2, let \(\mathbf{x}\) be the input image and \(\Psi(\mathbf{x})\) the corresponding BTF. \(D(E^{I}(\cdot))\) and \(D(E^{S}(\cdot))\) are two feature extraction networks with a shared decoder \(D\). We empirically observe that the intensity representation \(\mathbf{z}^{I}\) can precisely delineate thinner vessels while the structure representation \(\mathbf{z}^{S}\) works better on thick ones. We combine the strengths of the two pathways for a robust DG performance. The two latent images are fused by a weight-balancing trick \(\Gamma(\mathbf{z}^{I},\mathbf{z}^{S})\) to avoid any potential bias induced by the selection of source domains. Finally, we train a segmentation network \(D^{T}\) on the fused latent images. We compare the performance of VesselMorph to other DG approaches on four public datasets that represent various distribution shift conditions, and show that VesselMorph has superior performance in most OOD domains. Our contributions are:
* A Hessian-based bipolar tensor field (BTF) that provides an explicit description of the vessel morphology (Sec. 2.1).
* A full-resolution feature extraction network that generates vessel representation from both the intensity image and the BTF (Sec. 2.2).
* A training pipeline that generates stable latent images for both pathways and a weight-balancing method to fuse the two representations (Sec. 2.3).
Figure 1: Domain shift among retinal vessel images. Panels (1-3) are the green channel of fundus images. Panels (4-5) are OCT-A images. They all have the same size (\(300\text{pix}\times 300\text{pix}\)). Suppose (1) represents the source domain. Distribution shifts in test domain can be caused by (2) pathology, (3) resolution change, and (4, 5) different modality.
* A comprehensive evaluation on public datasets which shows superior cross-resolution and cross-modality generalization performance (Sec. 3.2).
## 2 Methods
### Bipolar Tensor Field
Unlike ML models, our visual interpretation of vessels is rarely affected by data distribution shifts. Mimicking the human vessel recognition can thus help address the DG problem. In addition to intensity values, human perception of vessels also depends on the local contrast and the correlation in a neighborhood, which is often well described by the local Hessian. Inspired by the use of DTI to depict the white matter tracts, we create a Hessian-based bipolar tensor field to represent the morphology of vessels. Given a 2D input image \(\mathbf{x}\in\mathbb{R}^{h\times w}\) and scale \(\sigma\), the classical Frangi vesselness \(\mathcal{V}(\sigma)\)[7] is defined as:
\[\mathcal{V}(\sigma)=\begin{cases}0&\text{if }\lambda_{2}>0,\\ \exp\left(-\frac{\mathcal{R}_{B}^{2}}{2\beta^{2}}\right)\left[1-\exp\left(- \frac{S^{2}}{2c^{2}}\right)\right]&\text{else}\end{cases}. \tag{1}\]
Here, \(\lambda_{1},\lambda_{2}\) are the sorted eigenvalues of the Hessian \(\mathcal{H}\), \(\mathcal{R}_{B}=\lambda_{1}/\lambda_{2}\), \(S\) is the Frobenius norm of the Hessian (\(\|\mathcal{H}\|_{F}\)), \(\beta=0.5\) and \(c=0.5\). Note that we assume vessels are brighter than the background; fundus images are negated to comply. To represent vessels of different sizes, we leverage the multiscale vesselness filter that uses the optimal scale \(\sigma^{*}\) for the Hessian \(\mathcal{H}(\mathbf{x}_{ij},\sigma)\) at each pixel \((i,j)\). This is achieved by grid search in the range \([\sigma_{min},\sigma_{max}]\) to maximize the vesselness \(\mathcal{V}(\sigma)\), i.e., \(\sigma^{*}=\operatorname*{argmax}_{\sigma_{min}\leq\sigma\leq\sigma_{max}} \mathcal{V}(\sigma)\). Then the optimized Hessian is represented by a \(2\times 2\) matrix:
\[\mathcal{H}(\mathbf{x}_{ij},\sigma^{*})=(\sigma^{*})^{2}\mathbf{x}_{ij}* \nabla^{2}G(\mathbf{x}_{ij},\sigma^{*}) \tag{2}\]
Figure 2: The overall structure of VesselMorph. The shaded layers include transformer blocks. The dashed line indicates \(D\) will be discarded in testing.
where \(G(\mathbf{x}_{ij},\sigma^{*})\) is a 2D Gaussian kernel with standard deviation \(\sigma^{*}\). Then we apply the eigen decomposition to obtain the eigenvalues \(\lambda_{1}\), \(\lambda_{2}\) (\(|\lambda_{1}|\leq|\lambda_{2}|\)) and the corresponding eigenvectors \(\mathbf{v}_{1}\), \(\mathbf{v}_{2}\) at the optimal \(\sigma^{*}\).
Instead of solely analyzing the signs and magnitudes of the Hessian eigenvalues as in the traditional Frangi filter, we propose to leverage the eigenvectors along with custom-designed magnitudes to create our tensor field as shown in Fig. 3(Left). The core idea of Frangi filter is to enhance the tubular structure by matching the vessel diameter with the distance between the two zero crossings in the second order derivative of Gaussian ( \(2\sqrt{2}\sigma^{*}\)). However, the solution is not guaranteed to land in range \([\sigma_{min},\sigma_{max}]\), especially for small vessels. Consequently, we observe that the inaccurate estimation of \(\sigma^{*}\) results in a blurring effect at the vessel boundary, which is problematic for segmentation. As an example in Fig. 3(Left), the direction of \(\mathbf{v}_{1}\) at \(p_{2}\) aligns with that at \(p_{1}\), even though \(p_{1}\) is inside the vessel while \(p_{2}\) is in the background but close to the boundary. This makes it difficult for the vector orientations alone to differentiate points inside and outside the vessel. To tackle this, we introduce the idea of a bipolar tensor by assigning a large magnitude to the orthogonal eigenvector \(\mathbf{v}_{2}\) to points in the background, as shown in the blue dashed ellipse. Specifically, we define the magnitudes \(\alpha_{1}\) and \(\alpha_{2}\) associated with the eigenvectors \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\) as:
\[\alpha_{1}=\underbrace{P(\mathbf{x}\leq\mathbf{x}_{ij})\text{exp}\left(- \epsilon\frac{\lambda_{1}^{2}}{\|\mathcal{H}\|_{F}^{2}}\right)}_{\text{vessel-like}} \text{,}\quad\alpha_{2}=\underbrace{P(\mathbf{x}>\mathbf{x}_{ij})\text{exp} \left(-\epsilon\frac{\lambda_{2}^{2}}{\|\mathcal{H}\|_{F}^{2}}\right)}_{\text{ dark}} \tag{3}\]
where \(P(\mathbf{x}>\mathbf{x}_{ij})\) is the probability that the intensity of a random pixel \(x\) in the image is greater than \(\mathbf{x}_{ij}\). This is equivalent to normalizing the histogram by the factor \(hw\) and computing the cumulative distribution function at \(\mathbf{x}_{ij}\). This term thus provides a normalized brightness function in the range \([0,1]\). The exponential term represents how vessel-like the voxel is by using a normalized eigenvalue, and is in the \([0,1]\) range as well. \(\epsilon\) is a constant that controls the sensitivity, which is empirically set to \(0.5\). With the custom magnitudes \(\alpha_{1}\) and \(\alpha_{2}\), the two poles
Figure 3: **Left:** A simplified illustration of BTF. The red arrows indicate the orientation of \(\mathbf{v}_{1}\) while the blue arrows correspond to \(\mathbf{v}_{2}\). The ellipses represent the tensors at \(p_{1}\) (in the vessel) and \(p_{2}\) (in the background). **Right:** BTF applied on an OCTA image.
can better differentiate vessels from the background. Fig. 3(Right) is an example of BTF on an OCTA image. In practice, we stack the two vectors as the input to the structural encoding network, i.e., \(\Psi(\mathbf{x}_{ij})=\left[\alpha_{1}\mathbf{v}_{1}^{\top},\alpha_{2}\mathbf{v }_{2}^{\top}\right]^{\top}\in\mathbb{R}^{4\times 1}\).
### Latent Vessel Representation
Preserving the spatial resolution for the bottleneck of models with U-Net backbone is a common strategy to emphasize the structural features in unsupervised segmentation [17, 11] and representation disentanglement [3, 21]. We employ a network that has a full-resolution (\(h\times w\) pixels) latent space as the feature extraction model. We propose to extract vessel structure from both the intensity image \(\mathbf{x}\in\mathbb{R}^{h\times w}\) and its corresponding BTF, \(\Psi(\mathbf{x})\in\mathbb{R}^{4\times h\times w}\). Therefore, in Fig. 2, the intensity \(D(E^{I}(\cdot))\) and structure \(D(E^{S}(\cdot))\) encoding pathways share the decoder D, and the latent images \(\mathbf{z}^{I},\mathbf{z}^{S}\in\mathbb{R}^{h\times w}\). To distribute more workload on the encoder, \(D\) has a shallower architecture and will be discarded in testing. For the intensity encoding, the model is optimized by minimizing the segmentation loss function defined as the combination of cross-entropy and Dice loss:
\[\mathcal{L}_{seg}=-\frac{1}{N}\sum_{n=1}^{N}\mathbf{y}_{n}\log\hat{\mathbf{y}} _{n}^{I}+\left(1-\frac{2\sum_{n=1}^{N}\mathbf{y}_{n}\hat{\mathbf{y}}_{n}^{I}}{ \sum_{n=1}^{N}\mathbf{y}_{n}^{2}+(\hat{\mathbf{y}}_{n}^{I})^{2}}\right) \tag{4}\]
where \(N=h\times w\) is the total number of pixels in the image, \(\mathbf{y}\) is the ground truth and \(\hat{\mathbf{y}}^{I}\) is the prediction from the training-only decoder \(D\). Although there is no explicit constraint on the latent image \(E^{I}(\mathbf{x})=\mathbf{z}^{I}\), we note that the segmentation-based supervision encourages it to include the vessels while most other irrelevant features are filtered out. Hence, we can view the latent feature as a vessel representation.
Our approach is slightly different for the structure encoding as we observe that it is hard for the feature extraction network to generate a stable latent image that is free of artifacts when the number of input channels is greater than 1. Thus, it is necessary to use \(E^{I}\) as a teacher model that provides direct supervision on the vessel representation. In other words, we first train the intensity encoding path to get \(E^{I}\) and \(D\), then train the \(E^{S}\) by leveraging both the segmentation loss in Eq. 4 and a similarity loss defined as:
\[\mathcal{L}_{sim}(\mathbf{z}^{S},\mathbf{z}^{I})=\sum_{n=1}^{N}\|\mathbf{z}^ {S}_{n}-\mathbf{z}^{I}_{n}\|_{1}+\text{SSIM}(\mathbf{z}^{S},\mathbf{z}^{I}) \tag{5}\]
which is a weighted sum of \(L_{1}\) norm and structural similarity loss SSIM [10]. SSIM is defined as \(\text{SSIM}(A,B)=2\frac{(2\mu_{A}\mu_{B}+c_{1})(2\sigma_{AB}+c_{2})}{(\mu_{A} ^{2}+\mu_{B}^{2}+c_{1})(\sigma_{A}^{2}+\sigma_{B}^{2}+c_{2})}\), where \(\mu\) and \(\sigma\) represent the mean and standard deviation of the image, and we set \(c_{1}=0.01\) and \(c_{2}=0.03\). The overall loss function for the structural encoding is thus \(\mathcal{L}(\Psi(\mathbf{x}),\mathbf{y})=\omega_{1}\mathcal{L}_{seg}(\hat{ \mathbf{y}}^{S},\mathbf{y})+\omega_{2}\mathcal{L}_{sim}(\mathbf{z}^{S},\mathbf{ z}^{I})\), with empirically determined weights \(\omega_{1}=1\), \(\omega_{2}=5\). Experimentally, we found that the \(\mathbf{z}^{I}\) is good at preserving small vessels, while \(\mathbf{z}^{S}\) works better on larger ones.
### Fusion of Vessel Representations
Given the two synthesized vessel representations \(\mathbf{z}^{I}\) and \(\mathbf{z}^{S}\), we need to introduce a fusion method to take advantage of both intensity and structure features. Naively stacking these two channels as input to the segmentation network is prone to inducing bias: if \(\mathbf{z}^{I}\) is consistently better for images from the source domain, then the downstream task model \(D^{T}\) would learn to downplay the contribution of \(\mathbf{z}^{S}\) due to this biased training data. As a result, despite its potential to improve performance, \(\mathbf{z}^{S}\) would be hindered from making a significant contribution to the target domain during testing. To circumvent this issue, we propose a simple weight-balancing trick. As illustrated in Fig. 2, we randomly swap some patches between the two latent images so that \(D^{T}\) does not exclusively consider the feature from a single channel, even for biased training data. This trick is feasible because \(\mathbf{z}^{S}\) and \(\mathbf{z}^{I}\) are in the same intensity range, due to the similarity constraints applied in Eq. 5. Thus the input to \(D^{T}\) is \(\tilde{\mathbf{x}}=\Gamma(\mathbf{z}^{I},\mathbf{z}^{S})\), where \(\tilde{\mathbf{x}}\in\mathbb{R}^{2\times h\times w}\). The loss function leveraged for \(D^{T}\) is the same as Eq. 4.
```
input : Source domains \(\mathcal{S}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{K}\), hyperparameters: \(\epsilon,\sigma_{min},\sigma_{max}\), \(\eta_{E^{I}}\), \(\eta_{E^{S}}\), \(\eta_{D^{T}}\)\(\omega_{1}\), \(\omega_{2}\) output : parameters of models \(\theta_{I}^{*}\), \(\theta_{S}^{*}\), \(\varphi_{T}^{*}\) // Train the intensity encoder \(E^{I}\) as a teacher model
1repeat for\(i=1:K\)do
2\(\theta_{I}^{{}^{\prime}}\leftarrow\theta_{I}-\eta_{E^{I}}(i)\nabla\mathcal{L}_ {seg}(D(E^{I}(\mathbf{x}_{i})),\mathbf{y}_{i})\) until converge
3 Generate the tensor field: \(\Psi(\mathbf{x})\) // Train the structure encoder \(E^{S}\) as a student model
4repeat for\(i=1:K\)do
5\(\hat{\mathbf{y}}_{i}\gets D(E^{S}(\Psi(\mathbf{x}_{i})))\) \(\mathcal{L}(\Psi(\mathbf{x}_{i}),\mathbf{y}_{i})\leftarrow\omega_{1}\mathcal{ L}_{seg}(\hat{\mathbf{y}}_{i},\mathbf{y}_{i})+\omega_{2}\mathcal{L}_{sim}(E^{S}( \Psi(\mathbf{x}_{i})),E^{I}(\mathbf{x}_{i}))\) \(\theta_{S}^{{}^{\prime}}\leftarrow\theta_{S}-\eta_{E^{S}}(i)\nabla\mathcal{L} (\Psi(\mathbf{x}_{i}),\mathbf{y}_{i})\) until converge // Train the segmentation network \(D^{T}\) repeat for\(i=1:K\)do
6\(\tilde{\mathbf{x}}_{i}\leftarrow\Gamma(E^{I}(\mathbf{x}_{i}),E^{S}(\Phi( \mathbf{x}_{i})))\) \(\varphi_{T}^{{}^{\prime}}\leftarrow\varphi_{T}-\eta_{D^{T}}(i)\nabla\mathcal{L }_{seg}(D^{T}(\tilde{\mathbf{x}}_{i}),\mathbf{y}_{i})\) until converge
```
The complete algorithm for training of VesselMorph is shown in Algorithm.1. Briefly, we first train the intensity encoder \(E^{I}\) as it is easier to generate a stable vessel representation \(\mathbf{z}^{I}\). Then a structure encoder \(E^{S}\) is trained with the supervision of the ground truth and teacher model \(E^{I}\) so that an auxiliary rep
resentation \(\mathbf{z}^{S}\) is extracted from the structural descriptor BTF. The last step is to train a segmentation network \(D^{T}\) with the fusion of the two vessel maps \(\Gamma(\mathbf{z}^{I},\mathbf{z}^{S})\). During testing, the patch-swapping is no longer needed, so we simply concatenate \(E^{I}(\mathbf{x})\) and \(E^{S}(\Psi(\mathbf{x}))\) as the input to \(D^{T}\).
## 3 Experiments
### Experimental Settings
**Datasets.** The 6 publicly available datasets used in this study are listed in Table 1. Since there are more labeled fundus data available, we set up a source domain \(\mathcal{S}\) that includes three fundus datasets: DRIVE, STARE and the control subjects in ARIA. In the target domain \(\mathcal{T}\), we test the performance of the model under three different conditions: pathology (diabetic/AMD subjects in ARIA), resolution change (HRF) and cross-modality (OCTA500 and ROSE).
**Compared methods.** We pick one representative algorithm from each of the three major categories of DG approaches (Sec. 1) as a competing method. For data augmentation, we implement BigAug [23]. For meta-learning, we use the MASF [4] model. For domain alignment, we use the domain regularization network [1]. In addition, we also include VFT [12] which proposes the idea of shape description for DG. The baseline model is a vanilla residual U-Net trained on \(\mathcal{S}\), and the oracle model is the same network trained directly on each target domain to represent the optimal performance. Note that for a fair comparison, we set the baseline model to have a bit more parameters than \(D(E^{I}(\cdot))\) (\(7.4\times 10^{5}:6.7\times 10^{5}\)).
**Implementation Details.** We use the residual U-Net structure for \(E^{I}\), \(D\) and \(\overline{D^{T}}\). To take advantage of the tensor field, the structure encoder \(E^{S}\) is equipped with parallel transformer blocks with different window sizes as proposed in [12]. All networks are trained and tested on an NVIDIA RTX 2080TI 11GB GPU. We use a batch size of 5 and train for 100 epochs. We use the Adam optimizer with the initial learning rate \(\eta_{E^{I}}=\eta_{E^{S}}=5\times 10^{-4}\), \(\eta_{D^{T}}=1\times 10^{-3}\), decayed by 0.5 for every 3 epochs. For fundus images, we use the green channel as network input \(\mathbf{x}\). The intensity values are normalized to \([0,1]\).
\begin{table}
\begin{tabular}{l c c c c} \hline & **modality** & **resolution** & **\# sample** & **domain** \\ \hline DRIVE [22] & fundus & \(565\times 584\) & 20 & \(\mathcal{S}\) \\ STARE [9] & fundus & \(700\times 605\) & 20 & \(\mathcal{S}\) \\ ARIA [5] & fundus & \(768\times 576\) & \(61/59/23\) & \(\mathcal{S}/\mathcal{T}/\mathcal{T}\) \\ HRF [2] & fundus & \(3504\times 2336\) & \(15/15/15\) & \(\mathcal{T}\) \\ OCTA-500(6M) [16] & OCTA & \(400\times 400\) & 300 & \(\mathcal{T}\) \\ ROSE [19] & OCTA & \(304\times 304\) & 30 & \(\mathcal{T}\) \\ \hline \end{tabular}
\end{table}
Table 1: Publicly available datasets used in our experiments. For ARIA and HRF, we list the number of samples per class. ARIA classes: healthy, diabetic and AMD (age-related macular degeneration). HRF classes: healthy, diabetic and glaucoma. The shading of the rows indicates datasets in similar distributions to each other.
### Results
Fig. 4 shows a qualitative ablation study: it illustrates that the intensity representation \(\mathbf{z}^{I}\) may miss large vessels in the very high-resolution HRF images, while \(\mathbf{z}^{S}\) remains robust. In contrast, \(\mathbf{z}^{I}\) provides sharper delineation for very thin vessels in ROSE. The fusion of both pathways outperforms either pathway for most scenarios. These observations are further supported by the quantitative ablation study in Fig.6. We note that \(\mathbf{z}^{S}\) and \(\mathbf{z}^{I}\) can be used as synthetic angiograms that provide both enhanced vessel visualization and model interpretability.
Fig. 5 shows the t-SNE plots [20] of the datasets. The distribution gaps between datasets are greatly reduced for the two latent vessel representations.
Table 2 compares all methods on the target domain \(\mathcal{T}\). For the diseased ARIA data, all methods show comparable performance and are not significantly different from the baseline. VesselMorph has the best OOD outcome for both cross-modality (dark gray) and cross-resolution (light gray) scenarios, except the
Figure 4: Qualitative ablation. The shown patches are \(1000\times 1000\)pix for HRF diabetic image and \(200\times 200\)pix for ROSE. **Top row:** raw image, \(\mathbf{z}^{I}\) and \(\mathbf{z}^{S}\). **Bottom row:** the VesselMorph segmentation and prediction from each pathway, i.e., \(D^{T}(\Gamma(\mathbf{z}^{I},\mathbf{z}^{S}))\), \(D(\mathbf{z}^{I})\), and \(D(\mathbf{z}^{S})\). **Red** and **green** indicate the false negative (FN) and false positive (FP), respectively. \(\mathbf{z}^{I}\) may miss large vessels, while \(\mathbf{z}^{S}\) may miss thin ones. The fusion provides robust performance, as can also be seen quantitatively in Supp. Fig. 1.
Figure 5: t-SNE on raw data \(\mathbf{x}\)(left), \(\mathbf{z}^{I}\)(center) and \(\mathbf{z}^{S}\)(right). \(\mathcal{S}\) is coded by shades of green, while fundus and OCTA in \(\mathcal{T}\) are coded by red and blue shades respectively. Both intensity and structure representations reduce the domain gaps between datasets.
OCTA500 dataset where VFT, MASF and VesselMorph perform similarly. The results of VFT and VesselMorph prove the value of the shape information.
## 4 Conclusion
In this work, we propose to solve the DG problem by explicitly modeling the domain-agnostic tubular vessel shape with a bipolar tensor field which connects traditional algorithms with deep learning. We extract vessel representation from both intensity and BTF, then fuse the information from the two pathways so that the segmentation network can better exploit both types of description. Our VesselMorph model provides significant quantitative improvement on Dice score across a variety of domain shift conditions, and its latent images offer enhanced vessel visualization and interpretability.
**Acknowledgements.** This work is supported by the NIH grant R01EY033969 and the Vanderbilt University Discovery Grant Program.
\begin{table}
\begin{tabular}{c c c c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{ARIA} & \multicolumn{2}{c|}{HRF} & \multirow{2}{*}{OCTA 500} & \multirow{2}{*}{ROSE} \\ & amd & diabetic & healthy & diabetic & glaucoma & \\ \hline _baseline_ & 0.6382 & 0.6519 & 0.6406 & 0.5267 & 0.5566 & 0.7316 & 0.6741 \\ \hline *Regular & 0.6489 & 0.6697 & 0.6403 & 0.5216 & 0.5625 & 0.7354 & 0.6836 \\ *BigAug & 0.6555 & 0.6727 & 0.6613 & 0.5389 & 0.5735 & 0.7688 & 0.6932 \\ *MASF & 0.6533 & 0.6775 & 0.6131 & 0.5358 & 0.5629 & 0.7765 & 0.6725 \\ VFT & 0.6181 & 0.6405 & 0.7058 & 0.5732 & 0.6410 & **0.7791** & 0.7281 \\ VesselMorph & **0.6619\({}^{\sim}\)** & **0.6787\({}^{\sim}\)** & **0.7420\({}^{\dagger}\)** & **0.6145\({}^{\dagger}\)** & **0.6756\({}^{\dagger}\)** & 0.7714\({}^{\dagger}\) & **0.7308\({}^{\dagger}\)** \\ \hline _oracle_ & 0.7334 & 0.7065 & 0.8358 & 0.7524 & 0.7732 & 0.8657 & 0.7603 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Dice values for testing on target domains. **Boldface**: best result. Underline: second best result. \(\widetilde{\phantom{0}}\) : p-value \(\geq\) 0.05, \({}^{\dagger}\) : p-value \(\ll\) 0.05 in paired t-test against the baseline output. The background is color-coded the same way as Table 1.
Figure 6: Quantitative ablation results. Dice scores of vanilla residual U-Net, intensity encoding \(D(\mathbf{z}^{I})\), structural encoding \(D(\mathbf{z}^{S})\) and the final output \(D^{T}(\Gamma(\mathbf{z}^{I},\mathbf{z}^{S}))\).The background is encoded the same way as Table 1. We note that the \(\mathbf{z}^{S}\) is especially useful in capturing the thick vessels in HRF, whereas \(\mathbf{z}^{I}\) provides additional precision in the thin vessels in the OCTA datasets. The proposed model combines these advantages and is robust across the board. |
2306.01263 | Adaptive Robotic Information Gathering via Non-Stationary Gaussian
Processes | Robotic Information Gathering (RIG) is a foundational research topic that
answers how a robot (team) collects informative data to efficiently build an
accurate model of an unknown target function under robot embodiment
constraints. RIG has many applications, including but not limited to autonomous
exploration and mapping, 3D reconstruction or inspection, search and rescue,
and environmental monitoring. A RIG system relies on a probabilistic model's
prediction uncertainty to identify critical areas for informative data
collection. Gaussian Processes (GPs) with stationary kernels have been widely
adopted for spatial modeling. However, real-world spatial data is typically
non-stationary -- different locations do not have the same degree of
variability. As a result, the prediction uncertainty does not accurately reveal
prediction error, limiting the success of RIG algorithms. We propose a family
of non-stationary kernels named Attentive Kernel (AK), which is simple, robust,
and can extend any existing kernel to a non-stationary one. We evaluate the new
kernel in elevation mapping tasks, where AK provides better accuracy and
uncertainty quantification over the commonly used stationary kernels and the
leading non-stationary kernels. The improved uncertainty quantification guides
the downstream informative planner to collect more valuable data around the
high-error area, further increasing prediction accuracy. A field experiment
demonstrates that the proposed method can guide an Autonomous Surface Vehicle
(ASV) to prioritize data collection in locations with significant spatial
variations, enabling the model to characterize salient environmental features. | Weizhe Chen, Roni Khardon, Lantao Liu | 2023-06-02T04:15:28Z | http://arxiv.org/abs/2306.01263v3 | # Adaptive Robotic Information Gathering via Non-Stationary Gaussian Processes
###### Abstract
Robotic Information Gathering (RIG) is a foundational research topic that answers how a robot (team) collects informative data to efficiently build an accurate model of an unknown target function under robot embodiment constraints. RIG has many applications, including but not limited to autonomous exploration and mapping, 3D reconstruction or inspection, search and rescue, and environmental monitoring. A RIG system relies on a probabilistic model's prediction uncertainty to identify critical areas for informative data collection. Gaussian Processes (GPs) with _stationary_ kernels have been widely adopted for spatial modeling. However, real-world spatial data is typically _non-stationary_ - different locations do not have the same degree of variability. As a result, the prediction uncertainty does not accurately reveal prediction error, limiting the success of RIG algorithms. We propose a family of non-stationary kernels named Attentive Kernel (AK), which is simple, robust, and can extend any existing kernel to a non-stationary one. We evaluate the new kernel in elevation mapping tasks, where AK provides better accuracy and uncertainty quantification over the commonly used stationary kernels and the leading non-stationary kernels. The improved uncertainty quantification guides the downstream informative planner to collect more valuable data around the high-error area, further increasing prediction accuracy. A field experiment demonstrates that the proposed method can guide an Autonomous Surface Vehicle (ASV) to prioritize data collection in locations with significant spatial variations, enabling the model to characterize salient environmental features.
Robotic Information Gathering, Informative Planning, Non-Stationary Gaussian Processes, Attentive Kernel 2017/01/17 +v1.20
## 1 Introduction
Collecting informative data for effective modeling of an unknown physical process or phenomenon has been studied in different domains, _e.g._, Optimal Experimental Design in Statistics (Atkinson, 1996), Optimal Sensor Placement in Wireless Sensor Networks (Krause et al., 2008), Active Learning (Settles, 2012) and Bayesian Optimization (Snoek et al., 2012) in Machine Learning.
In Robotics, this problem falls within the spectrum of _Robotic Information Gathering (RIG)_(Thrun, 2002). RIG has recently received increasing attention due to its wide applicability. Applications include environmental modeling and monitoring (Dunbabin and Marques, 2012), 3D reconstruction and inspection (Hollinger et al., 2013; Schmid et al., 2020), search and rescue (Merea et al., 2019), exploration and mapping (Jadidi et al., 2019), as well as active System Identification (Buisson-Fenet et al., 2020).
A RIG system typically relies on a probabilistic model's prediction uncertainty to identify critical areas for informative data collection. Figure 1 illustrates the workflow of a RIG system, which shows three major forces that drive the progress of RIG: probabilistic models, objective functions, and informative planners.
The defining element distinguishing other _active information acquisition_ problems and RIG is the robot embodiment's physical constraints (Taylor et al., 2021). In Active Learning (Biyik et al., 2020) or Optimal Sensor Placement (Krause et al., 2008), an agent can sample arbitrary data in a given space. In RIG, however, a robot must collect data sequentially along the motion trajectories. Consequently, most existing work in RIG is dedicated to a sequential decision-making problem called _Informative (Path) Planning_(Binney et al., 2013; Hollinger and Sukhatme, 2014; Lim et al., 2016;
Figure 1: Diagram of A Robotic Information Gathering System. The goal is to autonomously gather informative elevation measurements of Mount St. Helens to efficiently build a terrain map unknown _a priori_. The color indicates elevation, and black dots are collected samples.
Choudhury et al., 2018; Jadidi et al., 2019; Best et al., 2019). Specifically, Informative Planning seeks an action sequence or a policy by optimizing an objective function that guides the robot to collect informative data, aiming to efficiently build an accurate model of the process under the robot's motion and sensing cost constraints (Chen and Liu, 2019; Popovic et al., 2020). The decisive objective function is derived from the uncertainty of probabilistic models such as Gaussian processes (GPs) (Ghaffari Jadidi et al., 2018), Hilbert maps (Senanayake and Ramos, 2017), occupancy grid maps (Charrow et al., 2015), and Gaussian mixture models (Dhawale and Michael, 2020). Since the performance of a RIG system depends on not only planning but also learning, as shown in the feedback loop of Figure 1, a natural question is: how can we further boost the performance by improving the probabilistic models? In this work, we answer this question from the perspective of improving the _modeling flexibility_ and _uncertainty quantification_ of GPs.
Gaussian Process Regression (GPR) is one of the most prevalent methods for mapping continuous spatiotemporal phenomena. GPR requires the specification of a kernel, and _stationary_ kernels, _e.g._, the radial basis function (RBF) kernel and the Matern family, are commonly adopted (Rasmussen and Williams, 2005). However, real-world spatial data typically does not satisfy stationary models which assume different locations have the same degree of variability. For instance, the environment in Figure 1 shows higher spatial variability around the crater. Due to the mismatch between the assumption and the ground-truth environment, GPR with stationary kernels cannot portray the characteristic environmental features in detail. Figure 1(a) shows the over-smoothed prediction of the elevation map after training a stationary GPR using the collected data shown in Figure 1. The model also assigns low uncertainty to the high-error area, _c.f._, the circled regions in Figure 1(b) and Figure 1(c), leading to degraded performance when the model is used in RIG.
Non-stationary GPs, on the other hand, are of interest in many applications, and the past few decades have witnessed great advancement in this research field (Gibbs, 1997; Paciorek and Schervish, 2003; Lang et al., 2007; Plagemann et al., 2008, 2018; Wilson et al., 2016; Calandra et al., 2016; Heinonen et al., 2016; Remes et al., 2017, 2018). However, prior work leaves room for improvement. The problem is that many non-stationary models learn fine-grained variability at every location, making the model too flexible to be trained without advanced parameter initialization and regularization techniques. We propose a family of non-stationary kernels named _Attentive Kernel_ (AK) to mitigate this issue. The main idea of our AK is limiting the non-stationary model to combine a fixed set of correlation scales, _i.e._, primitive length-scales, and mask out data across discontinuous jumps by "soft" selection of relevant data. The correlation-scale composition and data selection mechanisms are learned from data. Figure 1(d) shows the prediction of GPR with the AK on the same dataset used in Figure 1(a). As the arrows highlight, the AK depicts the environment at a finer granularity. Figure 1(e) and Figure 1(f) show that the AK allocates high uncertainty to the high-error area; thus, sampling the high-uncertainty locations can help the robot collect valuable data to decrease the prediction error further.
### Contributions
The main contribution of this paper is in designing the Attentive Kernel (AK) and evaluating its suitability for Robotic Information Gathering (RIG). We present an extensive evaluation to compare the AK with existing non-stationary kernels and a stationary baseline. The benchmarking task is elevation mapping in several natural environments that exhibit a range of non-stationary features. The results reveal a significant advantage of the AK when it is used in passive learning, active learning, and RIG. We also conduct a field experiment to demonstrate the behavior of the proposed method in a real-world elevation mapping task, where the prediction uncertainty of the AK guides an
Figure 2: Comparison of Gaussian Process Regression with Radial Basis Function Kernel and Attentive Kernel.
Autonomous Surface Vehicle (ASV) to identify essential sampling locations and collect valuable data rapidly. Last but not least, we release the code (github.com/weizhe-chen/attentive_kernels) for reproducing all the results.
This paper presents an extended and revised version of previous work by Chen et al. (2022). The major modifications include a comprehensive literature review on RIG to contextualize our work, additional evaluation, results, and discussion on the AK, and a substantially improved Python library. Specifically, we provide the following contributions:
* We present a broader and deeper survey on related work to highlight how our work fits into the existing literature on RIG.
* We add more results to the experiments and discuss them in detail to provide further evidence for our conclusions.
* We thoroughly evaluate the AK from various perspectives and discuss its limitations and potential future work.
* We release a new Python library called PyPolo (pypolo.readthedocs.io) for learning, researching, and benchmarking RIG algorithms. This library is a significant improvement and restructure compared to the one presented in Chen et al. (2022).
## 2 Related Work
In this section, we will first survey related work in RIG, which mainly revolves around three pillars: probabilistic models (Section 2.1.2), objective functions (Section 2.1.1), and informative planning algorithms (Section 2.1.3). Also, we discuss relevant RIG applications in Section 2.1.4. Then, we categorize prior efforts on non-stationary GPs and how the proposed method relates to the existing solutions (Section 2.2). Finally, we describe the relationship between RIG and some related research topics to locate our work within the context of existing literature (Section 2.3)
### Robotic Information Gathering
A RIG system has three essential components:
1. A model to approximate the unknown target function;
2. An objective function that can characterize the model's prediction error;
3. An informative planner that makes _non-myopic_ decisions by optimizing the objective function under the robot's embodiment constraints.
We discuss these three aspects in this section.
#### 2.1.1 Objective Functions
RIG can be the main goal of some tasks, such as infrastructure inspection (Bircher et al., 2018), or serve as an auxiliary task for achieving other goals, _e.g._, seeking the biological hotspots in an unknown environment (McCammon and Hollinger, 2018). In the former cases, the objective function is purely "_information-driven_" (Ferrari and Wettergren, 2021; Bai et al., 2021), while in the latter scenarios, the objective function balances exploration and exploitation (Marchant and Ramos, 2012, 2014; Bai et al., 2016). The objective function can be further extended to multi-objective cases (Chen and Liu, 2019; Ren et al., 2022; Dang, 2020).
Many objective functions have been proposed, inspired by Information Theory and Optimal Experimental Design (Charrow et al., 2015; Zhang et al., 2020; Carrillo et al., 2015). Information-theoretic objective functions include Shannon's and Renyi's entropy, mutual information, and Kullback-Leibler divergence between the prior and posterior predictive distributions. In the case of multivariate Gaussian distributions, these information measures are all related to the logarithmic determinant of the posterior covariance matrix, which can be intuitively viewed as computing the "size" of the posterior covariance matrix. Optimal design theory directly measures the size by computing the matrix determinant, trace, or eigenvalues. Computing the matrix determinant and eigenvalue is known to be computationally expensive. Therefore, many existing works on objective functions are dedicated to alleviating the computational bottleneck (Charrow et al., 2015; Zhang et al., 2020; Zhang and Scaramuzza, 2020; Gupta et al., 2021; Xu et al., 2021).
Most objective functions are summary statistics of the predictive (co)variance given by a probabilistic model. Only when the predictive (co)variance captures modeling error well, optimizing these objective functions can guide the robot to collect informative data that effectively improve the model's accuracy. From this perspective, improving the uncertainty-quantification capability of probabilistic models can broadly benefit future work based on these objective functions. This aspect is what we strive to improve in this work. As can be seen in the next section, this problem is understudied.
#### 2.1.2 Probabilistic Models
Many probabilistic models have been applied to RIG, _e.g._, Gaussian processes (Stachniss et al., 2009; Marchant and Ramos, 2012, 2014; Ouyang et al., 2014; Ma et al., 2017; Luo and Sycara, 2018; Jang et al., 2020; Popovic et al., 2020; Lee et al., 2022), Hilbert maps (Ramos and Ott, 2016; Senamayake and Ramos, 2017; Guizilini and Ramos, 2019), occupancy grid maps (Popovic et al., 2017, 2020; Saroya et al., 2021), and Gaussian mixture models (O'Meadhra et al., 2018; Tabib et al., 2019). GPs are widely adopted due to their excellent uncertainty quantification feature, which is decisive to RIG. However, the vanilla GP models need to be more computationally efficient to be suitable for real-time applications and multi-robot scenarios. Therefore, related work in RIG mainly discusses GPs in the context of improving computational efficiency and coordinating multiple robots. Jang et al. (2020) apply the distributed GPs (Deisenroth and Ng, 2015) to decentralized multi-robot online Active Sensing. Ma et al. (2017) and Stachniss et al. (2009) use sparse GPs to alleviate the computational burden. The mixture of GP experts (Rasmussen and Ghahramani, 2001) has been applied to divide the workspace into smaller parts for multiple robots to model an environment simultaneously (Luo and Sycara, 2018; Ouyang et al., 2014).
The early work by Krause and Guestrin (2007) is highly related to our work. They use a spatially varying linear combination of localized stationary processes to model the non-stationary pH values in a river. The weight of each
local GP is the normalized predictive variance at the test location. This idea is similar to the length-scale selection idea in Section 4.1.1. The main difference is that they manually partition the workspace while our model learns a weighting function from data. To the best of our knowledge, our work is the first to discuss the influence of the probabilistic models' uncertainty quantification on RIG performance.
#### 2.1.3 Informative Planning
The problem of seeking an action sequence or policy that yields informative data is known as Informative _Path_ Planning due to historical reasons (Singh et al., 2007; Meliou et al., 2007). However, the problem is not restricted to path planning. For example, recent work has discussed informative _motion_ planning (Teng et al., 2021), informative _view_ planning (Lauri et al., 2020), and exploratory _grasping_(Danielczuk et al., 2021). Hence, we adopt the generic term Informative Planning to unify different branches of the same problem.
Early works on Informative Planning propose various _recursive greedy_ algorithms that provide performance guarantee by exploiting the _submodularity_ property of the objective function (Singh et al., 2007; Meliou et al., 2007; Binney et al., 2013). Note that the performance guarantee is on uncertainty reduction rather than modeling accuracy. Planners based on dynamic programming (Low et al., 2009; Cao et al., 2013) and mixed integer quadratic programming (Yu et al., 2014) lift the assumption on the objective function at the expense of higher computational complexity. These methods solve combinatorial optimization problems in discrete domains, thus scaling poorly in problem size. To develop efficient planners in _continuous_ space with motion constraints, Hollinger and Sukhatme (2014) introduce sampling-based informative motion planning, which is further developed to online variants (Schmid et al., 2020; Jadidi et al., 2019). Monte Carlo Tree Search (MCTS) methods are conceptually similar to sampling-based informative planners (Kantzos et al., 2021; Schlotfeldt et al., 2018) and have recently garnered great attention (Arora et al., 2019; Best et al., 2019; Morere et al., 2017; Chen and Liu, 2019; Flaspohler et al., 2019). Trajectory optimization is a solid competitor to sampling-based planners. Bayesian Optimization (Marchant and Ramos, 2012; Bai et al., 2016; Di Caro and Yousaf, 2021) and Evolutionary Strategy (Popovic et al., 2017, 2020; Hitz et al., 2017) are the two dominating methods in this realm. New frameworks of RIG, _e.g._, Imitation Learning (Choudhury et al., 2018), are emerging. Communication constraints (Lauri et al., 2017) and adversarial attacks (Schlotfeldt et al., 2021) have also been discussed.
#### 2.1.4 Relevant Applications
Mobile robots can be considered as autonomous data-gathering tools, enabling scientific research in remote and hazardous environments (Li, 2020; Bai et al., 2021). RIG has been successfully applied to environmental mapping and monitoring (Dunbabin and Marques, 2012). An underwater robot with a profiling sonar can inspect a ship hull autonomously (Hollinger et al., 2013). In Girdhar et al. (2014), the underwater robot performs semantic exploration with online topic modeling, which can group corals belonging to the same species or rocks of similar types. Flaspohler et al. (2019) deploy an ASV for localizing and collecting samples at the most exposed coral head. Hitz et al. (2017) monitor algal bloom using an ASV, which can provide early warning to environmental managers to conduct water treatment in a more appropriate time frame. Manjanna et al. (2018) show that a robot team can help scientists collect plankton-rich water samples via _in situ_ mapping of Chlorophyll density. Fernandez et al. (2022) propose delineating the sampling locations that correspond to the quantile values of the phenomenon of interest, which helps the scientists to collect valuable data for later analysis. Active labeled mapping, where the static ground truth is available, can serve as a testbed for ocean bathymmetric mapping (Ma et al., 2018). RIG can also be applied to the 3D reconstruction of large scenes (Kompis et al., 2021) and object surfaces (Zhu et al., 2021). In addition to geometric mapping, semantic mapping is also explored in (Atanasov et al., 2014), where a PR2 robot with an RGB-D camera attached to the wrist leverages non-myopic view planning for active object classification and pose estimation. Meera et al. (2019) present a realistic simulation of a search-and-rescue scenario in which informative planning maximizes search efficiency under the Unmanned Aerial Vehicle (UAV) flight time constraints. Fixed-wing UAVs use aerodynamics akin to aircraft, so it has a much longer flight time than multi-rotors. Moon et al. (2022) simulate a fixed-wing UAV with a forward-facing camera to search for multiple objects of interest in a large search space.
### Non-Stationary Gaussian Processes
GPs suffer from two significant limitations (Rasmussen and Ghahramani, 2001). The first one is the notorious cubic computational complexity of a vanilla implementation. Recent years have witnessed remarkable progress in solving this problem based on sparse GPs (Quinonero-Candela and Rasmussen, 2005; Titsias, 2009; Hoang et al., 2015; Sheth et al., 2015; Bui et al., 2017; Wei et al., 2021). The second drawback is that the covariance function is commonly assumed to be stationary, limiting the modeling flexibility. Developing non-stationary GP models that are easy to train is still an active open research problem. Ideas of handling non-stationarity can be roughly grouped into three categories: input-dependent length-scale (Gibbs, 1997; Paciorek and Schervish, 2003; Lang et al., 2007; Plagemann et al., 2008, 2016; Remes et al., 2017), input warping (Sampson and Guttorp, 1992; Snoek et al., 2014; Calandra et al., 2016; Wilson et al., 2016; Tompkins et al., 2020; Salimbeni and Deisenroth, 2017), and the mixture of experts (Rasmussen and Ghahramani, 2001; Trapp et al., 2020).
Input-dependent length-scale provides excellent flexibility to learn different correlation scales at different input locations. Gibbs (1997) and Paciorek and Schervish (2003) have shown how one can construct a valid kernel with input-dependent length-scales, namely, a _length-scale function_. The standard approach uses another GP to model the length-scale function, which is then used in the kernel of a GP, yielding a hierarchical Bayesian model. Several papers have developed inference techniques for such models and demonstrated their use in some applications (Lang et al., 2007; Plagemann et al., 2008, 2016; Remes et al., 2017). Recently, Remes et al. (2018) show that modeling the length-scale function using a neural
network improves performance. Note, however, that learning a length-scale function is nontrivial (Wang et al., 2020).
Input warping is more widely applicable because it endows any stationary kernel with the ability to model non-stationarity by mapping the input locations to a distorted space and assuming stationarity holds in the new space. This approach has a tricky requirement: the mapping must be _injective_ to avoid undesirable folding of the space (Sampson and Guttorp, 1992; Snoek et al., 2014; Salimbeni and Deisenroth, 2017).
A mixture of GP experts (MoGPE) uses a _gating network_ to allocate each data to a local GP that learns its hyperparameters from the assigned data. It typically requires Gibbs sampling (Rasmussen and Ghahramani, 2001), which can be slow. Hence, one might need to develop a faster approximation (Nguyen-Tuong et al., 2008). We view MoGPE as an orthogonal direction to other non-stationary GPs or kernels because any GP model can be treated as the expert so that one can have a mixture of non-stationary GPs.
The AK lies at the intersection of these three categories. Section 4.1.1 presents an input-dependent length-scale idea by weighting base kernels with different fixed length-scales at each location. Composing base kernels reduces the difficulty of learning a length-scale function from scratch and makes our method compatible with any base kernel. In Section 4.1.2, we augment the input with extra dimensions. We can view the augmentation as warping the input space to a higher-dimensional space, ensuring _injectivity_ by design. Combining these two ideas gives a conceptually similar model to MoGPE (Rasmussen and Ghahramani, 2001) in that they both divide the space into multiple regions and learn localized hyper-parameters. The idea of augmenting the input dimensions has been discussed by Pfingsten et al. (2006). However, they treat the augmented vector as a latent variable and resort to Markov chain Monte Carlo for inference. The AK treats the augmentation vector as the output of a deterministic function of the input, resulting in a more straightforward inference procedure. Also, the AK can be used in MoGPE to build more flexible models.
In robotic mapping, another line of notable work on probabilistic models is the family of Hilbert maps (Ramos and Ott, 2016; Senanayake and Ramos, 2017; Guizilini and Ramos, 2019), which aims to alleviate the computational bottleneck of GPs (O'Callaghan and Ramos, 2012) by projecting the data to another feature space and applying a logistic regression classifier in the new space. Since Hilbert maps are typically used for occupancy mapping (Doherty et al., 2016) and reconstruction tasks (Guizilini and Ramos, 2017), related work also considers non-stationarity for better prediction (Senanayake et al., 2018; Tompkins et al., 2020).
### Relationship to Other Research Topics
RIG is a fundamental research problem seeking an answer to the following question:
_How does a robot (team) collect informative data to efficiently build an accurate model of an unknown function under robot embodiment constraints?_
Depending on how we define _data_ and what the unknown _target function_ is, RIG appears in the form of Active Dynamics Learning, Active Mapping, Active Localization, and Active Simultaneous Localization and Mapping (SLAM). Figure 3 shows a Venn diagram of these topics. Although we evaluate the AK in Active Mapping tasks, other related problems, _e.g._, Active Dynamics Learning, can also benefit from the proposed method if the target function is modeled by a GP. On top of that, guiding the data collection process by minimizing _well-calibrated_ uncertainty estimates applies to all these related topics (Rodriguez-Arevalo et al., 2018).
#### 2.3.1 Active Dynamics Learning
Control synthesis typically depends on the system dynamics. Due to the complex interaction between the robot and the environment, _e.g._, a quadruped running at high speed over rough terrain, mechanical wear and tear, and actuator faults, it may be infeasible to build an accurate dynamics model _a priori_(Cully et al., 2015). In these cases, the robot must take safe actions and observe its dynamics to explore different behavioral regimes sample-efficiently (Abraham and Murbey, 2019). When the robot collects dynamics information to infer the unknown transition function, the RIG problem is known as Active Dynamics Learning or System Identification (Taylor et al., 2021). In this context, informative _data_ refers to the state-action-state pairs or the full state-action trajectories that help efficiently learn an accurate model of the unknown _system dynamics or transition function_. The system dynamics can be modeled as fixed-form equations (Jegorova et al., 2020), data-driven models, including _parametric_ models (Chua et al., 2018), _non-parametric_ models (Calandra et al., 2016), and _semi-parametric_ models (Romeres et al., 2019), and the combination of the analytical models and data-driven models (Heiden et al., 2021). GPs have arguably become the _de facto_ standard in collecting informative data that minimizes the predictive uncertainty of data-driven models to achieve sample-efficient dynamics learning (Rezaei-Shoshtari et al., 2019; Buisson-Fenet et al., 2020; Capone et al., 2020; Lew et al., 2022; Yu et al., 2021). With the rise of Automatic Differentiation (Paszke et al., 2017), a large body of recent work tend to estimate the physical parameters inside _differentiable Rigid-Body Dynamics_ models (Sutanto et al., 2020; Lutter et al., 2021; de Avila Belbute-Peres et al., 2018) or _differentiable robotics simulators_(Hu et al., 2019; Freeman et al., 2021; Werling et al., 2021). The literature emphasizes that calibrating the simulation (Mehta et al., 2021) is essential for both Reinforcement Learning with domain randomization (Ramos et al., 2019; Muratore et al., 2022) or trajectory optimization (Du et al., 2021; Heiden et al., 2021). In this context, we can consider RIG as Active Simulation
Figure 3: **Research Topics Related to RIG.**
Calibration since the robot collects informative trajectories to efficiently learn an accurate model of the unknown simulation parameters under the kinodynamic constraints. Active Simulation Calibration can also directly optimize the task-specific reward. For instance, Muratore et al. (2021) model the policy return as a GP and use Bayesian Optimization to tune the simulation parameters. Liang et al. (2020) learn a task-oriented exploration policy to collect informative data for calibrating task-relevant simulation parameters.
#### 2.3.2 Active Perception
When the robot collects data from the _environment_ rather than its dynamics, RIG becomes _Active Perception_ - an agent (_e.g._, camera or robot) changes its angle of view or position to perceive the surrounding environment better (Bajcsy, 1988; Aloimonos et al., 1988; Bajcsy et al., 2018). If the agent actively perceives the environment to reduce the _localization uncertainty_, the problem is referred to as Active Localization (Fox et al., 1998; Borghi and Caglioti, 1998). If the goal is to build the best possible _representation of an environment_, the problem essentially becomes Active Mapping (Lluvia et al., 2021).
#### 2.3.3 Active Localization
Localization uncertainty can arise from perceptual degradation (Ebadi et al., 2020), noisy actuation (Thrun, 2002), and inaccurate modeling (Roy et al., 1999). Decision-making or planning under uncertainty (LaValle, 2006; Bry and Roy, 2011; Preston et al., 2022) provides an elegant framework to formulate these problems using partially observable Markov decision processes (POMDP) (Kaelbling et al., 1998; Cai et al., 2021; Lauri et al., 2022). A principled approach to address these problems is to plan in the _belief space_(Kaelbling and Lozano-Perez, 2013; Nishimura and Schwager, 2021). _Information gathering_ is a natural behavior generated by Belief-Space Planning (Platt et al., 2010). Computing optimal policy in belief space is computationally intensive, but useful heuristics enable efficient computation of high-quality solutions (Kim et al., 2019; Prentice and Roy, 2009; Zheng et al., 2022). Although the localization uncertainty can come from different sources, in perceptually degraded environments such as subterranean, perception uncertainty outweighs the others. A dedicated topic for this case is perception-aware planning (Zhang, 2020). Note that localization is not necessarily positioning a mobile robot on a map (Chaplot et al., 2018); it can also be locating and tracking an object in the workspace of a manipulator with force-torque sensor measurements (Wirnshofer et al., 2020; Schneider et al., 2022).
#### 2.3.4 Active Sensing and Mapping
Suppose the data refers to the robot's observations, _e.g._, camera images or LiDAR point clouds, and the unknown target function is the ground-truth representation of the environment. In that case, RIG can be considered an Active Mapping problem (Placed et al., 2022). Mapping uncertainty can come from _aleatoric_ uncertainty inherent in measurement noise and _epistemic_ uncertainty due to unknown model parameters and data scarcity (Krause and Guestrin, 2007). Active Mapping efficiently builds an accurate model of the environment by minimizing epistemic uncertainty, which is often termed Active Sensing when focusing on the active acquisition of sensor measurements for better _prediction_ rather than _model learning_(Cao et al., 2013; MacDonald and Smith, 2019; Schlotfeldt et al., 2019; Ruckin et al., 2022). When mapping a 3D environment using a sensor with a limited field-of-view, this is known as the Next-Best View problem (Connolly, 1985; Bircher et al., 2016; Palomeras et al., 2019; Lauri et al., 2020). Autonomous Exploration is sometimes used interchangeably with Active Mapping (Lluvia et al., 2021). However, the nuances of the assumptions and evaluation metrics of the two domains yield significantly different solutions and robot behaviors. Specifically, Active Mapping typically assumes ideal localization (Popovic et al., 2020) and aims at building an accurate environment map using noisy and sparse observations; thus, the performance is evaluated by reconstruction error against the ground truth. The robot might revisit some complex regions to collect more data if the model prediction is not accurate enough. For example, when performing Active Mapping of a ship hull, the robot should collect more data around the propeller (Hollinger et al., 2013). Autonomous Exploration emphasizes obtaining the global structure of a vast unknown environment, implying that the robot (team) should avoid duplicate coverage; thus, the evaluation criterion is the explored volume (Cao et al., 2021). In contrast to Active Mapping, unreliable localization is one of the major challenges in Autonomous Exploration that should be addressed (Tranzatto et al., 2022; Papachristos et al., 2019). In this work, our application belongs to the Active Mapping problem, where the better uncertainty quantification of the proposed non-stationary GPR guides the robot to collect more informative data for rapid learning of an accurate map.
#### 2.3.5 Active SLAM
Controlling a robot performing SLAM to reduce both the localization and mapping uncertainty is called active SLAM (Placed et al., 2022). Active Localization and Active Mapping are two conflicting objectives. The former asks the robot to revisit explored areas for potential _loop closure_(Stachniss et al., 2004), while the latter guides the robot to expand _frontiers_ for efficient map building (Yamauchi, 1997). We refer the interested reader to the corresponding survey papers (Lluvia et al., 2021; Placed et al., 2022).
## 3 Problem Statement
Consider deploying a robot to _epfficiently_ build a map of an _unknown_ environment using only _sparse_ sensing measurements of onboard sensors. For instance, when reconstructing a pollution distribution map, the environmental sensors can only measure the pollutant concentration in a _point-wise
\begin{table}
\begin{tabular}{l l l} \hline \hline Meaning & Example & Remark \\ \hline variable & \(m\) & lower-case \\ constant & \(\mathbf{M}\) & upper-case \\ vector & \(\mathbf{x}\) & bold, lower-case \\ matrix & \(\mathbf{X}\) & bold, upper-case \\ set/space & \(\mathbb{R}\) & blackboard \\ Cartesian product & \([\mathbf{a},\mathbf{b}]^{D}\) & \(\mathbf{D}\)-dim hypercube \\ function & d\((\cdot)\) & typewriter \\ \hline special PDF & \(\mathbf{X}\) & calligraphy capital \\ definition & \(\frac{\Delta}{m}\) & normal \\ transpose & \(\mathbf{m}^{\dagger}\) & customized command \\ Euclidean norm & \(\|\cdot\|_{2}\) & customized command \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mathematical Notations.
sampling manner, yielding sparse measurements along the trajectory. Another scenario is building a large bathymetric map of the seabed. The depth measurements of a multi-beam sonar can be viewed as _point measurements_ because the unknown target area is typically vast. Exhaustively sampling the whole environment is prohibitive, if not impossible; thus, one must develop adaptive planning algorithms to collect the most informative data for building an accurate model. Table 1 introduces the notation system used in this paper. We use column vector by default.
### Minimization of Error vs. Uncertainty
**Problem 1**.: The target environment is an unknown function \(\mathbf{f}_{\text{env}}(\mathbf{x}):\mathbb{R}^{D}\mapsto\mathbb{R}\) defined over spatial locations \(\mathbf{x}\in\mathbb{R}^{D}\). Let \(\mathbb{T}\triangleq\{t\}_{t=0}^{T}\) be the set of decision epochs. A robot at state \(\mathbf{s}_{t-1}\in\mathbb{S}\) takes an action \(a_{t-1}\in\mathbb{A}\), arrives at the next state \(\mathbf{s}_{t}\) following a transition function \(p(\mathbf{s}_{t}\mid\mathbf{s}_{t-1},a_{t-1})\), and collects \(N_{t}\in\mathbb{N}\) noisy measurements \(\mathbf{y}_{t}\in\mathbb{R}^{N_{t}}\) at sampling locations \(\mathbf{X}_{t}=[\mathbf{x}_{1},\dots,\mathbf{x}_{N_{t}}]^{\intercal}\in \mathbb{R}^{N_{t}\times D}\) when transitioning from \(\mathbf{s}_{t-1}\) to \(\mathbf{s}_{t}\). We assume that the transition function is known and deterministic and that the robot state is observable. The robot maintains a probabilistic model built from all the training data collected so far \(\mathbb{D}_{t}=\{(\mathbf{X}_{i},\mathbf{y}_{i})\}_{i=1}^{t}\). The model provides predictive mean \(\mu_{t}:\mathbb{R}^{D}\mapsto\mathbb{R}\) and predictive variance \(\nu_{t}:\mathbb{R}^{D}\mapsto\mathbb{R}_{\geq 0}\) functions. Let \(\mathbf{x}^{*}\) be a test or query location, and \(\texttt{error}(\cdot)\) be an error metric. At each decision epoch \(t\in\mathbb{T}\), our goal is to find sampling locations that minimize the _expected error_ after updating the model with the collected data
\[\operatorname*{arg\,min}_{\mathbf{X}_{t}}\mathbb{E}_{\mathbf{x}^{*}}\left[ \texttt{error}\left(\mathbf{f}_{\text{env}}(\mathbf{x}^{*}),\mu_{t}(\mathbf{ x}^{*}),\nu_{t}(\mathbf{x}^{*})\right)\right]. \tag{1}\]
The predictive variance is also included in Equation (1) because it is required when computing some error metrics, _e.g._, negative log predictive density. Note that the expected error cannot be directly used as the objective function for a planner because the ground-truth function \(\mathbf{f}_{\text{env}}\) is unknown. RIG bypasses this problem by optimizing a surrogate objective.
**Problem 2**.: Assuming the same conditions as Problem 1, find _informative_ sampling locations that minimize an uncertainty measure \(\texttt{info}(\cdot)\), _e.g._, entropy:
\[\operatorname*{arg\,min}_{\mathbf{X}_{*}}\mathbb{E}_{\mathbf{x}^{*}}\left[ \texttt{info}\left(\nu_{t}(\mathbf{x}^{*})\right)\right]. \tag{2}\]
RIG implicitly assumes that minimizing prediction uncertainty (Problem 2) can also effectively reduce prediction error (Problem 1). This assumption is valid when the model uncertainty is _well-calibrated_. A model with well-calibrated uncertainty gives high uncertainty when the prediction error is significant and low uncertainty otherwise.
### Gaussian Process Regression
The predictive mean and variance functions are given by a Gaussian process regression (GPR) model in this work. A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian distribution (Rasmussen and Williams, 2005).
#### 3.2.1 Model Specification
We place a Gaussian process _prior_ over the unknown target function
\[\mathbf{f}_{\text{env}}(\mathbf{x})\sim\mathcal{GP}(\mathbf{m}(\mathbf{x}), \mathbf{k}(\mathbf{x},\mathbf{x}^{\prime})), \tag{3}\]
which is specified by a mean function \(\mathbf{m}(\mathbf{x})\) and a covariance function \(\mathbf{k}(\mathbf{x},\mathbf{x}^{\prime})\), _a.k.a. kernel_. After standardizing the training targets \(y\) to have a near-zero mean empirically, the mean function is typically simplified to a _zero function_, rendering the specification of the covariance function an important choice. Popular choices of the covariance functions are _stationary kernels_ such as the RBF kernel and the Matern family. We refer the interested reader to Rasmussen and Williams (2005) for other commonly used kernels.
This paper uses the RBF kernel to show how we transform a stationary kernel to a non-stationary one using the proposed method. Given two inputs \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\), the RBF kernel measures their correlation by computing the following kernel value
\[\mathbf{k}(\mathbf{x},\mathbf{x}^{\prime})=\exp\left(-\frac{\|\mathbf{x}- \mathbf{x}^{\prime}\|_{2}^{2}}{2\ell^{2}}\right). \tag{4}\]
The correlation scale parameter \(\ell\) is called the _length-scale_, which informally indicates the distance one has to move in the input space before the function value can change significantly (Rasmussen and Williams, 2005). A given sample should be most correlated to itself; thus, Equation (4) gives the largest kernel value when \(\mathbf{x}=\mathbf{x}^{\prime}\). Kernels are typically normalized to ensure that the largest kernel value is \(1\) and an _amplitude_ parameter \(\alpha\) can be used to scale the kernel value \(\alpha\mathbf{k}(\mathbf{x},\mathbf{x}^{\prime})\) to a larger range.
GPR assumes a Gaussian likelihood function. The target values \(y\) are the function outputs \(f\) corrupted by an additive Gaussian white noise
\[\mathbf{p}(y|\mathbf{x})=\mathcal{N}(y|\mathbf{f}(\mathbf{x}),\sigma^{2}), \tag{5}\]
where \(\sigma\) is the observational _noise scale_.
#### 3.2.2 Prediction
Since GP is a _conjugate_ prior to the Gaussian likelihood, given \(N\) training inputs \(\mathbf{X}\in\mathbb{R}^{N\times D}\) and training targets \(\mathbf{y}\in\mathbb{R}^{N}\), the posterior predictive distribution has a closed-form expression:
\[p(f_{*}|\mathbf{y}) =\mathcal{N}(f_{*}|\mu,\nu), \tag{6}\] \[\mu =\mathbf{k}_{*}^{\intercal}\mathbf{K}_{y}^{-1}\mathbf{y},\] (7) \[\nu =k_{\star\star}-\mathbf{k}_{*}^{\intercal}\mathbf{K}_{y}^{-1} \mathbf{k}_{\star}, \tag{8}\]
where \(\mathbf{k}_{\star}\) is the vector of kernel values between all the training inputs \(\mathbf{X}\) and the test input \(\mathbf{x}^{*}\), \(\mathbf{K}_{y}\) is a shorthand of \(\mathbf{K}_{\mathbf{x}}+\sigma^{2}\mathbf{I}\), \(\mathbf{K}_{\mathbf{x}}\) is the covariance matrix given by the kernel function evaluated at each pair of training inputs, and \(k_{\star\star}\triangleq\mathbf{k}(\mathbf{x}^{*},\mathbf{x}^{*})\).
#### 3.2.3 Learning
The prediction of GPR in Equation (6) is readily available with no need to train a model. However, the prediction quality of GPR depends on the setting of _hyper-parameters_\(\mathbf{\psi}\triangleq[\ell,\alpha,\sigma]\). These are the parameters of the kernel and likelihood function. Hence, optimizing these parameters - a process known as _model selection_ - is a common practice to obtain a better prediction. Model selection is typically implemented by maximizing the _model
evidence_, _a.k.a._, \(\log\)_marginal likelihood_,
\[\ln p(\mathbf{y}|\boldsymbol{\psi})=\frac{1}{2}\underbrace{(-\mathbf{y}^{ \intercal}\mathbf{K}_{y}^{-1}\mathbf{y})}_{\text{model fit}}-\underbrace{\ln\det( \mathbf{K}_{y})}_{\text{model complexity}}-\underbrace{N\ln(2\pi)}_{\text{ constant}},\]
where \(\det(\dots)\) is the matrix determinant.
When using GPR with the commonly used stationary kernels to reconstruct a real-world environment, high uncertainty is assigned to less sampled areas, regardless of the prediction error (see Figures 1(b) and 3). However, real-world spatial environments are typically non-stationary, and the high prediction error is more likely to be present in the high-variability region. In other words, the assumption of well-calibrated uncertainty is violated when using stationary kernels. Therefore, we aim to develop non-stationary kernels to improve GPR's uncertainty-quantification capability and prediction accuracy.
## 4 Methodology
We propose a new kernel called Attentive Kernel to deal with non-stationarity.
**Definition 1**.: Attentive Kernel (AK). Given two inputs \(\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{R}^{D}\), vector-valued functions \(\mathbf{w}_{\boldsymbol{\theta}}(\mathbf{x}):\mathbb{R}^{D}\mapsto[0,1]^{M}\) and \(\mathbf{z}_{\boldsymbol{\phi}}(\mathbf{x}):\mathbb{R}^{D}\mapsto[0,1]^{M}\) parameterized by \(\boldsymbol{\theta},\boldsymbol{\phi}\), an amplitude \(\alpha\), and a set of \(M\) base kernels \(\{\mathbf{k}_{m}(\mathbf{x},\mathbf{x}^{\prime})\}_{m=1}^{M}\), let \(\bar{\mathbf{w}}=\nicefrac{{\mathbf{w}_{\boldsymbol{\theta}}(\mathbf{x})}}{ \left\lVert\mathbf{z}_{\boldsymbol{\theta}}(\mathbf{x})\right\rVert_{2}}\), and \(\bar{\mathbf{z}}=\nicefrac{{\mathbf{z}_{\boldsymbol{\phi}}(\mathbf{x})}}{ \left\lVert\mathbf{z}_{\boldsymbol{\theta}}(\mathbf{x})\right\rVert_{2}}\). The AK is defined as
\[\text{ak}(\mathbf{x},\mathbf{x}^{\prime})=\alpha\bar{\mathbf{z}}^{\intercal }\mathbf{z}^{\prime}\sum_{m=1}^{M}\bar{w}_{m}\bar{w}_{m}^{\prime}\mathbf{k}_{ m}(\mathbf{x},\mathbf{x}^{\prime}), \tag{9}\]
where \(\bar{w}_{m}\) is the \(m\)-th element of \(\bar{\mathbf{w}}\).
We learn the parametric functions that map each input \(\mathbf{x}\) to \(\mathbf{w}\) and \(\mathbf{z}\). The weight \(\bar{w}_{m}\bar{w}_{m}^{\prime}\) gives _similarity attention scores_ to combine the set of base kernels \(\{\mathbf{k}_{m}(\mathbf{x},\mathbf{x}^{\prime})\}_{m=1}^{M}\). The inner product \(\bar{\mathbf{z}}^{\intercal}\bar{\mathbf{z}}^{\prime}\) defines a _visibility attention score_ to mask the kernel value.
Definition 1 is generic because any existing kernel can be the base kernel. To address non-stationarity, we choose the base kernels to be a set of stationary kernels with the same functional form but different length-scales. Specifically, we use RBF kernels with \(M\) length-scales \(\{\ell_{m}\}_{m=1}^{M}\) that are evenly spaced in the interval \([\ell_{\text{min}},\ell_{\text{max}}]\):
\[\mathbf{k}_{m}(\mathbf{x},\mathbf{x}^{\prime})\triangleq\mathbf{k}_{\text{ RBF}}(\mathbf{x},\mathbf{x}^{\prime}|\ell_{m})=\exp\left(-\frac{\| \mathbf{x}-\mathbf{x}^{\prime}\|_{2}^{2}}{2\ell_{m}^{2}}\right).\]
Note that the length-scales \(\{\ell_{m}\}_{m=1}^{M}\) are prefixed constants rather than trainable variables. When applying the AK to a GPR, we optimize all the hyper-parameters \(\{\alpha,\boldsymbol{\theta},\boldsymbol{\phi},\sigma\}\) by maximizing the marginal likelihood and make prediction as in GPR.
At first glance, the AK looks like a heuristic composite kernel. However, the following sections explain how we design this kernel from the first principles. In short, the kernel is distilled from a generative model called AKGPR that can naturally model non-stationary processes.
### A Generative Derivation of AK
The example in Figure 4 motivates us to consider using different length-scales at different input locations. Ideally, we need a smaller length-scale for partition#3 and larger length-scales for the others. In addition, we need to break the correlations among data points in different partitions. An ideal non-stationary model should handle these two types of non-stationarity. Many existing works model the input-dependent length-scale as a length-scale function (Lang et al., 2007; Plagemann et al., 2008; Heinonen et al., 2016). However, parameter optimization of such models is sensitive to data distribution and parameter initialization. We propose a new approach to address this issue that _avoids learning an explicit length-scale function_. Instead, every input location can _select_ among a set of GPs with different predefined primitive length-scales and _select_ which training samples are used when making a prediction. This idea - selecting instead of inferring an input-independent length-scale - avoids optimization difficulties in prior work. These ideas are developed in the following sections.
#### 4.1.1 Length-Scale Selection
Consider a set of \(M\) independent GPs with a set of base kernels \(\mathbf{k}_{m}(\mathbf{x},\mathbf{x}^{\prime})\) using predefined primitive length-scales \(\{\ell_{m}\}_{m=1}^{M}\). Intuitively, if every input location can select a GP with an appropriate length-scale, the non-stationarity can be characterized well. We can achieve this by an _input-dependent weighted sum_
\[\mathbf{f}(\mathbf{x}) =\sum_{m}^{M}\mathbf{w}_{m}(\mathbf{x})\mathbf{g}_{m}(\mathbf{x}), \text{ where} \tag{10}\] \[\mathbf{g}_{m}(\mathbf{x}) \sim\mathcal{GP}(0,\mathbf{k}_{m}(\mathbf{x},\mathbf{x}^{\prime})). \tag{11}\]
Figure 4: Learning A Non-Stationary Function using GPR with RBF Kernel. The target function in red color consists of five partitions separated by vertical dashed lines. The black dots around the function are data points. The function changes drastically in partition#3 and smoothly in the remaining partitions. The transitions between neighboring partitions are sharp. This simple function is challenging for a stationary kernel with a _single_ length-scale. GPR with a stationary RBF kernel produces either the wiggly prediction shown in **(a)** or the over-smoothed prediction in **(b)**. Note that, in **(a)**, the prediction in the smooth regions is rugged, and the uncertainty is over-conservative when the training data is sparse. The prediction in **(b)** only captures the general trend, and every input location seems equally uncertain.
Here, \(\mathbf{w}_{m}(\mathbf{x})\) is the \(m\)-th output of a vector-valued weighting function \(\mathbf{w}_{\mathbf{\theta}}(\mathbf{x})\) which is parameterized by \(\mathbf{\theta}\). We denote \(\mathbf{w}=[\mathbf{v}_{1}(\mathbf{x}),\ldots,\mathbf{v}_{M}(\mathbf{x})]^{ \intercal}\).
Consider an extreme case where \(\mathbf{w}\) is a "one-hot" vector - a binary vector with only one element being one and all other elements being zeros. In this case, \(\mathbf{w}\) selects a single appropriate GP depending on the input location. Typically, inference techniques such as Gibbs sampling or Expectation Maximization are required for learning such discrete "assignment" parameters. We lift this requirement by continuous relaxation:
\[\mathbf{w}_{\mathbf{\theta}}(\mathbf{x})=\mathbf{softmax}(\tilde{\mathbf{w}}_{\mathbf{ \theta}}(\mathbf{x})), \tag{12}\]
where \(\tilde{\mathbf{w}}_{\mathbf{\theta}}(\mathbf{x})\) is an arbitrary \(M\)-dimensional function parameterized by \(\mathbf{\theta}\). Moreover, using such continuous weights has an advantage in modeling gradually changing non-stationarity, as shown in Figure 5.
Figure 6 shows that length-scale selection gives better prediction after learning from the same dataset as in Figure 4. However, when facing abrupt changes, as shown in the circled area, the model can only select a very small length-scale to accommodate the loose correlations among data. If samples near the abrupt changes are not dense enough, a small length-scale might result in a high prediction error. The following section explains how to handle abrupt changes using instance selection.
#### 4.1.2 Instance Selection
Intuitively, an input-dependent length-scale specifies each data point's neighborhood radius that it can impact. Simply varying the radius cannot handle abrupt changes, for example, in a step function, because data sampled before and after an abrupt change should break their correlations even when they are close in input locations. We need to control the _visibility_ among samples: each sample learns only from other samples in the same subgroup. To this end, we associate each input with a _membership vector_\(\mathbf{z}\triangleq\mathbf{z}_{\mathbf{\phi}}(\mathbf{x})\) and use a dot product between two membership vectors to control visibility. Two inputs are visible to each other when they hold similar memberships. Otherwise, their correlation will be masked out:
\[\mathbf{k}_{m}(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{z}^{\intercal}\mathbf{ z}^{\prime}\mathbf{k}_{\text{RBF}}(\mathbf{x},\mathbf{x}^{\prime}|\ell_{m}). \tag{13}\]
We can view this operation as input _dimension augmentation_ where we append \(\mathbf{z}\) to \(\mathbf{x}\) but use a structured kernel in the joint space of \([\mathbf{x},\mathbf{z}]\).
Discussing one-hot vectors also helps understand the effect of \(\mathbf{z}\). In this case, the dot product is equal to \(1\) if and only if \(\mathbf{z}\) and \(\mathbf{z}^{\prime}\) are the same one-hot vector. Otherwise, the dot product in Equation (13) masks out the correlation. This way, we only use the subset of data points in the same group. To make the model more flexible and simplify the parameter optimization, we again use soft memberships:
\[\mathbf{z}_{\mathbf{\phi}}(\mathbf{x})=\mathbf{softmax}(\tilde{\mathbf{z}}_{\mathbf{ \phi}}(\mathbf{x})). \tag{14}\]
Here, \(\tilde{\mathbf{z}}_{\mathbf{\phi}}(\mathbf{x})\) is an arbitrary \(M\)-dimensional function parameterized by \(\mathbf{\phi}\).
#### 4.1.3 The AKGPR Model
Combining the two ideas, we get a new probabilistic generative model developed for non-stationary environments called Attentive Kernel Gaussian Process Regression (AKGPR). Given \(N\) inputs \(\mathbf{X}\in\mathbb{R}^{N\times D}\) and targets \(\mathbf{y}\in\mathbb{R}^{N}\), the model describes the generative process as follows. We use some shorthands for compact notation: \(\mathbf{g}_{m}\triangleq[\mathbf{g}_{m}(\mathbf{x}_{1}),\ldots,\mathbf{g}_{m} (\mathbf{x}_{N})]^{\intercal},\mathbf{f}\triangleq[\mathbf{f}(\mathbf{x}_{1}), \ldots,\mathbf{f}(\mathbf{x}_{N})]^{\intercal},\mathbf{w}_{m}\triangleq[ \mathbf{w}_{m}(\mathbf{x}_{1}),\ldots,\mathbf{w}_{m}(\mathbf{x}_{N})]^{\intercal}\). Here \(\mathbf{w}_{m}(\mathbf{x})\) is the \(m\)-output of Equation (12).
* We compute the membership vector \(\mathbf{z}_{n}\) for each input using Equation (14). Plugging \(\mathbf{z}_{n}\) and the predefined length-scales \(\ell_{m}\) into Equation (13), we then compute \(M\) covariance matrices \(\mathbf{K}_{m}\) evaluated at every pair of inputs.
* The vector \(\mathbf{g}_{m}\) follows a multivariate Gaussian distribution \(\mathcal{N}(\mathbf{0},\mathbf{K}_{m})\) according to the definition of GPs and Equation (11). From Equation (10), we can see that \(\mathbf{f}\) is the summation of \(M\) vectors that follows affine-transformed multivariate Gaussian distributions, thus \(\mathbf{f}\) also follows Gaussian distribution: \[\mathbf{f}=\sum_{m=1}^{M}\mathbf{W}_{m}\mathbf{g}_{m}\sim\mathcal{N}(\mathbf{0},\sum_{m=1}^{M}\mathbf{W}_{m}\mathbf{K}_{m}\mathbf{W}_{m}^{\intercal}),\] (15) where \(\mathbf{W}_{m}\) is a diagonal matrix with \(\mathbf{w}_{m}\) being the \(N\) diagonal elements.
Figure 5: **Learning \(f(x)=x\sin(40x^{4})\) with Soft Length-Scale Selection. The \(\mathbf{w}\)-plot visualizes the associated weighting vector \(\mathbf{w}_{\mathbf{\theta}}(\mathbf{x})\) of each input location. The more vertical length a color occupies, the higher weight we assign to the GP with the corresponding length-scale. The set of predefined length-scales is color-labeled at the bottom. The learned weighting function gradually shifts its weight from smooth GPs to bumpy ones.**
Figure 6: **Prediction of Length-Scale Selection.**
* Finally, we can generate the targets \(\mathbf{y}\) using the Gaussian likelihood in Equation (5).
The plate diagram of this generative process is shown in Figure 7. From Equation (15) we observe that the generative process of AKGPR is equivalent to that of a GPR with a new kernel:
\[\mathrm{k}(\mathbf{x},\mathbf{x}^{\prime})=\sum_{m=1}^{M}w_{m}\underbrace{ \mathbf{z}^{\mathsf{T}}\mathbf{z}^{\prime}\mathbf{x}_{\text{RBF}}(\mathbf{x}, \mathbf{x}^{\prime}|\ell_{m})}_{\text{hidden in $\mathbf{K}_{m}$}}w_{m}^{\prime}. \tag{16}\]
Since \(\mathbf{z}^{\mathsf{T}}\mathbf{z}^{\prime}\) is independent of \(m\), we can move it outside the summation to avoid duplicate computation.
Equation (16) is almost the same as the AK in Definition 1, except that this kernel is not normalized yet. When \(\mathbf{x}=\mathbf{x}^{\prime}\), the kernel value \(\mathrm{k}(\mathbf{x},\mathbf{x}^{\prime})\) might be greater than \(1\). As mentioned in Section 3.2.1, using an amplitude parameter \(\alpha\) to adjust the scale of the kernel value is a common practice in GPR. Introducing the amplitude hyper-parameter requires the kernel to be normalized; otherwise, the interplay between the amplitude and the scaling effect of a kernel before normalization makes the optimization difficult because more local optima are introduced due to the symmetries of the parameter space. We normalize \(\mathbf{w}\) and \(\mathbf{z}\) with \(\ell^{2}\) - norm to ensure that the maximum kernel value (when \(\mathbf{x}=\mathbf{x}^{\prime}\)) is \(1\), and \(\alpha\) is the only parameter that controls the scale of kernel value. After normalization, we now have the final version of the proposed AK in Definition 1, which can be used in any GP model. From the discussion above we have:
**Proposition 1**.: The AKGPR generative model is equivalent to a GPR model with the AK defined in Definition 1.
### Applying AK to GPR
We use the AK with a GPR model and optimize all the parameters by maximizing the log marginal likelihood \(\ln p(\mathbf{y}|\sigma,\alpha,\boldsymbol{\theta},\boldsymbol{\phi})\). Figure 8 shows the prediction results on the example from Figure 4. Now we can accurately model the highly varying part, the smooth parts, and the abrupt changes. Compared to Figure 4, where the uncertainty mainly depends on the proximity to training samples, the AKGPR assigns higher uncertainty to the high-error locations. The better uncertainty quantification is achieved by putting more weight on the GPs with small length-scales in partition#3 and those with large length-scales in the other partitions. Note that the AKGPR switches the membership vector \(\mathbf{z}\) in the circled area to mask the inter-partition correlations, which cannot be realized by length-scale selection in Figure 6. Due to this modeling advantage, the results in Figure 8 are qualitatively better than in Figure 6.
### Remark on The Attentive Kernel
In this section, we discuss how to parameterize the weighting and membership functions in the AK, the computational complexity of the proposed kernel, and some details on hyper-parameter optimization of non-stationary kernels.
#### 4.3.1 Parameterization
To instantiate an AK, we must specify the weighting function \(\mathbf{w}_{\boldsymbol{\theta}}(\mathbf{x})\) and the membership function \(\mathbf{z}_{\boldsymbol{\phi}}(\mathbf{x})\). In the experiments, we find that sharing a single neural network for length-scale selection and instance selection does not affect the performance but reduces the number of trainable parameters and sometimes helps the training of the instance selection mechanism (see Section 5.2.2). Therefore, we use the same set of parameters \(\boldsymbol{\theta}=\boldsymbol{\phi}\) for the two attention mechanisms and choose a simple neural network with two hidden layers (see Section 5.1.3 for more details). Using a simple neural network is an arbitrary choice for simplicity and modeling flexibility. Other parametric functions can also be used, and we leave the study of alternative parameterization to future work.
#### 4.3.2 Computational Complexity
Kernel matrix computations are typically performed in a batch manner to take advantage of the parallelism in linear algebra libraries. Figure 9 shows the computational diagram of the self-covariance matrix of an input matrix \(\mathbf{X}\in\mathbb{R}^{N\times D}\) for the case where the same function parameterizes \(\mathbf{w}_{\boldsymbol{\theta}}(\mathbf{x})\) and \(\mathbf{z}_{\phi}(\mathbf{x})\). The computation of a cross-covariance matrix and the case where \(\mathbf{w}_{\boldsymbol{\theta}}(\mathbf{x})\) and \(\mathbf{z}_{\phi}(\mathbf{x})\) are parameterized separately are handled similarly. We first pass \(\mathbf{X}\) to a neural network with two hidden layers to get \(\mathbf{W}\in\mathbb{R}^{N\times M}\) and \(\mathbf{Z}\in\mathbb{R}^{N\times M}\). The computational complexity of this step is \(\mathcal{O}(NDH+NH^{2}+NHM)\). Then, we compute a visibility masking matrix \(\mathbf{O}=\mathbf{Z}\mathbf{Z}^{\mathsf{T}}\), which takes \(\mathcal{O}(N^{2}M)\). After getting the pairwise distance matrix \(\big{(}\mathcal{O}(N^{2}D)\big{)}\), we can compute the base kernel matrices using different length-scales \(\big{(}\mathcal{O}(N^{2})\big{)}\).
Figure 8: Learning the Same Function as in Figure 4 using AKGPR. A weight or membership vector is visualized as a stack of bar plots produced by its elements. Different colors represent different length-scales or dimensions of the weight or membership vector.
Figure 7: Plate Notation of AKGPR.
The \(m\)-th kernel matrix is scaled by the outer-product matrix of the \(m\)-th column of \(\mathbf{W}\), which takes \(\mathcal{O}(N^{2}M)\). Finally, we sum up the scaled kernel matrices and multiply the result with the visibility masking matrix to get the AK matrix \(\big{(}\mathcal{O}(N^{2}M)\big{)}\). We defer the discussion of the choices of network size \(H\) and number of base kernels \(M\) to the sensitivity analysis in Section 5.2.1. In short, these will be relatively small numbers, so the overall computational complexity is still \(\mathcal{O}(N^{2}D)\). In practice, we find that the runtime of the AK experiments is around three times slower than that of the RBF kernel.
#### 4.3.3 Optimization
We note that the model complexity term discussed in Section 3.2.3 is insufficient for preventing over-fitting when training non-stationary kernels for many iterations, a point also mentioned in Tompkins et al. (2020) in their over-fitting analysis. Although the AK is more robust to over-fitting (see Section 5.2.3), we implement an incremental training scheme to improve the computational efficiency and optimization robustness when using non-stationary kernels in RIG. Specifically, we train the model on all the collected data for \(N_{t}\) iterations after collecting \(N_{t}\) samples at the \(t\)-th decision epoch, which corresponds to line 7 to line 10 in Algorithm 1.
This training scheme can be considered a rule-of-thumb early-stopping regularization. We also find that, when using a neural network in a non-stationary kernel, the initial learning rate of the network parameters should be smaller than that of other hyper-parameters. For example, when using the AK, the initial learning rates of \(\mathbf{\theta}\) or \(\mathbf{\phi}\) should be smaller than that of \(\{\alpha,\sigma\}\).
Another important aspect is when to start optimizing the hyper-parameters. Optimizing the parameters when the data is too sparse and not representative can lead to wrong length-scale prediction, which can bias the informative planning. In RIG, exploring the environment and sampling data at different locations is necessary before optimizing the hyper-parameters. In our experiments, this is done by following a predefined Bezier curve that explores the environment. An alternative way to achieve this behavior is by fixing the hyper-parameters to some appropriate values and training the model only after collecting a certain amount of samples. This approach does not require a pilot survey of the environment. However, the user should have some prior knowledge of the target environment in order to set the initial hyper-parameters.
This training setup works well empirically, but we acknowledge that developing more principled ways to learn non-stationary GPs is an essential future direction, which is still an open research problem and has recently received increasing attention (Ober et al., 2021; van Amersfoort et al., 2021; Lofti et al., 2022).
### Active Mapping with The Attentive Kernel
Algorithm 1 shows how the AK can be used for active mapping. The system requires the following input arguments: the maximum number of training data \(N_{\text{max}}\), the initial kernel amplitude \(\alpha\), the initial noise scale \(\sigma\), a set of \(M\) base kernels \(\{\mathbf{k}_{m}(\mathbf{x},\mathbf{x}^{\prime})\}_{m=1}^{M}\), functions \(\mathbf{u}_{\mathbf{\theta}}(\mathbf{x})\), \(\mathbf{z}_{\mathbf{\phi}}(\mathbf{x})\), and a sampling strategy. First, we need to compute the statistics to normalize the inputs \(\mathbf{X}\) roughly to the range \([-1,1]\) and standardize the targets \(\mathbf{y}\) to nearly have zero mean and unit variance (line 1). We can get these statistics from prior knowledge of the environment. The workspace extent is typically known, allowing the normalization statistics to be readily calculated. The target-value statistics can be rough estimates or computed from a pilot environment survey (Kenna et al., 2018). Then, we instantiate an AK and a GPR with the given parameters (lines 2-3). At each decision epoch \(t\), the sampling strategy proposes informative waypoints by optimizing an objective function derived from the predictive uncertainty of the GPR (line 6). The robot tracks the informative waypoints and collects \(N_{t}\) samples along the trajectory (line 7). Note that the number of collected samples is typically larger than the number of informative waypoints. The new samples are normalized and standardized and then appended to the model's training set (lines 8-9). Finally, we maximize the log marginal likelihood for \(N_{t}\) iterations (line 10). The robot repeats predicting (hidden in line 6), planning, sampling, and optimizing until the sampling budget is exceeded (line 5).
```
0:\(N_{\text{max}},\alpha,\sigma,\{\mathbf{k}_{m}(\mathbf{x},\mathbf{x}^{\prime}) \}_{m=1}^{M}\) \(\mathbf{u}_{\mathbf{\theta}}(\mathbf{x}),\mathbf{z}_{\mathbf{\phi}}(\mathbf{x}),\mathbf{\mathrm{ strategy}}\)
1:compute normalization and standardization statistics
2:\(\texttt{kernel}\leftarrow\texttt{AK}(\alpha,\{\mathbf{k}_{\text{a}}(\mathbf{x}, \mathbf{x}^{\prime})\}_{m=1}^{M},\mathbf{u}_{\mathbf{\theta}}(\mathbf{x}),\mathbf{z}_{\mathbf{ \phi}}(\mathbf{x}))\)
3:\(\texttt{model}\leftarrow\texttt{GPR}(\texttt{kernel},\sigma)\)
4:\(t\gets 0\)
5:while\(\texttt{model}.N_{\text{train}}<N_{\text{max}}\)do\(\triangleright\) sampling budget
6:\(\mathbf{x}_{\text{init}}\leftarrow\texttt{strategy}(\texttt{model})\)\(\triangleright\) informative waypoint
7:\(\mathbf{X}_{\text{t}},\mathbf{y}_{\text{t}}\leftarrow\texttt{tracking}\_ {\text{and}\_sampling}(\mathbf{X}_{\text{init}})\)\(\triangleright\)\(N_{t}\) samples
8:\(\mathbf{\hat{X}}_{\text{t}},\mathbf{\hat{y}}_{\text{t}}\leftarrow\texttt{ normalize}\_{\text{and}\_standardize}(\mathbf{X}_{\text{t}},\mathbf{y}_{\text{t}})\)
9:\(\texttt{model}.\texttt{addadd}_{\text{data}}(\mathbf{X}_{\text{t}}, \mathbf{\hat{y}}_{\text{t}})\)
10:\(\texttt{model}.\texttt{optimize}(N_{t})\)\(\triangleright\) maximize marginal likelihood
11:\(t\gets t+1\)
12:returnmodel
```
**Algorithm 1**Active Mapping with the AK.
## 5 Experiments
We design our experiments to address the following questions.
Figure 9: Computational Diagram of the AK.
* How does the AK compare to its stationary counterpart and other non-stationary kernels in prediction accuracy and uncertainty quantification?
* If non-stationary kernels have better uncertainty quantification capability, can we use the uncertainty for active data collection and to further improve the prediction accuracy?
Figure 10: **The Four Environments Used in the Elevation Mapping Tasks. Note that the 3D perspectives are rotated and rescaled to highlight the visual features of the environments.**
* Some parameters in the AK need to be determined, _i.e._, the number and range of the primitive lengthscales and the network hyper-parameters. Are these parameters hard to tune? Is the performance of AK sensitive to its parameter settings?
* The AK consists of two ideas: length-scale selection and instance selection. Which one contributes more to the performance in the experiments?
* How does the AK compare to the other non-stationary kernels in over-fitting?
To answer **Q1**, we use random sampling experiments in Section 5.1.6 to evaluate the AK and the compared kernels. We run the random sampling experiments first because the performance of a RIG system depends on not only the model's prediction and uncertainty but also the informative planner. Sampling data uniformly at random (without an informative planner) provides controlled experiments to understand the effects of using different kernels. For **Q2**, we conduct both active learning (Section 5.1.7) and RIG experiments (Section 5.1.8) to disentangle the influence of the model's uncertainty and the planner. RIG considers the physical constraints of the robot embodiment, while active learning can sample arbitrary locations. We assess the AK via sensitivity analysis, ablation study, and over-fitting analysis to address the remaining questions.
### Simulated Experiments
We have conducted extensive simulations in four representative environments that exhibit various non-stationary features. The elevation maps are downloaded from the NASA Shuttle Radar Topography Mission (dwtkns.com/strm30m). Supplemental materials can be found at weizhe-chen.github.io/attentive_kernels.
#### 5.1.1 Environments
Figure 10 shows the 3D perspectives of all the environments and the corresponding bird's-eye views. Note that the 3D plots are rotated for better visualization. When comparing to the model prediction, we use the bird's-eye map as the ground truth, and we will describe the environmental features in the 3D plots. Looking at environment N17E073 from left to right, it consists of a flat part, a mountainous area, and a rocky region with many ridges. A good non-stationary GP model should use decreasing length-scales from left to right. Also, note that the most complex area (_i.e._, the red region) occupies roughly one-third of the whole environment. N43W080 presents sharp elevation changes indicated by the arrows while the lakebed is virtually flat. Using a large length-scale can model most of the areas well, albeit better prediction can be achieved by sampling densely around the high-variability spots indicated by the arrows. It is worth noting that better predictions will be more evident in the visualization compared to the evaluation metrics that average across the whole environment since the important area only occupies a small portion of the environment, and the improvements might be negligible in the metrics. In N45W123, the environment has a narrow complex upper part and a smoother lower part. The size of the complex region is smaller than one-third of the environment. There is also a "river" passing through the middle. The right part of N47W124 varies drastically, while its left part is relatively flat. Loosely speaking, N47W124 has the most significant change in spatial variability, followed by N17E073 and then N45W123, so the possible improvement margins of non-stationary models in these environments should also decrease in this order. Only after discovering and sampling the two arrow-indicated spots can non-stationary models show an advantage over a stationary one in predicting environment N43W080.
#### 5.1.2 Robot
We set the extent of the environment to \(20\times 20\) meters and simulate a planar robot that has a simple Dubins' car model \([\dot{x}_{1},\dot{x}_{2},\dot{v},\dot{\omega}]=[v\cos(\omega),v\sin(\omega),a _{1},a_{2}]\). Here, \(\mathbf{x}_{\mathbf{b}}\triangleq[x_{1},x_{2}]^{\intercal}\) is the position, \(\omega\in[-\pi,\pi)\) is the orientation, and \(\mathbf{a}\triangleq[a_{1},a_{2}]^{\intercal}\) is the action that represents the change in the linear velocity and angular velocity. The maximum linear velocity is set to \(1\;m/s\), and the control frequency is \(10\;Hz\). Although we assume perfect localization in the simulated experiments, to keep the same interface with the field experiments, we consider that the robot has achieved a goal if it is within a \(0.1\)-meter radius. This radius is an arbitrary choice within the dimension of the robot. The robot has a single-beam range sensor that collects one noisy elevation measurement per second with a unit Gaussian observational noise. In the random and active sampling experiments, the robot can "jump" to an arbitrary sampling location to collect data, so it does not follow Dubins' car model. In the RIG experiment, the robot tracks some informative locations under the Dubins' car kinematic constraint and collects elevation measurements along its trajectory.
#### 5.1.3 Models
The GPR takes two-dimensional sampling locations as inputs and predicts the elevation. We only allow the robot to collect \(700\) samples, among which the first \(50\) data points are collected along a pilot survey path pre-computed for the environment. As shown in Figure 11, the path is generated from a Bezier curve with \(18\) control points. The positions of the control points adapt to the extent of the workspace accordingly. These \(50\) samples are used to initialize the GPR and compute the statistics to normalize the inputs and standardize the target values. If the statistics are known in advanced, the pilot survey is not necessary. One can also use a relatively large length-scale and fix the hyper-parameters of the GPR in the early stage so that the robot can explore the environment and collect
Figure 11: **Pilot Survey Path. The red stars are control points to generate the Bézier curve.**
diverse data for hyper-parameter optimization and statistics calculation. After normalization and standardization, we initialize the hyper-parameters to \(\ell=0.5,\alpha=1.0,\sigma=1.0,\ell_{\text{min}}=0.01,\ell_{\text{max}}=0.5\). We use the default PyTorch settings for initializing the network parameters. These hyper-parameters and the neural network parameters in the non-stationary kernels are jointly optimized by two Adam optimizers (Kingma and Ba, 2014) with initial learning rates \(0.01\) and \(0.001\), respectively. We first run an initial optimization of all the parameters for \(50\) steps. The model's prediction is evaluated on a \(50\times 50\) linearly spaced evaluation grid, _i.e._, \(2500\) query inputs, comparing with the ground-truth elevation values.
We compare the AK with two existing non-stationary kernels: the Gibbs kernel and Deep Kernel Learning (DKL). Since the RBF kernel is widely used in RIG, we also add this kernel as a stationary baseline. The Gibbs kernel extends the length-scale to be any positive function of the input, degenerating to an RBF kernel when using a constant length-scale function. Following Remes et al. (2018), which showed improved results, the length-scale function is modeled using a neural network instead of another Gaussian process. DKL addresses non-stationarity through input warping. A neural network transforms the inputs to a feature space where the stationary RBF kernel is assumed to be sufficient. We use the same neural network with \(2\times 10\times 10\times 10\) neurons and hyperbolic tangent activation function for the AK and DKL and change the output dimension to \(1\) for the Gibbs kernel because it requires a scaler-valued length-scale function.
#### 5.1.4 Sampling Strategies
We use different sampling strategies in the three sets of experiments. We randomly draw a sample from a uniform distribution at each decision epoch in random sampling experiments. In active sampling experiments, we evaluate the predictive uncertainty on \(1000\) randomly generated candidate locations and then sample from the location with the highest predictive entropy. While the AK can be plugged into any advanced informative planner for RIG, we use the naive informative planner in Algorithm 2 for simplicity. Specifically, in addition to the predictive entropy, this planner computes the distances from these locations to the robot's position. We normalize the predictive entropy and distance to \([0,1]\). Each candidate location's informativeness score is defined as the normalized entropy minus the normalized distance. This informativeness score considers the robot's physical constraints and encourages the robot to move to a location with high predictive uncertainty and close to the robot's current position. The planner outputs the informative waypoint with the highest score. A tracking controller is used to move the robot to the waypoint. Note that the number of collected samples \(N_{t}\) varies at different decision epochs depending on the distance from the robot to the informative waypoint.
#### 5.1.5 Evaluation Metrics
We care about the prediction performance and whether the predictive uncertainty can effectively reflect the prediction error. Following standard practice in the GP literature, we use _standardized mean squared error (SMSE)_ and _mean standardized log loss (MSLL)_ to measure these quantities (see Chapter 2.5 in Rasmussen and Williams (2005)). SMSE is the mean squared error divided by the variance of test targets. After this standardization, a trivial method that makes a prediction using the mean of the training targets has an SMSE of approximately \(1\). To take the predictive uncertainty into account, one can evaluate the negative log predictive density (NLPD), _a.k.a._, log loss, of a test target,
\[-\ln p(\boldsymbol{y}^{*}|\boldsymbol{\mathbf{x}}^{*})=\frac{\ln(2\pi\nu)}{2 }+\frac{(\boldsymbol{y}^{*}-\mu)^{2}}{(2\nu)},\]
where \(\mu\) and \(\nu\) are the mean and variance in Equations (7) and (8). MSLL standardizes the log loss by subtracting the log loss obtained under the trivial model, which predicts using a Gaussian with the mean and variance of the training targets. The MSLL will be approximately zero for naive methods and negative for better methods. In the experiments, we also measured the root-mean-square error (RMSE) and the mean absolute error (MAE). We report the mean and standard deviation of the metrics over ten runs of the experiments with different random seeds. For a more obvious quantitative comparison, we present all the benchmarking results in Tables 2 to 4. Each number summarizes a metric curve by averaging the curve across the x-axis, _i.e._, the number of samples, which indicates the averaged _area under the curve_. A smaller area implies a faster drop in the curve. For all the metrics, smaller values indicate better performance.
#### 5.1.6 Random Sampling Results
Table 2 gives a positive answer to **Q1** firmly. The AK consistently outperforms other kernels across all the considered environments and evaluation metrics. To avoid clutter, we only visualize the SMSE and MLSS curves because they are normalized versions of RMSE and NLPD and the results of MAE are consistent with that of RMSE.
Figure 12 shows the metrics versus the number of collected samples of the four kernels in all environments. From the SMSE curves, we can see that the advantage of the AK (_i.e._, the green line) is most significant in N47W124, followed by N17E073 and then N45W123. This order complies with the changes in the spatial variability of these environments. In environment N43W080, all the lines are overlapped. N43W080 is the environment that has two spots
with drastic variations. Too few random samples landed on the two spots to allow the AK to learn better prediction. That said, the MLSS curve of the AK is still outstanding in this environment. The advantage of the AK on uncertainty quantification is significant in all environments. The Gibbs
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline
**Environment** & **Method** & **SMSE\(\downarrow_{0}^{\downarrow_{0}}\)** & **MSLL\(\downarrow^{0}\)** & **NLPD\(\downarrow\)** & **RMSE\(\downarrow_{0}\)** & **MAE\(\downarrow_{0}\)** \\ \hline N17E073 & RBF & \((1.33\pm 0.03)\times 10^{-1}\) & \((-9.9\,\pm\,0.1)\,\times 10^{-1}\) & \(4.59\pm 0.01\) & \((2.33\pm 0.03)\times 10^{1}\) & \((1.69\pm 0.03)\times 10^{1}\) \\ & **AK** & \((\textbf{1.11}\pm 0.04)\times 10^{-1}\) & \(-1.24\pm 0.01\) & \(\textbf{4.34}\pm 0.01\) & \((\textbf{2.13}\pm 0.04)\times 10^{1}\) & \((\textbf{1.50}\pm 0.02)\times 10^{1}\) \\ \hline & Gibbs & \((1.33\pm 0.01)\times 10^{-1}\) & \(-1.09\pm 0.02\) & \(4.50\pm 0.03\) & \((2.33\pm 0.09)\times 10^{1}\) & \((1.66\pm 0.04)\times 10^{1}\) \\ \hline & DKL & \((1.37\pm 0.06)\times 10^{-1}\) & \((-9.9\,\pm\,0.3)\,\times 10^{1}\) & \(4.62\pm 0.03\) & \((2.27\pm 0.05)\times 10^{1}\) & \((1.68\pm 0.04)\times 10^{1}\) \\ N43W080 & RBF & \((7.1\,\pm\,0.3)\,\times 10^{-2}\) & \(-1.43\pm 0.02\) & \(3.87\pm 0.02\) & \((1.23\pm 0.03)\times 10^{1}\) & \(8.13\pm 0.06\) \\ & **AK** & \((\textbf{6.0}\pm 0.5)\times 10^{-2}\) & \(-1.69\pm 0.06\) & \(\textbf{3.62}\pm 0.06\) & \((\textbf{1.11}\pm 0.05)\times 10^{1}\) & \(\textbf{7.0}\pm\textbf{0.2}\) \\ \hline & Gibbs & \((7.2\,\pm\,0.4)\,\times 10^{-2}\) & \(-1.48\pm 0.06\) & \(3.83\pm 0.06\) & \((1.25\pm 0.05)\times 10^{1}\) & \(8.3\,\pm\,0.3\) \\ \hline & DKL & \((6.6\pm 0.8)\times 10^{-2}\) & \(-1.49\pm 0.04\) & \(3.81\pm 0.04\) & \((1.19\pm 0.07)\times 10^{1}\) & \(7.5\,\pm\,0.3\) \\ \hline & **AK** & \((1.65\pm 0.07)\times 10^{-1}\) & \((-9.4\,\pm\,0.3)\,\times 10^{1}\) & \(4.37\pm 0.03\) & \((1.97\pm 0.04)\times 10^{1}\) & \((1.28\pm 0.03)\times 10^{1}\) \\ & **AK** & \((\textbf{1.41}\pm 0.06)\times 10^{-1}\) & \(-1.28\pm 0.02\) & \(\textbf{4.03}\pm 0.02\) & \((\textbf{1.80}\pm 0.04)\times 10^{1}\) & \((\textbf{1.15}\pm 0.02)\times 10^{1}\) \\ \hline & Gibbs & \((1.8\,\pm\,0.1)\,\times 10^{-1}\) & \(-1.08\pm 0.01\) & \(4.24\pm 0.02\) & \((2.07\pm 0.07)\times 10^{1}\) & \((1.34\pm 0.02)\times 10^{1}\) \\ & DKL & \((2.0\pm 0.1)\times 10^{-1}\) & \((-9.1\,\pm\,0.1)\,\times 10^{1}\) & \(4.41\pm 0.01\) & \((2.18\pm 0.07)\times 10^{1}\) & \((1.42\pm 0.06)\times 10^{1}\) \\ N47W124 & RBF & \((2.26\pm 0.07)\times 10^{-1}\) & \((-7.2\,\pm\,0.1)\,\times 10^{1}\) & \(4.77\pm 0.01\) & \((2.77\pm 0.04)\times 10^{1}\) & \((1.97\pm 0.02)\times 10^{1}\) \\ & **AK** & \((\textbf{1.90}\pm 0.05)\times 10^{-1}\) & \((-1.06\pm 0.01\)\) & \(\textbf{4.43}\pm 0.01\) & \((\textbf{2.53}\pm 0.03)\times 10^{1}\) & \((\textbf{1.77}\pm 0.02)\times 10^{1}\) \\ & Gibbs & \((2.21\pm 0.08)\times 10^{-1}\) & \((-7.7\,\pm\,0.4)\,\times 10^{-4}\) & \(4.72\pm 0.05\) & \((2.74\pm 0.05)\times 10^{1}\) & \((1.94\pm 0.03)\times 10^{1}\) \\ \hline & DKL & \((2.34\pm 0.08)\times 10^{-1}\) & \((-7.1\,\pm\,0.2)\,\times 10^{1}\) & \(4.78\pm 0.02\) & \((2.82\pm 0.05)\times 10^{1}\) & \((\textbf{1.98}\pm 0.03)\times 10^{1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Random Sampling Benchmarking Results.
Figure 12: Random Sampling Metrics versus Number of Collected Samples.
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline
**Environment** & **Method** & **SMSE\(\downarrow_{0}^{\downarrow_{0}}\)** & **MSLL\(\downarrow^{0}\)** & **NLPD\(\downarrow\)** & **RMSE\(\downarrow_{0}\)** & **MAE\(\downarrow_{0}\)** \\ \hline N17E073 & RBF & \((1.41\pm 0.04)\times 10^{-1}\) & \((-9.8\,\pm\,0.2)\,\times 10^{-1}\) & \(4.61\pm 0.02\) & \((2.38\pm 0.03)\times 10^{1}\) & \((1.70\pm 0.03)\times 10^{1}\) \\ \hline & AK & \((\textbf{1.01}\pm 0.02)\times 10^{-1}\) & \(-1.32\pm 0.04\) & \(\textbf{4.36}\pm 0.02\) & \((\textbf{2.00}\pm 0.02)\times 10^{1}\) & \((\textbf{1.43}\pm 0.02)\times 10^{1}\) \\ \hline & Gibbs & \((1.37\pm 0.06)\times 10^{-1}\) & \(-1.20\pm 0.08\) & \(4.59\pm 0.03\) & \((2.35\pm 0.06)\times 10^{1}\) & \((1.72\pm 0.05)\times 10^{1}\) \\ \hline & DKL & \((1.33\pm 0.07)\times 10^{-1}\) & \(-1.09\pm 0.05\) & \(4.59\pm 0.03\) & \((2.32\pm 0.06)\times 10^{1}\) & \((1.62\pm 0.05)\times 10^{1}\) \\ \hline N43W080 & RBF & \((7.8\,\pm\,0.2)\,\times 10^{-2}\) & \(-1.41\pm 0.01\) & \(3.96\pm 0.01\) & \((1.28\pm 0.01)\times 10^{1}\) & \(9.0\pm 0.1\) \\ & AK & \((\textbf{5.1}\pm 0.2)\times 10^{-2}\) & \(-1.72\pm 0.02\) & \(\textbf{3.74}\pm 0.03\) & \((\textbf{1.02}\pm 0.02)\times 10^{1}\) & \(\textbf{6.0}\pm 0.1\) \\ \hline & Gibbs & \((8.0\,\pm\,0.6)\,\times 10^{-2}\) & \(-1.48\pm 0.05\) & \(3.98\pm 0.06\) & \((1.31\pm 0.06)\times 10^{1}\) & \(9.8\,
kernel also has better uncertainty quantification than the RBF kernel and DKL.
Figure 13 visually compares kernels' prediction, uncertainty, and absolute error after collecting \(570\) samples in environment N47W124. Note that the prediction and error maps use the same color scale for easy comparison across different methods. Each uncertainty map uses its color scale - red color only indicates relatively high uncertainty _within_ the map. These rules applied to other heat maps hereafter. The AK learns more detailed environmental features (_c.f._, Figure (b)b), hence obtaining better SMSE; the AK also assigns higher uncertainty to the region that is relatively more difficult to model, thus giving better MSLL. As a comparison, the RBF kernel ignores these details and assigns higher uncertainty to the sparsely sampled areas. The Gibbs kernel also has a smooth prediction in the complex region because it learns an incorrect length-scale function. Instead of assigning small length-scales to the complex region, it places them in the lower-right corner, indicated by the high uncertainty. DKL's prediction and uncertainty maps have similar patterns to the Gibbs kernel.
#### 5.1.7 Active Sampling Results
The objective of active sampling experiments is to investigate whether prediction uncertainty can influence sampling towards significant areas and ultimately enhance accuracy. By comparing the SMSE results of the AK in Table 2 and Table 3, we observe a clear improvement in accuracy when using the active sampling strategy. Specifically, the relative accuracy improvements are \(9\%,15\%,23\%\), and \(6\%\) in N17E073,
Figure 14: Active Sampling Metrics versus Number of Collected Samples.
Figure 13: Snapshots of the Random Sampling Experiments with Different Kernels.
N43W080, N45W123, and N47W124, respectively, which answers **Q2**. The AK's better uncertainty quantification can further enhance prediction accuracy when the data collection strategy is guided by predictive uncertainty. However, we do not observe consistent improvements when using active sampling with the other kernels. Although they all improve the SMSE in N45W123 and N47W124, they do not improve the accuracy in the other two environments. Note that the relative improvements in N17E073 and N47W124 are smaller because the AK has already achieved good accuracy in these two environments when using random samples, so there is less room to improve than in the other two environments.
The AK still performs the best in the active sampling experiments, as seen in Table 3 and Figure 14. The SMSE curves in Figure 14 and Figure 12 are similar, except that the advantage gap of the AK shrinks in N47W124 and increases in N43W080. We attribute the faster error drop in N43W080 to the better sample distribution. Figures (a)a and (e)e show that, when using the AK, more informative samples are collected in the complex regions in N43W080. Figure 15 also shows the prediction, uncertainty, and 570 samples of the three non-stationary kernels in N45W123, where all methods provide better accuracy when using active sampling strategies. The predictions of the AK and the Gibbs kernel are visually similar. The minor difference is located at the lower-right corner, where the AK learns more details (_c.f._, Figure (d)d). This difference comes from the different sampling patterns. The AK samples the right part densely while the Gibbs kernel emphasizes the upper-right (_c.f._, Figures (f)f and (g)g). Also, the Gibbs kernel samples the left part of the environment very sparsely.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Environment** & **Method** & **SMSE\({}_{\downarrow}^{\downarrow}\)** & **MSLL\({}_{\downarrow}^{\downarrow}\)** & **NLPD\({}_{\downarrow}\)** & **RMSE\({}_{\downarrow}\)** & **MAE\({}_{\downarrow}\)** \\ \hline N17E073 & RBF & \((1.45\pm 0.03)\times 10^{-1}\) & \((-9.7\pm 0.2)\times 10^{-1}\) & \(4.63\pm 0.02\) & \((2.42\pm 0.02)\times 10^{1}\) & \((1.73\pm 0.02)\times 10^{1}\) \\ & **AK** & \((\textbf{1.14\pm 0.04})\times\textbf{10^{-1}}\) & \(\textbf{-1.27\pm 0.03}\) & \(\textbf{4.41\pm 0.04}\) & \((\textbf{2.14\pm 0.04})\times\textbf{10^{1}}\) & \((\textbf{1.51\pm 0.02})\times\textbf{10^{1}}\) \\ \hline & Gibbs & \((1.43\pm 0.07)\times 10^{-1}\) & \(-1.16\pm 0.04\) & \(4.61\pm 0.04\) & \((2.40\pm 0.07)\times 10^{1}\) & \((1.76\pm 0.06)\times 10^{1}\) \\ \hline & DKL & \((1.38\pm 0.09)\times 10^{-1}\) & \(-1.01\pm 0.06\) & \(4.61\pm 0.04\) & \((2.38\pm 0.08)\times 10^{1}\) & \((1.67\pm 0.06)\times 10^{1}\) \\ N43W080 & RBF & \((7.7\pm 0.4)\times 10^{-2}\) & \(-1.40\pm 0.02\) & \(3.94\pm 0.02\) & \((1.27\pm 0.03)\times 10^{1}\) & \(8.8\pm 0.2\) \\ \hline & AK & \((\textbf{6.6\pm 0.2})\times\textbf{10^{-2}}\) & \(\textbf{-1.64\pm 0.04}\) & \(\textbf{3.78\pm 0.03}\) & \((\textbf{1.14\pm 0.02})\times\textbf{10^{1}}\) & \(\textbf{7.69\pm 0.09}\) \\ \hline & Gibbs & \((7.6\pm 0.9)\times 10^{-2}\) & \(-1.50\pm 0.05\) & \(3.91\pm 0.07\) & \((1.25\pm 0.07)\times 10^{1}\) & \(9.0\pm 0.6\) \\ \hline & DKL & \((7.0\pm 0.1)\times 10^{-2}\) & \(-1.56\pm 0.07\) & \(3.85\pm 0.06\) & \((1.19\pm 0.08)\times 10^{1}\) & \(8.1\pm 0.6\) \\ \hline N45W123 & RBF & \((1.60\pm 0.06)\times 10^{-1}\) & \((-9.3\pm 0.2)\times 10^{-1}\) & \(4.39\pm 0.02\) & \((1.93\pm 0.04)\times 10^{1}\) & \((1.29\pm 0.02)\times 10^{1}\) \\ \hline & **AK** & \((\textbf{1.32\pm 0.06})\times\textbf{10^{-4}}\) & \(\textbf{-1.43\pm 0.04}\) & \(\textbf{4.15\pm 0.03}\) & \((\textbf{1.71\pm 0.04})\times\textbf{10^{1}}\) & \((\textbf{1.21\pm 0.03})\times\textbf{10^{1}}\) \\ \hline & Gibbs & \((1.38\pm 0.07)\times 10^{-1}\) & \(-1.34\pm 0.04\) & \(4.30\pm 0.03\) & \((1.79\pm 0.05)\times 10^{1}\) & \((1.32\pm 0.04)\times 10^{1}\) \\ \hline & DKL & \((1.7\pm 0.2)\times 10^{-1}\) & \(-1.06\pm 0.08\) & \(4.41\pm 0.06\) & \((1.99\pm 0.09)\times 10^{1}\) & \((1.40\pm 0.06)\times 10^{1}\) \\ N47W124 & RBF & \((2.23\pm 0.06)\times 10^{-1}\) & \(-7.4\pm 0.1)\times 10^{-1}\) & \(4.76\pm 0.01\) & \((2.75\pm 0.03)\times 10^{1}\) & \((1.94\pm 0.02)\times 10^{1}\) \\ \hline & **AK** & \((\textbf{1.85\pm 0.04})\times\textbf{10^{-4}}\) & \(\textbf{-1.10\pm 0.03}\) & \(\textbf{4.48\pm 0.03}\) & \((\textbf{2.50\pm 0.03})\times\textbf{10^{1}}\) & \((\textbf{1.79\pm 0.03})\times\textbf{10^{1}}\) \\ \hline & Gibbs & \((2.12\pm 0.08)\times 10^{-1}\) & \((-9.0\pm 0.5)\times 10^{-1}\) & \(4.73\pm 0.03\) & \((2.69\pm 0.05)\times 10^{1}\) & \((1.91\pm 0.02)\times 10^{1}\) \\ \hline & DKL & \((2.36\pm 0.06)\times 10^{-1}\) & \((-7.7\pm 0.4)\times 10^{-1}\) & \(4.78\pm 0.03\) & \((2.83\pm 0.03)\times 10^{1}\) & \((1.99\pm 0.04)\times 10^{1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Robotic Information Gathering Benchmarking Results.
Figure 15: Snapshots of the Active Sampling Experiments with Different Kernels.
Figure 15d shows that DKL is good at depicting the river. However, it connects the two "hotspots" at the upper-right corner, which is an interesting phenomenon: two non-adjacent locations are correlated. This phenomenon can be found in all the DKL predictions (see Figures 13d and 17d). The cause of this behavior is that the neural network in DKL warps the geometry of the input space, so the correlation of two given data points is no longer proportional to their distance in the original input space. It is non-trivial to explain the prediction uncertainty and sampling distribution of DKL shown in Figure 15h.
#### 5.1.8 Informative Planning Results
The RIG experiments are more challenging than random and active sampling because once the robot decides to visit an informative waypoint, it has to collect the intermediate samples along the trajectory, so the results in Table 4 should not be compared with that of Tables 2 and 3. Given a fixed maximum number of samples, the number of decision epochs of RIG is much smaller than that of active sampling, which makes informed decisions more essential. Table 4 shows that AK consistently leads across all metrics in the four environments with the simple informative planning strategy described in Algorithm 2. The conclusions we can draw from Figure 16 are the same as in active sampling experiments. From Figure 16, we can see again that the AK has the fastest error reduction, especially in N47W124. All non-stationary kernels have better MLSS than the stationary baseline. The AK ranks first in MSLL, and the Gibbs kernel outperforms DKL.
Figure 16: Robotic Information Gathering Metrics versus Number of Collected Samples.
Figure 17: Snapshots of the Robotic Information Gathering Experiments with Different Kernels.
Figure 17 is a snapshot of different methods' prediction, uncertainty, and absolute error after collecting \(400\) samples in N17E073. The prediction maps show that the RBF kernel misses many environmental features that non-stationary kernels can capture. We observe the following behaviors by comparing the patterns in the uncertainty maps and error maps.
* Regardless of the prediction errors, the RBF kernel gives the less-sampled area higher uncertainty, so the robot's sampling path uniformly covers the space.
Figure 19: Sensitivity Analysis of the Number of Hidden Units \(H\).
Figure 18: Sensitivity Analysis of the Number of Base Kernels \(M\).
* The AK assigns higher uncertainty in the regions with more significant spatial variation; thus, the sampling path focuses more on the complex region.
* The Gibbs kernel also has higher uncertainty in the rocky region but does not assign high uncertainty to the lower right. Therefore, the sampling path concentrates on the upper-right corner and misses some high-error spots at the bottom.
* When using DKL, the robot also samples the upper-right corner densely, and the prediction error at the bottom of the map is the largest across different methods. However, DKL places high uncertainty in the
Figure 21: Sensitivity Analysis of the Maximum Primitive Length-Scale \(\ell_{\text{max}}\).
Figure 20: Sensitivity Analysis of the Minimum Primitive Length-Scale \(\ell_{\text{min}}\).
high-area region, which can guide the robot to visit these spots later.
### Further Evaluation and Analysis
We evaluate the AK under different parameter settings for sensitivity analysis and compare four variants of the AK for ablation study. The challenges of learning the model are also discussed in this section.
#### 5.2.1 Sensitivity Analysis
We use the same experiment configurations as the main experiments in the sensitivity analysis but only run the random sampling strategy. In each analysis, we only change one target parameter to different settings and keep all the other parameters fixed. Figure 18 presents the sensitivity analysis results of the number of base kernels \(M\), which should be larger than \(2\). Increasing \(M\) brings better performance, albeit with a diminishing return and higher computational complexity. Choosing a number in the range of \([5,10]\) is a good trade-off between performance and computational efficiency.
Figure 19 shows that the AK is not sensitive to the number of hidden units in the neural network as long as \(H\) is not too small. When \(H=2\), the uncertainty quantification ability decreases, as indicated by the blue MSLL curve. In this case, the AK can only blend the minimum and maximum primitive length-scales, and the instance selection mechanism can only use a two-dimensional membership vector.
Smaller \(\ell_{\text{min}}\) yields better performance, as shown in Figure 20, albeit with a diminishing improvement. The blue and green lines overlap, meaning that the advantage is negligible when choosing a minimum length-scale smaller than \(0.01\). If the inputs are normalized to \([-1,1]\), setting the minimum primitive length-scale to \(0.01\) is appropriate. It is worth noting that this is the minimum primitive length-scale for the length-scale selection component. It does not mean that the AK can only learn the minimum correlation corresponding to this minimum length-scale because the instance selection component can further decrease the kernel values.
As shown in Figure 21, the AK is robust to the choice of the maximum length-scale as long as it is not too small, _e.g._,
Figure 23: Results of the Over-Fitting Analysis in the _Volicano_ Environment Introduced in Figure 1 and N17E073.
Figure 22: Results of the Four Variants in the Ablation Study.
Figure 24: **An Active Elevation Mapping Field Experiment.****(a)** illustrates the physical space the ASV is mapping, and **(b)** shows the ASV and its components. **(c)** shows the rectangular workspace for the elevation mapping experiment. We can see two areas with significant elevation features in two highlighted areas, but other regions are opaque. **(d)** and **(f)** are two snapshots of the GPR prediction in the rectangular workspace, with the predictive-mean map at the bottom and the uncertainty map (_i.e._, standard deviation) at the top. In **(d)**, the lower part shows that some features of the highlighted areas have already been detected. The uncertainty is significant on the left side of the workspace and a smaller region in the top right. **(e)** shows the snapshot at the end of the experiment. The ASV has extensively explored the lower left portion and has a detailed estimate of its elevation map. The smooth portion in the middle shows differences in elevation, which are not visible in the satellite image. The remaining areas of high uncertainty are at boundaries of elevation changes in that region and the top right.
\(0.2\) or \(0.3\). If the inputs are normalized to \([-1,1]\), choosing a value in the range \([0.5,1.0]\) is reasonable.
To conclude, these results are positive indicators of addressing **Q3**: the AK has robust performance to various parameter settings and does not require laborious parameter tuning.
#### 5.2.2 Ablation Study
We compare four AK variants in the ablation study via random sampling experiments. Full means the AK presented in the paper, Weight represents the AK with only length-scale selection, Mask stands for instance selection alone, and NNx2 uses two separated neural networks to parameterize the similarity attention and visibility attention independently. Figure 22 shows that using only the instance selection component deteriorates the performance significantly, so the length-scale selection component contributes more to the performance, which answers **Q4**. We do not observe obvious performance change after dropping the instance selection component. Nonetheless, as illustrated in Figure 8, we expect instance selection to provide better modeling of sharp transitions. Since instance selection improves the prediction only in a small region, the improvement might be subtle in the aggregated evaluation metrics. With our current training scheme, using two separate neural networks does not provide a better performance, and one of the MSLL curves is surpassed by the one-network version (Figure 22). The two-network implementation might show its strength with a more refined approach to parameter training.
#### 5.2.3 Over-Fitting Analysis
Non-stationary kernels can enhance the modeling flexibility of GPR, but they are also more susceptible to over-fitting. This can lead to degraded prediction accuracy and uncertainty estimates. To evaluate the robustness of non-stationary kernels, we present an over-fitting analysis in N17E073 and the Mount St. Helens environment. The latter is referred to as the volcano environment hereafter. We sample \(600\) training data from the environment uniformly at random. All the training configurations are the same as in Section 5.1.3, except for the number of optimization iterations. We train all the models for \(2000\) iterations and evaluate the prediction on the training set and a \(100\times 100\) test grid at each optimization step. Figure 23 shows the training and test MSLL. In some environments, as shown in Figures 23a and 23b, the AK is fairly robust, while the Gibbs kernel and DKL show a clear over-fitting trend - the training MSLL goes down while the test MSLL goes up. However, as shown in Figures 23c and 23d, all the non-stationary kernels suffer from over-fitting in some environments, such as N17E073. To mitigate this issue, after collecting one new sample, the optimizer takes only one gradient step on the whole dataset. This heuristic training scheme works well in practice. We have tried to optimize the model for more iterations at each decision epoch. All the non-stationary kernels give poor prediction (the AK is still more robust in this case), and the issue persists even after collecting more data. Overall, the answer to **Q5** is positive: the AK is more robust to over-fitting than other non-stationary kernels, but it can still over-fit in some environments. Developing more advanced training schemes to mitigate over-fitting is an essential future direction.
### Field Experiment
The proposed AK is demonstrated in a RIG task - active elevation, _a.k.a._ bathymmetric mapping for underwater terrain. Figure 24a shows our robot working in the environment. The goal is to explore an _a priori_ unknown quarry lake and build an elevation map of the underwater terrain. There are two reasons for choosing this task. First, the underwater terrain is static, so the ground-truth environment is available by aggregating the sampled data across different field experiment trials after offsetting the water surface level. Second, the underwater terrain in our target environment has a clear separation between "interesting" regions and "boring" areas, which makes it an ideal testbed for RIG with non-stationary GPs.
#### 5.3.1 Target Environment
The target environment is a quarry lake formed by seeped-in groundwater and precipitation since mining and quarrying have been suspended for a long time. The floor of the quarry lake is complex in that there are many submerged quarry stones and even abandoned equipment. Our goal is to build an elevation map within the workspace, _i.e._, the white rectangle shown in Figure 24c, with a small number of samples. The workspace is \(80\times 88\) meters We chose this workspace because the central part is relatively flat, while the two circled areas have interesting spatial variations. We can vaguely see the environmental features in these circled spots from the satellite imagery.
#### 5.3.2 Hardware Setup
We deploy the Autonomous Surface Vehicle (ASV) shown in Figure 24b. The robot has a single-beam sonar pointing downward to collect depth measurements and a DJI Manifold 2-C computer for onboard computation. The sonar is the Ping Sonar Altimeter and Echosounder from BlueRobotics. Its maximum measurement distance is \(50\) meters underwater, and the beam width is \(30\) degrees. It comes with a Python software interface, and we implemented its ROS driver, which is publicly available at github.com/Weizhe-Chen/single_beam_sonar. The ASV from Clearpath Robotics has a built-in Extended Kalman Filter (EKF) localization module that fuses the GPS signals and the UM6 Inertial Measurement Unit (IMU) data. The robot also has an embedded WiFi router for communication in the field. The ASV is \(1.3,0.94\), and \(0.34\) meters in length, width, and height, respectively, and is actuated by two thrusters at the rear. It is a differential-drive robot, but its thrusters' maximum forward spinning speed is faster than the backward one. We restrict the maximum linear velocity to \(0.7\) meters per second and send linear and angular velocities to the robot to track an informative waypoint using a PD controller available at github.com/Weizhe-Chen/tracking_pid. Since the localization is unreliable, the robot only needs to reach a two-meter-radius circle centered at the waypoint.
#### 5.3.3 Results
Figure 24 shows the snapshots of the model prediction, uncertainty, and sampling path at different stages. We can see that the prediction uncertainty is effectively reduced after sampling. Most of the samples (_i.e._, yellow dots) are collected in critical regions with drastic elevation variations. Such a biased sampling pattern allows the robot to model the general trend of smooth regions with a
small number of samples while capturing the characteristic environmental features at a fine granularity.
## 6 Limitations and Future Work
Although the AK has the same asymptotic computational complexity as the RBF kernel, its empirical runtime is slower than that of the RBF kernel. Thus one important future work is to speed up the computation. We leverage heuristics to train the non-stationary kernels in our experiments, which can be improved by a more principled training scheme in the future. Using a stationary kernel in non-stationary environments is just one example of _model misspecification_. Investigating the influence of other types of model misspecification on RIG is interesting. For example, the Gaussian likelihood assumes no sensing outliers, and the observational noise scale is the same everywhere. Developing proper ways to handle sensing outliers and modeling _heteroscedastic noise_ can be important future work for RIG. We only tried neural-network parameterization for the weighting function and the membership function. Comparing different parameterization methods for the AK is also valuable. Although we have only showed the efficacy of AK in elevation mapping tasks, it has potential to benefit other applications such as 3D reconstruction, autonomous exploration and inspection, as well as search and rescue. Exploring its utility in these domains would be interesting. Additionally, while we focused on non-stationary kernels in the spatial domain, developing spatiotemporal kernels is crucial for RIG in dynamic environments.
## 7 Conclusion
In this paper, we investigate the uncertainty quantification of probabilistic models, which is decisive for the performance of RIG but has received little attention. We present a family of non-stationary kernels called the Attentive Kernel, which is simple, robust, and can extend any stationary kernel to a non-stationary one. An extensive evaluation of elevation mapping tasks shows that AK provides better accuracy and uncertainty quantification than the two existing non-stationary kernels and the stationary RBF kernel. The improved uncertainty quantification guides the informative planning algorithms to collect more valuable samples around the complex area, thus further reducing the prediction error. A field experiment demonstrates that AK enables an ASV to collect more samples in important sampling locations and capture the salient environmental features. The results indicate that misspecified probabilistic models significantly affect RIG performance, and GPR with AK provides a good choice for non-stationary environments.
## 8 Acknowledgement
We acknowledge the support of NSF with grant numbers 1906694, 2006886, and 2047169. We are also grateful for the computational resources provided by the Amazon AWS Machine Learning Research Award. The constructive comments by the anonymous conference reviewers are greatly appreciated. We thank Durugakant Pushp and Mahmoud Ali for their help in conducting the field experiment.
|
2305.18262 | Beyond Confidence: Reliable Models Should Also Consider Atypicality | While most machine learning models can provide confidence in their
predictions, confidence is insufficient to understand a prediction's
reliability. For instance, the model may have a low confidence prediction if
the input is not well-represented in the training dataset or if the input is
inherently ambiguous. In this work, we investigate the relationship between how
atypical(rare) a sample or a class is and the reliability of a model's
predictions. We first demonstrate that atypicality is strongly related to
miscalibration and accuracy. In particular, we empirically show that
predictions for atypical inputs or atypical classes are more overconfident and
have lower accuracy. Using these insights, we show incorporating atypicality
improves uncertainty quantification and model performance for discriminative
neural networks and large language models. In a case study, we show that using
atypicality improves the performance of a skin lesion classifier across
different skin tone groups without having access to the group attributes.
Overall, we propose that models should use not only confidence but also
atypicality to improve uncertainty quantification and performance. Our results
demonstrate that simple post-hoc atypicality estimators can provide significant
value. | Mert Yuksekgonul, Linjun Zhang, James Zou, Carlos Guestrin | 2023-05-29T17:37:09Z | http://arxiv.org/abs/2305.18262v2 | # Beyond Confidence: Reliable Models Should Also Consider Atypicality
###### Abstract
While most machine learning models can provide confidence in their predictions, confidence is insufficient to understand a prediction's reliability. For instance, the model may have a low confidence prediction if the input is not well-represented in the training dataset or if the input is inherently ambiguous. In this work, we investigate the relationship between how atypical (rare) a sample or a class is and the reliability of a model's predictions. We first demonstrate that atypicality is strongly related to miscalibration and accuracy. In particular, we empirically show that predictions for atypical inputs or atypical classes are more overconfident and have lower accuracy. Using these insights, we show incorporating atypicality improves uncertainty quantification and model performance for discriminative neural networks and large language models. In a case study, we show that using atypicality improves the performance of a skin lesion classifier across different skin tone groups without having access to the group attributes. Overall, _we propose that models should use not only confidence but also atypicality to improve uncertainty quantification and performance_. Our results demonstrate that simple post-hoc atypicality estimators can provide significant value. Our code will be released at [https://github.com/mertyg/beyond-confidence-atypicality](https://github.com/mertyg/beyond-confidence-atypicality).
## 1 Introduction
_Typicality_ is an item's resemblance to other category members (Rosch & Mervis, 1975). For example, while a dove and a sparrow are typical birds, a penguin is an atypical bird. Many works from cognitive science (e.g., Rips (1989); Rips et al. (1973); Mervis & Pani (1980)) suggest that typicality plays a crucial role in category understanding. For instance, humans have been shown to learn, remember, and refer to typical items faster (Murphy, 2004). Similarly, the representativeness heuristic is the tendency of humans to use the typicality of an event as a basis for decisions (Tversky & Kahneman, 1974). This cognitive bias is effective for making swift decisions, but it can lead to poor judgments of uncertainty. For instance, the likelihood of typical events can be overestimated (Tversky & Kahneman, 1974) or uncertainty judgments can be inferior for atypical events (Tversky & Kahneman, 1992).
While it is hard to quantify the uncertainty of human judgments, machine learning models provide confidence in their predictions. However, confidence alone can be insufficient to understand the
reliability of a prediction. For instance, a low-confidence prediction could arise from an ambiguity that is easily communicated, or due to the sample being underrepresented in the training distribution. Similarly, a high-confidence prediction could be reliable or miscalibrated. Our main proposal is that _models should quantify not only the confidence but also the atypicality_ to understand the reliability of predictions or the coverage of the training distribution. However, many machine learning applications rely on pretrained models that solely provide confidence levels, devoid of any measure of atypicality.
**Contributions:** To support our position, we use a simple formalization of atypicality estimation. With the following studies, we show that by using simple atypicality estimators, we can:
**1. Understand Prediction Quality:** Calibration is a measure that assesses the alignment between predicted probabilities of a model and the true likelihoods of outcomes (Gueiting and Raftery, 2007). Neural networks (Guo et al., 2017) or even logistic regression (Bai et al., 2021) can be miscalibrated out-of-the-box. Here, we argue that using atypicality can give insights into when a model's confidence is reliable. Through theoretical analysis and extensive experimentation, we demonstrate that atypicality results in lower-quality predictions. Specifically, _we show that predictions for atypical inputs and samples from atypical classes are **more overconfident and have lower accuracy**.
**2. Improve Calibration and Accuracy:**_Recalibration_ methods offer some mitigation to miscalibration (Guo et al., 2017) by adjusting a probabilistic model. We show that models need different adjustments according to the atypicality of inputs and classes, and atypicality is a key factor in recalibration. In light of these findings, we propose a simple method: _Atypicality-Aware Recalibration_. Our recalibration algorithm takes into account the atypicality of the inputs and classes and is simple to implement. We show that complementing recalibration methods with atypicality improves uncertainty quantification and the accuracy of predictors. Further, in a case study for skin lesion classification, we show that atypicality awareness can improve performance across different skin-tone subgroups without access to group annotations.
**3. Improve Uncertainty Sets:** An alternative approach to quantify uncertainty is to provide prediction sets that contain the label with high probability (Angelopoulos et al., 2020). Here, we investigate existing methods with atypicality and show that uncertainty sets could underperform for atypical or low-confidence samples. By using atypicality, we demonstrate the potential for improving uncertainty sets.
Overall, we propose that **models should also consider atypicality, and we show simple- and easy-to-implement atypicality estimators can provide significant value**.
## 2 Interpreting Uncertainty with Atypicality
**Motivation:** In many machine learning applications, we have access to a model's confidence, which aims to quantify the likelihood that a prediction will be accurate. In classification, model output is a probability distribution over classes and confidence is the predicted probability of the top class, i.e. \(\max_{y}\,\hat{\mathbb{P}}(Y=y|X=x)\). In practical scenarios, confidence is the primary tool used to evaluate the reliability of a prediction where higher confidence is associated with better predictions. However, the
Figure 1: **Atypicality in Uncertainty. Left:** We show examples from the ImageNet-R dataset with our atypicality framework. **Right:** We provide a conceptualization of the quadrants. Using atypicality, we can understand prediction quality (§3), improve predictions (§4), and uncertainty sets (§5).
uncertainty in confidence can stem from different sources that require different treatment (Mukhoti et al., 2021).
Here, we call a prediction _reliable_ if it is high-confidence and well-calibrated. High confidence could be reliable or miscalibrated, and low confidence could be due to ambiguity or rare inputs. We propose that _atypicality_ provides a natural way to understand reliability when combined with confidence. A sample is called typical if it is well-represented in the previously observed samples, e.g., an image of a dog that is similar to other dogs in the training data. However, if the image is unlike any other seen during training, it is atypical. We argue that atypicality can help us interpret a prediction's reliability. Below we categorize samples and predictions according to atypicality and confidence in four quadrants (Figure 1).
**High-confidence and representative:** Reliable predictions often fall within the **Reliable Quadrant**, which includes _typical, high-confidence_ samples. These samples are well-represented in the training dataset (typical), thus we expect the high-confidence prediction to be reliable. For instance, the first image on the top left (Figure 1) is a typical golden retriever and the model makes a reliable prediction.
High-confidence yet far from the support: Having high-confidence does not always indicate reliability. If the sample does not have support in the training distribution, the confidence could be miscalibrated. Such samples lie in the Extrapolation Quadrant which contains _atypical, high-confidence_ samples. For instance, the second image in the top right of Figure 1 is a _toy_ hog and the model has not seen similar ones during training.
Low confidence due to ambiguity: In contrast, low confidence could also be reliable when it correctly reflects an ambiguity. Such samples are in the Ambiguous Quadrant that contains _typical, low-confidence_ samples. These are typical since they may represent multiple classes; yet, due to ambiguity, the model's confidence is low. For instance, the second image in the bottom left of Figure 1 can both be a hog and a comic book.
Low confidence and rare: For samples that are not well-represented in training data, we expect to have low-quality predictions. Untrustworthy Quadrant comprises _atypical, low-confidence_ samples that can include extremely rare subgroups, for which we expect miscalibration and lower accuracy. For example, the image in Figure 1 bottom right is an origami hog that was not seen in training.
These examples suggest that relying solely on confidence does not provide a complete understanding of the reliability of the predictions, and we can use atypicality to interpret and improve reliability.
**Formalizing Atypicality:** Atypicality here is defined with respect to the training distribution. Informally, an input or a class is atypical if it is not _well-represented_ in the training distribution. For instance, if there are no or limited similar examples to an input, it can be called atypical. Note that this notion is not restricted to being 'out-of-distribution' (Hendrycks and Gimpel, 2016), since in-distribution groups could also be atypical or rare, and our goal is to perform reliably for the entire spectrum.
Formally, let \(X\in\mathbb{R}^{d}\) be the random variable denoting features and \(Y\in\mathcal{Y}=\{1,2,...,C\}\) denote the class, where we focus on classification.
**Definition 2.1** (Input Atypicality).: We define the atypicality of the input \(x\) as3
Footnote 3: Here atypicality differs from ‘typical sets’ in information theory that refers to a sequence of variables (Thomas and Joy, 2006).
\[a_{X}(x)=-\max_{y}\log\mathbb{P}(X=x|Y=y).\]
We use the logarithm of the class-conditional densities due to high dimensionality and density values being close to zero. Intuitively, for a dog image \(x\), if \(\mathbb{P}(X=x|Y=\text{dog})\) has a low value, we call \(x\) an atypical dog image. Overall, if \(a(x)\) is high, then we call \(x\) an atypical input. Specifically, if an input is not typical for any class, then it is atypical with respect to the training distribution. Similarly, we can also use marginal density, \(\mathbb{P}(X=x)\), or distance4 to quantify atypicality.
Footnote 4: For an input \(x\), if the nearest neighbor (NN) distance is large, then we call \(x\) atypical as all inputs in the training set are far from \(x\). Density and distance are connected through non-parametric density estimation and Jiang et al. (2018) shows that NN distance can recover high-density regions.
Similarly, the notion of atypical (rare) classes is prevalent in imbalanced classification (Cao et al., 2019; Zhong et al., 2021). Ensuring reliable performance for atypical classes can be safety-critical,
e.g., for a rare presence of dangerous melanoma (Daneshjou et al., 2022). We define class atypicality in the following:
**Definition 2.2** (Class Atypicality).: For a class \(y\), atypicality of a class is defined as
\[a_{Y}(y)=-\log\mathbb{P}(Y=y).\lx@note{footnote}{When the meaning is unambiguous, we omit the subscript to denote $a(X)$ or $a(Y)$ for notational brevity.}\]
**Estimating Atypicality for Discriminative Models:** Quantifying input atypicality requires access to the class-conditional / marginal distributions. In practice, for neural networks trained for classification, these distributions are unavailable and we need to perform the estimation. This estimation can be challenging if the dimensionality is large, or the data is unstructured, requiring assumptions about the distributions. Prior works (Mukhoti et al., 2021; Lee et al., 2018) showed that Gaussian Mixture Models (GMMs) in the embedding space of neural networks can be used to model these distributions.
In experiments, we use Gaussians with shared covariance, i.e. \(\hat{\mathbb{P}}(X=x|Y=c)\sim\ N(\hat{\mu}_{c},\hat{\Sigma})\), to estimate input atypicality. We perform the estimation in the penultimate layer of neural networks used to make predictions, using maximum-likelihood estimation with samples from the training data. We explore other metrics, such as \(k\)-Nearest Neighbors distance. We give implementation details and results with different metrics in Appendix B.1. With these estimators, atypicality estimation is cheap and can run on a CPU. Our goal is to show that simple estimators can already reap large benefits. Our framework is flexible and exploring more sophisticated estimators is a topic for future work.
**Atypicality for LLMs:** LLMs are increasingly used for classification (Brown et al., 2020). Modern LLMs are autoregressive models that compute a marginal distribution, \(\hat{\mathbb{P}}_{\text{LLM}}(X)\). We compute the negative log-likelihood of a prompt or a label and use this as an atypicality metric, i.e. \(a_{X}(x)=-\log\hat{\mathbb{P}}_{\text{LLM}}(x)\), \(a_{Y}(y)=-\log\hat{\mathbb{P}}_{\text{LLM}}(y)\). For instance, below are typical and atypical prompts for AGNews dataset:
**Classify the news articles into the categories of World, Sports, Business, and Technology.**
**Article:** Safin tallest obstacle to host #93s-patrictg games hope AS tennis fans go, Houston #39s-Ism #39;Matress Mack #39; McIngvale is very rich, extremely forthright, exceedingly patricath and unlinchingly Republican.
**Answer:**
**Classify the news articles into the categories of World, Sports, Business, and Technology.**
**Article:** Delta Air Lines Pregraves Chapter 11 Filing Delta Air Lines Inc. could file for Chapter 11 bankruptcy protection as soon as next week, a source familiar with the matter said yesterday.
**Answer:**
_Applicability:_ 171.50. _Percentile:_ 56.9
## 3 Understanding the Prediction Quality with Atypicality
In this section, we show how our framework can be applied to understand the quality of predictions.
**Experimental Setup:** We investigate three classification settings across a range of datasets:
1. **Balanced Supervised Classification:** We use ResNet18-50-152 (He et al., 2016), WideResNet28 (Zagoruyko and Komodakis, 2016), RoBERTa (Liu et al., 2019) trained on ImageNet (Deng et al., 2009), CIFAR10,100 (Krizhevsky, 2009), MNLI (Williams et al., 2018) respectively.
2. **Imbalanced Supervised Classification:** We use ResNet18, ResNet50, ResNet152 trained on CIFAR-LT, ImageNet-LT and Places365-LT where models and data are mostly from (Zhong et al., 2021; Mukhoti et al., 2020). Note that all of the test and validation sets have balanced class distributions.
3. **Classification with LLMs:** We use open-source Alpaca7B (Taori et al., 2023) on IMDB (Maas et al., 2011), TREC (Li and Roth, 2002), and AG News (Zhang et al., 2015) datasets with the prompts from Zhao et al. (2021).
Details on datasets, models, and prompts are in Appendix C. Our experiments were run on a single NVIDIA A100-80GB GPU. We report error bars over 10 random calibration/test splits.
### Atypicality is Correlated with Miscalibration
We first explore the importance of atypicality to understand model calibration. Calibration quantifies the quality of a probabilistic model (Gneiting and Raftery, 2007). Informally, a model is considered
perfectly calibrated if all events that are predicted to occur \(P\%\) of the time occur \(P\%\) of the time for any \(P\in[0,100]\).
For the sake of simplicity, consider a binary classification problem where the predictor is \(\hat{\mathbb{P}}:\mathcal{X}\rightarrow[0,1]\). We quantify miscalibration with Calibration Error (CE):
\[\text{CE}[\hat{\mathbb{P}}]=\mathbb{E}[|\mathbb{P}(Y|\hat{\mathbb{P}}(X)=p)-p |].\]
It is computationally infeasible to calculate the above expectation with the conditional probability \(\mathbb{P}(Y|\hat{\mathbb{P}}(X)=p)\). In practice, we use a binned version of this quantity, Expected Calibration Error (ECE) (Naeini et al., 2015; Guo et al., 2017), to estimate CE. See Appendix D.1 for a formal definition.
Here, we aim to examine the relationship between model calibration and atypicality. Given any \(K>1\), we consider the quantiles of \(a(X)\), \(a_{1},a_{2},\ldots,a_{K+1}\) such that \(\mathbb{P}(a(X)\in(a_{k},a_{k+1}])=1/K\) for \(k\in[K]\). For imbalanced classification problems, we compute the quantiles using the class atypicality. Specifically, we investigate the atypicality-conditional calibration error \(\text{ECE}[\hat{\mathbb{P}}\mid a(X)\in(a_{k},a_{k+1}]]\), i.e., the expected calibration error of an input that falls within the atypicality quantile \(k\).
**Atypical Examples are Poorly Calibrated:** In Figure 1(a), we show the distribution of miscalibration where each bin within the grid contains the intersection of the corresponding confidence and atypicality quantiles. We observe that within the same confidence range, predictions for atypical points have lower accuracies and are more overconfident. In other words, predictions in the Extrapolation or Untrustworthy regions are more miscalibrated than the ones in the typical regions.
In Figure 1(b), we split inputs into quantiles according to atypicality and compute the ECE and Accuracy for each group. Results show a monotonic relationship between atypicality and ECE or Accuracy across the three settings. Specifically, we see that predictions for atypical inputs or samples from rare classes are more miscalibrated and have lower accuracy. For samples from rare classes, the model overpredicts the probabilities of the typical class, hence we have overconfidence and low accuracy. Appendix D.3, and SS4 present figures and tables for all model and dataset pairs.
### Theoretical Analysis: Characterizing Calibration Error with atypicality
We characterize how calibration error varies with atypicality in a tractable model that is commonly used in machine learning theory (Bai et al., 2021, 2022; Zhang et al., 2022; Clarke et al., 2022). Our theoretical analysis further supports our empirical findings.
**Data Generative Model:** We consider the well-specified logistic model for binary classification with Gaussian data, where \(Y\in\{-1,1\}\) and the \(\mathbb{P}(Y=1|X)\) is defined by the sigmoid function:
\[\mathbb{P}(Y=1\mid X)=\sigma(\langle\beta^{*},X\rangle),\quad X\sim N(0,I_{d}).\]
Where \(I_{d}\) denotes the \(d\)-dimensional identity matrix, \(\beta^{*}\) is the ground truth coefficient vector, \(\sigma(x)=1/(1+e^{-x})\), and we have \(i.i.d.\) observations \(\{(x_{i},y_{i})\}_{i=1}^{n}\) sampled from the above distribution.
Figure 2: **Atypical Samples Have Low-Quality Predictions.****(a)** Here, samples are grouped according to the Input Atypicality (x-axis) and Confidence (y-axis), to the right meaning more atypical. Values show the difference between the confidence and the accuracy, lighter color indicates more overconfidence. Within the same confidence range, atypical groups have more miscalibration and are more overconfident. **(b,c,d)** Predictions for atypical samples are less accurate and more miscalibrated in balanced and imbalanced supervised classification and classification with LLMs.
The Estimator:We focus on studying the solution produced by minimizing the logistic loss
\[\hat{\beta}=\arg\min_{\beta}\frac{1}{n}\sum_{i=1}^{n}[\log(1+\exp(\beta^{\top}x_{ i}))-y_{i}\cdot\beta^{\top}x_{i}].\]
For \(k\in\{-1,1\}\), \(\hat{\mathbb{P}}_{k}(x)\) is an estimator of \(\mathbb{P}(y=k|x)\), with the form \(\hat{\mathbb{P}}_{k}(x)=\frac{1}{e^{-k\cdot\hat{\beta}^{\top}x}+1}\).
**Calibration:** We consider all \(x\) where \(\mathbb{P}_{1}(x)>1/2\), as \(\mathbb{P}_{1}(x)\leq 1/2\) can be analyzed similarly by symmetry (see Appendix H). For \(u\in(1/2,1)\), the signed calibration error at a confidence level \(u\) is
\[u-\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u).\]
We want to show that when \(X\) is atypical, i.e., when \(a(X):=\|X\|^{2}/2\) is larger6, the accuracy \(\mathbb{P}(Y=1\mid\hat{\mathbb{P}}_{1}(X)=u)\) would be generally smaller than the confidence \(u\) (over-confidence).
Footnote 6: The definition of atypicality follows from the marginal likelihood of the data model: density for the Gaussian with zero mean and identity covariance.
**Theorem 3.1**.: _Consider the data generative model and the learning setting above. For any \(K>1\), suppose we consider the quantiles of \(a(X)\), \(a_{1},a_{2},...,a_{K},a_{K+1}\) such that \(\mathbb{P}(a(X)\in(a_{k},a_{k+1}])=1/K\) for \(k\in[K]\). We assume \(\|\beta^{*}\|\leq c_{0}\), and \(d/n=\kappa\), for some sufficiently small \(c_{0}\). Then, for sufficiently large \(n\), for \(k=2,\ldots,K\), we have_
\[\mathbb{E}_{u\sim\hat{\mathbb{P}}_{1}(X)}[u-\mathbb{P}(Y=1\mid \hat{\mathbb{P}}_{1}(X)=u)\mid a(X)\in(a_{k},a_{k+1}]]>\] \[\mathbb{E}_{u\sim\hat{\mathbb{P}}_{1}(X)}[u-\mathbb{P}(Y=1\mid \hat{\mathbb{P}}_{1}(X)=u)\mid a(X)\in(a_{k-1},a_{k}]]\geq 0.\]
That is, the resulting classifier is over-confident, and the level of over-confidence becomes larger when the data is more atypical (with larger \(a(X)\)). Further, the gap becomes larger for smaller sample sizes \(n\). The proof of the theorem is in Appendix H.2 and builds on the results from Bai et al. (2021); Sur and Candes (2019).
## 4 Using Atypicality to Improve Recalibration
Here, we show how atypicality can complement and improve post-hoc calibration. In SS2, we observed that predictions for atypical inputs and samples from atypical classes are more overconfident with lower accuracy. We next show that taking input and class atypicality into account improves calibration.
### Parametric Recalibration: Different Groups need Different Temperatures
Temperature scaling (TS), a single parameter variant of Platt Scaling (Platt et al., 1999), is a simple recalibration method that calibrates the model using a single parameter. The predictor is of the form
\[\log\hat{\mathbb{P}}_{\text{TS}}(Y|X)\propto\log\hat{\mathbb{P}}(Y|X)/\tau, \tag{1}\]
where \(\hat{\mathbb{P}}(Y|X)\) is the model that takes an input and outputs scores/logits, and \(\tau\) is the temperature parameter. In practice, \(\tau\) is optimized using a calibration set to minimize a proper scoring rule (Gneiting and Raftery, 2007; Bolin and Wallin, 2019) such as the cross-entropy loss.
To understand the behavior of TS with respect to atypicality, we separately perform TS on points grouped according to the atypicality quantiles. Let us denote the temperature fitted to the quantile covering \(a(X)\in(a_{k-1},a_{k}]\) by \(\tau_{a_{k}}\). In Appendix Figure 10 we observe an increasing relationship between \(a_{k}\) and \(\tau_{a_{k}}\). Different atypicality groups need different adjustments, and more atypical groups need larger temperatures. _This suggests that being atypicality-aware can improve calibration. While a single temperature value improves average calibration, it may hurt certain groups._
### Atypicality-Aware Recalibration
We showed that predictions are more reliable when the input is typical. However, predictions are less reliable for atypical inputs, and we may need further revision. An analogy can be drawn to decision-making literature where opinions of individuals are combined with geometric averaging
weighted by their expertise (Forman & Peniwati, 1998; Aczel & Roberts, 1989). Analogously, we propose _Atypicality-Aware Recalibration (AAR)_ a method designed to address the reliability issues identified in dealing with atypical inputs:
\[\hat{\mathbb{P}}_{\text{AAR}}(Y|X)=\frac{\hat{\mathbb{P}}(Y|X)^{\psi(a(X))}\exp(S _{Y})^{1-\psi(a(X))}}{Z(X)}, \tag{2}\]
where \(\psi(a(X))\) is a function of input atypicality, \(S_{Y}\) is a tunable score for class \(Y\), \(Z(X)\) is the normalization term. Intuitively, when the input is typical, we trust the model confidence; otherwise, we use a score for the given class estimated from the calibration set. Note that this form simplifies to
\[\log\hat{\mathbb{P}}_{\text{AAR}}(Y|X)\propto\phi(a(X))\log\hat{\mathbb{P}}(Y| X)+S_{Y}, \tag{3}\]
where we subsume \((1-\psi(a(X))\) into \(\phi(a(X))\). We give a simple interpretation of this form: the multiplicative term is an atypicality-dependent temperature, and the additive term is a class-dependent correction where \(\exp{(S_{Y})}\) can be considered to induce a correction distribution over classes estimated from the calibration set. In Appendix Figure 11, we show how these values behave with class atypicality. We find that rare classes require larger positive corrections with larger \(S_{Y}\).
**Implementation Details:** Following TS, we minimize the cross-entropy loss on a calibration set. With the temperature-atypicality relationship observed in Figure 10 we choose to instantiate the multiplicative factor as a quadratic function, where \(\phi(a(X))=c_{2}a(X)^{2}+c_{1}a(X)+c_{0}\) and in total
Figure 3: **Post-hoc Recalibration for Classification.****(a) Balanced Supervised Classification:** Atypicality-Aware Recalibration improves the calibration of models trained with balanced datasets, across atypicality groups. **(b) Imbalanced Supervised Classification:** Atypicality-Aware Recalibration improves both the calibration across groups and the overall accuracy of models trained with imbalanced datasets. **(c) Classification with LLMs:** Atypicality-Aware Recalibration improves both the calibration across groups and the overall accuracy of LLMs performing classification.
we have \(|\{S_{1},..,S_{|\mathcal{Y}|},c_{0},c_{1},c_{2}\}|=|\mathcal{Y}|+3\) interpretable parameters. Once the embeddings and logits are computed, AAR runs on a CPU in under 1 minute for all experimented settings.
Similar to our adaptive interpretation, a concurrent work, Adaptive Temperature Scaling (AdaTS) (Joy et al., 2023), uses temperature scaling where the temperature is parameterized by a Variational Autoencoder(VAE) (Kingma and Welling, 2013) and a multi-layer perceptron on top of the VAE embeddings. In the below experiments, we give results with AdaTS as a baseline when applicable.
For **Balanced Supervised Classification**, in Figure 2(a) we observe that being atypicality aware improves recalibration across all groups. We perform comparably to AdaTS, where the temperature function has in the order of millions of parameters, whereas AAR has only \(|\mathcal{Y}|+3\) parameters.
In **Imbalanced Supervised Classification** (Figure 2(b)), our algorithm not only provides better calibration rates across all classes but also improves overall accuracy. Note that only our method can change accuracy (due to the additive term), and it performs better than other baselines in terms of ECE across all classes. Further, the second column shows using Progressive Balancing (Zhao et al., 2021) in training, showing that our post-hoc method can complement methods that modify training procedures.
For **Classification with LLMs**, we add an LLM calibration baseline Content-Free Calibration (CF) (Zhao et al., 2021). We cannot use AdaTS as the embeddings are not fixed in size. In Figure 2(c), we see AAR has better calibration and accuracy across the three datasets. Namely, by adjusting the LLM output using the LLM atypicality, we can adjust the probabilities to increase the prediction quality.
### Case Study: Fairness through Atypicality-Awareness
Machine learning models reportedly have performance disparity across subgroups (Barocas et al., 2017) due to factors such as varying sample size or noise levels (Chen et al., 2018). For instance, skin lesion classifiers can exhibit performance disparity across different skin tones Daneshjou et al. (2022). Fitzpatrick17k (Groh et al., 2021) is a dataset of clinical images with Fitzpatrick skin tone annotations between 1-to-6, where a larger number means darker skin tones, and when annotators do not agree, it is labeled as 'Unknown'. We explore the classification problem with 9 classes indicating the malignancy and the type of skin condition, using a ResNet18/34 pretrained on ImageNet and finetuned on this task (See Appendix G).
When the goal is to improve performance across groups, one can use group annotations and optimize performance within each group (Hebert-Johnson et al., 2018; Kim et al., 2019). Here, we investigate how complementing recalibration with atypicality can improve prediction quality across all groups _without group annotations_. For comparison, we perform 3 recalibration methods: TS, AAR, and Skin-Tone Conditional TS which calibrates the model individually for each skin-tone group with TS. Since the skin-tone conditional calibration uses group attributes, ideally it should act as an oracle. In Figure 4, we give the Accuracy and ECE analyses where AAR improves performance across all groups. For instance, the worst-group Accuracy (0.69) or ECE (0.072) with AAR is close to the best-group Accuracy (0.63) or ECE (0.062) with the other two methods. Overall, _our findings suggest that Atypicality-Awareness can complement fairness-enforcing methods, and improve performance even when the group annotations are unavailable_. We hypothesize that with AAR, we can perform better than using supervised group attributes since groups may not have sufficient sample size in the calibration set (131, 1950, 1509, 555 samples for Unknown, 1&2, 3&4, and 5&6 respectively), and we can leverage atypicality to offer some mitigation. Further investigating how to leverage atypicality to improve fairness and factors affecting performance disparities is a promising direction for future work (Chen et al., 2018).
## 5 Improving Uncertainty Sets with Atypicality
**Conformal Prediction**(Shafer and Vovk, 2008; Angelopoulos and Bates, 2021) is a framework that assigns a calibrated uncertainty set to each instance. The goal is to find a function \(\mathcal{C}:\mathcal{X}\to 2^{\mathcal{Y}}\) that returns a subset of the label space such that \(Y\in\mathcal{C}(X)\) with high probability. The framework aims to guarantee _marginal coverage_, i.e., \(\mathbb{P}(Y\in\mathcal{C}(X))\geq 1-\alpha\), for a choice of \(\alpha\). We investigate two conformal calibration methods, Adaptive Prediction Sets (APS) (Romano et al., 2020) and Regularized APS (RAPS) (Angelopoulos et al., 2020). Let \(\pi(X)\) be the permutation of the label
set that sorts \(\hat{\mathbb{P}}(Y=c|X)\), i.e. the predicted probabilities for each class \(c\) after TS. The uncertainty sets are produced by the function \(\mathcal{C}(x)=\{y:s(x,y)\leq\hat{q}\}\), and these methods fit the threshold \(\hat{q}\) for a choice of the scoring function. APS uses the cumulative sum of the predicted probabilities \(s(x,y)=\sum_{j=1}^{c}\hat{\mathbb{P}}(Y=j|X)\), where \(y=\pi_{c}(X)\). Intuitively, if the model was perfectly calibrated, we would have expected to have \(\hat{q}=1-\alpha\). Similarly, RAPS builds on the idea that tail probabilities are noisy and regularizes the number of samples in the uncertainty set.
Building on our ideas in the previous sections we implement Atypicality-Aware uncertainty sets, namely _AA-APS_ and _AA-RAPS_ in the following way: We group points according to their confidence and atypicality quantiles, and fit separate thresholds to each group with APS or RAPS as subroutines. This allows us to have adaptive threshold depending on the atypicality and confidence of predictions.
In Figure 5, we provide the coverage plots for APS and RAPS in the first and third columns. Even though marginal coverage is satisfied, models do not satisfy conditional coverage for atypical inputs or low-confidence predictions. We observe that being Atypicality-Aware improves coverage across otherwise underperforming groups. Further, AA-APS has lower set sizes on average than APS (\(15.6\) vs \(21.3\)). While RAPS has a lower average set size than AA-RAPS (\(4.2\) vs \(9.1\)) AA-RAPS has smaller set sizes for high-confidence samples, whereas a larger set size for low-confidence samples where the coverage is not met for RAPS. In Appendix E.3, we provide the same analysis for ResNet18,50,152 at different coverage levels along with analyzing the performance in the Confidence and Atypicality dimensions individually. For instance in Figure 8, we observe that RAPS and APS do not satisfy coverage for high atypicality regions, even when averaged across different confidence levels.
## 6 Additional Related Work
**Uncertainty and Atypicality:**Mukhoti et al. (2021); Postels et al. (2020) use density estimation to disentangle epistemic and aleatoric uncertainty. Following this, they show improvements in active learning and OOD detection (Lee et al., 2018). We note that our goal is not this disentanglement (e.g. Untrustworthy quadrant can have both aleatoric or epistemic uncertainty), or Ambiguity could be due to a lack of features or noise. Liu et al. (2020) propose the related notion of distance awareness, and that it leads to better uncertainty quantification. They offer architecture and training modifications whereas we analyze existing models using our framework including imbalanced and
Figure 4: **Improving Group Performance through Atypicality-Awareness. Here we show that AAR improves the calibration and accuracy of models across different skin tone groups. With AAR, we can improve both the worst group performance and overall performance significantly without using group attributes. TS curve is less visible since it significantly overlaps with Skin Tone Conditional.**
Figure 5: **Improving Conformal Calibration with Atypicality for ResNet50 on ImageNet. Here we show that Atypicality-Awareness improves conformal calibration performance across different groups. Methods are fitted to satisfy \(95\%\) coverage. We observe that APS and RAPS do not satisfy conditional coverage for high atypicality regions or low confidence regions.**
LLM settings, and propose simple and post-hoc approaches. 'OOD' (Hendrycks and Gimpel, 2016) or 'anomaly' (Hendrycks et al., 2018) notions are tied to atypicality, yet our goal is not to make a binary distinction between 'in' or 'out'. We argue that in-distribution samples could also be atypical (e.g. rare groups), and the goal is to perform reliably in the entire spectrum. Other works with an atypicality notion include bounding calibration of groups by the excess risk (Liu et al., 2019), miscalibration under distribution shifts (Osadia et al., 2019), uncertainty in Gaussian Processes (Rasmussen, 2004), forgetting time for rare examples (Maini et al., 2022), the poor performance of groups with lower sample sizes (Chen et al., 2018), energy-based models improving calibration (Grathwohl et al., 2020), relating perplexity to zero-shot classification performance for LLMs (Gonen et al., 2022), grouping loss and local definitions of miscalibration (Perez-Lebel et al., 2023), the relationship between active learning and atypicality (Hacohen et al., 2022), sample size as a factor for subgroup performance disparity (Chen et al., 2018). Our new findings include showing that predictions for atypical samples are more miscalibrated and overconfident, and atypicality awareness improves prediction quality. _Overall, while there are other relevant notions in the literature, our distinct goal is to show that post-hoc atypicality estimation and recalibration is a simple yet useful framework to understand and improve uncertainty quantification that complements existing methods._
**Recalibration:** There is a rich literature on post-hoc recalibration methods: TS (Guo et al., 2017), Platt Scaling (Platt et al., 1999), conformal calibration (Shafer and Vovk, 2008; Angelopoulos et al., 2020) among many. Lu et al. (2022); Romano et al. (2019); Barda et al. (2021); Bastani et al. (2022) make a relevant observation, showing that the coverage of conformal prediction is not equal across all groups. They propose group conformal calibration, which requires group labels whereas our proposal is unsupervised and does not depend on any attribute information. Concurrent work (Joy et al., 2023) explores AdaTS, where they train a separate VAE and MLP to produce an adaptive temperature. However, our parameterization of temperature has 3 parameters and is interpretable.
## 7 Conclusion
Atypicality offers a simple yet flexible framework to better understand and improve model reliability and uncertainty. We propose that pretrained models should be released not only with confidence but also with an atypicality estimator. While there are other relevant notions in the literature, our main goal is to show that atypicality can provide a unifying perspective to discuss uncertainty, understand individual data points, and improve fairness. Here we focus on classification problems; it would be interesting to extend atypicality to regression and generation settings. Furthermore, we would like to extend the theoretical analysis to more general settings, as our empirical results demonstrate that the observed phenomena hold more broadly.
## Acknowledgments
We would like to thank Adarsh Jeewajee, Bryan He, Edward Chen, Federico Bianchi, Kyle Swanson, Natalie Dullerud, Ransalu Senanayake, Sabri Eyuboglu, Shirley Wu, Weixin Liang, Xuechen Li, Yongchan Kwon, and Zach Izzo for their comments and suggestions on the manuscript.
|
2309.00920 | Trustworthy Distributed Average Consensus based on Locally Assessed
Trust Evaluations | This paper proposes a distributed algorithm for average consensus in a
multi-agent system under a fixed bidirectional communication topology, in the
presence of malicious agents (nodes) that may try to influence the average
consensus outcome by manipulating their updates. The proposed algorithm
converges asymptotically to the average of the initial values of the
non-malicious nodes, which we refer to as the trustworthy average, as long as
the underlying topology that describes the information exchange among the
non-malicious nodes is connected. We first present a distributed iterative
algorithm that assumes that each node receives (at each iteration or
periodically) side information about the trustworthiness of the other nodes,
and it uses such trust assessments to determine whether or not to incorporate
messages received from its neighbors, as well as to make proper adjustments in
its calculation depending on whether a previously trustworthy neighbor becomes
untrustworthy or vice-versa. We show that, as long as the trust assessments for
each non-malicious node eventually reflect correctly the status (malicious or
non-malicious) of its neighboring nodes, the algorithm guarantees asymptotic
convergence to the trustworthy average. We subsequently discuss how the
proposed algorithm can be enhanced with functionality that enables each node to
obtain trust assessments about its neighbors by utilizing information that it
receives from its two-hop neighbors at infrequent, perhaps randomly chosen,
time instants. | Christoforos N. Hadjicostis, Alejandro D. Dominguez-Garcia | 2023-09-02T11:55:30Z | http://arxiv.org/abs/2309.00920v1 | # Trustworthy Distributed Average Consensus based on Locally Assessed Trust Evaluations
###### Abstract
This paper proposes a distributed algorithm for average consensus in a multi-agent system under a fixed bidirectional communication topology, in the presence of malicious agents (nodes) that may try to influence the average consensus outcome by manipulating their updates. The proposed algorithm converges asymptotically to the average of the initial values of the non-malicious nodes, which we refer to as the _trustworthy average_, as long as the underlying topology that describes the information exchange among the non-malicious nodes is connected. We first present a distributed iterative algorithm that assumes that each node receives (at each iteration or periodically) side information about the trustworthiness of the other nodes, and it uses such trust assessments to determine whether or not to incorporate messages received from its neighbors, as well as to make proper adjustments in its calculation depending on whether a previously trustworthy neighbor becomes untrustworthy or vice-versa. We show that, as long as the trust assessments for each non-malicious node eventually reflect correctly the status (malicious or non-malicious) of its neighboring nodes, the algorithm guarantees asymptotic convergence to the trustworthy average. We subsequently discuss how the proposed algorithm can be enhanced with functionality that enables each node to obtain trust assessments about its neighbors by utilizing information that it receives from its two-hop neighbors at infrequent, perhaps randomly chosen, time instants.
**Keywords: Distributed averaging, multi-agent systems, fault-tolerant consensus, resilience, trustworthy computation, distributed trust assessment.**
## I Introduction and Motivation
Average consensus and, more generally, consensus and distributed function calculation have received attention by many communities, including the control community (which has considered applications in multi-agent systems, formation control, and sensor networks), the communication community, and the computer science community [1, 2, 3, 4, 5, 6]. In particular, average consensus has been studied extensively, primarily in settings where convergence is asymptotic and each node processes and transmits real-valued states with infinite precision [7, 8, 9, 4]; however, issues of finite time completion [10, 11, 12, 13], quantized transmissions [14, 15, 16, 17, 18], and event-triggered operation [19, 20, 21] have also been considered. Reference [22] discusses several applications of distributed average consensus.
This paper addresses asymptotic average consensus in the presence of malicious nodes, which try to influence the outcome of the distributed computation by arbitrarily manipulating their initial values and/or their updates, possibly in a colluding manner. This is a topic that has recently received some attention as described later in this section. In this paper, we propose and analyze a novel distributed algorithm, which enables the non-malicious nodes of a distributed system to distributively identify and isolate malicious nodes, and to eventually calculate the _exact_ average of their initial values, despite the actions of malicious nodes. The communication topology is assumed to be bidirectional and we require that (i) the induced topology when restricting attention to non-malicious nodes is connected, and (ii) non-malicious nodes have access, perhaps periodically, to certain information provided by their two-hop neighbors (i.e., the neighbors of their neighbors).
The proposed scheme essentially takes a rather standard distributed algorithm for average consensus in bidirectional communication topologies (that relies on linear updates with weights that form a doubly stochastic matrix--see, e.g., [22]) and enhances it in two ways. First, by having each node maintain one additional _running-sum_ variable for each of its neighbors, we devise a scheme that allows each node to virtually remove from (or add to) the distributed computation neighboring nodes that become untrustworthy (or trustworthy). This is done in a way that ensures that, if all non-malicious nodes eventually learn the trustworthiness of all of their neighbors, then they will asymptotically converge to the _trustworthy average_ (i.e., the average of the initial values of the non-malicious nodes), despite the actions (e.g., erroneous computational updates) and ignoring the initial values of the malicious nodes. Second, by establishing an invariant that holds during the execution of the proposed algorithm, we devise a scheme that allows each node to determine whether its neighbors are trustworthy or not. The checking is done in a distributed manner assuming each node has infrequent access to information sent by its two-hop neighbors (i.e., by the neighbors of its neighbors).
The proposed algorithm (see Algorithm 1) removes/adds untrustworthy/trustworthy nodes utilizing trust evaluations that become available at each iteration. It possesses some of the features of the variation of running-sum ratio consensus algorithm we proposed in [23], which can be used by non-malicious nodes in fixed, possibly directed communication topologies, to asymptotically converge to the _exact_ value of the trustworthy average, as long as the induced graph obtained by focusing on non-malicious nodes is strongly connected, and the non-malicious nodes eventually learn which nodes among their in-neighbors and out-neighbors are malicious. However, the work in [23] did not identify an invariant and did not
discuss how the non-malicious nodes can exploit it to obtain the needed trust assessments about each neighbor.
It is worth pointing out that the idea of adding and removing nodes from a distributed average consensus computation appears in works that rely on the running-sum ratio consensus algorithm (e.g., [24, 25]) or in works that deal with dynamic average consensus (e.g., [26, 27]). Unlike the work in this paper, however, the aforementioned works deal with nodes that willingly remove themselves (or collaborate with their neighboring nodes in order to remove themselves) from the computation of the average. Instead, the solution proposed in this paper (as also the solution in [23]), ensures that all influence that a malicious node had on the distributed computation (including past influence) is nullified by its non-malicious neighbors.
Related ideas about utilizing trust assessments towards trustworthy average consensus appear in [28], which considers a distributed system where stochastic values of trust between the nodes are available, and nodes use these trust values to reach agreement to a common limit value. The authors of [28] show that, under certain conditions on the trust values, the deviation of the common limit value that is reached from the true consensus value is bounded in a manner that can be characterized. Moreover, correct classification of malicious and non-malicious nodes can be attained in finite time almost surely. Unlike the setting in [28], the setting in this paper is deterministic and the solution we propose is based on a completely different algorithm that guarantees convergence to the exact average of the trustworthy nodes.
The proposed scheme for obtaining trust assessments can be embedded in the proposed distributed average consensus algorithm and exploits information from two-hop neighbors. It does not require a mechanism to inform all nodes about which specific node has been identified as malicious (because trust assessments are only needed for neighboring nodes, and each node has a direct way of obtaining the trust assessments it needs). Unlike [29, 30, 31] (which also exploit information from two-hop neighbors), the proposed scheme for obtaining trust assessments can be performed at each node by requiring information from two-hop neighbors, infrequently or even at randomly selected points of time, which significantly reduces the communication overhead; this is achieved by exploiting an invariant that holds during the execution of the proposed algorithm, as shown in this paper.
Related ideas about stochastic side information that can serve as trust assessments also appear in [32], but the focus in that paper is about how the different nodes can learn whether to trust the other agents. The assumption is that nodes can directly evaluate whether to trust their in-neighbors and the focus is on how to propagate trust assessments from other nodes to their out-neighbors and eventually, via a consensus mechanism, to the whole network.
In addition to [29, 30, 31, 32, 23] discussed above, there is some related work on distributed average consensus algorithms that aim at removing/limiting the effect of malicious nodes on the computation. For example, the authors of [33] exploit the connectivity of the underlying topology in order to detect and isolate the effects of malicious nodes when performing distributed function calculation. More specifically, if the underlying graph is \((2f+1)\)-connected (i.e., any two non-neighboring nodes have at least \(2f+1\) node-disjoint paths that connect them), then one can systematically exploit information that arrives from these disjoint paths in order to withstand up to \(f\) malicious nodes. Alternatively, one could use the approach proposed in [34] to limit the effect of malicious nodes on the computation; however, such approach would only guarantee reaching consensus to a value between the minimum and maximum value of non-malicious nodes (not necessarily the average).
The remainder of the paper is organized as follows. In Section II, we introduce some preliminary concepts from graph theory and distributed averaging over bidirectional communication topologies. In Section III, we formulate the trustworthy distributed average consensus problem and outline an algorithm, including its pseudocode description, to solve it. In this section, we also establish an invariant that holds during the execution of the algorithm and which is used to prove the algorithm's correctness. In Section IV, we provide simulation studies to illustrate the operation of the algorithm, under different scenarios for the convergence of the trust assessments and the behavior of the malicious nodes. Section V describes the scheme for distributively obtaining trust assessments while executing the proposed distributed algorithm. Finally, Section VI concludes with some directions for future research.
## II Mathematical Background and Notation
In this section we provide some needed background on graph theory and review some existing distributed protocols for reaching average consensus in multi-agent systems,
### _Graph-Theoretic Notions and Communication Topology_
A directed graph (digraph) of order \(N\) (\(N\geq 2\)), is defined as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{1},v_{2},\ldots,v_{N}\}\) is the set of vertices (nodes) and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}-\{(v_{j},v_{j})\mid v_{j}\in \mathcal{V}\}\) is the set of links (edges). A directed edge from node \(v_{i}\) to node \(v_{j}\) is denoted by \((v_{j},v_{i})\in\mathcal{E}\), and indicates that node \(v_{i}\) can send information to node \(v_{j}\). A digraph is called _strongly connected_ if for each pair of nodes \(v_{j},v_{i}\in\mathcal{V}\), \(v_{j}\neq v_{i}\), there exists a directed _path_ from \(v_{i}\) to \(v_{j}\), i.e., we can find a sequence of nodes \(v_{i}:=v_{l_{0}},v_{l_{1}},\ldots,v_{l_{t}}:=v_{j}\) such that \((v_{l_{r+1}},v_{l_{r}})\in\mathcal{E}\) for \(\tau=0,1,\ldots,t-1\). All nodes that can send information to node \(v_{j}\) directly are said to be its in-neighbors and belong to the set \(\mathcal{N}_{j}^{-}=\{v_{i}\in\mathcal{V}\mid(v_{j},v_{i})\in\mathcal{E}\}\), the cardinality of which is referred to as the _in-degree_ of \(v_{j}\) and is denoted by \(D_{j}^{-}\). The nodes that can receive information from node \(v_{j}\) are said to be its out-neighbors and belong to the set \(\mathcal{N}_{j}^{+}=\{v_{l}\in\mathcal{V}\mid(v_{l},v_{j})\in\mathcal{E}\}\), the cardinality of which is referred to as the _out-degree_ of \(v_{j}\) and is denoted by \(D_{j}^{+}\).
In this paper, we consider multi-agent systems in which the exchange of information between a pair of nodes, if allowed, is bidirectional. Then, the communication topology of the multi-agent system can be described by a bidirectional communication graph, which we define as follows.
**Definition 1**: _A digraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is called a bidirectional communication graph if \((v_{j},v_{i})\in\mathcal{E}\) implies that \((v_{i},v_{j})\in\mathcal{E}\)._
Under a bidirectional communication graph \(\mathcal{G}\), all nodes that can send/receive information to/from node \(v_{j}\) directly are said to be its neighbors and belong to the set \(\mathcal{N}_{j}=\{v_{i}\in\mathcal{V}\mid(v_{j},v_{i})\in\mathcal{E}\}=\{v_{l }\in\mathcal{V}\mid(v_{l},v_{j})\in\mathcal{E}\}\), which satisfies \(\mathcal{N}_{j}=\mathcal{N}_{j}^{+}=\mathcal{N}_{j}^{-}\). The cardinality of \(\mathcal{N}_{j}\) is referred to as the _degree_ of \(v_{j}\) and is denoted by \(D_{j}=|\mathcal{N}_{j}|\). A bidirectional communication graph is said to be _connected_ if, for each pair of nodes \(v_{j},v_{i}\in\mathcal{V}\), \(v_{j}\neq v_{i}\), there exists a _path_ from \(v_{i}\) to \(v_{j}\) i.e., we can find a sequence of nodes \(v_{i}=:v_{l_{0}},v_{l_{1}},\ldots,v_{l_{t}}:=v_{j}\) such that \((v_{l_{\tau+1}},v_{l_{\tau}})\in\mathcal{E}\) (thus, also \((v_{l_{\tau}},v_{l_{\tau+1}})\in\mathcal{E}\)) for \(\tau=0,1,\ldots,t-1\).
In this paper, we assume a broadcast model under a fixed bidirectional communication graph \(\mathcal{G}\). Specifically, when node \(v_{j}\) broadcasts information, its transmissions are received at all of its neighbors in the set \(\mathcal{N}_{j}\); similarly, node \(v_{j}\) receives all transmissions sent by each of its neighbors in the set \(\mathcal{N}_{j}\). Note, however, that node \(v_{j}\) may choose to ignore a transmission from a certain neighbor \(v_{i}\) (e.g., because it considers \(v_{i}\) to be untrustworthy); such actions by transmitting/receiving nodes may result in a virtual communication topology that is not necessarily a bidirectional communication graph.
**Assumption 0**.: _Each transmission by node \(v_{j}\in\mathcal{V}\) is received by all neighbors of node \(v_{j}\) (i.e., all nodes in the set \(\mathcal{N}_{j}\)). Furthermore, we assume that each transmission is associated with a unique node ID that allows receiving nodes to identify the sending node._
### _Average Consensus via Linear Iterations_
Consider a distributed system, captured by a bidirectional communication graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), in which each node \(v_{j}\in\mathcal{V}\) has a value \(x_{j}\). Average consensus aims to have all the nodes calculate \(\overline{X}=\frac{\sum_{j=1}^{N}x_{j}}{\sum_{j=1}^{N}x_{j}}\) in a distributed manner. This can be achieved via a linear iteration where each node \(v_{j}\) maintains a scalar state variable \(x_{j}[k]\), which it updates based on the values received from its neighbors. Specifically, each node \(v_{j}\) uses a linear time-invariant update of the form
\[x_{j}[k+1]=w_{jj}x_{j}[k]+\sum_{v_{i}\in\mathcal{N}_{j}}w_{ji}x_{i}[k]\, \tag{1}\]
where \(x_{j}[0]=x_{j}\), \(v_{j}\in\mathcal{V}\), and the \(w_{ji}\)'s are constant weights. If we let \(x[k]=[x_{1}[k],x_{2}[k],\ldots,x_{N}[k]]^{\mathrm{T}}\), then the iteration in (1) can be written compactly in matrix form as
\[x[k+1]=Wx[k]\,\ \ x[0]=[x_{1},x_{2},\ldots,x_{N}]^{\mathrm{T}}, \tag{2}\]
where \(W=[w_{ji}]\in\mathbb{R}^{N\times N}\) is referred to as the weight matrix, with the entry \(w_{ji}\) at its \(j\)th row and \(i\)th column such that \(w_{ji}=0\) if \(v_{i}\notin\mathcal{N}_{j}\cup\{v_{j}\}\). The nodes are said to reach asymptotic average consensus if
\[\lim_{k\to\infty}x_{j}[k]=\overline{X}\,\quad\forall v_{j}\in\mathcal{V}. \tag{3}\]
The necessary and sufficient conditions for the iteration in (2) to asymptotically reach average consensus are [4, 35]: (i) \(W\) has a simple eigenvalue at \(1\), with left eigenvector \(1_{N}^{T}\) and right eigenvector \(1_{N}\) (where \(1_{N}\) denotes the \(N\)-dimensional all-ones column vector), and (ii) all other eigenvalues of \(W\) have magnitude strictly less than \(1\). If one focuses on nonnegative weights, these conditions are equivalent to \(W\) being a primitive doubly stochastic matrix. In the case of a bidirectional communication graph, there are very simple ways for the nodes to choose the weights, in a distributed manner, so that \(W\) forms a doubly stochastic matrix (see, e.g., [22]). For example, assuming the nodes know the total number of nodes \(N\) or an upper bound \(N^{\prime}\geq N\), each node \(v_{j}\) can set the weights on all of its incoming links to be \(w_{ji}=\frac{1}{N^{\prime}}\) for all \(v_{i}\in\mathcal{N}_{j}\) and \(w_{jj}=1-\frac{D_{j}}{N^{\prime}}\) (zero otherwise). It is easy to verify that \(W\) will be a (symmetric) doubly stochastic matrix. Furthermore, \(W\) will be primitive as long as \(\mathcal{G}\) is connected.
The linear iteration in (1) can also be extended to a time-varying topology setting as follows. Consider that at each iteration \(k\), the topology is captured by a bidirectional communication graph \(\mathcal{G}[k]=(\mathcal{V},\mathcal{E}[k])\), where the set of nodes remains fixed but the set of edges \(\mathcal{E}[k]\) can vary for different values of \(k\). We can now consider the iteration
\[x_{j}[k+1]=w_{jj}[k]x_{j}[k]+\sum_{v_{i}\in\mathcal{N}_{j}[k]}w_{ji}[k]x_{i}[k]\, \tag{4}\]
with \(x_{j}[0]=x_{j}\), \(v_{j}\in\mathcal{V}\), and time-varying weights \(w_{ji}[k]\), where \(\mathcal{N}_{j}[k]\) is the set of neighbors of node \(v_{j}\) in \(\mathcal{G}[k]\). We can easily choose the time-varying weights to form a matrix \(W[k]=[w_{ji}[k]]\) that is doubly stochastic and conforms to the topology captured by \(\mathcal{G}[k]\), such that \(w_{ji}[k]\geq c\) for \(v_{i}\in\mathcal{N}_{j}[k]\cup\{v_{j}\}\), where \(c\) is some positive constant. For example, if we let \(D_{j}[k]=|\mathcal{N}_{j}[k]|\) denote the number of neighbors of node \(v_{j}\) at iteration \(k\), we can have each node \(v_{j}\) set the weights on all of its incoming links to be \(w_{ji}[k]=\frac{1}{N^{\prime}}\) for all \(v_{i}\in\mathcal{N}_{j}[k]\) and \(w_{jj}[k]=1-\frac{D_{j}[k]}{N^{\prime}}\) (zero otherwise); this results in a weight matrix \(W[k]\) that is symmetric and doubly stochastic, but not necessarily primitive (that will depend on whether or not \(\mathcal{G}[k]\) is connected). We can write (4) in matrix form as
\[x[k+1]=W[k]x[k]\,\ \ x[0]=[x_{1},x_{2},\ldots,x_{N}]^{\mathrm{T}}, \tag{5}\]
and one can show that average consensus is reached under some mild joint connectivity conditions on the graphs \(\mathcal{G}[k]\), \(k=0,1,2,...\). For instance, it can be shown (see, e.g., [22]) that asymptotic average consensus in (3) is reached if we can find a finite \(K\) such that each union graph
\[\mathcal{G}[\tau K]\cup\mathcal{G}[\tau K+1]\cup\ldots\cup\mathcal{G}[ \tau K+K-1]\] \[:=(\mathcal{V},\mathcal{E}[\tau K]\cup\mathcal{E}[\tau K+1]\cup \ldots\cup\mathcal{E}[\tau K+K-1])\]
for \(\tau=0,1,2,\ldots\), is connected.
Note that when implementing the distributed algorithm in (2) (or, more generally, in (5)), each node \(v_{j}\) simply needs to broadcast its value \(x_{j}[k]\) at iteration \(k\); at the same time, node \(v_{j}\) receives the values \(\{x_{i}[k]\mid v_{i}\in\mathcal{N}_{j}\}\) (or, more generally, \(\{x_{i}[k]\mid v_{i}\in\mathcal{N}_{j}[k]\}\)). For notational simplicity, in our development in the remainder of this paper, we make the following assumption:1
**Assumption 1**.: For each node \(v_{j}\in\mathcal{V}\), the nonzero weights \(w_{ji}\) for \(v_{i}\in\mathcal{N}_{j}\) when implementing iteration (2) (or \(w_{ji}[k]\) for \(v_{i}\in\mathcal{N}_{j}[k]\) when implementing iteration (5)) are equal to \(1/N\) and \(w_{jj}\) (or \(w_{jj}[k]\)) is equal to \(1-D_{j}/N\) (or \(1-D_{j}[k]/N\)).
In our developments later in the paper, we will find it necessary to have the nodes execute a variation of the distributed averaging time-varying iteration of (4), where each node \(v_{j}\) maintains a running-sum variable [24], defined as
\[\sigma_{j}[k+1]:=\sum_{t=0}^{k}x_{j}[t]\, \tag{6}\]
and transmits, at iteration \(k\), the running sum \(\sigma_{j}[k+1]\) instead of \(x_{j}[k]\). Then, the iteration in (4) (with the weights in Assumption 1) can be executed by each node as follows:
\[x_{j}[k+1]=\left(1-\frac{D_{j}[k]}{N}\right)x_{j}[k]+\frac{1}{N}\sum_{v_{i}\in \mathcal{N}_{j}[k]}(\rho_{ji}[k+1]-\rho_{ji}[k]) \tag{7}\]
where \(\rho_{ji}[k+1]=\sigma_{i}[k+1]\) for \(v_{i}\in\mathcal{N}_{j}\). Note that in order to implement this running-sum based version of the iteration, each node \(v_{j}\) maintains one variable for its running sum \(\sigma_{j}\) (which it can easily update as \(\sigma_{j}[k+1]=\sigma_{j}[k]+x_{j}[k]\) and also \(D_{j}\) additional variables, namely \(\{\rho_{ji}[k]\mid v_{i}\in\mathcal{N}_{j}\}\), to remember the previous value of the running sum of each neighbor.
## III Trustworthy Distributed Averaging
### _Problem Formulation_
We are given a bidirectional communication graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), which describes the (fixed) topology among a set of nodes in a distributed system. We assume a broadcast model as described in Assumption 0. Each node \(v_{j}\in\mathcal{V}\) has a value \(x_{j}\). A certain subset \(\mathcal{V}_{T}\), \(\mathcal{V}_{T}\subseteq\mathcal{V}\), of the nodes is trustworthy (non-malicious), whereas the remaining nodes in \(\mathcal{V}_{M}=\mathcal{V}\setminus\mathcal{V}_{T}\) are malicious (untrustworthy). Malicious nodes can collude to behave unpredictably during the execution of the algorithm. The goal of the trustworthy nodes is to compute the _average of the trustworthy nodes_, defined by
\[\overline{X}_{T}=\frac{\sum_{v_{l}\in\mathcal{V}_{T}}x_{l}}{|\mathcal{V}_{T}| }\, \tag{8}\]
despite any incorrect updates by the malicious nodes. We make the following assumption.
**Assumption 2**.: The bidirectional communication graph induced from \(\mathcal{G}\) by restricting attention to the trustworthy nodes, denoted by \(\mathcal{G}_{T}=(\mathcal{V}_{T},\mathcal{E}_{T})\), where \(\mathcal{E}_{T}=\{(v_{j},v_{i})\in\mathcal{E}\mid v_{j},v_{i}\in\mathcal{V}_{T}\}\), is connected.
### _Trust Assessment Model_
At each iteration \(k\), each node \(v_{j}\in\mathcal{V}_{T}\) has access to an assessment about the trustworthiness of each other node.2 This assessment could be derived based on measurements of some sort, such as the _stochastic values of trust_, \(a_{ij}\in(0,1)\), used in [28, 36], which approach \(1\) when node \(v_{i}\) should be trusted by node \(v_{j}\), and approach \(0\) when node \(v_{i}\) should not be trusted by node \(v_{j}\); or it could be based on checks like the ones in [29, 30, 31], where, using two-hop communication in bidirectional communication graphs, neighbors of node \(v_{i}\) assess the computations performed by node \(v_{i}\) in order to determine their correctness.
Footnote 2: It will become obvious in our development that node \(v_{j}\) only needs information about the trustworthiness of its neighbors, and not necessarily all other nodes; however, for ease of notation, we assume that such information is made available at node \(v_{j}\) for all other nodes.
For now, we abstract away from the specific mechanism of measuring and assessing trust (this issue is addressed in Section V), and assume that, at each iteration \(k\), each node \(v_{j}\) has access to an assessment about the trustworthiness of another node \(v_{i}\). In particular, \(t_{ij}[k]\in\{0,1\}\) is a binary indicator that captures the trustworthiness of node \(v_{i}\) as perceived by node \(v_{j}\) at iteration \(k\): \(t_{ij}[k]=1\) (\(t_{ij}[k]=0\)) indicates that node \(v_{j}\) considers node \(v_{i}\) to be trustworthy (malicious) at iteration \(k\). We use \(\mathcal{T}_{j}[k]=\{v_{i}\mid t_{ij}[k]=1\}\) to denote the set of nodes that are considered trustworthy by node \(v_{j}\) at iteration \(k\); we assume that \(t_{jj}[k]=1\) and thus \(v_{j}\in\mathcal{T}_{j}[k]\) for all \(k\) (i.e., node \(v_{j}\) always trusts itself). We also use \(\mathcal{M}_{j}[k]=\mathcal{V}\setminus\mathcal{T}_{j}[k]\) to denote the nodes that are considered malicious by node \(v_{j}\) at iteration \(k\).
Without loss of generality, we assume that initially \(\mathcal{T}_{j}[0]=\mathcal{V}\) for each node \(v_{j}\) (i.e., at the start of the algorithm execution, each node \(v_{j}\) considers all of its neighbors to be trustworthy). We also require that asymptotically \(\lim_{k\rightarrow\infty}\mathcal{T}_{j}[k]=\mathcal{V}_{T}\), at least for nodes \(v_{j}\) that are trustworthy. Note that convergence of \(\mathcal{T}_{j}[k]\) to \(\mathcal{V}_{T}\) as \(k\) goes to infinity does not have to be monotonic as \(t_{ij}[k]\) may fluctuate between \(1\) and \(0\); however, \(t_{ij}[k]\) has to eventually settle to \(0\) if \(v_{i}\) is malicious or \(1\) if \(v_{i}\) is trustworthy. Also, note that at any given \(k\), \(t_{ij}[k]\) and \(t_{ji}[k]\) need not coincide, e.g., node \(v_{j}\) may trust node \(v_{i}\) but not vice-versa. Finally, two different trustworthy nodes, \(v_{j}\) and \(v_{l}\) (\(v_{j},v_{l}\in\mathcal{V}_{T}\)), may have different trust assessments about node \(v_{i}\) at a given iteration \(k\) (i.e., \(t_{ij}[k]\neq t_{il}[k]\)); however, we require that eventually these assessments would have to be equal and correctly reflect the status of node \(v_{i}\) (i.e., for large \(k\), \(t_{ij}[k]=t_{il}[k]=1\) if node \(v_{i}\) is trustworthy and \(t_{ij}[k]=t_{il}[k]=0\) otherwise). This assumption is stated below.
**Assumption 3**.: The trust assessments \(t_{ij}[k]\), \(v_{i},v_{j}\in\mathcal{V}\), are such that for each \(v_{j}\in\mathcal{V}_{T}\), there exists a finite \(k_{j}\) such that
\[\mathcal{T}_{j}[k]:=\{v_{i}\mid t_{ij}[k]=1\}=\mathcal{V}_{T}\,\text{ for }k\geq k_{j}\.\]
Clearly, for \(k\geq k_{\text{max}}:=\max_{v_{j}\in\mathcal{V}_{T}}\{k_{j}\}\), we have \(\mathcal{T}_{j}[k]=\mathcal{V}_{T}\) for all \(v_{j}\in\mathcal{V}_{T}\).
### _Trust Assessment-Based Average Consensus_
The trustworthy distributed calculation of the average \(\overline{X}_{T}\) in (8) is based on a variation of the linear iteration in (7). The basic idea is for each node \(v_{j}\) to carefully track its trustworthy neighbors at iteration \(k\), i.e., the nodes in the set \(\mathcal{N}_{j}[k]=\mathcal{N}_{j}\cap\mathcal{T}_{j}[k]\), and to isolate neighbors that it does not consider trustworthy. This is done in two ways: (i) node \(v_{j}\) ignores any values it receives at iteration \(k\) from neighbors outside the set \(\mathcal{N}_{j}[k]\), and (ii) node \(v_{j}\) considers that its degree at iteration
is \(|\mathcal{N}_{j}[k]|=D_{j}[k]\). Effectively, node \(v_{j}\) updates its value as
\[x_{j}[k+1] =\left(1-\frac{D_{j}[k]}{N}\right)x_{j}[k]+\] (9) \[+\frac{1}{N}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
were computed by node \(v_{j}\) using a degree that did not consider node \(v_{i}\)). The above two adjustments that node \(v_{j}\) needs to make are as follows:
\[\varepsilon_{ji}^{+}[k]=-\frac{1}{N}\sigma_{j}[k]+\frac{1}{N}\rho_{j}[k]\.\]
Note that if, at iteration \(k\), node \(v_{j}\) changes its perception about multiple neighboring nodes that were previously considered untrustworthy, the above adjustments have to be performed for each such neighbor. In other words, if we let \(\Delta\mathcal{T}_{j}[k]=\mathcal{N}_{j}\cap(\mathcal{T}_{j}[k]\setminus \mathcal{T}_{j}[k-1])\) be the set of previously untrustworthy neighbors of node \(v_{j}\) that become trustworthy at iteration \(k\), then, the total adjustment that node \(v_{j}\) needs to make is as follows:
\[\varepsilon_{j}^{+}[k] = \sum_{v_{i}\in\Delta\mathcal{T}_{j}[k]}\varepsilon_{ji}^{-}[k]\] \[= -|\Delta\mathcal{T}_{j}[k]|\frac{1}{N}\sigma_{j}[k]+\frac{1}{N} \sum_{v_{i}\in\Delta\mathcal{T}_{j}[k]}\rho_{ji}[k]\.\]
It is interesting to note that Cases 1 and 2 can be easily merged together as follows. At iteration \(k\), node \(v_{j}\) sets its \(\varepsilon_{j}[k]\) value as \(\varepsilon_{j}[k]=\varepsilon_{j}^{-}[k]+\varepsilon_{j}^{+}[k]\), i.e.,
\[\varepsilon_{j}[k] = (|\Delta\mathcal{U}_{d}[k]|-|\Delta\mathcal{T}_{j}[k]|)\frac{1}{ N}\sigma_{j}[k]-\] \[\ \ -\frac{1}{N}\sum_{v_{i}\in\Delta\mathcal{U}_{d}[k]}\rho_{ji}[k]+ \frac{1}{N}\sum_{v_{i}\in\Delta\mathcal{T}_{j}[k]}\rho_{ji}[k]\,\]
and then updates \(x_{j}[k+1]\) following (9):
\[x_{j}[k+1] = \left(1-\frac{D_{j}[k]}{N}\right)x_{j}[k]+\] \[+\frac{1}{N}\sum_{v_{i}\in\Delta\mathcal{N}_{j}[k]}(\rho_{ji}[k+1 ]-\rho_{ji}[k])+\varepsilon_{j}[k]\,\]
where \(\rho_{ji}[k+1]=\sigma_{i}[k+1]\).
In our analysis and the pseudocode of Algorithm 1, we use an equivalent but simpler way to perform the updates. More specifically, at iteration \(k\), node \(v_{j}\) receives \(\{\sigma_{i}[k+1]\mid v_{i}\in\mathcal{N}_{j}\}\) and sets \(\mu_{ji}\) for each neighbor \(v_{i}\in\mathcal{N}_{j}\) as follows:
\[\mu_{ji}[k+1]=\left\{\begin{array}{ll}\sigma_{i}[k+1]\,&\forall v_{i}\in \mathcal{N}_{j}[k]\,\\ 0\,&\text{otherwise};\end{array}\right.\]
then node \(v_{j}\) sets
\[x_{j}[k+1] = \left(1-\frac{D_{j}[k]}{N}\right)x_{j}[k]+\] \[+\frac{1}{N}\sum_{v_{i}\in\mathcal{N}_{j}}(\mu_{ji}[k+1]-\mu_{ji} [k])+e_{j}[k]\]
where \(e_{j}[k]\) is given by
\[e_{j}[k]=(|\Delta\mathcal{U}_{j}[k]|-|\Delta\mathcal{T}_{j}[k]|)\frac{1}{N} \sigma_{j}[k]\.\]
Note that if neighbor \(v_{i}\) becomes untrustworthy for the first time at iteration \(k\), \(\mu_{ji}[k+1]\) becomes zero, so that the summation above effectively subtracts \(\mu_{ji}[k]=\sigma_{i}[k]\). In addition, as long as neighbor \(v_{i}\) remains untrustworthy to node \(v_{j}\), its \(\mu_{ji}\)'s are zero (thus, the difference between two consecutive \(\mu_{ji}\) is also zero, i.e., \(v_{i}\) is effectively ignored in the summation). Finally, if \(v_{i}\) becomes trustworthy again at some iteration \(k^{\prime}\), \(\mu_{ji}[k^{\prime}+1]=\sigma_{i}[k^{\prime}+1]\), which means that the running sum is added back into the computation (since \(\mu_{ji}[k^{\prime}]=0\)).
```
1Input: Node \(v_{j}\) knows \(x_{j}\), \(\mathcal{N}_{j}\), and has access to sets \(\mathcal{T}_{j}[k]\) for \(k=0,1,2,..\) Initialization:
2Node \(v_{j}\) initializes \(\mathcal{T}_{j}[-1]=\mathcal{V}\), \(\mathcal{N}_{j}[-1]=\mathcal{N}_{j}\), \(x_{j}[0]=x_{j}\), \(\sigma_{j}[0]=0\), and \(\mu_{ji}[0]=0\), \(\forall v_{i}\in\mathcal{N}_{j}\) for\(k\geq 0:\)
3 Receive \(\mathcal{T}_{j}[k]\) Update Sets of Trustworthy Neighbors:
4 Set \(\mathcal{N}_{j}[k]=\mathcal{N}_{j}\cap\mathcal{T}_{j}[k]\) Set \(D_{j}[k]=|\mathcal{N}_{j}[k]|\) Set \(\Delta\mathcal{U}_{d}[k]=\mathcal{N}_{j}\cap(\mathcal{T}_{j}[k-1]\setminus \mathcal{T}_{j}[k])\) Set \(\Delta\mathcal{T}_{j}[k]=\mathcal{N}_{j}\cap(\mathcal{T}_{j}[k]\setminus \mathcal{T}_{j}[k-1])\) Compute:
5\(\sigma_{j}[k+1]=\sigma_{j}[k]+x_{j}[k]\)
6 Broadcast: \(\sigma_{j}[k+1]\) to all \(v_{i}\in\mathcal{N}_{j}\) Receive: \(\sigma_{i}[k+1]\) from each \(v_{i}\in\mathcal{N}_{j}\) Update \(\mu_{ji}\)'s:
7 For each \(v_{i}\in\mathcal{N}_{j}\) set \(\mu_{ji}[k+1]=\left\{\begin{array}{ll}\sigma_{i}[k+1]\,&\forall v_{i}\in \mathcal{N}_{j}[k]\\ 0\,&\text{otherwise}\end{array}\right.\)
8 Compute:
9\(e_{j}[k+1]=(|\Delta\mathcal{U}_{j}[k]|-|\Delta\mathcal{T}_{j}[k]|)\frac{1}{N} \sigma_{j}[k]\)
10\(x_{j}[k+1]=\left(1-\frac{D_{j}[k]}{N}\right)x_{j}[k]+\) \(+\frac{1}{N}\sum_{v_{i}\in\mathcal{N}_{j}}(\mu_{ji}[k+1]-\mu_{ji}[k])+e_{j}[k]\)
11
12
13
14
15
16
17
18
19
20
21Algorithm 1: Trustworthy Distributed Averaging
**Remark 1**: _It is worth pointing out that all of the above is described from the perspective of node \(v_{j}\) based on its own trust assessments (i.e., its own sets of trustworthy nodes \(\mathcal{T}_{j}[k]\) and \(\mathcal{T}_{j}[k-1]\)). All other (trustworthy) nodes are assumed to follow an identical procedure based on their own trust assessments. Finally, note that malicious nodes are allowed to behave arbitrarily._
### _Proof of Correctness_
Algorithm 1 is presented from the perspective of node \(v_{j}\). Each node \(v_{j}\) is assumed to know the set of its neighbors \(\mathcal{N}_{j}\) and receives at each iteration \(k\) (or can determine based on trust measurements) its set of trustworthy nodes \(\mathcal{T}_{j}[k]\). Node \(v_{j}\) computes the set of trustworthy neighbors at iteration \(k\) as \(\mathcal{N}_{j}[k]=\mathcal{N}_{j}\cap\mathcal{T}_{j}[k]\) and its degree as \(D_{j}[k]=|\mathcal{N}_{j}[k]|\). To establish the main convergence result, we first state an important invariant that holds during the execution of Algorithm 1.
**Theorem 1**: _Consider a distributed system, captured by a bidirectional communication graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), in which each node \(v_{j}\in\mathcal{V}\) has value \(x_{j}\). A certain subset \(\mathcal{V}_{T}\), \(\mathcal{V}_{T}\subseteq\mathcal{V}\), of the nodes are trustworthy (non-malicious), whereas the remaining nodes in \(\mathcal{V}_{M}=\mathcal{V}\backslash\mathcal{V}_{T}\) are untrustworthy (malicious). Consider the execution of Algorithm 1 where at each iteration \(k\), each node \(v_{j}\) has a binary indicator \(t_{ij}[k]\) regarding the trust it places to node \(v_{i}\). Let \(\mathcal{N}_{j}[k]=\mathcal{N}_{j}\cap\mathcal{T}_{j}[k]\), where \(\mathcal{T}_{j}[k]=\{v_{i}\mid t_{ij}[k]=1\}\) is the set of nodes that are considered trustworthy by node \(v_{j}\) at iteration \(k\) (i.e., \(\mathcal{N}_{j}[k]\) is the set
of neighbors that are considered trustworthy by node \(v_{j}\) at iteration \(k\)). Under Assumptions 0-2,3 for each trustworthy node \(v_{j}\in\mathcal{V}_{T}\), it holds at each iteration \(k\) (\(k=0,1,...\))
Footnote 3: Note that Assumption 3 is not really needed for the invariant to hold; in fact, Assumption 1 can also be relaxed.
\[x_{j}[k]-x_{j}=-\frac{D_{j}[k-1]}{N}\sigma_{j}[k]+\frac{1}{N}\sum_{v_{i}\in \mathcal{N}_{j}[k-1]}\sigma_{i}[k]\, \tag{10}\]
where \(D_{j}[k]=|\mathcal{N}_{j}[k]|\) is the number of trustworthy neighbors of node \(v_{j}\) at iteration \(k\) (recall that, according to the initialization of Algorithm 1, we have \(D_{j}[-1]=|\mathcal{N}_{j}[-1]|=D_{j}\)).
_Proof 1:_ We prove the invariant by induction on \(k\). At \(k=0\), the invariant clearly holds since \(x_{j}[0]=x_{j}\) and all \(\sigma_{i}[0]=0\). Notice that the exact \(\mathcal{T}_{j}[-1]\) and \(D_{j}[-1]\) are not really relevant here (because all \(\sigma_{i}[0]=0\)) but Algorithm 1 sets them to \(\mathcal{T}_{j}[-1]=\mathcal{V}\) and \(D_{j}[-1]=D_{j}\).
Suppose that at \(k=t\) the invariant holds, i.e.,
\[x_{j}[t]-x_{j}=-\frac{D_{j}[t-1]}{N}\sigma_{j}[t]+\frac{1}{N}\sum_{v_{i}\in \mathcal{N}_{j}[t-1]}\sigma_{i}[t]. \tag{11}\]
At iteration \(k=t+1\), we need to show that
\[x_{j}[t+1]-x_{j}=-\frac{D_{j}[t]}{N}\sigma_{j}[t+1]+\frac{1}{N}\sum_{v_{i}\in \mathcal{N}_{j}[t]}\sigma_{i}[t+1]\.\]
From lines 8 and 14 of Algorithm 1, we have the following:
\[\sigma_{j}[t+1] = \sigma_{j}[t]+x_{j}[t]\,\] \[x_{j}[t+1] = \left(1-\frac{D_{j}[t]}{N}\right)x_{j}[t]+\] \[\ \ \ +\frac{1}{N}\sum_{v_{i}\in\mathcal{N}_{j}}(\mu_{ji}[t+1]-\mu_{ ji}[t])+e_{j}[t]\,\]
where \(e_{j}[t]=(|\Delta d_{j}[t]|-|\Delta\mathcal{T}_{j}[t]|)\frac{1}{N}\sigma_{j}[t]\) (line 13) and \(\mu_{ji}[t+1]=\sigma_{i}[t+1]\) if \(v_{i}\in\mathcal{N}_{j}[t]\), otherwise, \(\mu_{ji}[t+1]=0\) (line 12).
Substituting \(e_{j}[t]\) and using the iteration invariant in (11), we have
\[x_{j}[t+1] = \underbrace{x_{j}-\frac{D_{j}[t-1]}{N}\sigma_{j}[t]+\frac{1}{N} \sum_{v_{i}\in\mathcal{N}_{j}[t-1]}\sigma_{i}[t]}_{=x_{j}[t]\text{ by \eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq
\(\sum_{v_{j}\in\mathcal{V}_{\mathcal{V}}}x_{j}[k_{0}+1]=\sum_{v_{j}\in\mathcal{V}_{ \mathcal{T}}}x_{j}\), which essentially proves the theorem.
## IV Numerical Simulations
### _5-Node Network_
Consider the bidirectional communication graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) in Fig. 1, with nodes \(\mathcal{V}=\{v_{1},v_{2},v_{3},v_{4},v_{5}\}\) and edges \(\mathcal{E}=\{(v_{1},v_{2}),(v_{2},v_{1}),...,(v_{4},v_{5}),(v_{5},v_{4})\}\) as indicated in the figure. We assume that the set of trustworthy nodes is \(\mathcal{V}_{T}=\{v_{1},v_{2},v_{3},v_{4}\}\) and the set of malicious nodes is \(\mathcal{V}_{M}=\{v_{5}\}\). Notice that the induced graph \(\mathcal{G}_{T}=(\mathcal{V}_{T},\mathcal{E}_{T})\) (where \(\mathcal{E}_{T}=\{(v_{2},v_{1}),(v_{1},v_{2}),(v_{2},v_{3}),(v_{3},v_{2}),(v_{3 },v_{4}),\)\((v_{4},v_{3}),(v_{4},v_{1}),(v_{1},v_{4})\}\)) is connected. We assume that the initial values of the nodes are \(x_{i}=i\) for \(i=1,2,...,5\), so that the average is \(\overline{X}=3\) and the average of the trustworthy nodes is \(\overline{X}_{T}=2.5\). Throughout this example, we set the edge weights equal to \(\frac{1}{5}\).
We next execute different scenarios to illustrate the operation of the proposed trustworthy distributed averaging algorithm. More specifically, we illustrate three runs of the proposed algorithm, which differ in terms of when trust assessments converge to the correct values and/or the behavior of the malicious node. In all three cases, the non-malicious nodes in \(\mathcal{V}_{T}\) converge to the average of the trustworthy nodes \(\overline{X}_{T}=2.5\), whereas the malicious node may or may not converge depending on its own behavior.
On the left of Fig. 2, we see the behavior of the network when nodes perceive node \(v_{5}\) as untrustworthy from the very beginning. In this simulation, node \(v_{5}\) behaves normally (despite the fact that all other nodes perceive it as untrustworthy) and we see that it also converges to the average of the trustworthy nodes. In the middle of Fig. 2, we see the behavior of the network when, up to iteration 20, nodes receive randomly generated binary values for the trust assessments of other nodes; however, after iteration 20, trust assessments settle. We see that the trustworthy nodes converge to the average \(\overline{X}_{T}\) after about 30 iterations. In this simulation, we also assume that the malicious node \(v_{5}\) behaves normally and we see that it also converges to the average \(\overline{X}_{T}\). On the right of Fig. 2, we see a simulation with the same characteristics as the one in the middle, except that the malicious node \(v_{5}\) behaves arbitrarily (more specifically, at each iteration, node \(v_{5}\) adds a random offset to its \(x\) value, which then propagates to the values it transmits to its neighbors). In this case, the malicious node does not converge to a value; however, the trustworthy nodes are able to converge to the average \(\overline{X}_{T}\) after about 30 iterations. Moreover, the transmissions of the malicious node stop influencing the distributed computation, including any offsets it added before iteration \(k=21\).
### _20-Node Network_
In this section, we present two simulations with larger (randomly generated) connected bidirectional communication graphs that consist of \(20\) nodes (\(\mathcal{V}=\{v_{1},v_{2},...,v_{20}\}\)) with initial values \(x_{i}=i\) for \(i=1,2,...,20\), so that \(\overline{X}=10.5\). In both simulations, nodes have randomly generated binary values for their trust assessments about other nodes up to iteration 20; however, after iteration 20, they acquire correct trust assessments. In Fig. 3, we see that the non-malicious nodes in \(\mathcal{V}_{T}\) converge to the average \(\overline{X}_{T}\), whereas malicious nodes may or may not converge depending on their own behavior. Specifically, on the left of Fig. 3, we see the behavior of the network when the set of malicious nodes is \(\mathcal{V}_{M}=\{v_{6},v_{8},v_{11},v_{14},v_{15},v_{19}\}\) and the malicious nodes behave correctly. In this case, all nodes converge to the average of the trustworthy nodes, which is \(\overline{X}_{T}=9.7857\), after about \(60\) iterations. In the middle of Fig. 3, we see the behavior of the network when the set of malicious nodes is \(\mathcal{V}_{M}=\{v_{2},v_{6},v_{9}\}\) and the malicious nodes behave arbitrarily (more specifically, at each iteration, each node in \(\mathcal{V}_{M}\) adds a random offset to its \(x\) value, which then propagates to the values it transmits to its neighbors). Again, we see that the non-malicious nodes converge to the average of the trustworthy nodes, which is \(\overline{X}_{T}=11.3529\), after about \(60\) iterations. However, the malicious nodes do not converge, as seen clearly in the zoomed-in plot on the right of Fig. 3.
## V Distributed Trust Evaluation Protocol
The approach described in the earlier sections relies on the availability of trust assessments but it is independent of how such trust assessments are obtained (as long as they satisfy Assumption 3). In the literature, there are many proposals for obtaining trust assessments, including the schemes in [28, 32] described earlier in the paper. Inspired by the work in [29, 30, 31], which considers bidirectional communication graphs where each node is in charge of verifying the proper functionality of each of its neighbors by having access to two-hop information (i.e., by having access to information sent by the neighbors of its neighbors), we propose in this section a scheme that allows nodes that are running Algorithm 1 to assess the trustworthiness of each of their neighbors. This scheme can be embedded in the iterations of Algorithm 1, effectively allowing nodes to obtain the needed trust assessments.
Initially, all nodes consider their neighbors to be trustworthy (thus, their initial value is included in the average computation); however, if a node attempts to alter the outcome of the average computation (by calculating and transmitting incorrect values), then it should be declared malicious and its initial
Fig. 1: Bidirectional communication graph considered in the 5-node example.
value should also be removed from the computation of the trustworthy average. Thus far in the paper, we did not have to explicitly define what constitutes a malicious node (since that was seamlessly provided by the trust assessments under Assumption 3). To obtain the trust assessments in this section, we define a malicious node as follows.4
Footnote 4: A subtle difference from the setting in the previous section is that, unless a node performs an incorrect update, it is not considered malicious and its initial value is included in the average calculation.
**Definition 2**: _A node \(v_{i}\) executing Algorithm 1 is malicious if, at any iteration \(k\), it provides incorrect values \(\sigma_{i}[k+1]\) to its neighbors (or computes incorrect values \(x_{j}[k+1]\) which will inevitably alter \(\sigma_{i}\) at later steps)._
Unlike [29, 30, 31], the approach proposed in this section does not require a separate (centralized) mechanism that allows all nodes to instantly learn who has been declared malicious, because Algorithm 1 requires each trustworthy node to only have trust assessments about its neighbors and, as we will see, such trust assessments become directly available to trustworthy nodes under the proposed scheme. Moreover, due to the ability of Algorithm 1 to completely remove the effect of earlier exchange of information with malicious nodes, the trust assessments in the proposed protocol can be finalized over several time steps. Finally, as we argue at the end of this section, the invariant that we identified and established in Theorem 1 can be used to perform such checking infrequently (more precisely, at random time instants) and not necessarily at each time step as done in [29, 30, 31]; this is an important feature as it significantly relaxes the two-hop communication overhead imposed by the necessity to obtain trust assessments.
Assumptions 0t and 2t below replace Assumptions 0 and 2 respectively. Assumption 3 is no longer necessary as we will explicitly describe how to obtain \(\mathcal{T}_{j}[k]\) for each node.
**Assumption 0t.** Consider a distributed system whose topology is captured by a bidirectional communication graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where the following hold. (i) Each node \(v_{j}\) is aware of the local topology around it up to two hops, i.e., it is aware of its neighbors and the neighbors of its neighbors, which we refer to as its two-hop neighbors and denote by \(\mathcal{N}_{j}^{(2)}:=\mathcal{N}_{j}\cup(\cup_{v_{i}\in\mathcal{N}_{j}} \mathcal{N}_{i})\), as well as the interconnections among them. (ii) Each node \(v_{j}\) is capable of two types of transmissions, broadcasting messages that are received by all of its neighbors in \(\mathcal{N}_{j}\), and two-hop broadcasting messages that are received by all of its two-hop neighbors in \(\mathcal{N}_{j}^{(2)}\). Furthermore, we assume that both types of transmissions are associated with a unique node ID that allows receiving nodes
Fig. 3: The values of non-malicious nodes in the large examples converge to the average \(\overline{X}_{T}\) when trust assessments are taken into account: correct trust assessments from iteration \(k=21\) onwards (random trust assessments before \(k=21\)), with malicious nodes \(\mathcal{V}_{M}=\{v_{6},v_{8},v_{11},v_{14},v_{15},v_{19}\}\) behaving correctly (left); correct trust assessments from iteration \(k=21\) onwards (random trust assessments before \(k=21\)), with malicious nodes \(\mathcal{V}_{M}=\{v_{2},v_{6},v_{9}\}\) behaving incorrectly (middle); zoomed-in version of plot in the middle, which clearly shows that nodes in \(\mathcal{V}_{M}\) do not converge (right).
Fig. 2: The values of nodes in the 5-node example converge to the average \(\overline{X}_{T}=2.5\) when trust assessments are taken into account: correct trust assessments from iteration \(k=0\) onwards, with malicious node \(v_{5}\) behaving correctly (left); correct trust assessments from iteration \(k=21\) onwards (random trust assessments before \(k=21\)), with malicious node \(v_{5}\) behaving correctly (middle); correct trust assessments from iteration \(k=21\) onwards (random trust assessments before \(k=21\)), with malicious node \(v_{5}\) behaving incorrectly (right).
to identify the sending node.
**Remark 2**: _Messages to two-hop neighbors can be sent in various ways, e.g., by transmitting at a higher power. This is likely to be more expensive and undesirable and is one of the reasons we propose, at the end of this section, a scheme that limits the use of such transmissions. An alternative way of thinking about two-hop transmissions is that we start with a dense network topology, but we divide the neighborhood of each node, on purpose, to one-hop and two-hop neighbors in order to provide a mechanism for trust assessments (and thus resilience to untrustworthy nodes). In other words, we carefully design the network topology to have the properties that we need for trust assessment evaluation._
**Assumption 2t.** The bidirectional communication graph induced from \(\mathcal{G}\) by restricting attention to the trustworthy nodes, denoted by \(\mathcal{G}_{T}=(\mathcal{V}_{T},\mathcal{E}_{T})\), where \(\mathcal{E}_{T}=\{(v_{j},v_{i})\in\mathcal{E}\ |\ v_{j},v_{i}\in\mathcal{V}_{T}\}\), is connected. Furthermore, there are no malicious nodes that are neighbors, i.e., for \(v_{j},v_{i}\in\mathcal{V}_{M}\) (recall that \(\mathcal{V}_{M}=\mathcal{V}\setminus\mathcal{V}_{T}\)), we have that \((v_{j},v_{i})\notin\mathcal{E}\) and \((v_{i},v_{j})\notin\mathcal{E}\).
In the remainder of this section, unless we explicitly indicate otherwise, a "transmission" or a "broadcast" by node \(v_{j}\) indicates that the message is sent to its immediate neighbors (in \(\mathcal{N}_{j}\)), whereas a "two-hop transmission" or a "two-hop broadcast" by node \(v_{j}\) indicates that the message is sent to all of its two-hop neighbors (in \(\mathcal{N}_{j}^{(2)}\)), including its immediate neighbors. Furthermore, when we say that a node \(v_{i}\) is declared malicious by another node \(v_{j}\) at iteration \(k\), we mean that \(t_{ij}[t]=0\) for all \(t>k\).
### _Two-Hop Information Received at Each Iteration_
Under Assumptions 0t, 1, and 2t, let us consider that, at initialization, each node \(v_{\ell}\in\mathcal{V}\) sends to its neighbors its value \(x_{\ell}[0]=x_{\ell}\), \(\sigma_{\ell}[0]=0\) and \(\mathcal{T}_{\ell}[-1]=\mathcal{V}\) (the last two are not really needed since its neighbors already expect these values). Subsequently, at the end of each iteration \(k\) of Algorithm 1 (\(k=0,1,2,...\)), each node \(v_{\ell}\in\mathcal{V}\) sends the following information:
* Node \(v_{\ell}\) sends to its (immediate) neighbors in \(\mathcal{N}_{\ell}\):
* the set of its trustworthy neighbors \(\mathcal{T}_{\ell}[k]\);
* its updated value \(x_{\ell}[k+1]\);
* Node \(v_{\ell}\) also sends to all of its two-hop neighbors in \(\mathcal{N}_{\ell}^{(2)}\) (including its immediate neighbors):
* its updated running sum \(\sigma_{\ell}[k+1]\) (this is sent anyway to the immediate neighbors of node \(v_{\ell}\) when executing Algorithm 1, but here we require that this running sum is sent to all two-hop neighbors as well).
The above communication strategy is followed by all nodes, including each neighbor \(v_{i}\) of node \(v_{j}\) (\(v_{i}\in\mathcal{N}_{j}\)). Thus, at the end of each iteration \(k\), each node \(v_{j}\) can check the computations performed by each neighbor \(v_{i}\), as it has access to all information that is needed from node \(v_{i}\) to execute an iteration step. More specifically, node \(v_{j}\) knows: (i) the running sums \(\{\sigma_{l}[k]\ |\ v_{l}\in\mathcal{N}_{i}\}\), obtained via the two-hop transmissions by nodes \(v_{l}\), \(v_{l}\in\mathcal{N}_{i}\), at iteration \(k-1\) (\(v_{l}\in\mathcal{N}_{i}\), thus \(v_{l}\in\mathcal{N}_{j}^{(2)}\)); (ii) the value \(x_{i}[k]\), obtained via the one-hop transmission by node \(v_{i}\) at iteration \(k-1\); (iii) the value \(\sigma_{i}[k]\), obtained via the two-hop transmission by node \(v_{i}\) at iteration \(k-1\) (this information can be obtained via a one-hop transmission from node \(v_{i}\) to node \(v_{j}\), but it is sent by node \(v_{i}\) to all of its two-hop neighbors, see \(B\) above); (iv) the sets \(\mathcal{T}_{i}[k]\) and \(\mathcal{T}_{i}[k-1]\) are obtained via the one-hop transmissions by node \(v_{i}\) at iterations \(k\) and \(k-1\). Table I summarizes the information received at node \(v_{j}\) at iterations \(k-1\) and \(k\) from neighbor \(v_{i}\) and from a generic neighbor of node \(v_{i}\), denoted by \(v_{l}\) (note that \(v_{l}\) is a two-hop neighbor of node \(v_{j}\)). Let us reemphasize that \(\sigma_{i}[k]\) and \(\sigma_{i}[k+1]\) could have been obtained via one-hop transmissions from node \(v_{i}\) to node \(v_{j}\); however, node \(v_{i}\) also sends this information to all of its two-hop neighbors (following \(B\) above). Thus, we treat it as two-hop information._
#### Iii-A1 Concurrent Checking
Node \(v_{j}\) can directly assess whether node \(v_{i}\) has correctly updated its values \(x_{i}[k+1]\) and \(\sigma_{i}[k+1]\) (by comparing the values it calculates against those broadcasted by node \(v_{i}\) at the current iteration). More specifically, based on the information it has available (refer to Table I), node \(v_{j}\) performs the following parity checks at the end of iteration \(k\):
\[p_{ij}[k] = x_{i}[k+1]-\widehat{x}_{i}[k+1]\, \tag{15}\] \[q_{ij}[k] = \sigma_{i}[k+1]-(\sigma_{i}[k]+x_{i}[k])\, \tag{16}\]
where \(\widehat{x}_{i}[k+1]\) is calculated by node \(v_{j}\) as follows:
\[\widehat{x}_{i}[k+1] = \left(1-\tfrac{D_{i}[k]}{N}\right)x_{i}[k]+\] \[+\tfrac{1}{N}\sum_{v_{l}\in\mathcal{N}_{i}}(\mu_{l}[k+1]-\mu_{l }[k])+\] \[+\tfrac{1}{(|\mathcal{U}_{i}[k]|-|\Delta\mathcal{T}_{i}[k]|)} \tfrac{1}{N}\sigma_{i}[k]\,\]
with \(D_{i}[k]=|\mathcal{N}_{i}[k]|\), \(\mathcal{N}_{i}[k]=\mathcal{N}_{i}\cap\mathcal{T}_{i}[k]\), \(\Delta\mathcal{U}_{i}[k]=\mathcal{N}_{i}\cap(\mathcal{T}_{i}[k-1]\setminus \mathcal{T}_{i}[k])\), and \(\Delta\mathcal{T}_{i}[k]=\mathcal{N}_{i}\cap(\mathcal{T}_{i}[k]\setminus \mathcal{T}_{i}[k-1])\). Note that \(\mathcal{T}_{i}[k]\) and \(\mathcal{T}_{i}[k-1]\) are reported to node \(v_{j}\) directly by node \(v_{i}\) and the \(\mu_{il}\)'s can be obtained from \(\sigma_{l}\)'s (just like node \(v_{i}\) would obtain them):
\[\mu_{il}[k+1]=\left\{\begin{array}{ll}\sigma_{l}[k+1]\,&\forall v_{l}\in \mathcal{N}_{i}[k],\\ 0\,&\text{otherwise.}\end{array}\right.\]
Effectively, node \(v_{j}\) checks the computation of node \(v_{i}\) based on information reported by the neighbors of node \(v_{i}\) and node \(v_{i}\) itself. It is worth pointing out that the information from the neighbors of node \(v_{i}\) is the same information as the one node \(v_{i}\) is using. Thus, if node \(v_{j}\) discovers that \(p_{ij}[k]\neq 0\) and/or \(q_{ij}[k]\neq 0\), then it can safely declare node \(v_{i}\) as malicious (i.e., set \(t_{ij}[t]=0\) for all \(t>k\)). In fact, all other neighbors of node \(v_{i}\) (not just node \(v_{j}\)) will also discover
at iteration \(k\) that node \(v_{i}\) is misbehaving because their parity checks will evaluate to the same values as the ones in (15)-(16) (since they are based on identical information). This means that all neighbors of node \(v_{i}\), which are all trustworthy under Assumption 2t, will declare node \(v_{i}\) malicious at iteration \(k\); this effectively isolates node \(v_{i}\) from the remainder of the network.
#### Iv-A2 Parity Check Sufficiency
The previous section described necessary conditions that need to be satisfied for node \(v_{i}\) to be considered trustworthy by node \(v_{j}\) (namely, the parity checks \(p_{ij}[k]\) and \(q_{ij}[k]\) need to be zero at each iteration \(k\)). Under some mild conditions, we argue in this section that checking these two parity checks is also sufficient.
One interesting case is the following: the above scheme allows node \(v_{i}\) to try to manipulate the distributed computation by declaring one (or more) of its neighbors, say node \(v_{l^{\prime}}\), \(v_{l^{\prime}}\in\mathcal{N}_{i}\), as untrustworthy and then performing the correct computation (i.e., report the correct values under the assumption that node \(v_{l^{\prime}}\) is untrustworthy). Of course, this restricts how node \(v_{i}\) can affect the outcome of the computation as it cannot arbitrarily change its values, but it still gives node \(v_{i}\) a finite number of ways in which to alter the outcome of its computation (by including or excluding one of the \(2^{D_{i}}\) different subsets5 of its neighbors via the set \(\mathcal{T}_{i}[k]\) that it is reporting). Note that in such case, node \(v_{l^{\prime}}\) and any other (trustworthy) neighbor of node \(v_{i}\) that might have been unfairly declared untrustworthy by node \(v_{i}\), will immediately realize that node \(v_{i}\) is untrustworthy (under Assumption 0t, node \(v_{l^{\prime}}\) becomes aware that node \(v_{i}\) is declaring it untrustworthy since \(v_{l^{\prime}}\) also receives the set \(\mathcal{T}_{i}[k]\)). Thus, we assume that, if node \(v_{l^{\prime}}\), which is trustworthy is declared untrustworthy by node \(v_{i}\), then node \(v_{l^{\prime}}\) will immediately declare node \(v_{i}\) as untrustworthy. A potential problem, however, is the fact that the other neighbors of node \(v_{i}\) (including node \(v_{j}\)) may not necessarily be in position to declare node \(v_{i}\) as untrustworthy (as they may have no direct knowledge of the trustworthiness of node \(v_{l^{\prime}}\)). In such case, the untrustworthy node \(v_{i}\) is not removed but the combined actions of nodes \(v_{i}\) and \(v_{l^{\prime}}\) effectively remove the link between nodes \(v_{i}\) and \(v_{l^{\prime}}\). Note that the graph remains connected (recall that, under Assumption 2t, the graph \(\mathcal{G}_{T}\) is assumed to be connected even if untrustworthy node \(v_{i}\) and _all_ of its links, not just the link between nodes \(v_{i}\) and \(v_{l^{\prime}}\), are removed). Therefore, node \(v_{i}\)'s initial value will be included in the computation of the average, but node \(v_{i}\) will not be able to affect the average computation in any other way. This is legitimate behavior under our assumptions (effectively node \(v_{i}\) is not behaving maliciously, other than forcing a link in the graph to be removed). Note that node \(v_{i}\)'s initial value will be included in the average calculation (unless node \(v_{i}\) misbehaves in a different manner).
Footnote 5: Node \(v_{i}\) has \(D_{i}\) neighbors and can declare one or more of them as untrustworthy; it has \(2^{D_{i}}\) different ways of declaring its \(D_{i}\) neighbors as trustworthy or untrustworthy.
In the above scenario, if we require that trustworthy nodes need to have \(\mathcal{T}_{i}[k+1]\subseteq\mathcal{T}_{i}[k]\) for all \(k\), node \(v_{i}\) will easily be identified as untrustworthy if it tries to change the status of its neighbor \(v_{l^{\prime}}\) from trustworthy to untrustworthy and back (e.g., in order to delay the convergence of the algorithm). Note, however, that Assumption 3 did not require this monotonicity: as long as, for \(k>k_{\max}\), the sets \(\mathcal{T}_{j}[k]=\mathcal{V}_{T}\) for all trustworthy nodes \(v_{j}\in\mathcal{V}_{T}\), Algorithm 1 will allow nodes to converge to the trustworthy average.
The above distributed scheme allows nodes to eventually identify their untrustworthy neighbors and remove them from the computation. Since the graph \(\mathcal{G}_{T}\) (after removing all untrustworthy nodes and the links associated with them) remains connected, the trustworthy nodes will eventually compute the trustworthy average. Untrustworthy nodes can slow down the computation (by removing links with neighboring nodes by declaring (unfairly) one or more of their trustworthy neighbors as untrustworthy) but cannot affect the computation in any other way (other than including their own initial value in the calculation, which is a legitimate action to take). Untrustworthy nodes can also sacrifice themselves to delay convergence to the average (e.g., by behaving correctly to remain in the computation and then, at some later time step, misbehave in order to cause a disturbance in the computation).
**Example 1**: _We use this example to point out that if there are untrustworthy nodes that are neighbors (i.e., if Assumption 2t is violated), then the concurrent checking scheme described in this section is not guaranteed to capture the trustworthy nodes. Consider a bidirectional line graph of six nodes, \(v_{1}\), \(v_{2}\), \(v_{3}\),..., \(v_{6}\), that are connected via edges between \(v_{1}\) and \(v_{2}\), between \(v_{2}\) and \(v_{3}\),..., and between \(v_{5}\) and \(v_{6}\). If nodes \(v_{5}\) and \(v_{6}\) are untrustworthy, the induced graph involving trustworthy nodes is connected. However, since trustworthy nodes are only aware of their two-hop neighborhood, it is not possible for them to capture violations of the untrustworthy nodes: node \(v_{5}\) can collude with node \(v_{6}\) to pretend that any changes it incorporates in the computation are coming from neighbor \(v_{6}\) (which cannot be checked by any trustworthy node). In other words, node \(v_{5}\) can act correctly so that when node \(v_{4}\) performs its checks does not identify any problems, at least based on what node \(v_{6}\) is reporting; however, the only node that can check what node \(v_{6}\) reports is node \(v_{5}\), which is itself malicious._
### _Two-Hop Information Received Infrequently_
In this section, we perform trust evaluations, similar to the ones developed in the previous section, but infrequently (more precisely, at random instants of time). The fact that trust evaluations are performed infrequently does not compromise the ability of nodes to capture misbehavior by their neighboring nodes (even when this misbehavior occurs in between checks) because we utilize the invariant established in Theorem 1 about Algorithm 1 in a way that guarantees that even if a node manipulates its values at an iteration during which it is not checked by any of its neighbors, its misbehavior (if significant6) will be captured when it is next checked by its neighbors.
Footnote 6: Misbehavior by node \(v_{i}\) is insignificant if it does not change the average value the nodes converge to.
Under Assumptions 0t, 1, and 2t, let us consider that, at initialization, each node \(v_{\ell}\) sends to its (immediate) neighbors the values \(x_{\ell}[0]=x_{\ell}\), \(\sigma_{\ell}[0]=0\), and \(\mathcal{T}_{\ell}[-1]=\mathcal{V}\). Subsequently,
at the end of each iteration \(k\) of Algorithm 1 (\(k=0,1,2,...\)), each node \(v_{\ell}\) sends to its (immediate) neighbors its values \(x_{\ell}[k+1]\), \(\sigma_{\ell}[k+1]\), and \(\mathcal{T}_{\ell}[k]\). At each iteration \(k\), node \(v_{\ell}\) also saves its previous \(\sigma_{\ell}[k]\) value.
The above communication strategy is followed by all nodes, including each neighbor \(v_{i}\) of node \(v_{j}\) (\(v_{i}\in\mathcal{N}_{j}\)). We next describe the checking performed from the point of view of node \(v_{j}\) (each node does something similar). At random points in time that node \(v_{j}\) selects, node \(v_{j}\) initiates a check of all of the nodes in its neighborhood. More specifically, at the end of a randomly selected iteration, denoted here by \(K\), node \(v_{j}\) requests and receives additional information from its two-hop neighbors in \(\mathcal{N}_{j}^{(2)}:=\mathcal{N}_{j}\cup(\cup_{v_{i}\in\mathcal{N}_{j}} \mathcal{N}_{i})\). At this point, each two-hop neighbor \(v_{l}\in\mathcal{N}_{j}^{(2)}\) sends to node \(v_{j}\) the following information:
1. its current running sum \(\sigma_{l}[K+1]\), and
2. its previous running sum \(\sigma_{l}[K]\) (which is information that each node stores for one time step).
Table II summarizes the information received at node \(v_{j}\) at iterations \(K-1\) and \(K\) from neighbor \(v_{i}\) and from a generic neighbor of node \(v_{i}\), denoted by \(v_{l}\) (note that \(v_{l}\) is a two-hop neighbor of node \(v_{j}\)). We point out that two-hop information is only sent at iteration \(K\) (when a check is initiated by node \(v_{j}\)) and involves the running sums \(\sigma_{l}[K]\) (one iteration step ago) and \(\sigma_{l}[K+1]\) (current iteration) for each \(v_{l}\in\mathcal{N}_{j}^{(2)}\). Note that we denote these running sums by \(\widetilde{\sigma}_{l}[K]\) and \(\widetilde{\sigma}_{l}[K+1]\) because they could be different from the actual runnings sum \(\sigma_{l}[K]\) and \(\sigma_{l}[K+1]\) (the latter were sent by node \(v_{l}\) to its one-hop neighbors at iterations \(K-1\) and \(K\) respectively). In particular, if node \(v_{l}\) is malicious, we could have \(\widetilde{\sigma}_{l}[K]\neq\sigma_{l}[K]\) and/or \(\widetilde{\sigma}_{l}[K+1]\neq\sigma_{l}[K+1]\).
#### Iii-B1 Infrequent Checking
At the end of iteration \(K\), node \(v_{j}\) can check the computations performed by each neighbor \(v_{i}\) (\(v_{i}\in\mathcal{N}_{j}\)), as it has access to all information that is needed from node \(v_{i}\) to execute an iteration step. More specifically, node \(v_{j}\) knows: (i) the running sums \(\{\widetilde{\sigma}_{l}[K]\mid v_{l}\in\mathcal{N}_{i}\}\), obtained via the two-hop transmissions by each node \(v_{l}\) in \(\mathcal{N}_{i}\) at iteration \(K\) (\(v_{l}\in\mathcal{N}_{i}\), thus \(v_{l}\in\mathcal{N}_{j}^{(2)}\)); (ii) the value \(x_{i}[K]\), obtained via the one-hop transmission by node \(v_{i}\) at iteration \(K-1\); (iii) the value \(\sigma_{i}[K]\), obtained via the one-hop transmission by node \(v_{i}\) at iteration \(K-1\); (iv) the sets \(\mathcal{T}_{i}[K-1]\) and \(\mathcal{T}_{i}[K]\), obtained via the one-hop transmissions of node \(v_{i}\) at iterations \(K-1\) and \(K\), respectively.
Therefore, node \(v_{j}\) can directly assess whether node \(v_{i}\) has correctly updated its values \(x_{i}[K+1]\) and \(\sigma_{i}[K+1]\) (by comparing the values it calculates against those broadcasted by node \(v_{i}\) at the current iteration). More specifically, based on the information it has available (refer to Table II), node \(v_{j}\) performs the following parity checks:
\[p_{ij}[K] = x_{i}[K+1]-\widehat{x}_{i}[K+1]\;, \tag{17}\] \[q_{ij}[K] = \sigma_{i}[K+1]-(\sigma_{i}[K]+x_{i}[K])\;, \tag{18}\]
where \(\widehat{x}_{i}[K+1]\) is calculated by node \(v_{j}\) as
\[\widehat{x}_{i}[K+1] = \left(1-\tfrac{D_{i}[K]}{N}\right)x_{i}[K]\] \[+\;\tfrac{1}{N}\sum_{v_{i}\in\mathcal{N}_{i}}(\widetilde{\mu}_{ il}[K+1]-\widetilde{\mu}_{il}[K])+\] \[+\;(|\Delta\mathcal{U}_{i}[K]|-|\Delta\mathcal{T}_{i}[K]|)\tfrac{1}{ N}\sigma_{i}[K]\;,\]
with \(D_{i}[k]=|\mathcal{N}_{i}[k]|\), \(\mathcal{N}_{i}[k]=\mathcal{N}_{i}\cap\mathcal{T}_{i}[k]\), for \(k=K-1\) and \(k=K\), and \(\Delta\mathcal{U}_{i}[K]=\mathcal{N}_{i}\cap(\mathcal{T}_{i}[K-1]\setminus \mathcal{T}_{i}[K])\), and \(\Delta\mathcal{T}_{i}[K]=\mathcal{N}_{i}\cap(\mathcal{T}_{i}[K]\setminus \mathcal{T}_{i}[K-1])\). Note that \(\mathcal{T}_{i}[K]\) and \(\mathcal{T}_{i}[K-1]\) are reported to node \(v_{j}\) directly by node \(v_{i}\), but the \(\widetilde{\mu}_{il}\)'s are obtained from \(\widetilde{\sigma}_{l}\)'s as follows:
\[\widetilde{\mu}_{il}[k+1]=\left\{\begin{array}{ll}\widetilde{\sigma}_{l}[k+ 1]\;,&\forall v_{l}\in\mathcal{N}_{i}[k],\\ 0\;,&\text{otherwise},\end{array}\right.\]
for \(k=K-1\) and \(k=K\).
In addition, node \(v_{j}\) checks that the invariant in (10) holds for node \(v_{i}\) by performing the following check:
\[r_{ij}[K] = x_{i}[K+1]-x_{i}+\frac{D_{i}[K]}{N}\sigma_{i}[K+1]- \tag{19}\] \[-\;\sum_{v_{l}\in\mathcal{N}_{i}[K]}\widetilde{\sigma}_{l}[K+1]\;,\]
where \(x_{i}\) became available to node \(v_{j}\) at the initialization of the algorithm.
Finally, node \(v_{j}\) checks whether the running sums transmitted by node \(v_{i}\) to node \(v_{i}\)'s neighbors and to node \(v_{i}\)'s two-hop neighbors are in agreement (i.e., \(\widetilde{\sigma}_{i}[K]\neq\sigma_{i}[K]\) and/or \(\widetilde{\sigma}_{i}[K+1]\neq\sigma_{i}[K+1]\)); this can be done via the following parity check:
\[s_{ij}[K]=|\widetilde{\sigma}_{i}[K]-\sigma_{i}[K]|+|\widetilde{\sigma}_{i}[K+1] -\sigma_{i}[K+1]|\;. \tag{20}\]
We assume that, when node \(v_{j}\) initiates a check at iteration \(K\) for node \(v_{i}\) (and all of node \(v_{j}\)'s neighbors), all other neighbors of node \(v_{i}\) (not just node \(v_{j}\)) will also perform at iteration \(K\) the exact same checks in (17)-(20) as node \(v_{j}\) (this is possible because they receive information similar to the one received by node \(v_{j}\)). For example, node \(v_{l}\) will also be able to check node \(v_{i}\) though it will not necessarily be in position to check its other neighbors. This means that all neighbors of node \(v_{i}\) will reach a decision regarding the trustworthiness of node \(v_{i}\); as we explain next, however, the decisions of the neighbors of node \(v_{i}\) regarding the status of node \(v_{i}\) need not coincide (though they will eventually coincide).
#### Iii-B2 Parity Check Analysis
Node \(v_{j}\) checks the computation of node \(v_{i}\) based on information reported by the neighbors of node \(v_{i}\) and node \(v_{i}\) itself. If node \(v_{j}\) discovers that \(q_{ij}[K]\neq 0\) or \(s_{ij}[K]\neq 0\), then it can safely declare node \(v_{i}\) to be malicious, i.e., set \(t_{ij}[t]=0\) for all \(t>K\), because \(q_{ij}[K]\) and \(s_{ij}[K]\) are computed purely on information provided by node \(v_{i}\) and one of them being nonzero is an indication that node \(v_{i}\) has provided inconsistent information). However, if
node \(v_{j}\) discovers that \(p_{ij}[K]\neq 0\) and/or \(r_{ij}[K]\neq 0\), then it cannot safely declare node \(v_{i}\) as malicious. Then, there are two cases to be considered:
Case 1: node \(v_{i}\) is malicious; and/or
Case 2: one or more neighbors of node \(v_{i}\), say node \(v_{l}\), has reported an incorrect running sum (i.e., \(\widetilde{\sigma}_{l}[K]\neq\sigma_{l}[K]\) and/or \(\widetilde{\sigma}_{l}[K+1]\neq\sigma_{l}[K+1]\)).
We first discuss Case 2, which is more straightforward. If node \(v_{l}\) is sending to its two-hop neighbors a different running sum than the one that it sent to its one-hop neighbors (i.e., \(\widetilde{\sigma}_{l}[K]\neq\sigma_{l}[K]\) and/or \(\widetilde{\sigma}_{l}[K+1]\neq\sigma_{l}[K+1]\)), this will prompt all (trustworthy) neighbors of node \(v_{l}\) (including node \(v_{i}\), if trustworthy) to declare node \(v_{l}\) as untrustworthy. The effectively removes node \(v_{l}\) from the computation (under Assumption 2t, all neighbors of node \(v_{l}\) are trustworthy and will set their trust assessment about node \(v_{l}\) to zero). This includes node \(v_{i}\) itself because \(v_{i}\) will also realize that its neighboring node \(v_{l}\) is acting maliciously (reporting mismatched running sums, which will manifest itself as a violation of the fourth parity check, i.e., \(s_{ll}[K]\neq 0\)). This means that, for subsequent iterations \(k>K\), \(\mathcal{T}_{i}[k]\) will _not_ include node \(v_{l}\).
Note that, at iteration \(K\), node \(v_{j}\) cannot be certain about the status of node \(v_{i}\) (because it cannot discriminate between Case 1 and Case 2). For this reason, node \(v_{j}\) (as well as all other trustworthy nodes that are neighbors of node \(v_{i}\)) will consider node \(v_{i}\) to be "possibly untrustworthy," which means that it might be (permanently) declared untrustworthy when these nodes perform their next checks. In particular, after the check at iteration \(K\), node \(v_{j}\) expects node \(v_{i}\) (if trustworthy) to remove at least one node from its computation, i.e., to report a set \(\mathcal{T}_{i}[K+1]\) that is strictly contained in \(\mathcal{T}_{i}[K]\). If that does not happen at the next iteration, then node \(v_{j}\) can safely declare node \(v_{i}\) to be untrustworthy. Note that node \(v_{j}\) does not necessarily know which neighbor of node \(v_{i}\) might be untrustworthy (it is simply aware that there is a disagreement in terms of what node \(v_{i}\) and its neighbors are reporting); however, node \(v_{j}\) is expecting node \(v_{i}\), if trustworthy, to remove at least one of its neighboring nodes from the set \(\mathcal{T}_{i}[K+1]\) (otherwise, Case 1 holds and node \(v_{j}\) declares node \(v_{i}\) untrustworthy at iteration \(K+1\)).
Note that node \(v_{j}\) can also easily check whether or not \(\mathcal{T}_{i}[k+1]\subseteq\mathcal{T}_{i}[k]\), which should hold for all \(k\) (as it receives this information at each iteration). In particular, if node \(v_{i}\) was considered to be "possibly untrustworthy" (due to the check at iteration \(K\) when node \(v_{j}\) last initiated two-hop information exchange), then strict inequality \(\mathcal{T}_{i}[K+1]\subset\mathcal{T}_{i}[K]\) should hold. It is possible for a node \(v_{i}\) that was deemed "possibly untrustworthy" by node \(v_{j}\) to present a \(\mathcal{T}_{i}[K+1]\) that is strictly contained in \(\mathcal{T}_{i}[K]\) (and thus become trustworthy), but node \(v_{j}\) to identify another inconsistency at a subsequent check, say at iteration \(K^{\prime}\). In such case, node \(v_{j}\) again considers node \(v_{i}\) to be "possibly untrustworthy" because Case 1 is not certain (it could be that Case 2 holds for a different neighbor \(v_{l^{\prime}}\) of node \(v_{i}\)). Of course, the procedure continues because at iteration \(K^{\prime}+1\) node \(v_{j}\) expects \(\mathcal{T}_{i}[K^{\prime}+1]\) to be reduced even further (otherwise, node \(v_{j}\) can safely declare node \(v_{i}\) as untrustworthy). Note that this can only happen a finite number of times (at most as many as the number of neighbors of node \(v_{i}\)).
#### V-B3 Parity Check Sufficiency
The reason one also needs to check the invariant in (10) (via \(r_{ij}\)) is because an untrustworthy node \(v_{i}\) has the opportunity to change its \(x\) and \(\sigma\) values arbitrarily during the time steps when it is not being checked. Of course, if it attempts such a change, \(v_{i}\) is risking getting caught by node \(v_{j}\) (or by any other of its trustworthy neighbors) because the latter may randomly decide to request two-hop information. Nevertheless, if node \(v_{i}\) takes the risk at one iteration and does not get caught, node \(v_{j}\) will not be able to determine this misbehavior if it only checks the computational updates at later iterations (because those updates might be correctly performed by node \(v_{i}\) and the damage has already been done). Checking the invariant ensures that node \(v_{i}\) will get caught at a later iteration (unless its invariant is the proper one, which guarantees that node \(v_{i}\) is contributing the correct initial value).
In the above scenario, it is possible for node \(v_{i}\) to misbehave multiple times during checks and not get caught if it somehow manages to present the correct invariant when the checks are taking place. In fact, if the invariant for node \(v_{i}\) is correct, Theorem 2 effectively implies that node \(v_{i}\) is behaving correctly (we know that nodes will converge to the correct average). Therefore, node \(v_{i}\) can indeed manipulate its updates in-between checks without getting caught if it somehow manages to report \(x\) and \(\sigma\) values that satisfy the update check and the invariant check (for instance, it might attempt to do that in order to delay convergence); however, such manipulations cannot happen indefinitely because node \(v_{i}\) will eventually get caught, since, due to the randomness of the checks of node \(v_{j}\), node \(v_{i}\) will be identified as untrustworthy with probability one.
**Remark 3**: _As in the previous section, the above scheme allows node \(v_{i}\) to try to manipulate the average by declaring one (or more) of its neighbors, say node \(v_{l^{\prime}}\), as untrustworthy and then performing the correct computation (i.e., report the correct values under the assumption that node \(v_{l^{\prime}}\) is untrustworthy). This will effectively remove the link \((v_{i},v_{l^{\prime}})\) and can only happen a finite number of times. Under such strategy, node \(v_{i}\) manages to incorporate its value in the calculation, but cannot affect the average in any other way. Furthermore, if node \(v_{i}\) its trustworthy, the set \(\mathcal{T}_{i}[k+1]\) is a subset of \(\mathcal{T}_{i}[k]\) for all \(k\), and this is something that can be verified by its neighbors at each iteration. Note that \(\mathcal{T}_{i}[k+1]\subseteq\mathcal{T}_{i}[k]\) for all \(k\) is also a feature of the scheme proposed in [29, 30, 31]. However, in our case, node \(v_{j}\) may assign the status "possibly untrustworthy" to its neighbor \(v_{i}\), while waiting for node \(v_{i}\) to determine any untrustworthy neighbors it might have._
## VI Conclusions and Future Work
In this paper, we have considered the problem of trustworthy distributed average consensus in multi-agent systems, in the presence of malicious nodes that may try to influence the outcome of the computation via their updates that might be chosen in a collusive manner. The proposed algorithm allows the nodes to asymptotically converge to the average of the initial values of the trustworthy nodes, assuming that
(i) the underlying bidirectional communication topology that describes the information exchange among the non-malicious nodes is connected, and (ii) the non-malicious nodes eventually receive correct information about the trustworthiness of other nodes. The proposed algorithm allows the nodes to continuously adjust their values and updating strategy as they receive new information about the trustworthiness of other nodes; assuming that eventually this information correctly represents the trustworthiness of the various nodes in the distributed system, the non-malicious nodes asymptotically converge to the average of the initial values of the trustworthy nodes. When the nodes are capable of performing (perhaps periodically and at a higher cost) two-hop communication transmissions, we have also proposed strategy for the nodes to perform checks and obtain the trust assessments, in a way that guarantees that all malicious nodes are eventually identified by the trustworthy nodes, effectively ensuring convergence to the trustworthy average.
In our future work we plan to further research into mechanisms for distributively obtaining the trust assessments that are required by Algorithm 1 (e.g., in directed communication topologies) and/or relaxing the topological requirements we imposed (e.g., consider cases where neighboring nodes may be malicious). We also plan to consider dynamic versions of the above problem where nodes can update the initial measurement, or enter and leave the distributed system.
|
2308.09393 | Learning MDL logic programs from noisy data | Many inductive logic programming approaches struggle to learn programs from
noisy data. To overcome this limitation, we introduce an approach that learns
minimal description length programs from noisy data, including recursive
programs. Our experiments on several domains, including drug design, game
playing, and program synthesis, show that our approach can outperform existing
approaches in terms of predictive accuracies and scale to moderate amounts of
noise. | Céline Hocquette, Andreas Niskanen, Matti Järvisalo, Andrew Cropper | 2023-08-18T08:49:30Z | http://arxiv.org/abs/2308.09393v1 | # Learning MDL Logic Programs From Noisy Data
###### Abstract
Many inductive logic programming approaches struggle to learn programs from noisy data. To overcome this limitation, we introduce an approach that learns minimal description length programs from noisy data, including recursive programs. Our experiments on several domains, including drug design, game playing, and program synthesis, show that our approach can outperform existing approaches in terms of predictive accuracies and scale to moderate amounts of noise.
## 1 Introduction
The goal of inductive logic programming (ILP) [19] is to induce a logic program (a set of logical rules) that generalises training examples and background knowledge. A common criticism of ILP is that it cannot handle noisy data [1, 1]. This criticism is unfounded: most ILP approaches can learn from noisy data [10, 11, 12]. For instance, set-covering approaches [19, 18, 13, 14, 15] search for rules that generalise a subset of the examples.
Although most ILP approaches can learn from noisy data, they struggle to learn recursive programs and perform predicate invention, two important features when learning complex algorithms [10, 12]. Moreover, they are not guaranteed to learn optimal programs, such as textually minimal programs, and tend to overfit.
Recent approaches overcome these limitations and can learn recursive and textually minimal programs [1, 16] and perform predicate invention [19, 17]. However, these approaches struggle to learn from noisy data because they search for a program that strictly generalises all the positive and none of the negative examples.
In this paper, our goal is to learn recursive programs and support predicate invention in a noisy setting. Following [17], we first search for small programs that generalise a subset of the examples. We then search for a combination of these smaller programs to form a larger program. [10] search for a combination that strictly generalises all the positive and none of the negative examples, i.e. they cannot learn from noisy data. By contrast, we relax this condition to learn from noisy data. To avoid overfitting, we search for a combination that trades off model complexity (program size) and data fit (training accuracy). To do so, we use the minimal description length (MDL) principle [12]. In other words, we introduce an approach that learns MDL programs from noisy data.
To explore our idea, we build on _learning from failures_ (LFF) [10]. LFF frames the ILP problem as a constraint satisfaction problem (CSP), where each solution to the CSP represents a program (a hypothesis). The goal of a LFF learner is to accumulate constraints to restrict the hypothesis space (the set of all hypotheses) and thus constrain the search. We use LFF to explore our idea because it can learn recursive programs and perform predicate invention. We build on LFF by learning MDL programs from noisy examples. We introduce constraints which are optimally sound in that they do not prune MDL programs. To find an MDL combination, we use a maximum satisfiability (MaxSAT) solver [1].
Novelty and contributionsThe main novelty of this paper is the idea of learning small programs from noisy examples and using a MaxSAT solver to find an MDL combination. The benefits, which we show on diverse domains, are (i) the ability to learn complex programs from noisy examples, and (ii) improved performance compared to existing approaches. Overall, our contributions are:
1. We introduce MaxSynth, which learns MDL programs from noisy examples, including recursive programs.
2. We introduce constraints for this noisy setting and prove that they are optimally sound (Propositions 1 and 2).
3. We prove the correctness of MaxSynth, i.e. that it always learns an MDL program (Theorem 1).
4. We experimentally show on multiple domains, including drug design, game playing, and program synthesis, that MaxSynth can (i) substantially improve predictive accuracies compared to other systems, and (ii) scale to moderate amounts of noise (30%). We also show that our noisy constraints can reduce learning times by 99%.
Related Work
**ILP.** Most ILP approaches support noise Quinlan (1990); Muggleton (1995); McCreath and Sharma (1997); Blockeel and De Raedt (1998); Srinivasan (2001); Oblak and Bratko (2010); Ahlgren and Yuen (2013); Zeng, Patel, and Page (2014); De Raedt et al. (2015). However, these approaches do not support predicate invention, struggle to learn recursive programs, and are not guaranteed to learn an MDL program. Recent approaches can learn textually minimal and recursive programs but are not robust to noisy examples Corapi, Russo, and Lupu (2011); Muggleton, Lin, and Tamaddoni-Nezhad (2015); Kaminski, Eiter, and Inoue (2019); Cropper and Morel (2021); Dai and Muggleton (2021); Purgal, Cerna, and Kaliszyk (2022). There are two notable exceptions. \(\delta\)ILP Evans and Grefenstette (2018) frames the ILP problem as a differentiable neural architecture and is robust to noisy data. NoisyPopper Wahlig (2022) can learn MDL and recursive programs from noisy examples. However, these approaches can only learn programs with a small number of small rules. For instance, \(\delta\)ILP cannot learn programs with more than a few rules and can only use binary relations. By contrast, MaxSynth can learn MDL programs with many rules and any arity relation.
**Rule selection.** Many systems formulate the ILP problem as a rule selection problem Corapi, Russo, and Lupu (2011); Kaminski, Eiter, and Inoue (2019); Si et al. (2019); Raghothaman et al. (2020); Evans et al. (2021); Bembenek, Greenberg, and Chong (2023). These approaches precompute every possible rule in the hypothesis space and then search (often using a constraint solver) for a subset that entails all the positive and none of the negative examples. Some approaches relax this requirement to find a subset with the best coverage using solver optimisation Law, Russo, and Broda (2018); Evans et al. (2021) or numerical methods Si et al. (2019). However, because they precompute every possible rule, these approaches cannot learn rules with a large number of literals. By contrast, we do not precompute all possible rules.
**Sampling.** Sampling can mitigate noise. Raychev et al. (2016) pair a data sampler which selects representative subsets of the data with a regularised program generator to avoid overfitting. Metagol\({}_{nt}\)Muggleton et al. (2018) finds hypotheses consistent with randomly sampled subsets of the training examples and evaluates each resulting program on the remaining training examples. Metagol\({}_{nt}\) needs as input a parameter about the noise level. By contrast, MaxSynth does not need a user-provided noise level parameter and is guaranteed to learn an MDL program.
**Rule mining.** AMIE+ Galarraga et al. (2015) learns rules from noisy knowledge bases. However, AMIE+ can only use unary and binary relations, so it cannot be used on most of the datasets in our experiments, which require relations of arity greater than two. By contrast, MaxSynth can learn programs with relations of any arity.
**MDL.** Several approaches use cost functions based on MDL Quinlan (1990); Muggleton (1995); Srinivasan (2001); Huang and Pearce (2007). However, they are not guaranteed to find a program that minimises this cost function because they greedily learn a single rule at a time. By contrast, MaxSynth learns a global MDL program. Jain et al. (2021) learn propositional CNF using MDL. By contrast, we learn first-order theories.
## 3 Problem Setting
We describe our problem setting. We assume familiarity with logic programming Lloyd (2012) but have included a summary in the appendix.
### Learning From Failures
We use the LFF setting. A _hypothesis_ is a definite program with the least Herbrand model semantics. A _hypothesis space_\(\mathcal{H}\) is a set of hypotheses. LFF uses _hypothesis constraints_ to restrict the hypothesis space. Let \(\mathcal{L}\) be a meta-language that defines hypotheses. For instance, consider a language with two literals _h_lit/3_ and _b_lit/3_ which represent _head_ and _body_ literals respectively. With this language, we denote the rule _last(A,B) \(\leftarrow\)__tail(A,C), head(C,B)_ as the set of literals {_h lit(0,last,(0,1)), b_lit(0,tail,(0,2)), b_lit(0,head,(2,1))_}. The first argument of each literal is the rule index, the second is the predicate symbol, and the third is the literal variables, where \(0\) represents _A, \(1\) represents \(B\), etc. A _hypothesis constraint_ is a constraint (a headless rule) expressed in \(\mathcal{L}\). Let \(C\) be a set of hypothesis constraints written in a language \(\mathcal{L}\). A hypothesis is _consistent_ with \(C\) if when written in \(\mathcal{L}\) it does not violate any constraint in \(C\). We denote as \(\mathcal{H}_{C}\) the subset of the hypothesis space \(\mathcal{H}\) which does not violate any constraint in \(C\).
We define a LFF input:
**Definition 1** (**LFF input)**.: A _LFF input_ is a tuple \((E,B,\mathcal{H},C,cost)\) where \(E=(E^{+},E^{-})\) is a pair of sets of ground atoms denoting positive (\(E^{+}\)) and negative (\(E^{-}\)) examples, \(B\) is a definite program denoting background knowledge, \(\mathcal{H}\) is a hypothesis space, \(C\) is a set of hypothesis constraints, and \(cost\) is a function that measures the cost of a hypothesis.
We define a solution to a LFF input in the non-noisy setting:
**Definition 2** (**Non-noisy solution)**.: Given a LFF input \((E,B,\mathcal{H},C,cost)\), where \(E=(E^{+},E^{-})\), a hypothesis \(h\in\mathcal{H}_{C}\) is a _non-noisy solution_ when \(h\) is _complete_ (\(\forall e\in E^{+},\ B\cup h\models e\)) and _consistent_ (\(\forall e\in E^{-},\ B\cup h\not\models e\)).
A hypothesis that is not a non-noisy solution is a _failure_. A LFF learner builds constraints from failures to restrict the hypothesis space. For instance, if a hypothesis \(h\) is inconsistent (entails a negative example), a generalisation constraint prunes generalisations of \(h\) as they are also inconsistent.
In the non-noisy setting, a cost function only takes as input a hypothesis, i.e. they are of the type \(cost:\mathcal{H}\mapsto\mathbb{N}\). For instance, the cost of a hypothesis is typically measured as its size (the number of literals in the hypothesis). An _optimal_ non-noisy solution minimises the cost function:
**Definition 3** (**Optimal non-noisy solution)**.: Given a LFF input \((E,B,\mathcal{H},C,cost)\), a hypothesis \(h\in\mathcal{H}_{C}\) is an _optimal_ non-noisy solution when (i) \(h\) is a non-noisy solution, and (ii) \(\forall h^{\prime}\in\mathcal{H}_{C}\), where \(h^{\prime}\) is a non-noisy solution, \(cost(h)\leq cost(h^{\prime})\).
### Noisy Learning From Failures
A non-noisy solution must entail all the positive and none of the negative examples. To tolerate noise, we relax this requirement. We generalise a LFF input to allow a cost function to also take as input background knowledge \(B\) and examples \(E\), i.e. cost functions of the type \(cost_{B,E}:\mathcal{H}\mapsto\mathbb{N}\). In our noisy setting, any hypothesis \(h\in\mathcal{H}\) is a noisy solution. An _optimal_ noisy solution minimises the cost function:
**Definition 4** (Optimal noisy solution).: Given a noisy input \((E,B,\mathcal{H},C,cost_{B,E})\), a hypothesis \(h\in\mathcal{H}_{C}\) is an _optimal_ noisy solution when \(\forall h^{\prime}\in\mathcal{H}_{C}\), \(cost_{B,E}(h)\leq cost_{B,E}(h^{\prime})\).
### Minimal Description Length
Our noisy LFF setting generalises the LFF setting to allow for different cost functions. A challenge in machine learning is choosing a suitable cost function. According to complexity-based induction, the best hypothesis is the one that minimises the number of bits required to communicate the examples [12]. This concept corresponds to the hypothesis with minimal description complexity [10]1, where the idea is to trade off the complexity of a hypothesis (its size) with the fit to the data (training accuracy).
Footnote 1: Selecting an MDL hypothesis is equivalent to selecting a hypothesis with the maximum Bayes’ posterior probability [13].
We use MDL as our cost function. To define it, we use the terminology of [12]. The MDL principle states that the most probable hypothesis \(h\) for the data \(E\) is the one that minimises the complexity \(L(h|E)\) of the hypothesis given the data. The MDL principle can be expressed as finding a hypothesis that minimises \(L(h)+L(E|h)\), where \(L(h)\) is the syntactic complexity of a hypothesis \(h\) and \(L(E|h)\) is the complexity of the examples when coded using \(h\). We evaluate \(L(h)\) with the function \(size:\mathcal{H}\mapsto\mathbb{N}\), which measures the size of a hypothesis \(h\) as the number of literals in it. In a probabilistic setting, \(L(E|h)\) is the log-likelihood of the data with respect to the hypothesis \(h\). However, there is debate about how to interpret \(L(E|h)\) in a logical setting. For instance, [13] use an encoding based on Turing machines (a proof complexity measure). We evaluate \(L(E|h)\) as the cost of sending the exceptions to the hypothesis, i.e. the number of false positives \(fp_{E,B}(h)\) (simply \(fp(h)\)) and false negatives \(fn_{E,B}(h)\) (simply \(fn(h)\)). We define our MDL cost function:
**Definition 5** (MDL cost function).: Given examples \(E\) and background knowledge \(B\), the MDL cost of a hypothesis \(h\in\mathcal{H}\) is \(cost_{B,E}(h)=size(h)+fn_{E,B}(h)+fp_{E,B}(h)\).
In other words, the MDL cost of a hypothesis \(h\) is the number of literals in \(h\) plus the number of false positives and false negatives of \(h\) on the training data.
In the rest of the paper, any reference to an _optimal noisy solution_ refers to an optimal noisy solution with our MDL cost function.
### Noisy Constraints
A LFF learner builds constraints from failures to restrict the hypothesis space. The existing constraints for LFF are intolerant to noise. For instance, if a hypothesis \(h\) is inconsistent, a non-noisy generalisation constraint prunes generalisations of \(h\) as they are also inconsistent. However, in a noisy setting, a generalisation of \(h\) might have a lower MDL cost. Therefore, the existing constraints can prune optimal noisy solutions from the hypothesis space.
To overcome this limitation, we introduce constraints that tolerate noise. These constraints are optimally sound for the noisy setting because they do not prune optimal noisy solutions from the hypothesis space. Due to space limitations, we only describe one specialisation and one generalisation constraint. The appendix contains a description of three other constraints. All the proofs are in the appendix.
Let \(h_{1}\) be a hypothesis with \(tp(h_{1})\) true positives and \(h_{2}\) be a specialisation of \(h_{1}\). Then \(h_{2}\) has at most \(tp(h_{1})\) true positives. Therefore, if \(size(h_{2})>tp(h_{1})\) then the size of \(h_{2}\) is greater than the number of positive examples it covers so \(h_{2}\) cannot be in an optimal noisy solution:
**Proposition 1** (Noisy specialisation constraint).: Let \(h_{1}\) be a hypothesis, \(h_{2}\) be a specialisation of \(h_{1}\), and \(size(h_{2})>tp(h_{1})\). Then \(h_{2}\) cannot be in an optimal noisy solution.
Similarly, let \(h_{1}\) be a hypothesis with \(fp(h_{1})\) false positives and \(h_{2}\) be a generalisation of \(h_{1}\). Then \(h_{2}\) has at least \(fp(h_{1})\) false positives and a cost of at least \(fp(h_{1})+size(h_{2})\). We show that the cost of \(h_{2}\) is greater than the cost of the empty hypothesis when \(size(h_{2})\geq|E^{+}|-fp(h_{1})\):
**Proposition 2** (Noisy generalisation constraint).: Let \(h_{1}\) be a hypothesis, \(h_{2}\) be a generalisation of \(h_{1}\), and \(size(h_{2})\geq|E^{+}|-fp(h_{1})\). Then \(h_{2}\) cannot be in an optimal noisy solution.
In the next section, we introduce MaxSynth which uses these optimally sound noisy constraints to learn programs.
## 4 Algorithm
We now describe our MaxSynth algorithm. To explain our approach, we first describe [13, 14], which MaxSynth builds on.
**Popper.**Popper takes as input background knowledge, positive and negative training examples, and a maximum hypothesis size. Popper starts with an ASP program \(\mathcal{P}\). Each model (answer set) of \(\mathcal{P}\) corresponds to a hypothesis (a definite program). Popper uses a generate, test, combine, and constrain loop to find a textually minimal non-noisy solution. In the generate stage, Popper uses Clingo [13], an ASP system, to search for a model of \(\mathcal{P}\) for increasing hypothesis sizes. If there is no model, Popper increments the hypothesis size and loops again. If there is a model, Popper converts it to a hypothesis \(h\). In the test stage, Popper uses Prolog to test \(h\) on the examples. If \(h\) is a non-noisy solution, Popper returns it. If \(h\) covers at least one positive example and no negative examples, Popper adds \(h\) to a set of promising programs. In the combine stage, Popper searches for a combination (a union)
of promising programs that covers all the positive examples and is minimal in size. If Popper finds a combination, it sets the combination as the best solution so far and updates the maximum hypothesis size. In the constrain stage, Popper uses \(h\) to build hypothesis constraints (represented as ASP constraints). Popper adds these constraints to \(\mathcal{P}\) to prune models and thus prune the hypothesis space. For instance, if \(h\) is inconsistent, Popper builds a generalisation constraint to prune the generalisations of \(h\) from the hypothesis space. Popper repeats this loop until it finds a textually minimal non-noisy solution or there are no more hypotheses to test.
### MaxSynth
MaxSynth (Algorithm 1) is similar to Popper except for a few key differences. Popper returns the smallest hypothesis that entails all the positive and none of the negative examples, i.e. it is intolerant to noisy data. By contrast, MaxSynth returns an MDL hypothesis, i.e. it is tolerant to noisy data. To find an MDL hypothesis, MaxSynth differs by (i) also saving inconsistent programs as promising programs, (ii) finding an MDL combination in the combine stage, and (iii) using noise-tolerant constraints to prune non-MDL programs. We describe these differences in turn.
```
1defmaxsynth(bk,pos,neg):
2cons,promising,best_solution={},{},{},{},{}size,max_md1=1,len(pos)
3whilesize\(\leq\)max_mdl:
5h=generate(cons,size)
6ifh==UNSAT:
7size+=1
8continue
9tp,fn,fp=test(pos,neg,bk,h)
10h_mdl=fn+fp+size(h)
11ifh_mdl==max_mdl:
12test_solution=h
13max_mdl=h_mdl-1
14iftp>0andnot_rec(h)andnot_pi(h):
15promising+=h
16combination=combine(promising,max_mdl)
17ifcombination!=UNSAT:
18best_solution=combination
19tp,fn,fp=test(pos,neg,bk,combination)
20max_mdl=fn+fp+size(combination)-1
21cons+=constrain(h,fn,fp)
22returnbest_solution
```
**Algorithm 1**MaxSynth
Promising ProgramsPopper only saves consistent programs as promising programs. Popper is, therefore, intolerant to false negative training examples. To handle noise, MaxSynth relaxes this requirement. If a program \(h\) covers at least one positive example, MaxSynth saves \(h\) as a promising program (line 15), even if \(h\) is inconsistent. MaxSynth does not save a program if it is recursive or has predicate invention. The reason is that a combination of recursive programs or programs with invented predicates can cover more examples than the union of the examples covered by each individual program. For instance, consider the examples \(\{f([1,3]),f([3,0]),f([3,1])\}\) and the hypotheses \(h_{1}\) and \(h_{2}\):
\[\begin{array}{l}h_{1}=\{\textit{ ff(A) }\leftarrow\textit{head(A,1) }\}\\ h_{2}=\{\textit{ ff(A) }\leftarrow\textit{head(A,0)}\\ \textit{ fail(A,B),f(B)}\}\end{array}\]
The hypothesis \(h_{1}\) covers the first example and \(h_{2}\) covers the second example but the hypothesis \(h_{1}\cup h_{2}\) covers all three examples. Therefore, in the combine stage, we cannot simply reason about the coverage of a combination of programs using the union of coverage of the individual programs in the combination. However, MaxSynth can learn MDL programs with recursion or predicate invention as they can be output by the generate stage and evaluated (lines 11-13).
CombineIn the combine stage, Popper searches for a combination of promising programs that covers all the positive examples and is minimal in size. By contrast, MaxSynth searches for a combination of promising programs with MDL cost (line 16). The initial maximum MDL cost is the number of positive examples which is the cost of the empty hypothesis. If we find a combination in the combine stage, we update the maximum MDL cost (line 20).
We formulate the search for an MDL combination of programs as a MaxSAT problem [1]. In MaxSAT, given a set of hard clauses and a set of soft clauses with an associated weight, the task is to find a truth assignment which satisfies each hard clause and minimises the sum of the weights of falsified soft clauses.
Our MaxSAT encoding is as follows. For each promising program \(h\), we use a variable \(p_{h}\) to indicate whether \(h\) is in the combination. For each example \(e\in E^{+}\cup E^{-}\), we use a variable \(c_{e}\) to indicate whether the combination covers \(e\). For each positive example \(e\in E^{+}\), we include the hard clause \(c_{e}\rightarrow\bigvee_{B\cup h\models e}p_{h}\) to ensure that, if the combination covers \(e\), then at least one of the programs in the combination covers \(e\). For each negative example \(e\in E^{-}\), we include the hard clause \(\neg c_{e}\rightarrow\bigwedge_{B\cup h\models e}\neg p_{h}\) to ensure that, if the combination does not cover \(e\), then none of the programs in the combination covers \(e\). We encode the MDL cost function as follows. For each promising program \(h\) we include the soft clause \((\neg p_{h})\) with weight \(size(h)\). For each positive example \(e\in E^{+}\), we include the soft clause \((c_{e})\) with weight \(1\). For each negative example \(e\in E^{-}\), we include the soft clause \((\neg c_{e})\) with weight \(1\). We use a MaxSAT solver on this encoding. The MaxSAT solver finds an optimal solution which corresponds to a combination of promising programs that minimises the MDL cost function.
ConstrainIn the constrain stage (line 21), MaxSynth uses our optimally sound constraints (Section 3.4) to prune the hypothesis space. For instance, given a hypothesis \(h_{1}\), MaxSynth prunes all generalisations of \(h_{1}\) with size greater than \(|E^{+}|-fp(h_{1})\) (Proposition 2). By contrast, Popper prunes all generalisations of an inconsistent hypothesis.
CorrectnessWe show that MaxSynth returns an optimal noisy solution.
**Theorem 1** (Correctness).: MaxSynth returns an optimal noisy solution if one exists.
**Proof.**_The proof is in the appendix. We first show that MaxSynth without any noisy constraints returns an optimal noisy hypothesis, and then that our noise-tolerant constraints are optimally sound (Propositions 1 and 2)._
## 5 Experiments
To test our claim that MaxSynth can learn programs from noisy data, our experiments aim to answer the question:
**Q1**: Can MaxSynth learn programs from noisy data?
To answer **Q1**, we evaluate MaxSynth on a variety of tasks with noisy data. We compare MaxSynth against Aleph [12], Popper, and NoisyPopper [2]2. We use these systems because they can learn definite recursive programs. Aleph is a set covering approach that supports noise. NoisyPopper can handle noisy data but can only learn small programs. Because of space limitations and its poor performance, the results for NoisyPopper are in the appendix.
Footnote 2: We considered other systems. Rule selection approaches [1, 1, 12] precompute every possible rule which is infeasible on our datasets. Metarule-based approaches [13] are unusable in practice [14]. Rule learning systems [15] can only use unary and binary relations but our experiments need relations with arity greater than two.
To evaluate how MaxSynth handles different amounts of noise, our experiments aim to answer the question:
**Q2**: How well does MaxSynth handle progressively more noise?
To answer **Q2**, we evaluate the performance of MaxSynth on domains where we can progressively increase the amount of noise. For an increasing noise amount \(p\), we randomly change the label of a proportion \(p\) of the training examples.
We claim that our noisy constraints (Section 3.4) can improve learning performance by pruning non-MDL programs from the hypothesis space. To evaluate this claim, our experiments aim to answer the question:
**Q3**: Can noisy constraints reduce learning times compared to unconstrained learning?
To answer **Q3**, we compare the learning time of MaxSynth with and without noisy constraints.
Our approach should improve learning performance when learning programs from noisy data. However, it is often unknown whether the data is noisy. To evaluate the overhead of handling noise, our experiments aim to answer the question:
**Q4**: What is the overhead of MaxSynth on noiseless problems?
To answer **Q4**, we compare the performance of MaxSynth and Popper on standard benchmarks which are not noisy.
DomainsWe briefly describe our five domains. The appendix includes more details.
**IGGP.** The goal of _inductive general game playing_[14] is to induce rules to explain game traces from the general game playing competition [1].
**Program synthesis.** We use a program synthesis dataset [14]. These tasks are list transformation tasks which involve learning recursive programs.
**Zendo.** Zendo is an inductive game where the goal is to find a rule by building structures of pieces. The game interests cognitive scientists [1].
**Alzheimer.** These real-world tasks [15, 16] involve learning rules describing four properties desirable for drug design against Alzheimer's disease.
**Wn18RR.** Wn18rr [17] is a real-world knowledge base with 11 relations from WordNet.
SystemsMaxSynth, Popper, and NoisyPopper use identical biases so the comparison between them is fair. MaxSynth uses the UWrMaxSat solver [14] in the combine stage. To perform a direct comparison, we modify Popper to also use the UWrMaxSat solver in its combine stage. We use the default cost function (coverage) for Aleph. We have tried to make a fair comparison with Aleph but, since it has many additional settings, it is naturally plausible that further parameter tuning could improve its performance [12]. The appendix contains more details about the systems.
Experimental SetupWe measure predictive accuracy and learning time given a maximum learning time of 20 minutes. We repeat all the experiments 10 times and calculate the mean and standard error. We use an 8-Core 3.2 GHz Apple M1 and a single CPU.
Experimental Results
Experiment 1: Comparison against SOTATable 1 shows the predictive accuracies of the systems on the datasets. It shows that MaxSynth (i) consistently achieves high accuracy on most tasks, and (ii) comprehensively outperforms existing systems in terms of predictive accuracy. A paired t-test shows MaxSynth significantly (\(p<0.01\)) outperforms Popper on 25/42 tasks, achieves similar accuracies on 13/42 tasks, and is significantly outperformed by Popper on 4/42 tasks. For instance, MaxSynth has high accuracy (at least 94%) on all _emado_ tasks while Popper struggles when there is noise. While Popper searches for a hypothesis that entails all the positive and no negative examples, MaxSynth tolerates misclassified examples.
MaxSynth outperforms Aleph on the recursive tasks because Aleph struggles to learn recursive programs. MaxSynth also outperforms Aleph on some non-recursive tasks. For instance, on _iggp-coins_, MaxSynth achieves 100% predictive accuracy on the testing examples even with 20% noise in the training examples. One reason is that Aleph does not consider the size of a hypothesis in its (default) cost function and thus often overfits. Aleph also sometimes timeouts, such as on the _wn18rr_ tasks, in which case it does not return any hypothesis.
MaxSynth does not always achieve 100% predictive accuracy despite learning an MDL hypothesis, such as on the _iggp-md_ tasks. The reason is that an MDL hypothesis is not necessarily the hypothesis with the highest predictive accuracy [10, 12].
MaxSynth and Popper are anytime systems. If the search time exceeds a timeout, MaxSynth and Popper return the best hypothesis found thus far. MaxSynth terminates on all _iggp_, _zendo_, and _alzheimer_ tasks, which means it learns an MDL solution. MaxSynth returns the best solution found within timeout for most _program synthesis_ tasks and _wn18rr2_.
To understand how accuracy varies with learning time, we set a timeout of \(t\) seconds for increasing values of \(t\). Figures 2 and 2 show the predictive accuracies of the best hypothesis found when increasing the timeout. This result shows that MaxSynth can often quickly find an optimal (MDL) hypothesis. For instance, on _alzheimer-toxic_, MaxSynth takes only 13s to find an optimal hypothesis but needs 48s to prove that this hypothesis is optimal. Likewise, on _zendo2_ (_20_), MaxSynth takes only 60s to find an optimal hypothesis but needs 151s more to prove this hypothesis is optimal.
Overall, these results suggest that the answer to **Q1** is that MaxSynth can (i) learn programs, including recursive programs, with high accuracy from noisy data, and (ii) outperform existing systems in terms of predictive accuracies.
Experiment 2: Noise ToleranceFigures 4 and 4 show the predictive accuracies of the systems on two tasks when increasing the amount of noise. These results show that the performance of MaxSynth degrades slower with increasing amounts of noise compared to Popper. MaxSynth can scale to problems with up to 30% of noise while Popper struggles from 10% of noise. For instance, on _iggp-prs_ with 30% of noise, Popper and Aleph have less than 55% accuracy, whereas MaxSynth has over 90% accuracy. Popper is not robust to false positives. It returns a hypothesis which is consistent but only covers a fraction of the positive examples. Aleph typically overfits the data. Overall, these results suggest that the answer to **Q2** is that MaxSynth can scale to moderate amounts of noise.
Experiment 3: Noisy ConstraintsTable 2 shows the learning times of MaxSynth with and without noisy constraints (Section 3.4). It shows that our constraints can drastically reduce learning times. A pai
significance of the difference for all tasks (\(p<0.01\)). The appendix shows the predictive accuracies, which are equal or higher with noisy constraints. This result shows that our noisy constraints are highly effective at soundly pruning the hypothesis space. For instance, on _iggp-md (10)_, MaxSynth considers 21,025 programs without constraints. By contrast, with constraints, MaxSynth considers only 136 programs, a 99% reduction. Similarly, MaxSynth considers 176,453 programs for _zendo2 (20)_ without constraints and only 5,503 programs with constraints, a 97% reduction. The overhead of analysing hypotheses and imposing constraints is small. For instance, on _iggp-md (10)_, MaxSynth spends 0.7s building constraints but this pruning reduces the total learning time from 109s to 2s. Overall, these results suggest that the answer to **Q3** is that noisy constraints can drastically reduce learning times.
Experiment 4: OverheadTable 1 shows the predictive accuracies of MaxSynth and Popper on noiseless problems. These results show that MaxSynth often can find a non-noisy solution. However, MaxSynth may return a simpler hypothesis than Popper. For instance, on _iggp-md (0)_, MaxSynth returns a hypothesis of size 5. This hypothesis misclassifies 2 training positive examples and therefore has a cost of 7. Its predictive accuracy is 75%. By contrast, Popper finds a hypothesis of size 11 with maximal predictive accuracy (100%) on the test data. As Vitanyi and Li (2000) discuss, MDL interprets perfect data as data obtained from a simpler hypothesis subject to measuring errors.
Table 3 shows the learning times. It shows that MaxSynth often has similar learning times to Popper. For instance, both systems require 7s on _iggp-buttons (0)_ and around 50s on _iggp-rps (0)_. MaxSynth can be faster than Popper. A paired t-test shows MaxSynth significantly outperforms Popper on 7/12 tasks (\(p<0.01\)). For instance, on _zendo2 (0)_, MaxSynth takes 48s whilst Popper takes 102s. The pruning by MaxSynth can be effective. For instance, given any hypothesis \(h\), MaxSynth prunes specialisations of size greater than \(fp(h)\), whereas Popper only prunes specialisations of consistent hypotheses. Also, MaxSynth sometimes returns a smaller hypothesis than Popper and thus searches up to a smaller depth, such as for _iggp-md (0)_.
Overall, these results suggest that the answer to **Q4** is that unnecessarily tolerating noise is not prohibitively expensive and often leads to similar performance.
## 6 Conclusions and Limitations
We have introduced an ILP approach that learns MDL programs from noisy examples, including recursive programs. Our approach first learns small programs that generalise a subset of the positive examples and then combines them to build an MDL program. We implemented our idea in MaxSynth, which uses a MaxSAT solver to find an MDL combination of programs. Our empirical results on multiple domains show that MaxSynth can (i) substantially improve predictive accuracies compared to other systems, and (ii) scale to moderate amounts of noise (30%). Our results also show that our noisy constraints can reduce learning times by 99%. Overall, this paper shows that MaxSynth can learn accurate hypotheses for noisy problems that other systems cannot.
\begin{table}
\begin{tabular}{l|c c c}
**Task** & **Without** & **With** & **Difference** \\ \hline _iggp-md (0)_ & 14 \(\pm\) 0 & 1 \(\pm\) 0 & **-92\%** \\ _iggp-md (10)_ & 109 \(\pm\) 2 & 2 \(\pm\) 0 & **-98\%** \\ _iggp-md (20)_ & 103 \(\pm\) 1 & 2 \(\pm\) 0 & **-98\%** \\ _iggp-buttons (0)_ & 61 \(\pm\) 0 & 7 \(\pm\) 0 & **-88\%** \\ _iggp-buttons (10)_ & 61 \(\pm\) 1 & 9 \(\pm\) 0 & **-85\%** \\ _iggp-buttons (20)_ & 57 \(\pm\) 0 & 10 \(\pm\) 0 & **-82\%** \\ _iggp-coins (0)_ & 615 \(\pm\) 5 & 138 \(\pm\) 1 & **-77\%** \\ _iggp-coins (10)_ & 631 \(\pm\) 14 & 141 \(\pm\) 2 & **-77\%** \\ _iggp-coins (20)_ & 596 \(\pm\) 2 & 144 \(\pm\) 2 & **-75\%** \\ _iggp-rps (0)_ & 195 \(\pm\) 1 & 50 \(\pm\) 1 & **-74\%** \\ _iggp-rps (10)_ & 197 \(\pm\) 1 & 60 \(\pm\) 2 & **-69\%** \\ _iggp-rps (20)_ & 193 \(\pm\) 1 & 66 \(\pm\) 1 & **-65\%** \\ \hline _zendo1 (0)_ & 33 \(\pm\) 9 & 13 \(\pm\) 3 & **-60\%** \\ _zendo1 (10)_ & 648 \(\pm\) 3 & 77 \(\pm\) 4 & **-88\%** \\ _zendo1 (20)_ & 688 \(\pm\) 7 & 100 \(\pm\) 13 & **-85\%** \\ _zendo2 (0)_ & 603 \(\pm\) 4 & 48 \(\pm\) 1 & **-92\%** \\ _zendo2 (10)_ & 611 \(\pm\) 1 & 48 \(\pm\) 3 & **-92\%** \\ _zendo2 (20)_ & 766 \(\pm\) 70 & 118 \(\pm\) 36 & **-84\%** \\ _zendo3 (0)_ & 613 \(\pm\) 2 & 49 \(\pm\) 3 & **-92\%** \\ _zendo3 (10)_ & 626 \(\pm\) 2 & 62 \(\pm\) 2 & **-90\%** \\ _zendo3 (20)_ & 834 \(\pm\) 66 & 190 \(\pm\) 112 & **-77\%** \\ _zendo4 (0)_ & 594 \(\pm\) 4 & 43 \(\pm\) 3 & **-92\%** \\ _zendo4 (10)_ & 616 \(\pm\) 3 & 58 \(\pm\) 2 & **-90\%** \\ _zendo4 (20)_ & 767 \(\pm\) 36 & 122 \(\pm\) 32 & **-84\%** \\ \hline _dropk (0)_ & 541 \(\pm\) 5 & 7 \(\pm\) 1 & **-98\%** \\ _evens (0)_ & 770 \(\pm\) 2 & 7 \(\pm\) 0 & **-99\%** \\ _reverse (0)_ & _timeout_ & 45 \(\pm\) 7 & **-96\%** \\ _sorted (0)_ & 1182 \(\pm\) 8 & 31 \(\pm\) 3 & **-97\%** \\ \hline _dzheimer\_accfyl_ & _timeout_ & 133 \(\pm\) 5 & **-88\%** \\ _dzheimer\_amine_ &timeout_ & 73 \(\pm\) 3 & **-93\%** \\ _dzheimer\_mem_ &timeout_ & 79 \(\pm\) 3 & **-93\%** \\ _dzheimer\_toxic_ &timeout_ & 61 \(\pm\) 6 & **-94\%** \\ \hline _wn18rr1_ & _timeout_ & 534 \(\pm\) 14 & **-55\%** \\ \end{tabular}
\end{table}
Table 2: Learning time for MaxSynth with and without noisy constraints. We show tasks where approaches differ. The full table is in the appendix.
\begin{table}
\begin{tabular}{l|c c}
**Task** & **MaxSynth** & **Popper** \\ \hline _iggp-md (0)_ & \(\mathbf{1\pm 0}\) & 11 \(\pm\) 0 \\ _iggp-buttons (0)_ & \(\mathbf{7\pm 0}\) & \(\mathbf{7\pm 0}\) \\ _iggp-coins (0)_ & \(\mathbf{138\pm 1}\) & 147 \(\pm\) 2 \\ _iggp-rps (0)_ & \(\mathbf{50\pm 1}\) & 53 \(\pm\) 1 \\ \hline _zendo1 (0)_ & \(\mathbf{13\pm 3}\) & 23 \(\pm\) 4 \\ _zendo2 (0)_ & \(\mathbf{48\pm 1}\) & 102 \(\pm\) 2 \\ _zendo3 (0)_ & \(\mathbf{49\pm 3}\) & 85 \(\pm\) 3 \\ _zendo4 (0)_ & \(\mathbf{43\pm 3}\) & 79 \(\pm\) 4 \\ \hline _dropk (0)_ & \(\mathbf{7\pm 1}\) & \(\mathbf{4\pm 1}\) \\ _evens (0)_ & \(\mathbf{7\pm 0}\) & \(\mathbf{8\pm 0}\) \\ _reverse (0)_ & \(\mathbf{45\pm 7}\) & 65 \(\pm\) 9 \\ _sorted (0)_ & 31 \(\pm\) 3 & \(\mathbf{19\pm 2}\) \\ \end{tabular}
\end{table}
Table 3: Learning times on non-noisy tasks.
Limitations.We use MDL as our criterion for optimality. Our experiments show that an MDL hypothesis does not necessarily have the lowest generalisation error, as discussed by Domingos (1999). To overcome this limitation, future work should investigate alternative cost functions (Lavrac, Flach, and Zupan 1999). For instance, Hernandez-Orallo and Garcia-Varea (2000) discuss creative alternatives to MDL.
|
2310.13181 | Locational Marginal Pricing of Energy in Pipeline Transport of Natural
Gas and Hydrogen with Carbon Offset Incentives | We propose an optimization formulation for locational pricing of energy
transported through a pipeline network that carries mixtures of natural gas and
hydrogen from distributed sources to consumers. The objective includes the
economic value provided by the pipeline to consumers of energy and suppliers of
natural gas and green hydrogen, as well as incentives to lower carbon emissions
by consuming the latter instead of the former. The optimization is subject to
the physics of gas flow and mixing in the pipeline network as well as
engineering limits. In addition to formulating this mathematical program, we
synthesize the Lagrangian and derive analytical expressions for the dual
variables. We propose that the dual solution can be used to derive locational
marginal prices of natural gas, hydrogen, and energy, as well as the
decarbonization premium paid by consumers that receive hydrogen. We derive
several properties of solutions obtained using the proposed market mechanism,
and demonstrate them using case studies for standard 8-node and 40-node
pipeline test networks. Finally, we show that optimization-based analysis of
the type proposed here is critical for making sound decisions about economic
policy and infrastructure expansion for blending green hydrogen into existing
natural gas pipelines. | Mo Sodwatana, Saif R. Kazi, Kaarthik Sundar, Adam Brandt, Anatoly Zlotnik | 2023-10-19T22:29:45Z | http://arxiv.org/abs/2310.13181v2 | Locational Marginal Pricing of Energy in Pipeline Transport of Natural Gas and Hydrogen with Carbon Offset Incentives
###### Abstract
We propose an optimization formulation for locational pricing of energy transported through a pipeline network that carries mixtures of natural gas and hydrogen from distributed sources to consumers. The objective includes the economic value provided by the pipeline to consumers of energy and suppliers of natural gas and green hydrogen, as well as incentives to lower carbon emissions by consuming the latter instead of the former. The optimization is subject to the physics of gas flow and mixing in the pipeline network as well as engineering limits. We synthesize the Lagrangian and derive analytical expressions for the dual variables, which can be used to derive locational marginal prices of natural gas, hydrogen, and energy, as well as congestion prices and the decarbonization premium paid by consumers that receive hydrogen. We derive several properties of solutions obtained using the proposed market mechanism, and demonstrate them using case studies for standard 8-junction and 40-junction pipeline test networks. Finally, we show that optimization-based analysis of the type proposed here is critical for making sound decisions about economic policy and infrastructure expansion for blending green hydrogen into existing natural gas pipelines.
Energy market economics, hydrogen and natural gas blends, carbon emissions, mitigation incentives
## I Introduction
Climate change, caused by uncontrolled greenhouse gas (GHG) emissions during past century [1], is one of the most pressing global challenges today [2]. The use of fossil fuels for energy production and industrial processes is the major contributor to GHG emissions, and replacing fossil fuels with cleaner alternatives is critical to meeting the emissions reduction targets set forth [3]. The transition away from an energy system that is fossil fuel-intensive to one that is low-carbon requires a mix of renewable energy generation distributed throughout the grid, large-scale energy storage technologies, and carbon-free chemical energy carriers.
Green hydrogen refers to hydrogen gas produced via electrolysis using electricity from renewable sources. Green hydrogen is considered a promising alternative to fossil fuels, because it can serve as an alternative energy carrier and feedstock in hard-to-abate industries such as the petrochemical sectors, the cement and steel-making industries, and in heavy-duty transport [4]. Hydrogen facilities can also be integrated into power systems and offer an alternative form of long-duration or seasonal storage [5, 6]. In addition to its use in fuel cells, hydrogen can be burned to produce heat or drive turbines, so it can be injected into existing gas pipelines so that the blend of hydrogen and natural gas can be consumed in end-use appliances and furnaces [7]. Challenges with blending of hydrogen include leakage, material degradation, and embrittlement of steel, as well as changes to distribution system pressures and a lowered volumetric heating value [8, 9]. In the case that engineering issues are addressed, conceptual challenges remain with respect to characterizing the economic impacts of pipeline hydrogen blending [10].
Given the differences in the physical and chemical characteristics of hydrogen and natural gas, blending alters the energy throughput capacity of pipeline systems. This leads to significant changes in markets as well as practical operation of pipeline systems. The electric power sector employs network optimization to compute location-specific prices for electricity based on the physics of energy flow [11], and a similar market mechanism for computing the value of natural gas transport through pipeline networks was proposed [12]. Whereas the optimization-based economic analysis for natural gas considered prices and quantities of gas with homogeneous heating value, a market analysis for a pipeline that carries blends of natural gas and hydrogen with multiple users would need to produce locational prices of natural gas and hydrogen for various suppliers, as well as prices of energy for downstream consumers that receive blends of various concentrations.
In this study, we use optimization subject to physical constraints that include flow equations, pressure limits, and compressor boost limits to evaluate the value of hydrogen injection and the impact on the market clearing price. We extend recent results on the optimization formulations for optimal pipeline flow allocation and capacity evaluation for gas mixtures [10, 13], and build upon the locational marginal pricing (LMP) concepts for energy networks [12] by examining heterogeneous gas flows in pipelines with incentives for
offsetetting carbon emissions by consuming hydrogen in end use instead of natural gas. By solving for the dual variables, or the Lagrange multipliers, we derive the value of natural gas and hydrogen at each location in the network along with the decarbonization premium paid by end-users who consume gas that includes hydrogen. These can be used to determine equitable subsidies or credits for hydrogen integration.
The paper is organized as follows. In Section II, gas pipeline network modeling that includes heterogeneous physical flow is introduced. Section III presents a market design concept and an optimization formulation that includes terms for carbon offset incentives. The key conceptual contribution of our study follows in Section IV, in which we examine the first order optimality conditions and economic properties of market equilibria. We demonstrate the application and scalability of the optimization model using 8-node and 40-node test networks in Section V, and show conditions for counterintuitive market outcomes. We review the results of our case studies in Section VI. Throughout the manuscript, we append SI units for variables in brackets after they are introduced.
## II Heterogeneous Gas Flow Modeling
We model a gas pipeline network as a connected and directed graph \((\mathcal{E},\mathcal{V})\), with physical junctions represented by nodes \(j\in\mathcal{V}\), and where each edge \((i,j)\in\mathcal{E}\) represents a pipe with flow from junction \(i\) to junction \(j\). The subset \(\mathcal{C}\subset\mathcal{E}\) of edges contains compressors that boost gas pressure between pairs of nodes. We also introduce the set \(\mathcal{G}\) of gNodes, following previously developed notation [12]. gNodes are used to indicate when there are multiple suppliers or consumers at a physical node. Each gNode \(m\in\mathcal{G}\) corresponds to a user of the pipeline network and can either be a hydrogen supplier in the set \(\mathcal{G}_{s}^{H_{2}}\), a natural gas supplier in the set \(\mathcal{G}_{s}^{NG}\), or a consumer of gas in the set \(\mathcal{G}_{d}\). Multiple gNodes can be co-located at the same physical node \(j(m)\in\mathcal{V}\), but they must all either be suppliers or consumers. Finally, we specify a set of slack physical nodes \(\mathcal{V}_{s}\) that have nominal pressure value. Notations for sets in the pipeline network and indices are
\[\begin{split} j\in\mathcal{V}&\text{ set of all physical nodes,}\\ (i,j)\in\mathcal{E}&\text{ set of edges representing pipes,}\\ (i,j)\in\mathcal{C}&\text{ set of edges representing compressors,}\\ m\in\mathcal{G}_{s}^{H_{2}}&\text{ set of gNodes that inject hydrogen,}\\ m\in\mathcal{G}_{s}^{NG}&\text{ set of gNodes that inject natural gas,}\\ m\in\mathcal{G}_{d}&\text{ set of gNodes that withdraw gas,}\\ j\in\mathcal{V}_{s}&\text{ set of slack physical nodes, subset of }\mathcal{V}.\end{split} \tag{3b}\]
Operational constraints are imposed on the nodal pressure \(P_{j}\), compressor boost ratio \(\alpha_{ij}\), the total mass flow \(\phi_{ij}\), and the mass fraction of hydrogen in junctions and along pipes, denoted by \(\gamma_{j}\) and \(\gamma_{ij}\) respectively. Supply and demand limits are imposed at the respective injection and withdrawal gNodes.
### _Pipe Flow Equations_
We use the Weymouth equation to model the relation between pressures at the endpoints of a pipe [14, 15], of form
\[P_{i}^{2}-P_{j}^{2}=\frac{f_{ij}L_{ij}}{D_{ij}A_{ij}^{2}}V_{ij}\phi_{ij}\left| \phi_{ij}\right|,\quad\forall(i,j)\in\mathcal{E}, \tag{1}\]
where \(f_{ij}\), \(L_{ij}\), \(D_{ij}\) and \(A_{ij}\) are the friction factor, length, diameter, and cross-sectional area of pipe \((i,j)\), respectively. We suppose that the hydrogen concentration along the pipe is uniform and that flow is steady-state. We use \(V_{ij}\) [(m/s)\({}^{2}\)] to denote the square of the wave speed in the blended gas, approximated as a linear combination of the squared wave speeds \(a_{H_{2}}^{2}\) and \(a_{NG}^{2}\) [(m/s)\({}^{2}\)] in hydrogen and natural gas as
\[V_{ij}=\gamma_{ij}a_{H_{2}}^{2}+(1-\gamma_{ij})a_{NG}^{2},\quad\forall(i,j) \in\mathcal{E}. \tag{2}\]
In this study, we set \(a_{H_{2}}\) = 1090m/s and \(a_{NG}\) = 370m/s. Section V-A details the calculation for the wave speeds in hydrogen and natural gas.
### _Nodal Compatibility Equations_
The key nodal equations for gas transport through a junction represent mass flow balance, which is linear for a homogeneous gas. In the setting of blending gases with concentration tracking, we require mass balance constraints at each physical node \(j\) for every gas constituent, such that net incoming and outgoing flows of natural gas and hydrogen through each node are balanced. These mass balance equations depend on concentration and are thus nonlinear, of form
\[(1-\gamma_{j})\sum_{k\in\partial_{j}^{-}}\phi_{jk}-\sum_{i\in \partial_{j}^{+}}(1-\gamma_{ij})\phi_{ij}\] \[=\sum_{m\in\partial_{j}^{g}}s_{m}^{NG}-(1-\gamma_{j})\sum_{m\in \partial_{j}^{g}}d_{m},\forall j\in\mathcal{V}, \tag{3a}\] \[\gamma_{j}\sum_{k\in\partial_{j}^{+}}\phi_{jk}-\sum_{i\in \partial_{j}^{+}}\gamma_{ij}\phi_{ij}\] \[=\sum_{m\in\partial_{j}^{g}}s_{m}^{H_{2}}-\gamma_{j}\sum_{m\in \partial_{j}^{g}}d_{m},\forall j\in\mathcal{V}, \tag{3b}\]
where \(s_{m}^{NG}\) and \(s_{m}^{H_{2}}\) [kg/s] are the mass flow rate of natural gas and hydrogen at the injection gNode \(m\) of the physical node \(j\) and \(d_{m}\) [kg/s] is the mass flow rate of the delivered blended gas. Here, \(\partial_{j}^{+}\) and \(\partial_{j}^{-}\) are sets of nodes connected to \(j\) by incoming and outgoing edges, respectively. Adding equations (3a) and (3b) yields the total mass balance for the blended gas. To ensure continuity of gas concentration along the nodes and edges, we enforce the compatibility constraint
\[\gamma_{i}=\gamma_{ij},\quad\forall(i,j)\in\mathcal{E}, \tag{4}\]
which specifies that the hydrogen concentration of flow through edges \((i,j)\) outgoing from a node \(i\) must equal that at node. We do not impose a continuity constraint from \((i,j)\) to node \(j\), because the concentration at node \(j\) would depend on the concentrations of flows through all incoming edges. We suppose that all input gas flows are mixed volumetrically at nodes and flow as a homogeneous mixture along any output streams. Finally, the pressure \(\sigma_{j}\) at the slack node is
\[P_{j}=\sigma_{j},\quad\forall j\in\mathcal{V}_{s}. \tag{5}\]
The slack node is included following the convention in pipeline modeling for simulation, which is done to ensure a well-posed boundary value problem [16].
### _Compressor Modeling_
Compressor stations help maintain the flow of gas through transmission pipelines and compensate for pressure decrease in the direction of flow caused by friction. We apply a simplified model of such facilities, which may be complex sites with many machines, for large scale systems modeling. We suppose that the action of a compressor station \((i,j)\in\mathcal{C}\) can be described by the boost ratio \(\alpha_{ij}\) between the suction and discharge pressures. The pressure at the end nodes \(i\) and \(j\) can then be related as
\[P_{j}^{2}=\alpha_{ij}^{2}P_{i}^{2},\quad\forall(i,j)\in\mathcal{C}. \tag{6}\]
Higher hydrogen concentration in the gas blend requires more compression work to deliver the same amount of energy. To quantify the cost of compressor work, we first quantify the amount of work required to compress gas following traditional practice [17] using the adiabatic relation
\[W_{c}\!=\!\left(\!\frac{286.76\cdot(\kappa_{ij}-1)\cdot T}{G_{ij}\kappa_{ij}} \!\right)\!\left(\alpha_{ij}^{m}\!-\!1\right)\left|\phi_{ij}\right|,\,\forall (i,j)\!\in\!\mathcal{C} \tag{7}\]
where \(T\) [K] is the temperature of gas entering the compressor, and is nominally 288.7 K in our study. Here, \(m=(\kappa_{ij}-1)/\kappa_{ij}\), where \(\kappa_{ij}\) denotes the specific heat capacity ratio for the mixed gas. \(G_{ij}\) denotes the specific gravity of the blend. The values of these parameters for the blend are approximated using linear combinations of the parameters of each gas, as
\[\kappa_{ij}=\kappa_{H_{2}}\gamma_{ij}+\kappa_{NG}(1-\gamma_{ij}), \quad\forall(i,j)\in\mathcal{C}, \tag{8a}\] \[G_{ij}=G_{H_{2}}\gamma_{ij}+G_{NG}(1-\gamma_{ij}), \quad\forall(i,j)\in\mathcal{C}. \tag{8b}\]
We further simplify the expression (7) by assuming \(\gamma_{ij}=0.05\) in equations (8) and setting \(\kappa\) and \(G\) as constants, with \(\kappa_{av}=1.308\) and \(G_{av}=0.574\). With the resulting expression for compressor work, we can formulate the economic cost of applied compressor work. We introduce \(\eta\) [$/kw-s] to be the conversion factor. In our computational studies, we use \(\eta\) = $0.13/3600kw-s, equivalent to an electricity price of 13\(\epsilon\)/kWh. This results in
\[\eta W_{c}=K\left(\alpha_{ij}^{m_{av}}-1\right)\left|\phi_{ij}\right|,\quad \forall(i,j)\in\mathcal{C}. \tag{9}\]
Following the above simplification process, we end up with \(K\) = 22.18 and \(m_{av}\) = 0.325.
### _Pressure, Compressor, and Concentration Limits_
In addition to the equality constraints in equations (1)-(9), inequality constraints are imposed on the pressure, compressor boost ratio, and hydrogen concentration to reflect the engineering, operating, and contractual limitations that the gas transmission pipeline is subject to. We suppose that the minimum operational pressure limit and the minimum and maximum hydrogen concentration are specified at each node:
\[P_{j}^{min}\leq P_{j} \quad\forall j\in\mathcal{V}, \tag{10a}\] \[\gamma_{j}^{min}\leq\gamma_{j}\leq\gamma_{j}^{max} \quad\forall j\in\mathcal{V}. \tag{10b}\]
We suppose that compressors can only increase pressure, because regulation to reduce pressure is typically not done along midstream pipelines, but at citygates to local distribution systems. Therefore, we suppose that \(\alpha_{ij}\) has a lower bound of 1. In addition, the compressor boost ratio is bounded by the maximum allowable pressure \(P_{j}\) at the discharge node and the maximum boost ratio \(\alpha_{ij}^{max}\):
\[\alpha_{ij}P_{i}\leq P_{j}^{max} \quad\forall(i,j)\in\mathcal{C}, \tag{11a}\] \[1\leq\alpha_{ij}\leq\alpha_{ij}^{max} \quad\forall(i,j)\in\mathcal{C}. \tag{11b}\]
The above constraints define the physical state of the pipeline system. The next section includes additional constraints and an objective function that will be used to define an optimization problem to clear a double auction market.
## III Optimization Formulation for Market Design with Incentives
We design an auction market mechanism for natural gas pipeline capacity following inspiration from an early study published by the U.S. Federal Energy Regulatory Commission [18], and include additional details to account for hydrogen blending in the capacity market. The mechanism is designed such that the objective can represent the economic value generated by the pipeline system for its users, who provide offers to sell and bids to buy energy in the form of the delivered gas mixture. The objective includes the cost to transport gas using gas compressors as discussed in Section II-C, incentives for reducing carbon emissions by using hydrogen to displace natural gas combustion, and the revenue collected because of differences between buyer and seller prices for the gas mixture arising due to network congestion.
### _Supply and Demand Limits_
To formulate the proposed market structure, we suppose that each seller of hydrogen gives a price and quantity offer (\(c_{m}^{H_{2}}\) [$/kg], \(s_{m}^{max,H_{2}}\) [kg/s]) at their injection gNode. Similarly, each natural gas seller provides a price and quantity offer (\(c_{m}^{NG}\) [$/kg], \(s_{m}^{max,NG}\) [kg/s]). The resulting supplier-side constraints are
\[0\leq s_{m}^{NG}\leq s_{m}^{max,NG},\quad\forall m\in\mathcal{G} _{s}^{NG}, \tag{12a}\] \[0\leq s_{m}^{H_{2}}\leq s_{m}^{max,H_{2}},\quad\forall m\in\mathcal{G} _{s}^{H_{2}}, \tag{12b}\]
where the quantity offer from the supplier is used as the upper bound value on the supply. The optimized scheduled injection flows are \(s_{m}^{H_{2}}\) and \(s_{m}^{NG}\). We suppose that each flexible customer places a bid consisting of the price and quantity (\(c_{m}^{d}\) [$/M], \(g_{m}^{max}\) [MJ/s]) for energy. There may also be customers with fixed demand (\(\bar{g}_{m}\) [MJ/s]) pre-determined outside this market, whose energy consumption must be served by the pipeline. We obtain demand-side constraints of the form
\[0\!\leq\!d_{m}\left(R_{H_{2}}\gamma_{j(m)}+R_{NG}(1-\gamma_{j(m)})\right)\! \leq\!g_{m}^{max},\ \forall m\in\mathcal{G}_{d,o}, \tag{13a}\] \[d_{m}\left(R_{H_{2}}\gamma_{j(m)}+R_{NG}(1-\gamma_{j(m)})\right)=\bar{g}_{m}, \ \forall m\in\mathcal{G}_{d,f} \tag{13b}\]
where \(d_{m}\) [kg/s] is the optimized withdrawal mass flow rate of the blended gas. The demands of flexible customers at gNodes in the set \(\mathcal{G}_{d,o}\) impose constraint (13a) while consumptions of customers at gNodes \(\mathcal{G}_{d,f}\) with fixed demand are subject to constraint (13b). Customers do not choose how much
hydrogen blend they receive, but rather the blend is governed by the physics of mixing, the location relative to the injection point, and the economics of hydrogen blending. Because the price and quantity bids are in units of equivalent power provided, we use the expression
\[R(\gamma_{j(m)})=R_{H_{2}}\gamma_{j(m)}+R_{NG}(1-\gamma_{j(m)}), \tag{14}\]
which gives the calorific value of the gas based on the hydrogen mass fraction, where \(R_{H_{2}}\) = 141.8 MJ/kg and \(R_{NG}\) = 44.2 MJ/kg are the calorific values for hydrogen and natural gas. The quantity \(R(\gamma_{j(m)})\) gives the composition dependent conversion factor between mass flow and energy flow. We will henceforth be using the shorthand notation \(R(\gamma_{j(m)})\).
### _Carbon Offset Incentives_
A straightforward method for creating incentives to lower carbon dioxide emissions in optimization-based markets is to add a term to the objective function that quantifies the value of avoided emissions. We suppose that \(c_{m}^{CO_{2}}\) [$/kg] is an incentive paid to the market administrator for all CO\({}_{2}\) emissions that are avoided when consumers use hydrogen instead of natural gas to produce the same unit of energy. These avoided carbon emissions are denoted by \(E_{m}\) [kg/s], and are approximated as
\[E_{m}=d_{m}\gamma_{j(m)}\cdot\frac{R_{H_{2}}}{R_{NG}}\cdot\zeta\quad\forall m \in\mathcal{G}_{d}, \tag{15}\]
where \(\zeta\) = 44/16 is the approximate ratio of the molecular weight of carbon dioxide to methane. The total credits received by the market administrator can be passed through to consumers to make up for any higher prices paid for energy because of hydrogen blending, which we refer to as the locational decarbonization premium. We will derive and examine the latter concept below.
The objective function for the proposed optimization-based market mechanism is to maximize the economic value produced by operating the pipeline system, which consists of revenues from selling energy to consumers and receiving carbon emissions mitigation incentives minus the cost of buying hydrogen and natural gas from suppliers and the cost of compressor operation. The objective function is therefore
\[\begin{split} J_{EV}=\sum_{m\in G}\left(c_{m}^{d}d_{m}R(\gamma_{ j(m)})-c_{m}^{H_{2}}s_{m}^{H_{2}}-c_{m}^{NGs}s_{m}^{NG}\right.\\ \left.+c_{m}^{CO_{2}}E_{m}\right)-\eta\sum_{c\in C}W_{c}.\end{split} \tag{16}\]
Combining equations (1)-(16), the optimization formulation is
\[\begin{split}\max&\quad J_{EV}\triangleq\text{ max economic value objective (\ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq
ing constraint and dimensional units. The complementary slackness conditions for the inequality constraints are listed below in equations (19), together with the indexing in the network component sets.
\[\beta_{j}^{e}\left(P_{j}^{\min}-P_{j}\right)=0 \forall j\in\mathcal{V} \tag{19a}\] \[\theta_{ij}^{u}\left(\alpha_{ij}P_{i}-P_{ij}^{\max}\right)=0 \forall(i,j)\in\mathcal{E}\] (19b) \[\theta_{ij}^{c,j}\left(1-\alpha_{ij}\right)=0 \forall(i,j)\in\mathcal{E}\] (19c) \[\theta_{ij}^{c,u}\left(\alpha_{ij}-\alpha_{ij}^{\max}\right)=0 \forall(i,j)\in\mathcal{E}\] (19d) \[\omega_{j}^{l}\left(\gamma_{j}^{\min}-\gamma_{j}\right)=0 \forall j\in\mathcal{V}\] (19e) \[\omega_{j}^{u}\left(\gamma_{j}-\gamma_{j}^{\max}\right)=0 \forall j\in\mathcal{V}\] (19f) \[\chi_{m^{2},l}^{m^{2},l}\left(-s_{m}^{H_{2}}\right)=0 \forall m\in\mathcal{G}_{d}^{s_{2}}\] (19g) \[\chi_{m^{2},u}^{m^{2},u}\left(s_{m}^{H_{2}}-s_{m}^{\max,H_{2}} \right)=0 \forall m\in\mathcal{G}_{s}^{H_{2}}\] (19h) \[\chi_{m}^{NG,l}\left(-s_{m}^{NG}\right)=0 \forall m\in\mathcal{G}_{s}^{NG}\] (19i) \[\chi_{m}^{NG,u}\left(s_{m}^{NG}-s_{m}^{\max,NG}\right)=0 \forall m\in\mathcal{G}_{s}^{NG}\] (19j) \[\chi_{m}^{l}\left(-d_{m}R(\gamma_{j(m)})\right)=0 \forall m\in\mathcal{G}_{d,o}\] (19k) \[\chi_{m}^{u}\left(d_{m}R(\gamma_{j(m)})-s_{m}^{\max}\right)=0 \forall m\in\mathcal{G}_{d,o} \tag{19l}\]
The multiplier sign condition requires that the multipliers on the inequality constraints are positive, i.e.,
\[\beta_{j}^{e},\,\theta_{ij}^{u},\,\theta_{ij}^{c,l},\,\,\phi_{ij} ^{c,u},\,\omega_{j}^{l},\,\omega_{j}^{u}\geq 0, \tag{20a}\] \[\chi_{m}^{H_{2},l},\,\chi_{m}^{H_{2},u},\,\chi_{m}^{NG,l},\,\chi_{m }^{NG,u},\,\chi_{m}^{l},\chi_{m}^{u}\geq 0. \tag{20b}\]
We can then take the first derivative of \(\mathcal{L}\) in equation (18) with respect to each primal variable, to obtain a subset of the Karush-Kuhn-Tucker (KKT) first derivative conditions for optimality. The conditions are listed in equations (22).
### _Economic Interpretation_
The concepts of spot pricing and congestion rents are well developed in electricity markets [19], and similar concepts have been developed for gas pipelines [12]. Observe that
\[s_{m}^{NG}: 0=c_{m}^{NGs}-\lambda_{j(m)}^{NG}-\chi_{m}^{NGl}+\chi_{m}^{NGu}\] \[s_{m}^{H_{2}}: 0=c_{m}^{H_{2}}-\lambda_{j(m)}^{H_{2}}-\chi_{m}^{H_{2}l}+\chi_{m }^{H_{2}u}\] \[d_{m}: 0=-c_{m}^{d}R(\gamma_{j(m)})-c_{m}^{CO_{2}}\gamma_{j(m)}\frac{R _{H_{2}}}{R_{NG}}\zeta+\lambda_{j(m)}^{NG}(1-\gamma_{j})+\lambda_{j(m)}^{H_{2} }\gamma_{j}\] \[-\chi_{m}^{l}R(\gamma_{j(m)})+\chi_{m}^{u}R(\gamma_{j(m)})+\chi_{ m}^{f}R(\gamma_{j(m)})\] \[\phi_{ij}: 0=(\lambda_{i}^{NG}(1-\gamma_{i})-\lambda_{j}^{NG}(1-\gamma_{ij }))+(\lambda_{i}^{H_{2}}\gamma_{i}-\lambda_{j}^{H_{2}}\gamma_{ij})\] \[-2\mu_{ij}\frac{f_{ij}L_{ij}}{D_{ij}A_{ij}^{2}}(\gamma_{ij}a_{H_{ 2}}^{2}+(1-\gamma_{ij})a_{NG}^{2})\left|\phi_{ij}\right|\] \[\phi_{ij}: 0=(\lambda_{i}^{NG}(1-\gamma_{i})-\lambda_{j}^{NG}(1-\gamma_{ij }))+(\lambda_{i}^{H_{2}}\gamma_{i}-\lambda_{j}^{H_{2}}\gamma_{ij})+K\left( \alpha_{ij}^{m_{\text{ave}}}-1\right) \forall(i,j)\in\mathcal{C} \tag{22e}\] \[\alpha_{ij}: 0=K|\phi_{ij}|m_{av}\alpha_{ij}^{m_{\text{ave}}-1}-2\theta_{ij}^{ e}\alpha_{ij}P_{i}^{2}+\theta_{ij}^{u}P_{i}-\theta_{ij}^{c,l}+\theta_{ij}^{e,u}\] \[\gamma_{ij}: 0=(\lambda_{j}^{NG}-\lambda_{j}^{H_{2}})\phi_{ij}-\mu_{ij}\frac {f_{ij}L_{ij}}{D_{ij}A_{ij}^{2}}\phi_{ij}\left|\phi_{ij}\right|(a_{H_{2}}^{2}-a _{NG}^{2})-\omega_{j}^{e}\] \[\gamma_{j}: 0=\sum_{m\in\mathcal{G}_{j}^{l}}\bigg{(}-c_{m}^{d}d_{m}(R_{H_{2 }}-R_{NG})+c_{m}^{CO_{2}}d_{m}\frac{R_{H_{2}}}{R_{NG}}\zeta-\lambda_{j}^{NG}d_{ m}\bigg{)}-\lambda_{j}^{NG}\phi_{jk}+\lambda_{j}^{H_{2}}\phi_{jk}\] \[+\sum_{m\in\mathcal{G}_{j}^{l}}\left(\lambda_{j}^{H_{2}}d_{m}+( \chi_{m}^{u}+\chi_{m}^{f})d_{m}(R_{H_{2}}-R_{NG})-\chi_{m}^{l}d_{m}(R_{H_{2}}-R_{ NG})\right)-\omega_{j}^{l}+\omega_{j}^{u}+\omega_{ij}^{e} \forall j\in\mathcal{V}\] (22h) \[P_{j}: 0=2\mu_{jk}P_{j}-2\mu_{ij}P_{j}-\beta_{j}^{l}+\beta_{j}^{e}+2 \theta_{ij}^{e}P_{j}-2\theta_{jk}^{e}\alpha_{jk}^{2}P_{j}+\theta_{jk}^{u}\alpha_{jk}\] \[\forall j\in\mathcal{V} \tag{22i}\]
(3) are enforced for physical nodes and we may assume no-cost exchange of commodities between market participants co-located at the same physical node. Inspecting the first derivative condition (22c) taken with respect to \(d_{m}\), we can re-arrange this condition and simplify using equation (21b) to obtain an expression for the locational price charged by the market administrator for delivery of energy to a gNode \(m\):
\[\lambda_{j(m)}^{e} = c_{m}^{d}+\chi_{m}^{l}-\chi_{m}^{u}-\chi_{m}^{f}+\frac{c_{m}^{ CO2}\gamma_{j(m)}}{R(\gamma_{j(m)})}\cdot\frac{R_{H_{2}}}{R_{NG}}\zeta. \tag{23}\]
The equation (23) can be decomposed as
\[\lambda_{j(m)}^{e} = \lambda_{m}^{c}+\lambda_{m}^{d}, \tag{24a}\] \[\lambda_{m}^{c} = c_{m}^{d}+\chi_{m}^{l}-\chi_{m}^{u}-\chi_{m}^{f},\] (24b) \[\lambda_{m}^{d} = \frac{c_{m}^{CO2}\gamma_{j(m)}}{R(\gamma_{j(m)})}\cdot\frac{R_{H_ {2}}}{R_{NG}}\zeta, \tag{24c}\]
where \(\lambda_{m}^{c}\) is the price component of congestion rent and \(\lambda_{m}^{d}\) is the _decarbonization premium_. Observe that the equations (24b) for congestion rent in the consumer price can be compared with Eq. (27b) in [12], which examined locational marginal pricing for homogeneous natural gas pipelines. The equations are similar, except for an additional term \(\chi_{m}^{f}\) related to the baseline flow \(\bar{g}_{m}\), which was not considered in the earlier study. In the case that \(c_{m}^{CO2}=\$0/\text{kg}\) CO\({}_{2}\) and when the gas consumer at \(m\in\mathcal{G}_{d}\) is a marginal consumer, meaning that neither constraint in Eq. (13a) is binding, then the corresponding complementary slackness conditions (19k) and (19l) imply that \(\chi_{m}^{l}=\chi_{m}^{u}=0\) and equation (24) is reduced to \(\lambda_{j(m)}^{e}=c_{m}^{d}\). With no congestion or carbon emissions mitigation incentive, the nodal cleared price of energy [S/MJ] at gNode \(m\in\mathcal{G}_{d}\) is equivalent to the bid price for energy provided by the marginal consumer at that location. When a pipe is at physical pipe capacity, one of the bounds in constraint (13a) is binding and the cleared market price would reflect such congestion. In the case that \(\gamma_{j}\equiv 0\) system-wide, we have \(\lambda_{m}^{d}\equiv 0\) for all \(m\in\mathcal{G}_{d}\) in (24c). In this sense, the price formation done by solving problem (17) is consistent, because the locational premium on energy prices due to the avoided carbon emissions as given in equation (15) and added with price \(c_{m}^{CO2}\) in the objective (16) has physically meaningful dependence on hydrogen mass fraction, calorific values, molecular weights, etc.
## V Computational Case Studies
We demonstrate the use of the proposed optimization formulation for operating a pipeline market mechanism for natural gas and hydrogen transport using an 8-node network and a 40-node network. The problem (17) is implemented in the Julia programming language v1.7.3 using the JuMP optimization toolkit v1.0.0 [20]. Our implementation makes use of the general purpose interior point solver IPOPT v1.2.1 [21], a large-scale nonlinear optimization software. The case studies are evaluated using an Apple M1 chip with 8 GB of RAM, with solve time of about 60 milliseconds for both networks.
### _Re-scaling and Non-Dimensionalization_
Prior to solving problem (17), we non-dimensionalize the physical variables in the governing equations in order to avoid numerical issues [22]. We re-scale equation (1) because the squared wave speed \(V\) in blended gas is much larger than other parameters in the equation. Given the transformations \(\bar{P}=P/P_{0}\), \(\bar{L}=L/l_{0}\), \(\bar{D}=D/l_{0}\), \(\bar{A}=A/A_{0}\), and \(\bar{\phi}=\phi/\phi_{0}=\phi/(\rho_{0}u_{0}A_{0})\), equation (1) becomes
\[\bar{P}_{i}^{2}-\bar{P}_{j}^{2}=\frac{f_{ij}\bar{L}_{ij}}{\bar{D} _{ij}\bar{A}_{ij}^{2}}\bar{V}_{ij}\bar{\phi}_{ij}\big{|}\,\bar{\phi}_{ij} \big{|}\cdot\frac{u_{0}^{2}}{a_{0}^{2}}\quad\forall(i,j)\in\mathcal{E}, \tag{25a}\] \[\bar{V}_{ij}\triangleq\frac{V_{ij}(\gamma_{ij})}{a_{0}^{2}}\quad \forall(i,j)\in\mathcal{E}. \tag{25b}\]
The nominal length, area, density, and velocity used for both network models are \(l_{0}=5000\) m, \(A_{0}=1\) m\({}^{2}\), \(\rho_{0}=P_{0}/a_{0}^{2}\), and \(u_{0}=\lceil a_{0}\rceil/300\), where \(a_{0}\) is the geometric mean of wave speeds in the gases. We compute \(a_{0}\) as \(a_{0}=\sqrt{a_{NG}\cdot a_{H_{2}}}\) where \(a_{NG}\) and \(a_{H_{2}}\) are the wave speeds in NG and H\({}_{2}\), obtained by \(a_{NG}=\sqrt{RT/M_{NG}}\) and \(a_{H_{2}}=\sqrt{RT/M_{H_{2}}}\), respectively. Here, \(R=8.314\) J/mol/K is the universal gas constant, and \(M_{NG}=0.01737\) kg/mol and \(M_{H_{2}}=0.002016\) kg/mol are molecular masses of NG and H\({}_{2}\). Using these parameters results in a nominal wave speed of \(a_{0}=635.06\) m/s used for re-scaling. The parameter values used in our study are summarized in Table II.
### _8-Node Network Case Study_
Our first case study applies problem (17) to the 8-node network shown in Figure 1. The network is served by two
Fig. 1: 8-node test network schematic, with physical characteristics. The network consists of three compressors with \(\alpha_{j}^{max}=1.4\), one natural gas supplier and one hydrogen supplier, and three flexible offakers.
unconstrained gas suppliers. The natural gas supplier S1 is located at physical node J1, and the hydrogen supplier S2 is located downstream at node J7. There is an offtaker D1 at node J3 and two offtaker gNodes, D2 and D3, located at physical node J5. The withdrawals of the three offtakers are flexible, following equation (13a), so that their actual consumption is constrained at \(g_{m}^{max}=2000\) MJ/s and is determined by the market solution produced by solving problem (17). In addition, there are three compressors, each with a maximum pressure boost ratio of 1.4. The maximum allowable hydrogen injection is 10% by mass. The network has one slack node (J1) with nominal pressure \(P_{0}=4\) MPa. The complete operational constraints and pipeline characteristics are summarized in Figure 1. Model parameters are shown in Table II. We consider two market scenarios for the 8-node network.
In scenario (1), we suppose that \(c_{m}^{CO_{2}}=\$0\)/kg CO\({}_{2}\). In the optimal market solution, all three customers received their requested amount of energy in the form of natural gas. The cleared market price for the delivered gas is $0.0045/MJ, which is equivalent to the supply price of natural gas, indicating no pipeline congestion. Because there were no CO\({}_{2}\) offsets incentives, it was not profitable for hydrogen to be injected into the system, and the decarbonization premium \(\lambda_{m}^{d}\) is zero at all withdrawal nodes. The objective function value in scenario (1) is \(J_{EV}^{1}=\$86.85/s\) and the total carbon dioxide emitted from natural gas consumption is 373 kg/s.
In scenario (2), we introduce an emissions offset incentive of \(c_{m}^{CO_{2}}=\$0.055\)/kg CO\({}_{2}\), which is approximately $50 per U.S.ton. All three consumers still receive their requested amount of energy. However, the total amount of natural gas injected into the system decreased from 135 kg/s to 110 kg/s, a 23% decrease, and the equivalent energy was replaced with 7.7 kg/s of hydrogen. Consumer D1, which is immediately downstream of the hydrogen injection point - received 10% hydrogen blend, while consumers D2 and D3 received 5% hydrogen blend. Examining the market solutions, we see that the cleared market price for the delivered gas increased to $0.0050/MJ at D1 and $0.0046/MJ at D2 and D3, which are 11% and 2% increases, respectively. The differences in price between the two physical nodes reflect the difference in the locational values of energy and of the decarbonization premium paid by the end-users. D1 pays a decarbonization premium, \(\lambda^{d}\), of $0.00090/MJ while D2 and D3 pay $0.00052/MJ, which are 18% and 11.3% of the locational prices, respectively. Overall, the economic value produced by the pipeline operator increased to \(J_{EV}^{2}=\$89.45/s\) in scenario (2), which is about 3% higher than \(J_{EV}^{1}\), while the total carbon dioxide emitted is 303 kg/s, which is a 23% reduction with respect to scenario (1). The carbon intensity of energy delivered at D1 and at D2 and D3 is reduced by 35% and 18%, respectively. The solutions obtained by solving (17) are shown in Table III for scenario (1) and Table IV for scenario (2). Note that \(\lambda_{j}(m)\), the marginal price of the blended gas, is presented in units of [$/MJ] by dividing through by \(R(\gamma_{j(m)})\).
### _40-Node Network Case Studies_
The 40-node network case is a modification of the GasLib-40 network [23], with one slack node (J38) with a nominal pressure \(P_{0}=5\) MPa, and 26 physical withdrawal nodes. Energy withdrawal by all customers is flexible, following (13a). The bid price and quantity of all consumers are \(c_{m}^{d}\) = $0.019/MJ and \(g_{m}^{max}\) = 1600 MJ/s. We suppose a global \(c_{m}^{CO_{2}}\) = $0.055/kg. Model parameters are in Table II.
#### Iv-C1 Example Scenario
In this first case, we suppose there are three unlimited supply physical nodes that inject natural gas (J38, J39 and J40) and three separate physical nodes that inject hydrogen (J10, J27 and J30). We can visualize the physical and market solutions across the network in Figure 2. Calorific value of the gas is dependent on the hydrogen mass fraction (14). Nodes with higher calorific values correspond to those with greater blends of H\({}_{2}\), such as downstream of J10 and J27. Carbon intensity (CI) is based on the amount of CO\({}_{2}\) emitted per unit energy delivered. We see that CI is inversely related to the hydrogen concentration, with higher CI at nodes with no H\({}_{2}\) blends such as at J19 and J28.
#### Iv-C2 Counter-intuitive Market Outcomes
Next, we develop a price structure for the 40-node network in which decarbonization incentives cause greater carbon emissions in the optimal market solution. We suppose that the network has three physical injection nodes (J38, J39 and J40), where each has two gNodes that inject hydrogen and natural gas. Hydrogen injection sites in the previous case are now withdrawal nodes, thus we have a total of 29 withdrawal nodes in this case study. All other parameters are the same as in the example scenario.
We start with Scenario 1, which is the baseline scenario. The price and quantity bids for all offtake gNodes are \(c_{m}^{d}\) = $0.019/MJ and \(g_{m}^{max}\) = 1600 MJ/s with a global \(c_{m}^{CO_{2}}\) fixed at
S0.055/kg. In Scenario 2, we suppose four offakers shown in green squares in Fig. 3 decreased their bids for energy by half to \(c_{m}^{d}\) = $0.0085/MJ to reflect low demand, such as that of a gas-fired power plant at a time with low prevailing prices in the wholesale electricity market. In Scenario 3, we suppose that \(c_{m}^{CO_{2}}\) is increased to $0.155/kg, which represents a scenario with low demand and high decarbonization incentives. We present the optimal withdrawal energy and the LMP of the blended gas, \(\lambda\), in Fig. 3 along with details in the Appendix.
In Scenario 1, the pipeline is constrained by the lower and upper pressure limits. Across the network, hydrogen is being blended at its maximum limit of 10%. In Figure 3, we see that nodes far from injection sites and compressor have significantly lower pressure. In the physical solution, the pipeline did not deliver the requested amount of energy to the nodes farthest from the injection sites and compressors. Examining the market solution, we see that the cleared market price for the delivered gas increases with decreasing pressure. The objective function value is \(J_{EV}=$622/s$\) and the total carbon dioxide emitted is 1891 kg/s.
Bid prices of four offakers are reduced in Scenario 2. Hydrogen is being blended at its maximum limit of 10%. Node 16 still receives its quantity bid for energy, at the maximum constraint bound value. Inspecting the pressure solution, we see that nodal pressure at node 16 remains high at 5.48 MPa given its proximity to a compressor. However, the pipeline now delivers only 1110 MW to node 17 and node 37 obtains no energy given their reduced bid prices. Node 12 receives no energy as well. The objective function value is \(J_{EV}=$609/s$\) and the total carbon dioxide emitted is 1853 kg/s.
We increase the global carbon reduction incentive in Scenario 3. Here, the pipeline maintains a pressure of 5.48 MPa at node 17, which still withdraws its requested amount of 1600 MW. The pipeline delivers 1600 MW at node node 17, which is a 45% increase from 1110 MW in Scenario 2. Meanwhile, nodes 37 and 12 receive no energy. The total energy delivered by the system increases though the hydrogen fraction remains constant at the upper limit of 10%. Consequently, this change leads to more natural gas consumption and higher carbon dioxide emissions. The objective function value is \(J_{EV}=$681/s$\)
Fig. 2: Physical and market solution for the example scenario using the 40-node network. From left to right: top row - pressure, node concentration, and calorific value; middle row - carbon intensity, withdrawal energy, and decarbonization premium; bottom row - LMP of the blended gas, LMP of natural gas, and LMP of hydrogen. For graphical clarity, we limit the values of the nodes in dashed boxes in the bottom right plot to be 10, with actual values above.
and the total carbon dioxide emitted is 1868 kg/s, which is a slight increase with respect to Scenario 2.
Inspecting \(\lambda_{m}^{d}\) in (24), increasing \(c_{m}^{CO_{2}}\) increases the cleared market price for delivered gas. In V-B, where demand is met by only natural gas and the system is not constrained by hydrogen injection limits, this increase in \(c_{m}^{CO_{2}}\) encourages a substitution of natural gas with some hydrogen. However, in V-C2, where there are unmet demands - not due to operational constraints but due to unfavorably low bid price - increasing \(c_{m}^{CO_{2}}\) encourages pipeline operators to deliver energy to those once uneconomical, low bid customers. Additional energy transported is a blend of natural gas and hydrogen, which adds to the total natural gas in the system. Therefore, increasing \(c_{m}^{CO_{2}}\) can lead to more carbon emissions under the specific instance where the system has unmet demands due to the prevailing price structure.
## VI Conclusion
We demonstrated an optimization-based market mechanism for pricing energy transported through a pipeline network that carries mixtures of natural gas and hydrogen and includes decarbonization incentives, and analytically derived the premium paid by consumers that receive hydrogen to lower their carbon intensity. Our case studies shows that when the pipeline is not constrained by hydrogen injection limits, introducing a carbon emissions reduction incentive leads to a decrease in overall natural gas consumption and consequently a reduction in total carbon dioxide emitted. However, where there are unmet demands due to low bidding customers, we encounter counter-intuitive behavior of solutions to problem (17), where increasing the decarbonization incentive can lead to greater emissions. The results for the 40-node network extend our previous results [10, 13] by demonstrating the scalability of the computational implementation to a large, complex test case with multiple loops and numerous injection and withdrawal sites. The computational examples also show that optimization-based analysis is critical for making sound decisions about economic policy for blending green hydrogen into existing natural gas pipeline systems.
|
2305.17151 | Versatile, open-access opto-mechanics platform for optical microscopes
prototyping | Prototype optical microscopes, built to pursue developments in advanced
imaging techniques, need specific optomechanical constructions: preferably with
high flexibility in the elements arrangement, easy access to the optical paths,
straightforward integration with external optical subsystems - light sources
and detectors - as well as good mechanical stability. Typically they are either
built around an adapted commercial microscope body or as a home-built setup,
based on standard optomechanical elements, and neither solution delivers the
desired characteristics. We developed a series of versatile platforms for
prototyping optical microscopes in various configurations that use folding
mirror(s) to maintain the optical paths horizontal throughout most of the
setup, thus enabling the use of standard optical components in the excitation
and detection paths and, last but not least, increasing the laser safety of the
optical system. | Łukasz Zinkiewicz, Milena Królikowska, Alexander Krupiński-Ptaszek, Piotr Wasylczyk | 2023-05-26T09:10:55Z | http://arxiv.org/abs/2305.17151v1 | **Versatile, open-access opto-mechanics platform**
## Abstract
Prototype optical microscopes, built to pursue developments in advanced imaging techniques, need specific optomechanical constructions: preferably with high flexibility in the elements' arrangement, easy access to the optical paths, straightforward integration with external optical subsystems - light sources and detectors - as well as good mechanical stability. Typically they are either built around an adapted commercial microscope body or as a home-built setup, based on standard optomechanical elements, and neither solution delivers the desired characteristics. We developed a series of versatile platforms for prototyping optical microscopes in various configurations that use folding mirror(s) to maintain the optical paths horizontal throughout most of the setup, thus enabling the use of standard optical components in the excitation and detection paths and, last but not least, increasing the laser safety of the optical system.
## Introduction
After a period of relative stagnation, many new ideas emerged in optical microscopy around the turn of the century. As a result, it is now possible to achieve astonishing resolutions, well beyond the Abbe limit [1] and image whole organs and entire organisms with cellular resolution [2]. This spectacular progress has been enabled by exploring previously ignored light-matter interactions, such as nonlinear light-matter interactions in STED microscopy [3], using clever optical configurations, like in light sheet microscopy [4], newly developed components, such as super-sensitive cameras in SOFI [5], data processing, as in STORM [6], or combinations of these. As a result, a contemporary super-resolving optical microscope has little in common with its 50 years old predecessor in terms of the optical setup, sample illumination, signal acquisition and processing. Yet, the opto-mechanics, the microscope "body" in particular, looks strikingly similar to the early designs, with perhaps the biggest difference in the case of the inverted configuration, developed to offer better access from the top to the sample plane.
Early optical microscopes were built with the sample laying on a horizontal stage and vertical light path, going through the objective and the eyepiece, the latter arranged conveniently for a user sitting or standing at the desk - Fig. 1. Today, eyepieces have been replaced with high resolution and high dynamic range cameras, also due to safety concerns with laser illumination being used in many microscopes. Illumination is no longer provided with concentrated sunlight or light bulbs, but rather by sophisticated LED or laser systems, the latter, e.g. in the case of femtosecond lasers for two- or three-photon excitation, often being larger in size than the microscope itself.
Apart from the majority of microscope users, who simply want the device best suited for their application, there is a much smaller number of researchers who build or modify their microscopes, usually to go beyond what is possible with commercially available equipment [7]. The modifications in and around the microscope body can include using non-standard light sources and/or detectors, inserting additional optical elements in the excitation and/or detection light paths, or reconfiguring the sample holder. One quickly realizes that when using a commercial microscope body, it is often very inconvenient to access certain points along the optical path - sometimes additional relay optics may be added to address this problem. It is not unlikely that what remains in the new construction from the sophisticated (and expensive) inverted microscope is the nosepiece and the focusing block and all the other components, mechanical and optical, are added, often in awkward locations, determined by the original mechanical design.
Some companies offer extensions to commercial microscope bodies, e.g. small breadboards that can be installed to replace the standard filter wheels, or vertical breadboards that can be mounted above the sample plane in an inverted microscope [8]. There are also modular microscopy systems available on the market [9] as well as universal opto-mechanical systems, intended, among other applications, for constructing microscope frames [10].
Even a cursory overview of the above mentioned solutions reveals that the designs, commercial and custom-built, tend to be vertical: the optical path in a substantial part of its way travels vertically below and above the horizontally mounted sample. As a result, the opto-mechanics is built in a form of a tower, with optical elements installed in rail or cage systems or, quite often, on vertically mounted breadboards. In optical labs, on the contrary, the majority of experiments are performed on optical tables, where the light beams are sent horizontally, whenever possible on the
Figure 1: From handheld magnifying glass to the contemporary inverted multi-modal imaging workstation – the evolution of the optical microscope optical layout. For good reasons the preferred orientation of the sample – especially in the case of a wet environment – is horizontal and thus the natural orientation of the principal optical axis is vertical. The last panel presents the schematic of the idea of the horizontal microscope platform, where light travels vertically over a very short distance only, between the illumination and collection optics (objectives).
same height above the table level, throughout the setup. This latter layout is preferred due to laser safety, as the beams are always well below the eye level as well as due to practical reasons. First, the optical elements can be placed on the table and remain there, even before they are permanently attached (clamped, screwed). The beam height above the mounting surface may vary, depending on the experiment design, the optical and opto-mechanical elements used. On the lower extreme are heights on the order of 20-30 mm, with 1/2 inch optics and low-profile mirror and lens mounts. Such a low height guarantees the ultimate mechanical stability and compactness and is often a design of choice in laser design. It is, however, challenging if additional degrees of freedom are to be used, e.g. more than one stage (translation or rotation) mounted in series on top of each other. If more flexibility is required or many large elements are used (e.g. large cameras, gas cells in ovens and/or magnetic shields) the beam height might be up to around 250 mm.
In this paper we present a series of opto-mechanics platforms, developed originally for prototype Raman microscopes: a spontaneous Raman systems in upright configuration with custom-made objective turrets, and two others for inverted Stimulated Raman Scattering (SRS) microscopes, all of them built for imaging biological cells. The general concept in all these designs is to have two horizontal bread boards, connected with a microscope head, where microscope objective(s) and folding mirror(s) are mounted, and which is the only part where light travels vertically. Thus, most of the light beams are aligned horizontally and standard optical and opto-mechanical components can be used, just like on the optical table. The size of the bread boards can be chosen to accommodate the optics on the lower (typically - excitation) and upper (detection) levels. The prototypes were built using many off-the-shelf mechanical components, some of them adapted from commercial microscopes, as well as custom-machined parts of various complexity. The main custom assembly - the microscope head - may be easily adapted for prototype optical microscopes in many different configurations.
## 2 Methods and Results
The departure point for the idea of the opto-mechanics platform was to design it in such a way that as many of the light paths as possible are arranged horizontally, at the height(s) above the horizontal breadboards that would match standard opto-mechanical components, light sources and detectors. The sample plane should be horizontal to allow for water-immersed samples (e.g. cell cultures) and water-dipping objectives. We also wanted to have the objective nosepiece with multiple objective ports, e.g. on a turret. All our prototypes use a galvanometric scanner for point-by-point imaging, but they are designed to be easily adopted for translation stages moving the sample instead.
The first question is: how to move the microscope objective (or the sample) for finding the focus? In commercial microscopes, this is usually done with a rack-and-pinion stage, often with a pair of coarse-fine concentric knobs, conveniently located on the side of the microscope body. Precision rack and pinion stages are rarely used in optical labs, hence most often the vertical (focusing) motion of the objective is realized with standard translation stages installed vertically, with the micrometer screw awkwardly pointing up (or down). Table 1 summarizes possible solutions for the vertical movement of the objective (or the sample) with their respective pros and cons. In our case, designs 1
and 3 use the Z stage with a side micrometer, and design 2 uses a rack-and-pinion stage adapted from a commercial (stereo) microscope with a coarse-fine focusing knobs.
The second question is: how to provide a quick, reliable objective exchange? For the first design, of the upright microscope, we developed a series of custom-made "radial" objective turrets, in the rarely used configuration, where the objectives are arranged concentrically in one plane, perpendicular to the turret rotation axis (Figure 2). This is the design of choice for the custom-made objective turret (revolver), as sufficient precision in machining the elements may be achieved, even in a basic workshop. The inverted microscopes, use a commercial objective turret with six ports, available from microscope manufacturers as a separate part (in our case T12-N-N sextuple nosepiece from Nikon).
The last question is: how to mount the sample to provide a coarse (manual) translation in the XY plane? Here, the upright microscope uses a standard XY stage with micrometers installed on the sides and the sample Z stage mounted on top. Inverted microscopes typically use large, heavy translation stages for the sample positioning. The stage must have a large footprint to provide enough room for the objective exchange from the bottom. This seems to be far from optimum, as a few gram sample (a microscope slide in many cases) is held in place by a component weighting above a kilogram. Our first inverted prototype (design 2) still has a large translation stage, adapted from a
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline & **Standard optical translation stage with rack-and-pinion drive** & **Precision translation stage with rack-and-pinion drive** & **Precision optical stage with side micrometer** & **Translation stage with rack-and-pinion drive** & **Precision optical (Z) stage with side micrometer** \\ \hline
**Movement guiding** & Dovetail bearing & Linear ball bearing & Linear ball/roller bearing & Linear ball bearing & Linear ball bearing \\ \hline
**Drive** & Rack-and-pinion & Rack-and-pinion & Micrometer screw & Rack-and-pinion & Micrometer screw \\ \hline
**Translation precision** & Usually coarse, some coarse/fine & Coarse (fine possible with differential micrometers) & Coarse/fine possible with differential micrometers) & Coarse (fine possible with differential micrometers) \\ \hline
**Range of adaptation** & Medium & Medium & Large & Medium & Small \\ \hline
**Ergonomics** & Good & Good & Poor & Good & Average/good \\ \hline
**Example (* means that the element was used in one of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the prototype of the of
commercial inverted microscope, but the last iteration (design 3) has a small XY stage, installed upside-down, with the sample holder extending to the side. This last configuration results in the most compact design of the microscope head, having all the degrees of freedom in the elements' movements, easy access to the sample and the light beams and an ergonomic design. Table 2 summarizes how different solutions have been combined in the three prototypes.
### Design 1
The idea of a horizontal microscope was first tested in a prototype shown in Fig. 2 - a simple spontaneous Raman microscope in the epi configuration, with a solid state laser illumination and an external grating spectrometer. The microscope has two base plates: the lower one is made of a 180x32x500 mm long aluminum extrusion (Alutec) and the upper one, mounted on four standard posts, is a 180x400x12 mm solid aluminum plate. The light beams (laser and back-scattered Raman) are guided as low as 35 mm above the upper plate and are directed to/from the microscope objective with a folding mirror (1 inch diameter, silver coated, PF10-03-P01, Thorlabs). The objectives (up to four) are mounted in a custom-made radial turret, presented in different variants in Figure 3.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline & **Design 1** & **Design 2** & **Design 3** \\ \hline
**Configuration** & Upright & Inverted & Inverted \\ \hline
**Focusing (moving element)** & Z stage with side micrometer (sample) & Rack-and-pinion (objective) & Z stage with side micrometer (objective) \\ \hline
**Objective exchange** & Radial turret (custom-made) & Commercial sextuple nosepiece & Commercial sextuple nosepiece \\ \hline
**XY sample translation** & Small XY stage with side micrometers & Large commercial XY stage with rack and pinion drives & Small XY stage with side micrometers, installed upside-down \\ \hline
**Beam height** & 35 mm (only the upper level was used) & 72 mm (lower level), 90 mm (upper level) & 62 mm (lower level), 80 mm (upper level) \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the three microscope platform designs.
Figure 2: The upright horizontal microscopes with the custom-made objective turret – a) with the grating spectrometer in the background, b,c) with the green excitation beam visualized. Spontaneous Raman microscope (with the radial turret presented in Fig. 3 a,c) with the Raman excitation laser, the galvo scanner, dichroic mirror, beam guiding mirrors, tube and scan lenses, installed on the upper base plate. Manual XY and Z stages are mounted (stacked) on the lower base plate for focusing and coarse
sample positioning. A grating spectrometer is visible in the background - the Raman signal beam is still delivered to the spectrometer slit via a telescope - but it could be built onto the lower platform as well.
One feature of the radial objective turret with a 45 degree folding mirror is that it can be easily reconfigured to be used for the upright (with the mirror pointing down, as in Fig. 3c) or inverted microscope (with the mirror pointing up, Fig. 3b) by turning the turret body by 180 degrees in its support. In Fig. 3e,f this concept is further developed in an objective head where the optic axis overlaps with the rotation axis of the turret support when the latter is reconfigured between the upright and the inverted position. Thus the configuration can be changed with all the optics remaining in the same position, as the beam height leaving (entering) the turret remains the same. What is more, the turret support with the folding mirror can be rotated to any position if a microscope configuration is needed where the objective axis is not vertical, in particular with the objective axis being oriented horizontally or at any angle with respect to the base plate.
The last configuration - the conical objective turret - comes in two variants: with either one stationary folding mirror or with a number of folding mirrors, each mirror arranged for one microscope objective. The conical objective arrangement allows for the sleek design of the head and the additional advantage is that the unused objectives on the turret do not point upwards, in which case there is often a significant dust accumulation on the front lens surface, even in a clean environment. The designs presented here are intended to demonstrate the radial turret concept and are by no means optimized - detailed solutions will depend on specific applications, space constrains etc.
Figure 3: Different designs of the microscope radial objective turret for compact horizontal microscopes. a) General view of the basic design with four objectives mounted in one plane. b) The
cross section of the design in a) configured for the inverted microscope. c) The cross section of the design in a) configured for the upright microscope. d) General view of the basic design, similar to a), but with the horizontal optic axis overlapping with the axis of rotation of the turret stationary part in the support. e) The cross section of the design in d) configured for the inverted microscope. f) The cross section of the design in d) configured for the upright microscope. g) General view of the conical turret with six objectives. h) The cross section of the design in g) with one mirror used for all the objectives. i) The cross section of the design in g) with each objective using its own mirror. Green: support, yellow: turret stationary part, red: turret revolving part, blue: 45" folding mirror(s).
## Design 2
In this prototype we merged the idea of the light paths being mostly horizontal with the layout of a typical inverted microscope. Two solid supports are mounted on a 300x600x12.7 mm breadboard (MB3060/M, Thorlabs) that is the microscope lower base plate and hold the manual XY stage, adapted from a commercial inverted microscope (Figure 4). The sextuple nosepiece (TI2-N-N, Nikon) is mounted onto a manual coarse/fine focusing block (T4 Stereo Microscope Coaxial Coarse and Fine Focusing Holder, Wally Sky), adapted from a commercial stereo microscope. In this prototype the sample plane is 240 mm above the base (breadboard) plane and the beam height is 72 mm above the lower bread board. The elliptical, 1 inch aperture, silver coated 45 degrees folding mirror, with a special holder (PFE10-P01 and H45E1,Thorlabs) is mounted on a 2-axis kinematic mirror mount, attached horizontally to a standard aluminum post. The infinity space is accessible from approximately 120 mm from the objective parfocal plane.
The upper bread board (180x400x10 mm) has the rack-and-pinion translation stage for the Z movement of the upper objective (25 mm travel range) and a simple mechanism for centering the two objectives: the upper objective is mounted in a conical collar (with the inner thread matching this of the objective, M27x0.75 in our case) that can be translated in the XY plane with two screws (M6/0.5) against the third, spring-loaded screw, the three being mounted every 120 degrees in the XY plane. The upper folding mirror is mounted, without any precision adjustments, to direct the beam 90 mm above the upper bread board, where the detector(s) and an LED for trans illumination are installed as well.
Figure 4: Design 2 – the first approach to the inverted horizontal microscope. a) In this photo, a large X-Y translation stage is replaced with a solid plate; two spacers compensate for the plate and the stage height mismatch. The focusing (Z) rack and pinion stage is visible with the elliptical folding mirror in a standard kinematic mirror mount (mounted horizontally) on a post. At this stage, the two
supporting plates were located at the sides of the base plate. b,c) The final version with one of the supporting plates moved to the front and the other replaced by two pillars on the sides. It uses a 300x300 mm X-Y translation stage (adapted from Nikon Eclipse). Another three pillars hold the upper breadboard with the upper objective, folding mirror, light detector and LED trans illumination module.
## Design 3
The last prototype, presented in Fig. 5 is an attempt to make the overall microscope system more compact, to be ultimately integrated in a transportable Stimulated Raman Scattering (SRS) microscope (Fig. 5d). This design combines the idea of folding the beams right before (and after) the objective(s) with one more concept: instead of using a large X-Y translation stage, typically found in inverted microscopes, it relies on a much more compact, lighter X-Y stage, installed upside-down under the upper bread board. A typical inverted microscopes use large XY stages with the central opening that must be wide enough to provide enough space for the objectives (moving on the turret) to approach the sample (usually flush with the stage upper surface) from below. As a result, the stage is typically at least 250x250 mm, with a few cm of travel in each direction, thick (20-30 mm) and heavy. This, in turn, requires a very strong support structure. In our prototype the sample is mounted on a small (80x80 mm, 25 mm travel) XY manual translation stage. The stage is mounted upside-down to the bottom surface of the upper bread board (12 mm thick solid aluminum alloy plate), with the sample support plate (6 mm thick solid aluminum alloy plate) protruding to the side. This way, the small translation stage replaces the large XY translation stage, providing unobstructed access for the objectives from below (the bottom surface of the sample support tray is the lowest element). Focusing is provided with a 10 mm travel Z translation stage (TSD-603, OptoSigma), configured with differential micrometer (MHF2-13F, OptoSigma) and with the same sextuple nospeciee as in design 2. The differential micrometer has 0.5 micron per division fine and 10 micron per division coarse movement, well suited for precision manual focusing. Interestingly, in such configuration, the center of gravity of the objective turret is conveniently located almost exactly above the center of the Z stage platform, thus any torques that could deteriorate the Z stage performance, are avoided. The sample plane is located 165 mm above the base (breadboard) plane and the beam height is 62 mm above the base plane. The infinity space is available from 40 mm from the objective parfocal plane. We also replaced the 2-axis kinematic mirror mount with a monolithic, custom made flexure mount with 45 degrees platform on the moving part to install the lower folding mirror, as its position is only set once. The flexure is used for pitch and the entire mount can be rotated for yaw before being secured to the lower bread board.
The upper bread board is supported with two columns made of aluminum extrusion (90x18.5 mm, Alutec) and has the rack-and-pinion translation stage for the Z movement of the upper objective (20 mm travel range) and the same mechanism for centering the two objectives as in the previous prototype, except it now uses three solid screws, instead of two solid and one spring-loaded ones. Here the upper folding mirror is fixed for 81 mm beam height above the upper base plate. In the later version, the upper folding mirror was mounted on a rack-and-pinion Z stage (20 mm travel range), to allow for adjusting the beam height above the upper base plate between 40-60 mm and the aluminum extrusions have been replaced by solid, 20 mm thick plates to increase the stiffness of the construction (which is always a trade-off with weight). The upper base plate can be larger, if more complex detection setups are to be installed and can be supported with additional (thinner) columns.
## Conclusions
Prototype optical microscopes often require specific, non-standard opto-mechanical solutions, beyond those used typically in optics laboratories. We proposed a horizontal microscope platform with a custom-made microscope head that can be used in many configurations and with various components. In a series of prototypes we tested a few approaches to mounting the microscope objectives, the sample and light beam path configurations, using both commercially
Figure 5: First version of the Design 3 – a compact inverted horizontal microscope head. It uses a 80x80 mm manual X-Y translation stage (25x25 mm travel range) and the supporting columns are made of aluminum extrusions. The lower folding mirror is installed in a flexure mount with only one degree of freedom (pitch). Design 3 of the platform used for a Stimulated Raman Scattering microscope on the optical table (trans illumination from below with two tunable picosecond fiber lasers (not shown), white LED for wide field trans illumination is also visible on top). Compared to the first variant (a.b), the 180x500 mm base plate is made of an aluminum extrusion and is large enough to accommodate the tube lens, the scan lens, the two-axis galvo scanner and a CMOS camera for wide field sample viewing. The Z stage was also turned by 180 degrees. (d) The mobile, self-contained SRS microscope setup for leukemia cell imaging. Only the microscope head is shown in detail, installed in a standard 19-inch mobile rack with lasers and electronics modules below.
available, as well as custom-made components. We believe the concepts and designs presented here will be a valuable departure point for the future generation of microscope builders who will adopt them to their specific needs.
|
2304.10517 | Segment Anything Model for Medical Image Analysis: an Experimental Study | Training segmentation models for medical images continues to be challenging
due to the limited availability of data annotations. Segment Anything Model
(SAM) is a foundation model that is intended to segment user-defined objects of
interest in an interactive manner. While the performance on natural images is
impressive, medical image domains pose their own set of challenges. Here, we
perform an extensive evaluation of SAM's ability to segment medical images on a
collection of 19 medical imaging datasets from various modalities and
anatomies. We report the following findings: (1) SAM's performance based on
single prompts highly varies depending on the dataset and the task, from
IoU=0.1135 for spine MRI to IoU=0.8650 for hip X-ray. (2) Segmentation
performance appears to be better for well-circumscribed objects with prompts
with less ambiguity and poorer in various other scenarios such as the
segmentation of brain tumors. (3) SAM performs notably better with box prompts
than with point prompts. (4) SAM outperforms similar methods RITM, SimpleClick,
and FocalClick in almost all single-point prompt settings. (5) When
multiple-point prompts are provided iteratively, SAM's performance generally
improves only slightly while other methods' performance improves to the level
that surpasses SAM's point-based performance. We also provide several
illustrations for SAM's performance on all tested datasets, iterative
segmentation, and SAM's behavior given prompt ambiguity. We conclude that SAM
shows impressive zero-shot segmentation performance for certain medical imaging
datasets, but moderate to poor performance for others. SAM has the potential to
make a significant impact in automated medical image segmentation in medical
imaging, but appropriate care needs to be applied when using it. | Maciej A. Mazurowski, Haoyu Dong, Hanxue Gu, Jichen Yang, Nicholas Konz, Yixin Zhang | 2023-04-20T17:50:18Z | http://arxiv.org/abs/2304.10517v3 | # Segment Anything Model for Medical Image Analysis: an Experimental Study
###### Abstract
Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner. While the performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of \(19\) medical imaging datasets from various modalities and anatomies. We report the following findings: (1) SAM's performance based on single prompts highly varies depending on the dataset and the task, from IoU=\(0.1135\) for spine MRI to IoU=\(0.8650\) for hip X-ray. (2) Segmentation performance appears to be better for well-circumscribed objects with prompts with less ambiguity and poorer in various other scenarios such as the segmentation of brain tumors. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms similar methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple-point prompts are provided iteratively, SAM's performance generally improves only slightly while other methods' performance improves to the level that surpassed SAM's point-based performance. We also provide several illustrations for SAM's performance on all tested datasets, iterative segmentation, and SAM's behavior given prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others. SAM has the potential to make a significant impact in automated medical image segmentation in medical imaging, but appropriate care needs to be applied when using it.
## 1 Introduction
Image segmentation is a central task in medical image analysis, ranging from the segmentation of organs [31], abnormalities [3], bones [17], and others [28], which has received significant advancements from deep learning [13, 30]. However, developing and training segmentation models for new medical imaging data and/or tasks is practically challenging, due to the expensive and time-consuming nature of collecting and curating medical images, primarily because trained radiologists must typically provide careful mask annotations for images.
These difficulties could be significantly mitigated with the advent of foundation models [39] and zero-shot learning [6]. Foundation models are neural networks trained on an extensive amount of data, using creative learning and prompting objectives that typically do not require traditional supervised training labels, both of which contribute towards the ability to perform zero-shot learning on completely new data in a variety of settings. Foundation models have shown paradigm-shifting abilities in the domain of natural language processing [27]. The recently developed Segment Anything Model is a foundation model that has achieved promising zero-shot segmentation performance on a variety of natural image datasets [18].
### What is SAM?
Segment Anything Model (SAM) is designed to segment an object of interest in an image given certain prompts provided by a user. Prompts can take the form of a single point, a set of points (including an entire mask), a bounding box, or text. The model is asked to return a valid segmentation mask even in the presence of ambiguity in the prompt. The general idea behind this approach is that the model has learned the concept of an object and thus can segment any object that is pointed out. This results in a high potential for
it to be able to segment objects of types that it has not seen without any additional training, _i.e._, high performance in the zero-shot learning regime. In addition to the prompt-based definition of the task, the SAM authors utilized a specific model architecture and a uniquely large dataset to achieve this goal, described as follows.
SAM was trained progressively alongside the development of the dataset of images with corresponding object masks (SA-1B). The dataset was developed in three stages. First, a set of images was annotated by human annotators by clicking on objects and manually refining masks generated by SAM, which at that point was trained on public datasets. Second, the annotators were asked to segment masks that were not confidently generated by SAM to increase the diversity of objects. The final set of masks was generated automatically by prompting the SAM model with a set of points distributed in a grid across the image and selecting confident and stable masks.
### How to segment medical images with SAM?
SAM is designed to require a prompt or a set of prompts to produce a segmentation mask. Technically, the model can be run without a prompt to provide any visible object, but we do not expect this to be useful for medical images, where there are often many other objects in the image beside the one of interest. Given this prompt-based nature, in its basic form, SAM cannot be used the same way as most segmentation models in medical imaging where the input is simply an image and the output is a segmentation mask or multiple masks for the desired object or objects.
We propose that there are three main ways in which SAM can be used in the process of segmentation of medical images. The first two involve using the actual Segment Anything Model in the process of annotation, mask generation, or training of additional models. These approaches do not involve changes to the SAM. The third approach involves the process of training/fine-tuning a SAM-like model targeted for medical images. We detail each approach next. Note that we do not comment here on text-based prompting, as it is still in the proof-of-concept stage for SAM.
**Semi-automated annotation ("human in the loop").** The manual annotation of medical images is one of the main challenges of developing segmentation models in this field since it typically requires the valuable time of physicians. SAM could be used in this setting as a tool for faster annotation. This could be done in different ways. In the simplest case, a human user provides prompts for SAM, which generates a mask to be approved or modified by the user; this could be refined iteratively. Another option is where SAM is given prompts distributed in a grid across the image (the "segment everything" mode), and generates masks for multiple objects which are then named, selected, and/or modified by the user. This is only the start; many other possibilities could be imagined.
**SAM assisting other segmentation models.** One version of this usage mode is where SAM works alongside another algorithm to automatically segment images (an "inference mode"). For example, SAM, based on point prompts distributed across the image, could generate multiple object masks which then could be classified as specific objects by a separate classification model. Similarly, an independent detection model, _e.g._, ViTDet [20], could generate object-bounding boxes of images to be used as prompts for SAM to generate precise segmentation masks.
Furthermore, SAM could be used in the training loop of some other semantic segmentation model. For example, the masks generated by a segmentation model on unlabeled images during training could be used as prompts to SAM to generate more precise masks for these images, which could be used as iteratively refined supervised training examples for the model being trained. One could conceptualize many other specific modes of including SAM in the process of training new segmentation models.
**New medical image foundation segmentation models.** In this usage mode, the development process of a new segmentation foundation model for medical images could be guided by SAM's own development process. The largest difficulty of this would be in the much lower availability of medical images and quality annotations, compared to natural images, but this is possible in principle. A more feasible option could be to fine-tune SAM on medical images and masks from a variety of medical imaging domains, rather than training from scratch, as this would likely require fewer images.
## 2 Methodology
In the previous section, we described various usage scenarios of SAM for medical image segmentation. These are conceptually promising but largely rely on the assumption that SAM can generate accurate segmentations of medical images. Here, we experimentally evaluate this claim and evaluate the performance of SAM within a variety of different realistic usage scenarios and datasets in medical imaging.
### Datasets
We compiled and curated a set of 19 publicly available medical imaging datasets for image segmentation. While the phrase "medical imaging" is sometimes used to refer to all images pertaining to medicine, we focus on the common definition of radiological images. Our dataset includes planar X-rays, magnetic resonance images (MRIs), computed tomography (CT) images, ultrasound (US) images, and positron emission tomography (PET) images. The datasets are summarized in Tables 1 and 2. For datasets
containing more than one type of object, we considered the segmentation of each object as a separate task.
### Experiments
We performed a thorough evaluation of SAM with both non-iterative prompts (generated prior to SAM being applied) and iterative prompts (generated after seeing the model's predictions). We also explored the "segment everything" mode of SAM and analyzed the different outputs that SAM generates in response to ambiguity in the prompts.
#### 2.2.1 Prompting Strategies
**Non-iterative prompts.** In this primary mode of evaluation, prompts were simulated to reflect how a human user might generate them while looking at the objects. We focus on five modes of non-iterative prompting designed to capture the realistic usage cases of SAM for generating image masks, using either points or bounding boxes. An essential thing to consider is that a single "object" of interest / "ground truth" mask may consist of multiple disconnected parts, which is especially common in medical images. An example of this is a cross-sectional image of a liver (such as MRI or CT) where in one 3D slice, the liver is portrayed as two non-contiguous areas. Given this consideration, we introduce the following five prompting modes:
* One prompt point is placed at the center of the **largest** contiguous region of the object of interest/ground truth mask.
* A prompt point is placed at the center of **each** separate contiguous region of the object of interest (up to three points).
* One box prompt is placed to tightly enclose the **largest** contiguous region of the object of interest.
* A box prompt is placed to tightly enclose **each** separate contiguous region of the object of interest (up to three
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Abbreviated** & **Full dataset name and citation** & **Modality** & \begin{tabular}{l} **Num.** \\ **classes** \\ \end{tabular} & **Object(s) of interest** & \begin{tabular}{l} **Num.** \\ **masks** \\ \end{tabular} \\ \hline MRI-Spine & \begin{tabular}{l} Spinal Cord Grey Matter \\ Segmentation Challenge [29] \\ \end{tabular} & MRI & 2 & \begin{tabular}{l} Gray matter, \\ spinal cord \\ \end{tabular} & 551 \\ \hline MRI-Heart & \begin{tabular}{l} Medical Segmentation Decathlon [33] \\ \end{tabular} & MRI & 1 & Heart & 1,301 \\ \hline MRI-Prostate & \begin{tabular}{l} Initiative for Collaborative \\ Computer Vision Benchmarking [19] \\ \end{tabular} & MRI & 1 & Prostate & 893 \\ \hline MRI-Brain & \begin{tabular}{l} The Multimodal Brain Tumor Image \\ Segmentation Benchmark (BraTS) [26] \\ \end{tabular} & MRI & 3 & \begin{tabular}{l} GD-enhancing tumor, \\ Peritumoral edema, \\ necrotic and non- \\ enhancing tumor core \\ \end{tabular} & 12,591 \\ \hline MRI-Breast & \begin{tabular}{l} Duke Breast Cancer MRI: \\ Breast + FGT Segmentation [32, 14] \\ \end{tabular} & MRI & 2 & \begin{tabular}{l} Breast, fibrog- \\ landular tissue \\ \end{tabular} & 503 \\ \hline Xray-Chest & \begin{tabular}{l} Montgomery County and Shenzhen \\ Chest X-ray Datasets [16] \\ \end{tabular} & X-ray & 1 & Chest & 704 \\ \hline Xray-Hip & X-ray Images of the Hip Joints [11] & X-ray & 2 & \begin{tabular}{l} Ilium, femur \\ \end{tabular} & 140 \\ \hline US-Breast & Dataset of Breast Ultrasound Images [1] & Ultrasound & 1 & Breast & 647 \\ \hline US-Kidney & CT2US for Kidney Segmentation [35] & Ultrasound & 1 & Kidney & 4,586 \\ \hline US-Muscle & \begin{tabular}{l} Transverse Musculoskeletal \\ Ultrasound Image Segmentations [25] \\ \end{tabular} & Ultrasound & 1 & Muscle & 4,044 \\ \hline US-Nerve & \begin{tabular}{l} Ultrasound Nerve Segmentation \\ Identify ([2]) \\ \end{tabular} & Ultrasound & 1 & Nerve & 2,323 \\ \hline US-Ovarian-Tumor &
\begin{tabular}{l} Multi-Modality Ovarian Tumor \\ Ultrasound (MMOTU) [38] \\ \end{tabular} & Ultrasound & 1 & Ovarian tumor & 1,469 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **All datasets evaluated in this paper. “num. masks” refers to the number of images with non-zero masks. For 3D modalities, 2D slices are used as inputs.**
boxes)
* A single box is placed to tightly enclose the **entire** object mask.
Juxtaposed examples of each prompting mode for the same image are shown in Figure 1. Modes 1 and 2 are equivalent if the object consists of only one contiguous region/part, and the same is true for modes 3, 4, and 5. The point prompts were generated as the point farthest from the boundary of the mask for the object or its part.
**Iterative prompts.** We use a common, intuitive strategy for simulating realistic iterative point prompts, which reflects how those could be generated by a user in an interactive way [24]. The details of the prompt generation are illustrated in Algorithm 1. Specifically, once the network makes a prediction, we compute an error map where both false positive and false negative predictions are marked as 1, _i.e.,_ the point furthest from 0. Then we can find the location of the next prompt as the central location of the largest component of the error mask. The label of the prompt is based on whether the new location is foreground or background.
**Prompt ambiguity and oracle performance.** Prompts can be ambiguous in the sense that it may be unclear which object in the image the prompt is referring to. A typical scenario is when objects are nested within each other in the image. For example, when a user provides a point prompt within a necrotic component of a brain tumor, they could intend to segment that component, the entire tumor, one hemisphere of the brain, the entire brain, or the entire head. In response to this issue, SAM provides multiple outputs aiming at disambiguating the prompts. This is a very important and practical feature of SAM since in the interactive segmentation setting, multiple potential outputs could be presented to the user, from which they could select the one which is the closest to the object that they intended. In our experiments, we display some examples of the multiple outputs generated by SAM to illustrate how it deals with the ambiguity
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Abbreviated** & **Full dataset name and citation** & **Modality** & \begin{tabular}{l} **Num.** \\ **classes** \\ \end{tabular} & **Object(s) of interest** & \begin{tabular}{l} **Num.** \\ **masks** \\ \end{tabular} \\ \hline CT-Colon & Medical Segmentation Decathlon [33] & CT & 1 & \begin{tabular}{l} Colon cancer \\ primaries \\ \end{tabular} & 1,285 \\ \hline CT-HepaticVessel & Medical Segmentation Decathlon [33] & CT & 1 & Vessels, tumors & 13,046 \\ \hline CT-Pancreas & Medical Segmentation Decathlon [33] & CT & 1 & \begin{tabular}{l} parenchyma \\ and mass \\ \end{tabular} & 8,792 \\ \hline CT-Spleen & Medical Segmentation Decathlon [33] & CT & 1 & spleen & 1,051 \\ \hline CT-Liver & The Liver Tumor & \begin{tabular}{l} CT \\ Segmentation Benchmark (LiTS) [4] \\ \end{tabular} & CT & 1 & Liver & 5,501 \\ \hline CT-Organ & CT Volumes with Multiple & \begin{tabular}{l} CT \\ Organ Segmentations (CT-ORG) [31] \\ \end{tabular} & CT & 5 & \begin{tabular}{l} Liver, bladder, lungs, \\ kidney, bone \\ \end{tabular} & 4,776 \\ \hline PET-Whole-Body &
\begin{tabular}{l} A FDG-PET/CT dataset \\ with annotated tumor lesions [10] \\ \end{tabular} & PET/CT & 1 & Lesion & 1,015 \\ \hline \hline \end{tabular}
\end{table}
Table 2: (**Continued) All datasets evaluated in this paper. “num. masks” refers to the number of images with non-zero masks. For 3D modalities, 2D slices are used as inputs.**
Figure 1: Examples of prompt(s) generated by the five modes respectively. Green contours show the ground-truth masks, and blue star(s) and box(es) indicate the prompts.
of the prompts.
Related to the ambiguity, in all experiments, we also present what the developers of SAM call "oracle performance". This is the performance of the model when the prediction closest (in terms of IoU) to the true mask, _i.e._, oracle prediction, is always used, out of SAM's three generated predictions. Note that this prediction may differ from SAM's most confident prediction. While this assumes knowledge of the true mask and is a biased way to assess performance when there is no additional interaction with the user after providing the initial prompts, it is a practical reflection of performance in a setting where the user can select one of the masks generated by SAM. When prompts are generated iteratively, the oracle prediction is used to create the error map that guides the location of the next prompt.
#### 2.2.2 Comparison with other methods
We compared SAM with three interactive segmentation methods, namely RITM [34], SimpleClick [21], and FocalClick [7]. RITM adopted HRNet-18 [36] as the backbone segmentation model and an iterative mask correction approach based on the \(\{\) previous mask, new click \(\}\) set to achieve outstanding performance on multiple natural imaging datasets. Based on the framework proposed by RITM, SimpleClick replaced the HRNet-18 backbone in RITM with a plain-ViT [9] and included a "click embedding" layer symmetric to patch embedding. It allows for fewer clicks than RITM to achieve an above-threshold performance. FocalClick replaced the HRNet-18 backbone in RITM with a SegFormer [37] and restricted the update responding to new clicks to occur only as local. The method proposed "progressive merge" that can exploit morphological information and prevent unintended changes far away from users' clicks,
Figure 2: Performance of SAM under 5 modes of use. Left: Performance of SAM across 28 segmentation tasks, with results ranked in descending order based on Mode 4. Oracle performance for each mode is indicated by the inverted triangle. Right: A summarized performance comparison of all five modes across all tasks, presented in a box and whisker plot format.
Figure 3: Visualization of SAM’s segmentation results in two different modes. Each dataset is shown in two sequential rows, with its name along the left side. For each dataset, it displays three examples from left to right, reflecting the 25th, 50th, and 75th percentiles of IoU across all images for that dataset. For each example, we visualize (top left) the raw image; (bottom left) the zoom-in image with the area of interest; (top right) the segmented results for mode 2: 1 point at each object region; (bottom right) the segmented results for mode 4: 1 box region at each object region. Additionally, the IoU is represented above each segmented result. Examples of all the datasets are shown in Appendix Figure 1-5.
resulting in faster inference time.
#### 2.2.3 Performance evaluation metric
For each dataset, we evaluated the accuracy of the masks that SAM and aforementioned methods generate given prompts, with respect to the "ground truth" mask annotations for the given dataset and task. In the quantitative evaluation, we always use the mask with the highest confidence generated by SAM for a given prompt. We used IoU as the evaluation metric, similar to as in SAM's original paper [18]. For datasets containing multiple types of objects, performance was reported independently for each type of object.
## 3 Results
### Performance of SAM for different modes of use for 28 tasks
The performance of SAM for our five prompting modes, introduced in Section 2.2.1, of use is shown in Figure 2. We draw several conclusions. First, SAMs performance varies widely across different datasets. It ranges from an impressive IoU of \(0.9118\) to a very poor IoU of \(0.1136\).
Comparing the performance for different prompting
Figure 4: Comparison of SAM with three other competing methods, namely RITM, SimpleClick, and Focalclick, under the 1-point prompt setting. The results are presented in the form of the difference between SAM and other methods (\(\Delta\) IoU), and ranked based on the descending order of the largest \(\Delta\) IoU for each task.
Figure 5: Comparison of SAM and other methods under an interactive prompt setting. (Left) it presents the average performance of SAM and other methods across all tasks with respect to the number of prompt changes. (Right) it shows the detailed performance of SAM over each task.
modes shows clear superiority of box prompts over point prompts. Moreover, as expected, prompts where each separate part of the object is indicated separately are generally superior to those where only one part is indicated, or all parts are outlined in one box. This was particularly pronounced for datasets where objects typically consist of more than one part. Following these two trends, Mode 4, where each part of an object is indicated by a separate box, showed the best performance with an average IoU of \(0.6542\).
Additionally, the oracle mode showed a moderate improvement over the default mode. The magnitude of this improvement was highly dependent on the dataset.
Figure 6: Examples of SAM’s prediction under the interactive prompt setting. For each dataset, we display the results from 1-point prompts to 9-point prompts, respectively. The positive prompts are represented as green stars, and the negative prompts are represented as red stars.
Figure 7: Visualizations of examples with ambiguity based on SAM; the 1st, 2nd, and 3rd confident predictions are shown sequentially.
Figure 3 shows examples of segmentations generated in prompting Mode 2 (a point for each object part) and Mode 4 (a box around each object part) for 4 selected datasets. For each dataset, we provide examples of SAM's segmentations in the 25th, 50th, and 75th percentile of IoU. Green contours represent the ground truth masks, red contours represent SAM's prediction, and teal points or boxes represent the prompts given to SAM. This figure illustrates a high variability of SAM's performance ranging from near-perfect for well-circumscribed objects with unambiguous prompts to very poor, particularly for objects with ambiguous prompts. A similar illustration for all datasets is provided in Appendix Section B.
### Comparing SAM to other interactive segmentation methods
We compare SAM to other interactive methods in the non-iterative prompting setting in Figure 4. SAM performed better than all other methods on \(24\) out of \(28\) tasks, with a dramatic improvement in performance for some of them. When used in oracle mode, SAM was better than all other methods for \(26\) out of \(28\) tasks. The average performance across different datasets was \(0.4595\) IoU for SAM, \(0.5137\) IoU for SAM in oracle mode, \(0.2240\) IoU for FocalClick, \(0.1910\) IoU for SimpleClick, and \(0.1322\) IoU for RITM. Note that SAM exhibits notably better overall performance than all other methods even though it is not used in its best-performing mode of using boxes as prompts, as we used point prompts to have a fair comparison between different methods. If SAM used Mode 3 (single box), it outperformed all other methods in all but one task. The average performance for Mode 3 was \(0.5891\) IoU.
### Performance of SAM and other methods for iterative segmentation
We show the average performances of SAM, RITM, SimpleClick, and FocalClick in Figure 5. The detailed performance of the three competing methods over 28 tasks under the interactive prompt setting is shown in Appendix Section C. As seen in the results above, for the scenario where prompts are provided prior to any interaction with the output of the models, SAM performs notably better than all other methods. However, when additional clicks are provided iteratively with the goal of refining the segmentations
Figure 8: (Top) the relative size of the object in each dataset; (Bottom) the object size vs. detection performance for mode 2 and mode 4 separately; we also show a regression fitted curve each.
returned by the models, the superiority of SAM diminishes, and it is surpassed by two other methods (SimpleClick and RITM) for five or more points provided by the users. This is due to the fact that SAM does not appear to draw hardly any benefits from additional information provided through the interactive points after two or three points have been provided. Figure 6 illustrates this phenomenon.
We also see that SAM, when given a single prompt, has difficulty segmenting objects with multiple non-contiguous regions. It is more likely to segment one contiguous region instead of trying to find additional semantically similar regions in the entire image. Therefore, in the scenario where multiple regions of interest exist for an object, additional prompt points for SAM can be beneficial if they target additional regions, but beyond that, the benefit of further prompt points is negligible and in some cases such additional input is detrimental.
### Performance of SAM in the presence of ambiguity of prompts
When applying SAM on medical images, we found a consistent tendency of SAM in interpreting the point prompts. As shown in Figure 7, the SAM's highest-confidence map predictions tend to look similar to results generated by region-growing-based algorithms. In most cases, a connected region with similar intensity in the image bounded by regions with dissimilar intensities would be segmented. On the other hand, we found masks predictions generated with lower confidence scores may expand the highest-confidence predictions and tend to have more variety in intensity/texture.
### Performance of SAM for objects of different sizes
In Figure 8(a) we show how object size relates to the performance of SAM. We describe object size as the ratio of the number of pixels in the object to the total in the image. We found object size to vary broadly across our different datasets, by up to over two orders of magnitude. Some datasets also showed high variability of object size within the dataset itself (e.g., kidneys in ultrasound images) while others showed relative consistency of object size (such as bones in hip X-ray images).
Figures 8(b) and 8(c) show the relationship between the average object size in a dataset and the performance of SAM. While the correlations are low, there is a trend towards a higher performance of SAM for larger objects. Note that the correlation analysis looks at average object sizes in the image and does not consider the variation of object sizes in the individual datasets.
### Segment-everything mode for medical images
In Figure 9, we provide an example of using SAM's "segment everything mode" on medical images. Segment-everything mode uses a dense evenly-spaced mesh of prompt points, designed to direct SAM to segment the image into many different regions. We find that this mode provides mixed usefulness and that the results are somewhat dependent on the number of prompts. While imperfect, this setup could potentially be useful in some applications,, as a "starting point" for creating segmentation annotations for different objects in a new image. Selecting masks from the output of segment everything mode is effectively a special case of the prompting mode 2 in our experiments except with additional post-processing. We hence would not elaborate on this mode in our paper.
## 4 Conclusions and discussion
In this study, we evaluated the new Segment Anything Model for the segmentation of medical images. We reached the following conclusions:
* SAM's accuracy for zero-shot medical image segmentation is moderate on average and varies significantly across different datasets and different images within a dataset.
Figure 9: Examples of segment everything mode. For each example, we sampled a different number of grid points at each side as \(2^{5}\),\(2^{6}\) and \(2^{7}\).
* The model performs best with box prompts, particularly when one box is provided for each separate part of the object of interest.
* SAM outperforms RITM, SimpleClick, and FocalClick in the vast majority of the evaluated settings where a single non-iterative prompt point is provided.
* In the setting where multiple iteratively-refined point prompts are provided, SAM obtains very limited benefit from additional point prompts, except for objects with multiple parts. On the other hand, the other algorithms improve notably with additional point prompts, to the level of surpassing SAM's performance. However, the point prompting modes are inferior to SAM's box prompting modes.
* We find a small but non-statistically significant correlation between the average object size in a dataset and SAM performance.
One of the contributions of our study is that we identified five different modes of use for interactive segmentation methods. This is of particular importance for models which have multiple components, a common feature in medical imaging. These modes also showed different performances in such scenarios. While these modes demonstrate the variety of uses, future work could focus on prompt engineering, both non-iterative and iterative, which could potentially reach even higher performance.
The segment anything model, associated preprint, and the code illustrate the strengths of open science and the activity of machine learning and machine learning in medical imaging communities. Since we made the first version of this manuscript available, approximately 2 weeks after the release of the SAM paper (or even some before our release), multiple preprints have appeared evaluating SAM in broadly understood medical imaging and radiological imaging [8, 15]. Some preprints already showed extensions of SAM to medical imaging [12, 23] and one paper showed integration of SAM into 3D Slicer [22]. This demonstrates a high likelihood that SAM will become an important part of image segmentation in medical imaging.
Overall, SAM shows promise for use in medical images, as long as suitable prompting strategies are used for the dataset and task of choice. Future work will include the development of different ways to adapt it to construct medical imaging-specific models as well as extend to 3D segmentation.
## Acknowledgments
Research reported in this publication was supported by the National Institute Of Biomedical Imaging And Bioengineering of the National Institutes of Health under Award Number R01EB031575 and by the National Heart Lung and Blood Institute of the National Institutes of Health under Award Number R44HL152825. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
|
2307.09516 | Fifth forces from QCD axions scale differently | We reexamine the low-energy potential for a macroscopic fifth force generated
from the exchange of two axions. The shift-symmetry of the linear axion
interactions leads to a potential falling off as $V(r) \sim 1/r^5$. We find
that in the case of the QCD axion higher-order terms in the Lagrangian break
the shift symmetry and lead to the dominant contribution to the potential
scaling as $V(r) \sim 1/r^3$. These terms are generated by the same physics
responsible for the axion mass and therefore the new contributions to the
potential induce a different force for external nucleons and leptons. We
demonstrate how this result affects the sensitivity of searches for new
long-range forces. | Martin Bauer, Guillaume Rostagni | 2023-07-18T18:00:33Z | http://arxiv.org/abs/2307.09516v1 | # Fifth forces from QCD axions scale differently
###### Abstract
We reexamine the low-energy potential for a macroscopic fifth force generated from the exchange of two axions. The shift-symmetry of the linear axion interactions leads to a potential falling off as \(V(r)\sim 1/r^{5}\). We find that in the case of the QCD axion higher-order terms in the Lagrangian break the shift symmetry and lead to the dominant contribution to the potential scaling as \(V(r)\sim 1/r^{3}\). These terms are generated by the same physics responsible for the axion mass and therefore the new contributions to the potential induce a different force for external nucleons and leptons. We demonstrate how this result affects the sensitivity of searches for new long-range forces.
+
Footnote †: preprint: IPPP/23/35
## I Introduction
Fifth forces have been identified as potential probes for axions very early [1]. The early focus was on the exchange on spin-dependent interactions, which are the consequence of the exchange of single light CP-odd scalars. The leading contribution to a spin-independent long-range force mediated by axions is generated by axion-pair exchange at one loop [3] and therefore similarly suppressed as the 'neutrino-force' generated by the exchange of neutrino pairs [4; 5]. The potential corresponding to the exchange of a pair of neutrinos scales as \(V(r)\sim 1/r^{5}\), as does the potential generated by the exchange of pairs of massless axions [3; 6]. In contrast, the exchange of pairs of pseudoscalars leads to a non-relativistic potential scaling as \(V(r)\sim 1/r^{3}\), whereas the potential for an axion-Higgs portal scales as \(V(r)\sim 1/r^{7}\)[7; 8]. The difference between the potentials induced by pseudoscalars and axions is a consequence of the manifest shift-symmetry that protects all linear axion interactions and have been discussed already in the case of the pion in very early literature [9; 10; 11; 12].
In this letter we show that in the case of the QCD-axion the dominant contribution to the potential is generated by the same physics responsible for the axion mass and that these contributions generate a \(V(r)\sim 1/r^{3}\) potential even though they are induced by higher-order operators in the effective field theory (EFT) expansion in the axion decay constant. Since the axion mass is generated by strong dynamics these additional contributions to the low-energy potential only occur for external hadrons. Axion-induced forces between leptons as well as between hadrons and leptons are substantially weaker. This could allow to directly measure the contribution to the axion mass from the chiral anomaly by comparing different searches for fifth forces.
There are several ways to search for the effects of a new, macroscopic force including searches with Cavendish-type experiments [13], searches for new forces in atoms and molecules [14], measurements of the effective Casimir pressure [15; 16] and experiments specifically designed to suppress the Casimir force [17]. We introduce the different contributions to axion interactions at low energy in Section II, derive the potential for axions including the new contributions from high-order operators in Section III, and demonstrate the effect of the new contribution for the Casimir-less experiment [17] in Section IV.
## II Two axion interactions
The Lagrangian for an axion interacting with fermions can be written in a form that is explicitly shift invariant apart from the axion mass \(m_{a}\),
\[\mathcal{L}=\frac{1}{2}(\partial a)^{2}-\frac{m_{a}^{2}}{2}a^{2}-\sum_{\psi} \frac{c_{\psi}}{2}\frac{\partial_{\mu}a}{f}\bar{\psi}\gamma_{5}\gamma^{\mu} \psi\,, \tag{1}\]
up to linear order in the axion field over the axion decay constant. This Lagrangian can be rewritten by using the divergence of the axial-vector current
\[\mathcal{L} =\frac{1}{2}(\partial a)^{2}-\frac{m_{a}^{2}}{2}a^{2}\] \[\quad+\sum_{\psi}c_{\psi}im_{\psi}\frac{a}{f}\bar{\psi}\gamma_{5 }\psi-c_{\psi}\frac{\alpha Q_{\psi}^{2}}{4\pi}\frac{a}{f}F_{\mu\nu}\tilde{F}^{ \mu\nu}, \tag{2}\]
where we assume the fermions only carry electric charge, otherwise there would be additional couplings to gauge bosons. Even though (1) and (2) are both linear in \(a/f\) they lead to contradicting results for processes with more than one axion involved. The reason is that the divergence of the axial-vector current or equivalently the equations of motion for the axion only capture terms up to linear order in the fields. A consistent rescaling of the fermion fields generates higher order terms in \(a/f\) that precisely account for the difference between results obtained from (1) and (2) (details are given in Appendix A). The effects can be accounted for by modifying the anomaly equation for the divergence of the axial-vector current
\[\frac{c_{\psi}}{2}\frac{\partial_{\mu}a}{f}\bar{\psi}\gamma_{5} \gamma^{\mu}\psi =-c_{\psi}im_{\psi}\frac{a}{f}\bar{\psi}\gamma_{5}\psi+c_{\psi}^{ 2}m_{\psi}\frac{a^{2}}{f^{2}}\bar{\psi}\psi\] \[\quad+c_{\psi}\frac{\alpha Q_{\psi}^{2}}{4\pi}\frac{a}{f}F_{\mu \nu}\tilde{F}^{\mu\nu}+\mathcal{O}\Big{(}\frac{a^{3}}{f^{3}}\Big{)}\,. \tag{3}\]
To quadratic order in the axion fields the inclusion of the additional operator in (2) restores the results obtained using the shift invariant coupling. However the shift invariance in (1) is explicitly broken by the presence of an axion mass. Treating \(m_{a}^{2}\) as the only spurion that breaks the shift invariance suggests the existence of higher order shift symmetry breaking operators
\[\mathcal{L}_{\rm{ssb}}\ni\sum_{\psi}c_{m}\frac{m_{a}^{2}a^{2}}{f^{3}}\bar{\psi} \psi\,. \tag{4}\]
These operators spoil the cancellation in (3). In general it is a conservative assumption that the spurion is given by \(m_{a}^{2}\), because the source of shift symmetry breaking responsible for generating the axion mass can induce higher-order operators that are less suppressed than (4). An example of such an enhancement is the coupling of the QCD axion to nucleons. The shift symmetry is broken by the presence of light quark masses and the QCD confinement scale. Interactions between the QCD axion and nucleons are therefore shift-invariant or suppressed by these spurious. At leading order the operators of the two-flavor chiral Lagrangian coupling baryons to pions and axions are
\[\mathcal{L}^{(1)}=\bar{N}\left(i\not{D}-m_{N}+\frac{g_{A}}{2}\gamma ^{\mu}\gamma^{5}u_{\mu}+g_{0}\gamma^{\mu}\gamma^{5}a_{\mu}^{(s)}\right)N\,. \tag{5}\]
Couplings to the axion enter via the covariant derivative and the vielbeins \(u_{\mu}\) and \(a_{\mu}^{(s)}\), which both contain the axion in an explicitly shift-invariant way [18; 19].
At second order there are four operators
\[\mathcal{L}^{(2)} =c_{1}\text{tr}[\chi_{+}]\bar{N}N-\frac{c_{2}}{4m^{2}}\text{tr}[ u_{\mu}u_{\nu}](\bar{N}D^{\mu}D^{\nu}N+\text{h.c.})\] \[+\frac{c_{3}}{2}\text{tr}[u_{\mu}u^{\mu}]\bar{N}N-\frac{c_{4}}{4} \bar{N}\gamma^{\mu}\gamma^{\nu}[u_{\mu},u_{\nu}]N\,. \tag{6}\]
All operators in \(\mathcal{L}^{(2)}\) are shift-invariant apart from the operator with coefficient \(c_{1}\), which contains a shift-symmetry breaking interaction
\[c_{1}\text{tr}[\chi_{+}]\bar{N}N=c_{N}\frac{a^{2}}{f^{2}}\bar{N}N+\ldots \tag{7}\]
The axion field enters via
\[\chi_{+} =2B_{0}\big{(}\xi^{\dagger}m_{q}(a)\xi^{\dagger}+\xi m_{q}^{ \dagger}(a)\xi\big{)}\,, \tag{8}\] \[m_{q}(a) =e^{-i\kappa_{q}\frac{a}{f^{2}}(2c_{GG}+c_{u}+c_{d})}m_{q}e^{-i \kappa_{q}\frac{a}{f^{2}}(2c_{GG}+c_{u}+c_{d})}, \tag{9}\]
where \(\xi=\exp(i/\sqrt{2}\,\Pi/f_{\pi})\) contains the pion fields, the quark masses read \(m_{q}=\text{diag}(m_{u},m_{d})\), \(\kappa_{q}=\text{diag}(\kappa_{u},\kappa_{d})\) are unphysical parameters subject to the constraint \(\kappa_{u}+\kappa_{d}=1\), and \(c_{GG}\) denotes the axion coupling to gluons
\[\mathcal{L}\ni c_{GG}\frac{\alpha_{s}}{4\pi}\frac{a}{f}G_{\mu\nu}\tilde{G}^{ \mu\nu}\,. \tag{10}\]
After rotating into the mass eigenbasis and taking into account contributions from pion mixing one can write the leading terms for the amplitude of axions coupled to nucleons from (5) and (6) as
\[i\mathcal{A}(N(k^{\prime})\to N(k)+a(q))=-\frac{g_{N}}{4f}\bar{u}_{N}(k^{ \prime})\not{q}\gamma_{5}u_{N}(k), \tag{11}\] \[i\mathcal{A}(N(k^{\prime})\to N(k)+2a(q/2))=-\frac{c_{N}}{f^{2}} \bar{u}_{N}(k^{\prime})u_{N}(k), \tag{12}\]
respectively. Here, the couplings are defined for protons and neutrons \(N=p,n\), as
\[g_{p/n} =g_{0}(c_{u}+c_{d}+2c_{GG})\] \[\pm g_{A}\frac{1}{1-\tau_{a}^{2}}\left(c_{u}-c_{d}+2c_{GG}\frac{m_ {d}-m_{u}}{m_{u}+m_{d}}\right)\,,\] \[c_{N} =c_{1}\frac{m_{\pi}^{2}}{2}\frac{4c_{GG}^{2}(1-\tau_{a})^{2}+(c_ {u}-c_{d})^{2}\tau_{a}^{2}}{(1-\tau_{a})^{2}}\,, \tag{13}\]
where \(\tau_{a}=m_{a}^{2}/m_{\pi}^{2}\). The iso-scalar and iso-vector coupling constants are determined using lattice gauge theory [20; 21]\(g_{0}=0.440(44)\) and experimentally extracted from nucleon beta decay [22]\(g_{A}=1.2754(13)\), respectively. The low energy coefficients \(c_{1},c_{2},c_{3},c_{4}\) can be found in [23] and we use \(c_{1}=-1.26(14)\) GeV\({}^{-1}\) here. The axion couplings to gluons and quarks in (13) are to be evaluated at the QCD scale [24; 25].
Expanding \(c_{N}\) in small axion masses and using the expression for the QCD axion mass with \(m_{u}=m_{d}\) one can write the coefficient in (4) as \(c_{m}=-8c_{1}f^{3}/f_{\pi}^{2}\), which corresponds to a substantial enhancement compared with the naive assumption.
Since the axion has a potential, in principle any quadratic interaction can also give rise to a linear spin-independent interaction _if_ the axion vacuum expectation value doesn't vanish. The Vafa-Witten theorem guarantees that \(\langle a\rangle=0\) in vacuum [26], but in a high density environment the potential is modified and \(\langle a\rangle=a_{0}\neq 0\), leading to long-range forces for large, dense objects such as neutron stars [27; 28]. Linear interactions proportional to the theta angle are strongly suppressed [29; 30; 31]. For the remainder of this paper we focus on the spin-independent force induced by the exchange of axion pairs. The importance of the shift-symmetry breaking operator has been pointed out previously in the context of coherent axion-nucleon scattering [32].
## III The axion force
In the following we will derive the potential for the spin-independent force induced by the exchange of a pair of axions [3; 3; 7]. We show explicitly that the contributions from the linear and quadratic axion interactions in (3) cancel and that the shift-symmetry breaking interaction induced by (7) spoils this cancellation and provides the most important contribution to the potential. We obtain the non-relativistic potential for the exchange of two axions can be obtained by taking the discontinuities in the scattering amplitude in the non-relativistic limit and
perform the Fourier transform. Feynman diagrams for the two-axion exchange are shown in Fig. 1. In the basis with derivative axion-interactions (1) only the diagrams \(a)\) and \(b)\) contribute. We instead use the non-derivative basis for which one needs to include diagrams \(c)\), \(d)\) and \(e)\), taking into account the quadratic axion coupling in (3) to obtain a consistent result. Operators breaking the shift invariance generate additional contributions to \(c),d)\) and \(e)\).
In the heavy fermion limit and retaining only terms odd in the momentum exchanged \(\sqrt{t}\) in the amplitudes1 we obtain the following spin-independent contributions at next-to-leading order in \(m_{a}\) for diagrams 1\(a)\) and \(b)\)
Footnote 1: Terms even in \(\sqrt{t}\) are cancelled by the contribution from the iterated single-axion exchange potential in the massless pseudoscalar limit [33], we assume this is still the case for a massive pseudoscalar.
\[V_{ab}(r) =-\frac{c_{\psi_{1}}^{2}c_{\psi_{2}}^{2}}{64\pi^{3}f^{4}}m_{\psi_ {1}}m_{\psi_{2}}\bigg{\{}\frac{1}{r^{3}}x_{a}K_{1}(x_{a})\] \[+\left(\frac{1}{m_{\psi_{1}}^{2}}+\frac{1}{m_{\psi_{2}}^{2}}- \frac{1}{2m_{\psi_{1}}m_{\psi_{2}}}\right)\frac{3}{r^{5}}\] \[\quad\times\left[\left(x_{a}+\frac{x_{a}^{3}}{6}\right)K_{1}(x_{ a})+\frac{x_{a}^{2}}{2}K_{0}(x_{a})\right]\bigg{\}} \tag{14}\]
in which we define the dimensionless variable \(x_{a}=2m_{a}r\) and \(K_{0}(x_{a})\) and \(K_{1}(x_{a})\) are modified Bessel functions of the second kind. In the case of a pseudoscalar particles described by the linear coupling in (2) the potential (14) would be the full potential and one recovers the leading term
\[V_{ab}(r)=-\frac{c_{\psi_{1}}^{2}c_{\psi_{2}}^{2}}{64\pi^{3}f^{4}}\frac{m_{ \psi_{1}}m_{\psi_{2}}}{r^{3}}+O\left(\frac{m_{a}^{2}}{r^{3}},\frac{1}{r^{5}} \right)\,. \tag{15}\]
The contributions from diagrams \(c)\) and \(d)\) are given by
\[V_{c}(r) =\frac{c_{\psi_{1}}^{2}c_{\psi_{2}}^{2}}{64\pi^{3}f^{4}}m_{\psi_{ 1}}m_{\psi_{2}}\bigg{\{}\frac{1}{r^{3}}x_{a}K_{1}(x_{a})\] \[+\frac{1}{m_{\psi_{2}}^{2}}\frac{3}{r^{5}}\bigg{[}\left(x_{a}+ \frac{x_{a}^{3}}{6}\right)K_{1}(x_{a})+\frac{x_{a}^{2}}{2}K_{0}(x_{a})\bigg{]} \bigg{\}}\,,\] \[V_{d}(r) =V_{c}(r)\quad\text{with}\quad m_{\psi_{1}}\leftrightarrow m_{ \psi_{2}} \tag{16}\]
whereas diagram \(e)\) gives
\[V_{e}(r)=-\frac{c_{\psi_{1}}^{2}c_{\psi_{2}}^{2}}{64\pi^{3}f^{4}}m_{\psi_{1}}m _{\psi_{2}}\frac{1}{r^{3}}x_{a}K_{1}(x_{a})\,. \tag{17}\]
In the sum of these contributions the terms proportional to \(r^{-3}\) cancel out and we are left with
\[V(r) =V_{ab}(r)+V_{c}(r)+V_{d}(r)+V_{e}(r) \tag{18}\] \[=\frac{3c_{\psi_{1}}^{2}c_{\psi_{2}}^{2}}{128\pi^{3}f^{4}}\frac{ 1}{r^{5}}\bigg{[}\left(x_{a}+\frac{x_{a}^{3}}{6}\right)K_{1}(x_{a})+\frac{x_{ a}^{2}}{2}K_{0}(x_{a})\bigg{]}\]
Expanding this result around \(x_{a}=0\) we recover the familiar \(r^{-5}\) potential
\[V(r)=\frac{3c_{\psi_{1}}^{2}c_{\psi_{2}}^{2}}{128\pi^{3}f^{4}}\left[\frac{1}{ r^{5}}-\frac{1}{3}\frac{m_{a}^{2}}{r^{3}}+O(m_{a}^{4})\right]\,. \tag{19}\]
In the case of axions with an explicit mass term the potential (18) is proportional to \(V(r)\sim 1/r^{5}\) up to terms suppressed by \(m_{a}^{2}<1/r^{2}\) as a result of the shift symmetry of the Lagrangian. Additional contributions from shift-symmetry breaking operators (4) are suppressed by \(\sim 1/f^{6}\). However, in the case of the QCD axion there are additional terms at the same order in \(1/f^{4}\) induced by the quadratic interaction terms (7) proportional to the shift-symmetry breaking spurion responsible for the axion mass. Evaluated for a potential between two nucleons \(N_{1}\) and \(N_{2}\) the additional diagrams generate the potential
\[V_{\text{sp.}}(r) =\frac{1}{64\pi^{3}f^{4}}\bigg{\{}-c_{N_{1}}c_{N_{2}}\frac{1}{r^{ 3}}x_{a}K_{1}(x_{a})\] \[+\frac{3}{4}\left[c_{N_{1}}g_{N_{2}}^{2}\frac{1}{m_{N_{2}}}+c_{N_ {2}}g_{N_{1}}^{2}\frac{1}{m_{N_{1}}}\right]\frac{1}{r^{5}}\] \[\quad\times\bigg{[}\left(x_{a}+\frac{x_{a}^{3}}{6}\right)K_{1}(x _{a})+\frac{x_{a}^{2}}{2}K_{0}(x_{a})\bigg{]}\bigg{\}}\] \[=-\frac{c_{N_{1}}c_{N_{2}}}{64\pi^{3}f^{4}}\frac{1}{r^{3}}+O \left(\frac{m_{a}^{2}}{r^{3}},\frac{1}{r^{5}}\right)\,, \tag{20}\]
where \(g_{N}\) and \(c_{N}\) are defined in (13). The contributions from the quadratic axion interaction induced by the spurion dominate over the contribution from the interaction induced by shift-invariant operators even though the latter appear at leading order in the EFT expansion. Note that this is different from the corrections in the expansion (19) which are suppressed by the axion mass, which in the case of the QCD axion scales as \(m_{a}^{2}\propto f_{a}^{4}/f^{2}\)
Figure 1: Diagrams contributing to the potential generated by two-axion exchange from linear interactions \(a)\) and \(b)\), from linear and quadratic interactions \(c)\) and \(d)\) and from purely quadratic interactions in \(e)\).
While (18) results in a repulsive potential, (20) can in principle have either sign, but is universally attractive for a QCD axion only interacting with gluons. The effect of the shift-symmetry breaking interaction -to leading order- doesn't affect leptons, because Feynman diagrams \(c)\) and \(d)\) in Fig. 1 don't contribute to the leading term in (20).
## IV Fifth force constraints on QCD axions
In the following we demonstrate the effect of the shift-symmetry breaking interaction on the sensitivity of experiments searching for a fifth force. We consider the simplest QCD axion model with a single coupling to gluons described by the Wilson coefficient \(c_{GG}\) keeping it's mass \(m_{a}\) a free parameter. Bounds from atomic and molecular spectroscopy aren't substantially changed by the inclusion of the higher order operators (6) because the leading effects only affect nucleon-nucleon interactions. We instead consider experiments probing macroscopic, spin-independent forces such as the one described in [17] in which the difference in the force between a sphere and a plate of two different materials is probed, which minimises the contribution from the Casimir effect. The accuracy in measuring this force (or absence thereof) has been used in [34] to obtain the best limits on the pseudoscalar-to-nucleon coupling in the meV-eV range for an experiment of this type.
The corresponding differential force between a sphere of radius \(R\) and a disk with thickness \(D\) with Au and Si coating placed at a distance \(\ell\) from the sphere reads
\[\Delta F(\ell) = 2\pi C_{s}\left[C_{\rm Au}-C_{\rm Si}\right]\!\int_{\ell}^{2R+ \ell}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Conclusions
We identify the dominant contribution to the low-energy potential for the macroscopic fifth force incuded by axion pair exchange for axions that -like the QCD axion- interact with gluons and thus obtain part of their mass from the chiral anomaly. This contribution arises from higher-order operators of the axion Lagrangian that would be naively expected to produce subleading effects. We show explicitly that these operators not only generate the most important contribution to the low-energy potential but result in a scaling of the non-relativistic potential \(V(r)\sim 1/r^{3}\) as opposed to the leading term \(V(r)\sim 1/r^{5}\) expected from derivative interactions. Moreover, since to the QCD axion mass is generated via strong dynamics, this new contribution is only present for interactions between nucleons and so the nature of the shift-symmetry breaking for an axion can be probed via the comparison of different searches for fifth forces. We demonstrate the impact at the example of a Casimir-less fifth-force experiment and find an improved sensitivity of almost 5 orders of magnitude.
## Appendix A
The axion interaction with chiral fermions \(\psi=\psi_{L}+\psi_{R}\) in the UV theory can be derived from the Lagrangian
\[\mathcal{L}_{\text{UV}}=\frac{1}{2}\bar{\psi}i\overset{\leftrightarrow}{ \not{\partial}}\psi-y\bar{\psi}_{L}S\psi_{R}+h.c \tag{10}\]
after the scalar\(S\) developes a vacuum expectation value \(f\) such that the fermion mass is given by \(m=yf\) and
\[S=(f+s)\exp\left(2i\frac{a}{f}\right) \tag{11}\]
with a scalar field \(s\) and the Goldstone boson \(a\). Ignoring interactions of the scalar mode the Lagrangian reads
\[\mathcal{L}=\frac{1}{2}\bar{\psi}i\not{\partial}\psi-m\bar{\psi}_{L}\exp{ \left(2i\frac{a}{f}\right)}\psi_{R}+h.c \tag{12}\]
For small \(a/f\) one can expand the exponent and obtains interactions
\[\mathcal{L}=\frac{1}{2}\bar{\psi}i\overset{\leftrightarrow}{\not{\partial}} \psi-m\left(2i\frac{a}{f}-2\frac{a^{2}}{f^{2}}+\mathcal{O}\left(\frac{a^{3}}{ f^{3}}\right)\right)\bar{\psi}_{L}\psi_{R}+h.c \tag{13}\]
Alternatively one can rescale the fermion fields
\[\psi_{L}\rightarrow\exp\left(i\frac{a}{f}\right)\psi_{L},\quad\psi_{R} \rightarrow\exp\left(-i\frac{a}{f}\right)\psi_{R}\,, \tag{14}\]
and find instead the explicitly shift invariant Lagrangian
\[\mathcal{L}=\frac{1}{2}\bar{\psi}i\overset{\leftrightarrow}{\not{\partial}} \psi-\frac{a}{f}\bar{\psi}\gamma_{5}\gamma^{\mu}\psi-m\bar{\psi}_{L}\psi_{R}+h.c \tag{15}\]
This leads to a non-relativistic potential scaling like \(1/r^{5}\). Often this interaction term is rewritten using the equation of motion of the fermion fields
\[\mathcal{L} =\frac{1}{2}\bar{\psi}i\overset{\leftrightarrow}{\not{\partial}} \psi-\frac{a}{f}\Big{(}\bar{\psi}\gamma_{5}\overset{\rightarrow}{\not{ \partial}}\psi+\bar{\psi}\gamma_{5}\overset{\leftarrow}{\not{\partial}}\psi \Big{)}-m\bar{\psi}_{L}\psi_{R}+h.c\] \[=\frac{1}{2}\bar{\psi}i\overset{\leftrightarrow}{\not{\partial}} \psi+2im\frac{a}{f}\bar{\psi}\gamma_{5}\psi-m\bar{\psi}_{L}\psi_{R}+h.c \tag{16}\]
This form of the Lagrangian leads to different Feynman rules and for example the non-relativistic potential from two axion exchange has a \(1/r^{3}\) dependence, because higher-order terms aren't captured by the naive application of the equations of motion. Instead, we rescale the fermion fields with field dependent factors that are linear in the axion field \(L,R\propto a\) and factors that are quadratic in the axion fields \(N,S\propto a^{2}\) such that
\[\psi_{L}+L\psi_{L}+N\psi_{L}\,, \tag{17}\] \[\psi_{R}+R\psi_{R}+S\psi_{R}\,. \tag{18}\]
We then find to linear order in \(a\):
\[\mathcal{L}(a) =\frac{1}{2}\big{(}L+L^{\dagger}\big{)}\bar{\psi}_{L}i\overset{ \leftrightarrow}{\not{\partial}}\psi_{L}+\frac{1}{2}(\partial_{\mu}L-\partial _{\mu}L^{\dagger})\bar{\psi}_{L}i\gamma^{\mu}\psi_{L}\] \[+\frac{1}{2}\big{(}R+R^{\dagger}\big{)}\bar{\psi}_{R}i\overset{ \leftrightarrow}{\not{\partial}}\psi_{R}+\frac{1}{2}(\partial_{\mu}R-\partial _{\mu}R^{\dagger})\bar{\psi}_{R}i\gamma^{\mu}\psi_{R}\] \[-\frac{a}{f}\left(\bar{\psi}\gamma_{5}\overset{\rightarrow}{ \not{\partial}}\psi+\bar{\psi}\gamma_{5}\overset{\leftarrow}{\not{\partial}} \psi\right)\] \[-m(L^{\dagger}+R)\bar{\psi}_{L}\psi_{R}-m(L+R^{\dagger})\bar{\psi} _{R}\psi_{L}\,. \tag{19}\]
Applying the equations of motions for the axion is equivalent to the choice \(L=ia/f\) and \(R=-ia/f\). For this choice the first term vanishes and the terms in line 2 and 3 in (A) cancel and the remaining term reads
\[\mathcal{L}(a)=2mi\frac{a}{f}\psi\gamma_{5}\psi\,, \tag{20}\]
in agreement with (16). Now we consistently shift the terms quadratic in \(a\) and find
\[\mathcal{L}(a^{2}) =\frac{1}{2}\big{(}LL^{\dagger}+N+N^{\dagger}\big{)}\bar{\psi}_{L} i\overset{\leftrightarrow}{\not{\partial}}\psi_{L}\] \[+\frac{1}{2}(L^{\dagger}\partial_{\mu}L-L\partial_{\mu}L^{ \dagger}+\partial_{\mu}N-\partial_{\mu}N^{\dagger})\bar{\psi}_{L}i\gamma^{\mu} \psi_{L}\] \[+\frac{1}{2}\big{(}RR^{\dagger}+S+S^{\dagger}\big{)}\bar{\psi}_{R }i\overset{\leftrightarrow}{\not{\partial}}\psi_{R}\] \[+\frac{1}{2}(R^{\dagger}\partial_{\mu}R-R\partial_{\mu}R^{ \dagger}+\partial_{\mu}S-\partial_{\mu}S^{\dagger})\bar{\psi}_{R}i\gamma^{\mu} \psi_{R}\] \[+\frac{a}{f}\Big{[}(L+L^{\dagger})\Big{(}\bar{\psi}_{L} \overset{\rightarrow}{\not{\partial}}\psi_{L}+\bar{\psi}_{L}\overset{ \leftarrow}{\not{\partial}}\psi_{L}\Big{)}\] \[+(\partial_{\mu}L+\partial_{\mu}L^{\dagger})\bar{\psi}_{L}\gamma^ {\mu}\psi_{L} \tag{21}\] \[-(R+R^{\dagger})\Big{(}\bar{\psi}_{R}\overset{\rightarrow}{\not{ \partial}}\psi_{R}+\bar{\psi}_{R}\overset{\leftarrow}{\not{\partial}}\psi_{R} \Big{)}\] \[-(\partial_{\mu}R+\partial_{\mu}R^{\dagger})\bar{\psi}_{R}\gamma^{ \mu}\psi_{R}\Big{]}\] \[-m(LR^{\dagger}+N^{\dagger}+S)\bar{\psi}_{R}\psi_{L}-m(LR^{ \dagger}+S^{\dagger}+N)\bar{\psi}_{R}\psi_{L}\,.\]
Choosing \(N\) and \(S\) to be real eliminates every term apart from the first, third and last line. Setting
\[2N=2S=-L^{\dagger}L=-R^{\dagger}R=-\frac{1}{2}\frac{a^{2}}{f^{2}} \tag{30}\]
cancels the terms with derivative interactions and yields
\[\mathcal{L}(a^{2})=2m\frac{a^{2}}{f^{2}}\bar{\psi}\psi\,. \tag{31}\]
Including this operator in the calculation of the non-relativistic potential cancels terms that scale as \(1/r^{3}\) and reproduces the \(1/r^{5}\) potential obtained from the explicitly scale invariant form (29).
## Appendix B
Integrating (21) for the potential (18) and (20) yields respectively
\[\Delta F(\ell) =\frac{3}{64\pi m_{a}}\frac{1}{f^{4}}|C_{\text{Au}}-C_{\text{Si}}| \tag{32}\] \[\times\int_{1}^{\infty}\mathrm{d}u\frac{\sqrt{u^{2}-1}}{u^{3}} \sum_{l}C_{l}\Psi(m_{a}u)\,,\] \[\Delta F_{\text{sp}}(\ell) =\frac{1}{32\pi m_{a}}\frac{1}{f^{4}}|C_{\text{Au}}-C_{\text{Si}}| \int_{1}^{\infty}\mathrm{d}u\] (33) \[\times\frac{\sqrt{u^{2}-1}}{u^{3}}e^{-2m_{a}u\ell}\left(1-e^{-2m_ {a}uD}\right)X(m_{a}u)\]
where \(\ell\) is the separation between the sphere and the surface of the disc with thickness \(D\). The coefficients \(C_{\text{X}}\) are given by (22) and (23), respectively, and the function \(X(x)\) is given by eq (11) in [34]. The experiment measured the differential force between either Au or Si sectors of a rotating disc.
The sphere is made of sapphire (sa.) coated with Au and Cr, and the sum in (33) is
\[\sum_{l}C_{l}\Psi(x) =C_{\text{Au}}\Psi\left(x;R,r\right)\] \[+\left(C_{\text{Cr}}-C_{\text{Au}}\right)\Psi\left(x;R-d_{\text{ Au}},r+d_{\text{Au}}\right)\] \[+\left(C_{\text{sa}}-C_{\text{Cr}}\right)\] \[\times\Psi\left(x;R-d_{\text{Au}}-d_{\text{Cr}},r+d_{\text{Au}}+ d_{\text{Cr}}\right) \tag{34}\]
with \(R\) the radius of the sphere, \(d_{\text{Au}}\) and \(d_{\text{Cr}}\) the thicknesses of the gold and chrome coatings, and the function
\[\Psi(x;R_{l},r_{l}) =8x^{4}\int_{r_{l}}^{2R_{l}+r_{l}}\mathrm{d}z\ \left[R_{l}^{2}-(R_{l}+r_{l}-z)^{2}\right]\] \[\times\Bigg{\{}-\frac{e^{-2xz}}{2xz}\left(1-\frac{z}{D+z}e^{-2xD}\right)\] \[+\mathrm{Ei}\left[-2x(D+z)\right]-\mathrm{Ei}\left[-2xz\right] \Bigg{\}}\,, \tag{35}\]
where \(\mathrm{Ei}(x)\) is the exponential integral function.
|
2306.07612 | An Evaluation of Multi-Component Weft-Knitted Twill Structures for
Sensing Tensile Force | We present multi-component knitted resistive sensors for tracking tensile
force. The knits were fabricated using a Twill structure, which is a simple
pattern featuring anisotropic elastic behavior, providing high stability along
course-direction. Our sensors are made of two commercially available conductive
yarn types, with highly different linear resistance. We present a variety of
integration methods using the proposed Twill structure, all of which can be
easily replicated on a two-bed weft-knitting machine. We evaluate the
performance of the resulting sensor variations, with respect to consistency,
hysteresis, short-term and long-term relaxation and drift, among other metrics.
We found that particulars of the knit's loop composition have a crucial effect
on the consistency of the sensor readings. Furthermore, we show that knitting
resistive yarn more tightly than the substrate material gives superior results
and that improving elastic recoil by adding Lycra to the supporting substrate
can considerably improve performance. | Roland Aigner, Frank Hepper | 2023-06-13T08:12:00Z | http://arxiv.org/abs/2306.07612v1 | # An Evaluation of Multi-Component Weft-Knitted Twil Structures for Sensing Tensile Force
###### Abstract
We present multi-component knitted resistive sensors for tracking tensile force. The knits were fabricated using a Twill structure, which is a simple pattern featuring anisotropic elastic behavior, providing high stability along course-direction. Our sensors are made of two commercially available conductive yarn types, with highly different linear resistance. We present a variety of integration methods using the proposed Twill structure, all of which can be easily replicated on a two-bed weft-knitting machine. We evaluate the performance of the resulting sensor variations, with respect to consistency, hysteresis, short-term and long-term relaxation and drift, among other metrics. We found that particulars of the knit's loop composition have a crucial effect on the consistency of the sensor readings. Furthermore, we show that knitting resistive yarn more tightly than the substrate material gives superior results and that improving elastic recoil by adding Lycra to the supporting substrate can considerably improve performance.
## Glossary
The following is a short and arguably incomplete description of the terms used in the text, however we refrain to go into more detail, since this should be sufficient for the scope of the paper. For more details please refer to [1].
**knit, tuck, float**: different stitch types performed by the needles. While a _knit_ operation forms a new loop by pulling the new yarn through the currently held loop, a _tuck_ just adds the yarn to the current loop, i.e., holding/securing the new yarn. In contrast, in a _float_ (aka. "miss"), the yarn is guided behind the needle and not held at all.
**wale, course**: terms describing the dimensions of a knit.
Oversimplified but adequate for the scope of this paper, wales and courses can be considered the "columns" and "rows" in a knit, when using matrices as an analogy.
## 1 Introduction
Textile based sensors provide beneficial features such as high flexibility and breathability, which can make them comfortable for wearing them on skin, e.g., when compared to foil-based solutions [2, 3, 4, 5]. This is attractive for use cases requiring long duration of direct skin contact, such as therapy scenarios via bio-monitoring [6] and activity tracking [7, 8], but also for user interfaces such as data gloves, for tracking of hand posture [9, 10] or gesture detection [11]. Many of those use cases are already implemented with knitted fabrics, since they are particularly suitable for sensing strain due to their innate stretchability.
Although there already is a large body of work focusing on knitted strain sensors, we found that most of them are based on highly stretchable fabrics, which are not always desirable. Many scenarios, also beyond garments, require solutions that provide higher tensile stability, or even anisotropic elasticity. Those properties are mostly subject to the geometric composition of the knit, i.e, the knitting pattern. In weft-knitting, those pattern can be thoroughly engineered down to loop level, which is in contrast to warp-knitting [12]. Examples for patterns with relatively high extensibility are Plain Knit (aka. Single Jersey) [13, 14, 15, 16, 17], Double Jersey [13], and rib structures [18, 13, 19]. In contrast, patterns with higher stability are relatively rare in related literature, examples for those are Interlock used by Atalay et al. [6], and Cardigan used by Ehrmann et al. [20], which showed they provide better sensitivity in low-elongation ranges, when compared to Double Jersey. However, both Interlock and Cardigan represent patterns that occupy both of the machine's needle beds at all times, which is a potential limitation in flexibility for fabrication and design. Therefore, and in contrast to the stated works, we use a pattern that is inconsistently called _Twill_ in the textile industry, due to its structural similarity with the weave pattern of the same name. It is a generally widespread and simple pattern, however not touched in textile sensor literature, to our knowledge. As
illustrated in Figure (a)a, it consists of courses of alternating knit and float stitches, while the sequence is shifted by one needle for every other course. Due to this high number floats, it provides exceptional stability in course-direction (i.e., "horizontally"), while being more extensible along wake-direction (i.e., "vertically"), when compared to Plain or Double Jersey Knits. This represents a distinct property of a Twill, as opposed to Cardigan or Interlock, with exactly opposite behavior, as a preliminary study confirmed (see supplement). Note that manufacturing of a Twill only requires one needle bed, which increases flexibility and can be a design advantage over other patterns. For example, using a two-bed machine, the entirety of the sensing part can be hidden away to one face of the knit, as done in the work presented in this paper, which can be of aesthetic preference and/or protect it from exposure and therefore from abrasion and damage.
In contrast to most of related work, which focuses on sensing strain [17, 9, 13, 21, 22, 18, 23], our primary interest is in sensing _force_, which cannot be trivially inferred from strain directly, due to short-term wear-out effects, exposing hysteresis. Furthermore, knits are subject to considerable structure-dependent relaxation, causing a gradual, nonlinear decrease of force at constant elongation. Based on our observations during this work, we noticed that recorded displacement values (and thus inferred strain values) are not entirely adequate to reflect the state of the fabric, since it may be slack when the actuator returns to its initial position. We argue that due to these effects, it would be necessary to record and reconstruct the true fabric lengths by different means, e.g., by optically tracking its geometric state, which would complicate not only the setup, but also reporting and its comprehensibility. Within our work, we consequently investigate the sensors' response to force directly and thus avoid this issue. For the sake of compatibility with related work however, we still include strain data in this paper.
By combining two types of conductive yarn that are knitted directly into the fabric, we produce a fully functional textile force sensor without requiring manual finishing steps. This is opposed to augmenting pre-existing knits by embroidering [9] or printing [24] functional parts, or by sewing patches of conductive fabric[25]. Other works demonstrate the method of polymerizing parts of textiles with Polypyrrole, e.g., the seminal work of DeRossi et al. [26] showed a data glove with resistive sensing areas. However, this process is challenging to do in a consistent manner, when compared to computerized flatbed knitting, which provides loop-level control. Hence, in contrast to the stated works, our method enables to precisely design and tune the sensor structure even to create highly intricate sensor shapes and complex connector traces (cf. [27]).
The goal of our work was to find the optimal variant with respect to general sensor consistency (for repeated actuation with equal or varying force, as well as different actuation speeds), hysteresis, dynamic range, offset, relaxation, drift, and anisotropic behavior. We therefore explored different implementations and contrasted their behavior in a systematic evaluation.
In a nutshell, the top contributions of this paper are:
* Three methods of integrating a Twill-based resistive sensor on a Twill substrate fabric, including conductive knit connector traces for attaching electronics at remote positions.
* 10 variations of these sensor designs, using different substrate material compositions and yarn tensions.
* An in-depth evaluation of those 10 variations and our findings regarding consistency, relaxation, offset, and drift in different scenarios.
## 2 Sensor Implementation
In the following chapter, we outline the sensor design, including potential for slight modifications that we expected to have an impact on sensor performance. We present our
Figure 1: Illustration of a Twill knitting pattern (a). Where current flows along the yarn, loop intermeshing points act as variable resistors (b), increasing conductivity with physical stress at the contact positions. The overall knit geometry can therefore be modeled as a network of variable resistors (c). We maximize the ratio of sensor yarn resistivity and connector yarn resistivity, so changes in connecting parts’ resistance \(R_{C}\) are negligible over the much higher absolute values from sensor loops \(R_{S}\).
knitted samples and specify manufacturing details. All our patches where knit on a flat-bed knitting machine of type ADF 530-32 KI W Multi Gauge from KARL MAYER STOLL, at gauge E 7.2. Knitting programs were created with Patternsoftware M1 PLUS Version 7.5.
### General Sensor Design and Sensing Principle
The functional principle of our knit force sensors is according to Holm's Theory [28], which states that contact resistance is depending on material resistivity \(\rho\) and hardness \(H\), as well as contact point count \(n\) and pressure \(P\), with
\[R=\frac{\rho}{2}\sqrt{\frac{\pi H}{nP}}\,.\]
Since contact pressure between loops is varying (and may even be zero when loops lose contact), the resistance drops when force is applied. Consequently, each intermeshing point in the sensor knit can be considered a variable resistor (cf. Figure 1b), and moreover, the overall structure can be modeled as a network of resistors, as done for analytical solutions by [29, 30, 22].
Similar to Baribina et al. [31] and Semjonova et al. [27], we utilize a multi-material sensor layout, combining conductive and resistive yarn, i.e., two types with largely different linear resistance, which provides several advantages:
* Conductive parts can be utilized to knit connector traces, that enable to comfortably attach readout electronics at remote positions of the fabric. This is unlike other work that requires attaching of connecting wires directly at the sensing structures, such as [11, 8].
* In increasing the resistance ratio of sensor area to connector trace we ensure that the sensor area operates in vastly different absolute value ranges, when compared to the connecting parts. Hence, resistance changes caused by deformation of the connectors are minor and therefore negligible, when compared to the sensor's operational range. This is similar to [31] and addresses an issue often ignored in related work, while furthermore enabling to create more explicit and localized force sensors on a textile.
* Connecting the resistive part with conductive yarn along the entire width (cf. Figure 2 left) provides more uniform current flow across all wales (cf. Figure 1c), since R\({}_{C}\) is insignificant against R\({}_{S}\). This should improve sensitivity consistency across the whole sensor area, which could be particularly relevant when the number of sensor wales is much higher than the number of its courses.
### Materials
As a resistive yarn for the sensor areas, we used Shakespeare(r) Resistat P6204 H100i1, which is a den 100/24 Polyester fiber with Carbon sheath, providing relatively high linear electrical resistance of \(\sim\)10 M\(\Omega\)/m. We twisted four den 100/24 threads with 30S in 1st stage and 50Z in 2nd stage, to achieve adequate yarn count for a balanced knit when combined with our PES yarn, yielding a den 400 thread with den 100/24x4 and \(\sim\)2.5 M\(\Omega\)/m.
Footnote 1: [https://shakespeare-pf.com/product/polyester/](https://shakespeare-pf.com/product/polyester/)
The conductive traces for providing connections were knit with Shieldex(r) Madeira HC402, which is a silver-coated PA yarn with den 260 and electrical resistance of \(<\)300 \(\Omega\)/m and proved highly durable during previous work [32, 33].
Footnote 2: [https://www.shieldex.de/products/madeira-hc-40/](https://www.shieldex.de/products/madeira-hc-40/)
For the surrounding substrate base structure, we used a PES with den 150/30 (TWD Fibres GmbH). The Lycra that we plated along the PES for improving the fabric's elastic recoil was a den 140 Lycra core covered with PES den 150/20 (Jorg Lederer GmbH).
### Knit Structure and Manufacturing
As any pattern that is knit on a single bed, internal forces on the Twill are unbalanced, meaning it shows inherent curling tendency. For many use case scenarios, where the knit is tailored together with other parts, this may not be an issue. Otherwise, it can be counteracted by framing with a more stable knit structure.
Instead of implementing the sensor area as an Intarsia field within a surrounding PES structure (cf. [29, 14, 10, 34, 11]), we knit the resistive yarn on the opposite needle bed
Figure 2: Sample of one of our sensor patches (left), with conductive yarn traces connecting the resistive area (black) on both upper and lower ends. We evaluated our sensors using a custom-built tensile tester with an integrated force cell (right).
and connect it to the PES face, which is knit as a continuous Twill. Apart from a more straight-forward integration into a knit, this provides better control about the force distribution throughout the structure. Due to different properties of PES and Resistat, an Intarsia field requires proper tuning of yarn count and stitch settings, to prevent an unbalanced and non-uniform surface. By knitting two faces on opposite beds, we gain more flexibility in tuning the Resistat tightness - and therefore the sensor's responsiveness - without introducing areas of considerable physical, visual, and haptic inconsistency. A side-effect is also that the functional parts can be hidden away and are therefore protected from abrasion by the covering PES layer, which may be a benefit in some use cases.
For connecting the sensor face with the base structure, we tuck the Resistat to the opposite bed at the beginning of each knit course. On upper and lower courses, the Resistat is knit to the conductive yarn which provide connector traces. Figure 3 provides a detailed knitting diagram. For knitting, we plied 2 threads of den 400 Resistat for the resistive parts, 2 threads of den 260 Madeira HC40 for the conductive parts, and 6 threads of den 150 PES for the substrate.
### Variations
As mentioned above, we knitted non-functional PES and functional Resistat on opposite needle beds, yielding two knit faces that need to be fixated so they do not fall apart. We investigated three options of doing so: the most straightforward one is to connect both faces along the sensor's outer wales (cf. Figure (a)a), by tucking at the respective outer needles. This results in a "tubular" knit structure, which we henceforth will address with "T". Note that both faces are completely detached in this knit, which could lead to erratic behaviour, depending on the fabric's firmness. We therefore tried three variations with the sensor parts knitted with different tightness: one with the Resistat knit with _lower_ tension than the PES ("Tl"), one with _medium_ tension, meaning PES and Resistat tightness balanced ("Tm"), and one with Resistat knit with _higher_ tension than PES ("Th").
We furthermore created variations that kept both faces closely attached, by tucking the Resistat to the PES ("P\(\leftarrow\)R") across the entire courses (cf. Figure (b)b), as well as the opposite, tucking the PES to the Resistat loops ("P\(\rightarrow\)R", cf. Figure (c)c).
From handling with the resulting knits, we could subjectively see that our first variations with 6 threads of PES
Figure 3: Twill based knitting structures for T (tubular, a), P\(\leftarrow\)R (Resistat tucked to front-face PES, b), and P\(\rightarrow\)R (PES tucked to back-face Resistat, c). Note the connecting front bed tucks at the beginning of each Resistat row (red) which secures the edges of the sensor area with the substrate knit (purple). Connector traces (blue) are knit on the back bed for connecting to the Resistat loops, and on the front bed otherwise. Images at the bottom show closeups of front bed (PES) and back bed (Resistat) faces of the resulting knit structures. In particular, the closeups show P\({}_{\text{Tm}}\) (a), P\({}_{\text{RP}}\) (b), and P\({}_{\text{PR}}\) (c).
("P") were prone to short-term wear-out and we therefore expected poor elastic recoil. For this reason, we also created samples that combined PES with Lycra ("PL"), to encounter this aspect (cf. Table 1). Hence, in addition to using 6\(\times\)PES for the surrounding substrate, we also created patches where we plated 5\(\times\)PES together with 1\(\times\)Lycra ("PL1"), as well as 4\(\times\)PES with 2\(\times\)Lycra ("PL2"). We already saw during our first evaluation, that lower-tension Resistat patches performed poorly, as well as connecting front and back faces outperforms tubular structures, we therefore chose to focus on those for our Lycra variations, hence, all of them were knit of type P\(\rightarrow\)R with _medium_ to _high_ Resistat tension.
## 3 Evaluation
### Apparatus
For evaluation, we used a custom tensile tester, which we built from an obsolete CNC milling machine (cf. Figure 2 right). The machine was operated by Art-Soft Mach4 CNC Control Software (v4.2.0), running on a Windows 10 PC. We attached mounts to clamp the textiles on both ends, incorporated needles at 2 cm distance additionally secured the textile so it would not slip. The clamp attached to the moving part was equipped with a single-point load-cell of type Sauter CP P1-Ba-d-18103 which was sampled at \(\sim\)80 Hz with an ADS 1231 24-bit Delta-Sigma ADC4. We acquired the sensors' resistance values using a simple voltage divider with a reference resistor of 606 k\(\Omega\) and sampled using an Adafruit ADS1115 16-bit ADC5 at \(\sim\)128 Hz. At \(\sim\)40 Hz, we averaged the samples of the previous period, and captured the results into CSV files for later analysis, along with timestamp, and actuator displacement. Sampling, recording, as well as remote control of Mach4 via RS232, was performed by an ESP32 on an Adafruit HUZZAH32 Feather board6.
Footnote 3: [https://www.kern-sohn.com/shop/en/measuring-instruments/measuring-cells/CP-P1/](https://www.kern-sohn.com/shop/en/measuring-instruments/measuring-cells/CP-P1/)
Footnote 4: [https://www.ti.com/product/ADS1231](https://www.ti.com/product/ADS1231)
Footnote 5: [https://www.adafruit.com/product/1085](https://www.adafruit.com/product/1085)
Note that our tensile tester is able to move along three axes, which enables testing for shearing effects, however, this would require a modification for omni-directional force measurement. Although this would greatly complicate the procedure and evaluation, we see potential for future work, in order to simulate more generic actuation which may be closer to many real-life scenarios. Since this this work is focusing on variations of knit structures, we explicitly performed orthogonal actuation in our tests.
### Procedure
All of our knitted sensor variations share a similar design (cf. Figure 2 left), i.e., a 4 cm \(\times\) 4 cm square field of resistive yarn, which is connected with conductive traces along the entire upper and lower courses. These connector traces were knit beyond the sensor area, leading to the edge of the textile sample, where we attached crocodile clips to connect our measurement electronics. We chose strong clips to avoid their slipping during the procedure and ensured adequate overlap with the conductive yarn. We refrained from testing different sensor dimensions, since we know from [35] that sensor resistance is directly proportional to height and inversely proportional to width, with \(R=\rho\frac{h}{w}\), where \(\rho\) is a material-specific constant. We were able to verify this correlation in a preliminary evaluation (see supplementary material).
Each sensor variation was knitted three times. We performed an _ex ante_ evaluation to get an estimate regarding consistency and to identify outliers. We found good con
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|l|} \hline Name & Type\({}^{*}\) & PES & Lycra & NP PES & NP Res & NP tuck & notes \\ \hline P\({}_{\text{Tl}}\) & T & 6 & - & 13.1 & 13.5 & - & tubular, low tension for Resistat \\ P\({}_{\text{Tm}}\) & T & 6 & - & 13.1 & 12.5 & - & tubular, medium tension for PES/Resistat \\ P\({}_{\text{Th}}\) & T & 6 & - & 13.1 & 11.5 & - & tubular, high tension for Resistat \\ P\({}_{\text{RP}}\) & P\(\leftarrow\)R & 6 & - & 13.1 & 12.0 & 9.0 & Resistat tucked to PES \\ P\({}_{\text{PR}}\) & P\(\rightarrow\)R & 6 & - & 13.1 & 12.0 & 9.0 & PES tucked to Resistat \\ \hline PL1\({}_{\text{m}}\) & P\(\rightarrow\)R & 5 & 1 & 12.5 & 12.5 & 9.0 & 1\(\times\)Lycra + medium tension Resistat \\ PL1\({}_{\text{h}}\) & P\(\rightarrow\)R & 5 & 1 & 12.5 & 11.5 & 9.0 & 1\(\times\)Lycra + high tension Resistat \\ PL1\({}_{\text{ml}}\) & P\(\rightarrow\)R & 5 & 1 & 12.5 & 12.5 & 9.5 & 1\(\times\)Lycra + medium tension Resistat, low tension tuck \\ PL2\({}_{\text{m+}}\) & P\(\rightarrow\)R & 4 & 2 & 12.0 & 11.8 & 9.0 & 2\(\times\)Lycra + medium-high tension Resistat \\ PL2\({}_{\text{hl}}\) & P\(\rightarrow\)R & 4 & 2 & 12.0 & 11.5 & 9.5 & 2\(\times\)Lycra + high tension Resistat, low tension tuck \\ \hline \multicolumn{1}{l}{\({}^{*}\) Types: T = tubular knit; P\(\leftarrow\)R = Resistat is tucked to front-face PES; P\(\rightarrow\)R = PES is tucked to back-face Resistat} \\ \end{tabular}
\end{table}
Table 1: Overview of our sensor variations. We varied structure composition (Type), ratio of PES threads vs. Lycra threads, nominal stitch length (NP) of substrate material (PES+Lycra), Resistat material (Res), and tuck stitches that connect front and back faces. Note that NP are a measure of yarn usage per loop, i.e., lower numbers represent tighter knits.
sistency overall and a very low number of outlier sensors (see supplement). We did however not perform an in-depth formal evaluation regarding consistency at this point.
For each sample, a single 5-cycle procedure was recorded. For the testing procedure, we marked the textiles at 5 cm distance with the 4 cm sensor areas centered, giving 5 mm extra on each side for mounting. The samples were then punched through the mounting needles at the marks, so the tests would start from consistent initial lengths of 5 cm. Tested patches were not previously ironed or otherwise chemically, mechanically, or thermally treated.
#### 3.2.1 Pulling with equal force
To observe correlation of sensor reading and applied force, as well as sensor offset and general consistency, we performed a test procedure repetitively applying force along wake direction, and releasing again. We chose the force based on an informal initial test, where we estimated the upper working limit of most of our sensor variants with \(\sim\)20 N, and repeated for 5 cycles with a jog rate of 1.333 mm/s. Note that, due to a communication lag in between the ESP32, Mach4, and the testing machine, we were slightly overshooting the target forces, however this does not undermine the general point of our results. Also note that, since different samples had different elastic behavior, this resulted in different strain ranges. Moreover, since we are not observing strain but force, we returned to F=0 N after each cycle, which does not align with the initial actuator position of d=0 mm, due to fabric extension from wear-out. As a result, this offset in strain could be considered a metric for the fabric's wear-out.
#### 3.2.2 Pulling with dwell
In order to investigate drift and relaxation effects, we conducted a test similar to our initial one, however instead of switching actuation direction at 20 N and 0 N immediately, we dwelt for 5 seconds at each position. Note that due to ongoing fabric relaxation, the force was not constant at this point. We refrained from readjustment motions, since we judged this would introduce considerable jerkiness in the data and complicate analysis.
#### 3.2.3 Pulling with varying speed
Since we noticed during _ex-ante_ experiments, that actuation speed can have a profound impact on the sensor reading - most notably on the spikes in resistance after starting and stopping - we repeated our initial tests (5 cycles at 20 N, no dwell) with half and twice the baseline speed, hence, speeds were 0.667 m/s, 1.333 m/s, and 2.667 m/s.
#### 3.2.4 Pulling with increasing force
To find upper sensing range limit and to inspect consistency when pulled with different amplitudes, we varied pulling force. For this test, we started at an initial 5 N and increased in steps of 5 N up until 40 N, with returning to 0 N after each cycle. The test was again based on our initial one, i.e., we did not dwell before switching direction and moved with 1.333 m/s.
Figure 4: Characteristics and timelines of non-Lycra (left) and Lycra (right) variants: Plots of correlation between strain \(e\) and force \(F\) (a,c), as well as strain and relative resistance change \(\Delta R/R_{0}\) (b,d). ’x’ marks initial values at beginning of recording. Timeline plots of all variations (e,f), overlaying strain \(e\) (dashed, black) and sensor conductivity \(G\), show respective conformity of our variations.
#### 3.2.5 Long-term pull
To observe long-term drift and relaxation effects, we performed a test pulling the samples to 20 N and returning to 0 N, dwelling for 15 minutes at each end.
#### 3.2.6 Course-directional pull
In related work, knit strain sensors are frequently tested along a single direction [13, 22, 18]. However, like most knitting structures such as Jersey and Rib patterns, a Twill is subject to anisotropic behavior in terms of physical properties, such as elasticity and recovery. We therefore investigated the behavior orthogonal to our primary testing direction as well, by mounting the sample rotated accordingly in our testing apparatus.
## 4 Results and Discussion
In the following, we summarize our main findings. Note that to save space and reduce complexity, we narrow down our subset of evaluated patches, by progressively excluding poorly performing sensor variations. Refer to Table 2, which sums up the majority of our results.
### General performance
#### 4.1.1 Non-Lycra variants
Non-Lycra variants show almost linear relation between strain \(e\) and applied force \(F\) for the pulling segments (cf. Figure 4a), however during release phase, we see considerable lag throughout all variations, which is due to poor elastic recoil. This effect inherently translates into hysteresis in sensor characteristics (cf. Figure 4b), since releasing does not reflect in the knit mesh immediately and instead exhibits noticeable delay for recovery of the structure. We consider this an innate limitation of knits, however this can be counteracted to some degree, which was our main motivation to add Lycra into the supporting base knit, as outlined in Section 2.4.
We noticed that the resting state (i.e., at \(F\)=0 N) of the sensor is highly different from the remaining iterations, hinting towards sensor offset. To quantify this wear-out effect, we calculated the relative extension \(\Delta d_{0,5}\), i.e., the change in length in between _before_1st and _after_5th pulling iteration. We see that for the non-Lycra variants, PTh performs best, with 7.9% extension. However, the first pulling iteration can be considered an outlier and may be irrelevant in many use case scenarios, e.g., when the fabric is draped and therefore permanently stretched. Therefore, we also report relative extension \(\Delta d_{1,5}\), which excludes the first iteration by calculating relative length change in between _after_1st and 5th pulling iterations. There, the results are different, with PPR clearly outperforming PTh.
Footnote 1st: We use the less-common lower case notation to mitigate confusion with electrical resistance \(R\)
When comparing sensor response by relative change in resistance \(\Delta R/R_{0}\) (cf. Figure 4b), we see that for tubular structures (i.e., PTh, PTh, and PTh), tighter knit Resistat areas result in superior characteristics (i.e., PTh, with better correlation between strain and resistance change, less hysteresis, less noisy signal). Unexpectedly, there is considerable difference between the two connected variants PRP and PPR. While the patch with PES tucked to the Resistat (P\(\rightarrow\)R) clearly outperforms its tubular equivalent PTm, tucking the other way around (P\(\leftarrow\)R) results in a defective sensor. We can eliminate the possibility of manufacturing flaws, since all of our three specimen of type RP showed this erratic behavior. We exclude PRP from further evaluations, due to the bad performance.
The difference in sensor performance is also clearly visible on the timeline plots (cf. Figure 4e), where conductivity \(G\) of PTh and PPR goes well in line with strain \(e\). Note that since we alternate between 0 N and 20 N, the values of \(e\) drift slightly upwards due to wear-out effects. As mentioned above, we present strain-related data in the paper for sake of comparability, however we refer the interested reader to the supplement, which shows that the conductivity is well in line with the amplitude of force \(F\) in most variations. We quantify conformance between the two trends of \(F\) and \(G\) using the Coefficient of Determination7\(r^{2}=1-\text{SS}_{res}/\text{SS}_{tot}\), where SS\({}_{res}\) is the residual sum of squares and SS\({}_{tot}\) is the total sum of squares. Both data series are first normalized using the preprocessing.StandardScaler from Python package scikit-learn8, which transforms all values with \(y=(x-\mu)/\sigma\). Results show that PTm, PTh, and PPR perform best in that regard.
Footnote 7: We use the less-common lower case notation to mitigate confusion with electrical resistance \(R\)
Footnote 8: [https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)
#### 4.1.2 Lycra variants
From Figure 4c, we see that elastic recoil was slightly improved by adding Lycra, however, the striking linearity we observed for the pulling-segments earlier seems to suffer from the boost in elasticity in general. For all of the Lycra variants PL*, conformity between force and conductivity is also slightly improved (cf. Figure 4f, and \(r^{2}\) values). Most notably the variants with 2\(\times\)Lycra (PL2*) show best linearity in e/R correlation with least hysteresis (cf. Figure 4d). Drift also appears to be less severe for those variations, which also reflects on relative extension values \(\Delta d_{0,5}\) and \(\Delta d_{1,5}\) (cf. Table 2).
### Hysteresis & Dynamic Range
In order to objectively compare hysteresis, we separated data into pulling and releasing segments and fit exponential functions to the data sets using SciPy function optimize.curve_fit9 (for further details refer to the supplementary material). We excluded the first pull/release cycle as an outlier for this curve fitting procedure and normalized \(R\) values by scaling with \(1/R_{0}\). We then searched for positions of maximum distance between pulling and releasing curves. Results are reported in Table 2, with resistance hysteresis \(h_{R}\), at respective locations \(F_{h}\). We see that the variants with two Lycra threads PL2* outperform all others, including non-Lycra knits. Furthermore, their maximum hysteresis is found at \(F\)\(\sim\)10 N, unlike most patches, which show considerable differences at 0 N, as results of strong settling effects.
Footnote 9: [https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html)
We report dynamic range as relative difference in sensor resistance in between \(F\)=0 N and \(F\)=20 N, i.e., \(\Delta R=\mathrm{abs}\left(R_{0}-R_{20}\right)/R_{0}\). To find \(R_{0}\) and \(R_{20}\), we used the curves fit for finding hysteresis, and evaluated them at \(F\)=0 N and \(F\)=20 N: for non-Lycra versions, the connected sensor shows higher range than the tubular knits. The versions with two strands of Lycra PL2* show least range.
Even though the PL2* variants were inferior in terms of range, we decided to exclude PL1* sensors for further evaluation, since we judged low hysteresis and better consistency far more important than range. We furthermore tried to keep a balanced set of Lycra and non-Lycra patches with the two best-performing each.
### Offset, Relaxation, & Drift
We use the terminology of [36], with _offset_ being the change in resting-state resistance after each pulling iteration, _relaxation_ being the continuous change in resistance at constant strain, and _drift_ being the continuous change in resistance when returned to resting state. We calculate offset as relative change in resistance between _before_ and _after_ each cycle. To quantify relaxation and drift, we calculated resistance change relative to the initial value of the respective dwelling segment. For calculating all three metrics, we again excluded the first pull/release iteration for each sets, since sensors initiate form long-term settled states (cf. initial trends in Figures 4e and 4f); averaged values of the remaining 4 segments are presented in Table 2.
Offset values show the remaining non-Lycra variants are superior over our Lycra versions. Furthermore, the tubular structure \(\mathrm{P_{Th}}\) outperforms the connected version \(\mathrm{P_{PR}}\). However, tests with dwelling at 0 N and 20 N for 5 seconds each showed that Lycra variants are by far superior in terms of both relaxation and drift, as can be seen also in Figure 5 (left). For Lycra variants, \(\mathrm{PL_{2h}}\) is only slightly superior, while non-Lycra versions tie for both metrics: the tubular sensor shows less drift but worse relaxation behavior.
Results from long-term test (cf. Figure 5, right) show similar results. It is clearly visible that \(\mathrm{P_{PR}}\) exhibits highest relative noise and \(\mathrm{P_{Th}}\) shows lowest. To compare settling of conductivity values, we calculated RSD over time windows
\begin{table}
\begin{tabular}{|r||r|r|r|r|r||r|r|r|r|r|} \hline & \multicolumn{6}{c||}{non-Lyc} & \multicolumn{6}{c|}{Lyc} \\ & \multicolumn{2}{c|}{\(\mathrm{P_{Tl}}\)} & \multicolumn{2}{c|}{\(\mathrm{P_{Tm}}\)} & \multicolumn{2}{c|}{\(\mathrm{P_{Th}}\)} & \multicolumn{2}{c|}{\(\mathrm{P_{RP}}\)} & \multicolumn{2}{c|}{\(\mathrm{P_{PR}}\)} & \multicolumn{2}{c|}{\(\mathrm{PL_{1m}}\)} & \multicolumn{2}{c|}{\(\mathrm{PL_{1h}}\)} & \multicolumn{2}{c|}{\(\mathrm{PL_{1m}}\)} & \multicolumn{2}{c|}{\(\mathrm{PL_{2m+}}\)} & \multicolumn{2}{c|}{\(\mathrm{PL_{2h}}\)} \\ \hline \hline \(\Delta\mathrm{d}_{0,5}\) [\%] & 11.7 & 11.8 & **7.9** & 11.4 & 10.2 & 11.0 & 10.6 & 9.1 & 5.4 & **5.2** \\ \(\Delta\mathrm{d}_{1,5}\) [\%] & 2.3 & 2.6 & 2.0 & 2.3 & **1.6** & 1.8 & 1.9 & 2.0 & **1.3** & **1.3** \\ \hline \(r^{2}\) & 0.65 & 0.90 & **0.91** & -1.94 & 0.90 & 0.92 & 0.90 & 0.91 & 0.92 & **0.93** \\ \hline \(\mathrm{h_{R}}\) [\%] & 14.9 & 27.1 & **10.7** & - & 25.4 & 45.6 & 63.3 & 24.8 & 5.8 & **4.1** \\ \(\mathrm{F_{h}}\) [N] & 0.5 & 0.0 & 0.0 & - & 0.0 & 0.0 & 0.0 & 0.0 & 9.2 & 11.1 \\ \hline \(\Delta\mathrm{R_{rel}}\) [\%] & 24.6 & 46.5 & 56.7 & - & **64.0** & 63.1 & **65.9** & 56.7 & 35.4 & 34.5 \\ \hline offset [\%] & - & - & **-1.62** & - & -2.14 & -1.62 & - & - & -3.45 & **-3.24** \\ relaxation [\%] & - & - & 7.32 & - & **5.80** & - & - & - & 2.59 & **2.45** \\ drift [\%] & - & - & **23.29** & - & 30.38 & - & - & - & 8.51 & **7.93** \\ \hline \(\mathrm{T_{r}}\) [s] & - & - & **22.9** & - & 630.4 & - & - & - & **15.1** & **15.1** \\ \(\mathrm{T_{d}}\) [s] & - & - & 24.6 & - & **23.7** & - & - & - & 10.7 & _10.2_ \\ \hline jog x 0.5 \(r^{2}\) & - & - & 0.84 & - & **0.94** & - & - & - & **0.92** & 0.20 \\ jog x 2.0 \(r^{2}\) & - & - & **0.90** & - & 0.87 & - & - & - & **0.94** & 0.84 \\ \hline course-dir \(\mathrm{h_{G}}\) [\%] & - & - & **4.3** & - & 8.7 & - & - & - & - & - \\ \(\mathrm{F}\) [N] & - & - & 6.9 & - & 5.5 & - & - & - & - & - \\ \hline \end{tabular}
\end{table}
Table 2: The majority of our results are gathered in this table. Note that we progressively excluded sensors that were performing badly from subsequent evaluation steps. Best values for non-Lycra and Lycra versions are put in bold.
of the past 10 seconds with \(\text{RSD}_{10}(t)=\sigma_{10}(t)/\mu_{10}(t)\), where \(\sigma_{10}\) and \(\mu_{10}\) denote the SD and arithmetic mean of \(G\) values in the period \([t-10\text{s},t]\) respectively. In Table 2, we specify periods for \(\text{RSD}_{10}\) to _permanently_ drop below 1%. We report one value for relaxation (i.e., when the sensor patch is pulled, \(\text{T}_{\text{r}}\)) and one for drift (i.e., after sensor patch is released again, \(\text{T}_{\text{d}}\)). Similar to our short-term dwelling tests, we observed that both Lycra variants are by far superior for both of relaxation and drift. Furthermore, the Lycra-versions' advantage is backed by long-term actuation tests, straining the sensors 2,200 times with e=20% over a time-span of 5.6 hours (see supplement).
### Actuation speed
To compare recordings with different lengths resulting from different jog-rates, we again discarded the first pull/release iterations as outliers, normalized along time-axes, and downsampled our data to equal sample count with scipy.interpolate.interpld10. We again scaled our samples using sklearn.preprocessing.StandardScaler with \(y=(x-\mu)/\sigma\), this time not individually for each recording, but using \(\mu\) and \(\sigma\) of our baseline set (1.333 mm/s) for all three speeds, to preserve relative deviations. We then determined conformity of half and double speed from the baseline set by calculating \(r^{2}\), which can be found in Table 2.
Footnote 10: [https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interpld.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interpld.html)
We see that for the non-Lycra versions, \(\text{P}_{\text{Th}}\) deviates more for half speed, however less for double speed. For Lycra patches, we see clearly that \(\text{PL2}_{\text{m+}}\) is superior to \(\text{PL2}_{\text{fl}}\) (cf. Figure 5(a)), which implies that tighter tuck tension (i.e., tighter connection between front and back faces) should be preferred.
### Increasing force
Figure 5(b) shows data collected from continuously increasing force for each iteration in steps of 5 N. We see that both Lycra variants perform very poorly and follow a different trend for each repetition, returning to largely different sensor values when released. Note that this is most
Figure 5: Results from short-term dwell with repetitive actuation (left) and long-term dwell with single actuation (right) show all sensors are subject to relaxation and sensor drift.
Figure 6: Results of our jog-rate test (a) showed that \(\text{PL2}_{\text{m+}}\) is most immune against variations in actuation speed. Cascading tests (b) showed that characteristics lines of Lycra-variants follow largely different trends, depending on the force applied.
severe when we went beyond our previous upper testing limit of 20 N. Non-Lycra patches show much more consistent trends, with PTh achieving the best results, even in highly-strained states. We see furthermore, that PPR seems to reach saturation more quickly.
### Anisotropy
Our main testing direction so far was along wales, where the knit provides good stability. Many use cases cause actuation along a single direction, however, given the anisotropic nature of most knitting structures, it is reasonable to also consider the orthogonal direction. Although for our Twill structure equal force causes higher strain in course-direction, first observations with non-Lycra patches did not hint towards considerable differences in terms of e/R correlation. Our tests however show that the Lycra variants exhibit significant erratic behavior when actuated along courses. In particular, characteristics and timelines in Figure 7 show erratic sensor values for Lycra-variants, similar to PRP, which we discarded earlier. We therefore infer that patches with Lycra additives are only of limited value for use cases that involve omni-directional strain and/or shear. Hysteresis data for non-Lycra patches can be found in Table 2. As earlier with PRP, we refrained from calculating hysteresis for Lycra-variants since we found curve fitting unreasonable due to their overall erratic behavior and high offsets in between pulling iterations; more details on curve fitting and hysteresis can be again found in the supplementary material.
### Discussion
Summarizing, we found that when manufacturing our sensors as tubular knits, the resistive part should exhibit slightly higher tension than the substrate. Connecting front and back faces can yield better results in many instances, however an unexpected finding was that the tucking direction is crucial: if the resistive yarn is tucked to the substrate, the resulting sensor is defective. If however the tucks are performed the other way around, the result is among the best-performing sensors, however gives noisier values when compared to the tubular versions. Lycra-variants produced more consistent results overall, at least within our testing range of 0 to 20 N. They subjectively showed less wear-out effects and better elastic recoil. In terms of quantitative measures, they are less prone to relaxation and drift, both in short- and in long-term. Knit using two threads of Lycra for the substrate showed best linearity, least hysteresis, and are least affected by variations in actuation speed; using higher tuck tension yielded slightly better results.
Still, we see that the best choice of materials depends largely on the specific use case at hand. We noticed that Lycra-enhanced variants perform well for wale-directional strain, however, the results were sobering when we varied actuation amplitudes, especially beyond our usual testing range with huge offsets as there was considerable offset and hysteresis visible in the plots. In contrast, the PES-only-patches show much less anisotropic behaviour and could be used even beyond 20 N, in particular the tubular version PTh.
## 5 Conclusion, Limitations, & Future Work
In this paper, we presented three means of fabrication for implementing a resistive force sensor in flatbed weftinting machines with a minimum of two beds. The chosen knitting pattern enables to knit the sensing part on one bed entirely, which allows for combinations with a supporting substrate. Our method therefore provides the possibility to hide away the functional part for aesthetic and/or protective purposes. Based on these methods, we presented and evaluated 10 variations, 5 of which used a PES-only substrate material. The remaining 5 combined PES with Lycra to improve their physical properties.
We do acknowledge a few limitations of this study: first, we did not evaluate all possible combinations of nominal stitch numbers (cf. Table 1), instead we chose settings driven by subjective measures of quality. We did this to keep the number of patches reasonably low. We did however experiment with other compositions and plies which were moderately successful, and only presented the most relevant ones in this paper.
Second, the stitch numbers for PES differ slightly for PL1* patches, which seems like they are not objectively comparable. We justify this with the change in haptic quality when new material of a different type is introduced into the composition, which requires stitch numbers to be ad
Figure 7: Course-directional strain behavior is most consistent for non-Lycra variants, while Lycra-versions act highly erratic.
justed accordingly, hence, we went for comparable haptic quality.
Third, we did not test all of our sensors to their full saturation, i.e., did not cover the entire working range. We did this since in our use case, 20 N was way beyond the expected upper limit, however we noticed especially during the tests with increasing actuation force, that some variations perform badly beyond this value. This connects to a limitation of our Lycra-patches: offsets and hysteresis depending on actuation amplitude pose a challenge in general, since there seems to be temporal data (i.e., the degree of "past actuation") required to infer correct force and/or strain at all times. We plan to investigate this aspect in future work using a specially trained Artificial Neuronal Network to act as a special filter. First steps into this direction already yielded promising results for compensating those temporal effects.
Finally, tests in harsh environmental conditions, such as in highly humid and extreme temperature scenarios were not performed at this point. Related work showed that conductive polymer composites can be affected in particular by high humidity [37], however, we expect that since this is a property of the material, the knitting structure does not have a profound influence in that regard. Since the sensing parts are entirely replaceable by similar products and our work was focusing on structural compositions and consistency benefits from adding Lycra material, we refrained from evaluating the specific materials that we used for our implementation, since we expect our results would reasonably translate to arbitrary resistive and conductive yarn.
## Acknowledgment
This research is part of the COMET project TextileUX (No. 865791, which is funded within the framework of COMET - Competence Centers for Excellent Technologies by BMVIT, BMDW, and the State of Upper Austria. The COMET program is handled by the FFG.
|
2307.08608 | Pressure safety approach for PIP-II cryogenic distribution system and
cryomodules | The Proton Improvement Plan-II (PIP-II) is a superconducting linear
accelerator being built at Fermilab that will provide 800 MeV proton beam for
neutrino production. The linac consists of a total of twenty-three (23)
cryomodules of five (5) different types. Cooling is required at 2K, 5K and 40K.
The Cryogenic Distribution System (CDS) consists of a Distribution Valve Box,
~285 m of cryogenic transfer line, modular Bayonet Cans to interface with
cryomodules, and a Turnaround Can. The cryogenic system must provide protection
from over-pressure by sizing pressure relief devices for all volumes and
process line circuits. The cryomodule cavity circuits have dual pressure
ratings, 4.1 bara when cold and 2.05 bara when warm (T>80K). Worst case
relieving cases will be identified. The methods for determining heat flux will
be presented. For the relieving occurring in the linac tunnel, flow must vent
to outside to avoid an oxygen deficiency hazard. Also, we will present vacuum
vessel relief sizing to protect the cryogenic distribution system vacuum shells
from over pressure during an internal line rupture. The project is funded by US
DOE Offices of Science, High Energy Physics. | William Soyars, Tomasz Banaszkiewicz, Ram Dhuley | 2023-07-17T16:18:28Z | http://arxiv.org/abs/2307.08608v1 | # Pressure safety approach for PIP-II cryogenic distribution system and cryomodules
###### Abstract
The Proton Improvement Plan-II (PIP-II) is a superconducting linear accelerator being built at Fermilab that will provide 800 MeV proton beam for neutrino production. The linac consists of a total of twenty-three (23) cryomodules of five (5) different types. Cooling is required at 2K, 5K and 40K. The Cryogenic Distribution System (CDS) consists of a Distribution Valve Box, \(\sim\)285 m of cryogenic transfer line, modular Bayonet Cans to interface with cryomodules, and a Turnaround Can. The cryogenic system must provide protection from over-pressure by sizing pressure relief devices for all volumes and process line circuits. The cryomodule cavity circuits have dual pressure ratings, 4.1 bara when cold and 2.05 bara when warm (T\(>\)80K). Worst case relieving cases will be identified. The methods for determining heat flux will be presented. For the relieving occurring in the linac tunnel, flow must vent to outside to avoid an oxygen deficiency hazard. Also, we will present vacuum vessel relief sizing to protect the cryogenic distribution system vacuum shells from over pressure during an internal line rupture. The project is funded by US DOE Offices of Science, High Energy Physics.
## 1 Introduction
The Proton Improvement Plan - II (PIP-II) is a superconducting linear accelerator being built at Fermilab that will provide 800 MeV proton beams for neutrino production. The cryogenic distribution system (CDS) must deliver 2K, 5K, and 40K cooling from the cryogenic plant to the linac cyomodules (CM). The linac contains five unique cryomodules. The CDS is segmented to provide each cryomodule with its own dismountable connect for each process line. The Cryogenic Distribution System and cryomodules are schematically shown in Figure 1. Thermodynamic design aspects have been investigated previously [1].
The CDS, from the Distribution Valve Box at the interface with the Cryogenic Plant, through twenty-five Bayonet Cans (BC), to the Turn-Around Can at the end of the linac, is approximately 285 m long. The CDS process circuits with size and nominal operating temperatures are: 4.5K Supply- DN 50 at 4.5K, 2K Return- DN 250 at 3.8K, Low Temperature Thermal Shield (LTTS) Return- DN50 at 9K, High Temperature Thermal Shield Supply (HTTS-S) - DN50 at 40K and Return (HTTS-R)-DN 50 at 80K, Cooldown Return (CD) - DN 80 at 80K.
All piping and vessels must be protected from overpressure for safe operation. The relief device locations are indicated in Figure 2. The pressure safety requirements established for the project and the technical approach will be useful for comparing to other systems.
## 2 Requirements
To meet project safety and quality requirements, the system will comply with ASME Boiler and Pressure Vessel Code and ASME B31.3 Process Piping Code. The CDS relieving is available at each end of the CDS. CM positive pressure relieving is available at the BC, connected by the u-tubes. CM 2K relief devices will be directly mounted on each CM. For the 2K circuit of the CMs, a dual pressure rating is given [2]. This utilizes the higher allowable stress of the niobium cavities when cold, \(<\)80K.
Any helium process line relieving into the linac tunnel space is not allowed. This enhances the Oxygen Deficiency Hazard safety for personnel in the tunnel. All reclosable safety relief valves will exhaust into the Low Pressure Return to compressor suction. For the CM 2K Return SVs which nominally operates with a sub-atmospheric pressure at its inlet, this is very important for system purity so that any leakage through the relief poppet will not have air contamination. And for this positive pressure SVs, this will save on helium inventory losses to recover any leakage from positive pressure process circuits.
Figure 1: PIPII cryogenic distribution system and cryomodules schematic
Figure 2: PIPII cryogenic distribution system and cryomodules relief device locations
## 3 Calculation Methods and assumptions
Sources of overpressure must be identified. For cryogenic systems, a significant source of overpressure is a sudden loss of insulating vacuum. Furthermore, for cryomodules, there is an additional vacuum failure mode from sudden loss of beam tube vacuum. Other potential sources of overpressure must also be considered. For some warmer circuits, with air condensation being less of a problem, cryogenic plant supply with return flow isolated will be investigated.
Temperature-conditions at cryogenic relieving are set according to standard methods [3, 4]. Pressure conditions are set by Code allowable overpressure. Take this as 21% above Maximum Allowable Working Pressure (MAWP) by considering loss-of-vacuum (LOV) as an "unexpected source of external heat" [5]. This allows for 21% above MAWP for "vessels exposed to fire or other unexpected sources of external heat." According to ISO 4126 Terms and Definitions for Pressure (section 3.2) the maximum allowable overpressure "is established by applicable code for and fire conditions". Calculations are performed in accordance with the EN ISO 4126-1 method for gas, and also with direct integration method which can lead to different results at cryogenic conditions.
### Heating during loss of vacuum
One straightforward method to is to assume a constant, maximum heat flux along the length of the CDS or CM. This will apply known, experimental heat fluxes [6, 7]. One term, for thermal radiation shield internal side without MLI, natural convection calculations used.
A second method commonly used [6, 7] is to limit the available heat input to the energy available energy deposition of air through the worst case, feasible orifice in the vacuum vessel. This often is the most plausible representation for a long transfer line scenario. See Table 3 for worst-case air inleak orifice assumption. When there are multiple process lines available for air condensation, this total energy needs to be distributed among the lines. The cold surface absorbs heat proportionately to its surface area as a percentage of the total cold surface area and temperature dependent heat flux product.
### Metal heating during loss of vacuum
For further refinement, one can consider that not all of the air ingress energy heats the helium but that some of air ingress heating goes to heating the metal of the CDS. This will lower the heat load to helium which the relief system must address. Consider for the given mass of piping, the heat capacity energy needed to raise metal temperature from process fluid temperatures to temperature where air freezing/condensing ends, taken to be 70K. To convert to heating rate, need time constant to apply for the duration of the heating period. Use same time constant as previously used for LCLSII transfer line which has similar geometry [8], 70 sec. This is based on experimental results from DESY/XFEL Cryomodule crash tests [ref] for air ingress choked flow, with active cryo-pumping, using same size opening to vacuum as considered here, DN80.
## 4 Cryogenic distribution system protection from overpressure
Geometry is taken from the preliminary design solid model. Each process line has two primary reliefs: one at the Distribution Valve Box (DVB) and one at the Turnaround Can (TC). Thus, CDS relieving is available at each end of the CDS.
### Heat input from loss of insulating vacuum
Assume air leaks in through open DN80 vacuum evacuation port. Applying approach in 3.1, this orifice size limits energy and is used to define loss-of-insulating-vacuum (LOIV) heat flux. See Table 1 for results.
### Overpressure flow rate from cryoplant
Consider that the cryogenic plant is operating at full capacity for a given circuit, then its return valve is closed, making the cryoplant a source of oversupply. This is conservative because realistically the
plant cannot output full capacity at its maximum design pressure. Nevertheless, this is basis for design for the HTTS Supply and the CD Return, which have very small requirements from LOIV. See Table 2.
### Relief sizing requirements
Apply heat transfer to He as shown in Table 1. Then relief requirements are calculated in Table 2.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline & & & Heat load & Total Heat & & & & & \\ & Peak & & per Length, & load & & & & & \\ Circuit & Heat & Dia. & from & distributed & & Heat transfer & & & \\ & Flux & & experimental & to He, from & & & & \\ & & & flux & constant flux & & & & & \\ & [kW/m\({}^{2}\)] & [cm] & [W/cm] & [kW] & [kW] & [\%] & [kW] & [\%] & [kW] \\ \hline
2k Return & 6 & 27.3 & 51.46 & 42\% & **232** & 35 & 80 & 12 & 312 \\
4.5k Supply & 6 & 6.03 & 11.37 & 9\% & **51** & 8 & 11 & 2 & 63 \\ LTTS & 6 & 6.03 & 11.37 & 9\% & **51** & 8 & 11 & 2 & 63 \\ HTTS & & & & & & & & \\ Supply & 0.23 & 6.03 & 0.44 & 0\% & **2** & 0 & 9 & 1 & 11 \\ HTTS Ret \& & & & & & & & & \\ Shield, & 0.23 & 62 & 4.48 & 4\% & **20** & 3 & 0 & 0 & 20 \\ insulated & & & & & & & & \\ HTTS & & & & & & & & & \\ Shield, & 2.4 & 56 & 42.22 & 35\% & **191** & 29 & 0 & 0 & 191 \\ uninsulated & & & & & & & & \\ CD & 0.23 & 8.89 & 0.64 & 1\% & **3** & 0 & 0 & 0 & 3 \\
**SUM** & - & - & **127** & **100\%** & **551** & **83** & **112** & **17** & **663** \\ \end{tabular}
\end{table}
Table 1: Heat load to CDS circuits during LOIV
\begin{table}
\begin{tabular}{c c c c c c c c} \hline & & Set & & & Helium flow & & \\ Circuit & Press. & & Relieving & Worst case & Heat load & requirement, & \\ & [kPa] & Temp. [K] & condition & [kW] & each relief & & SV size, DN \\ & & & & & & [kg/5] & \\ \hline
**2k Return** & 410 & 7.0 & LOIV & 232 & 4.78 & 100 x 150, & \\
**4.5k Supply** & 2000 & 12.7 & LOIV & 51 & 0.34 & 20 x 25, two & \\ LTTS Return & 1000 & 9.7 & LOIV & 51 & 0.57 & 20 x 25, two & \\ HTTS Supply & 2400 & 40. & Cryoplant & Not & & & \\ & & & oversupply & applicable & & & \\ HTTS Return & & & & & & & \\ \& shield & & & & & & \\ CD Return & 1000 & 80. & Cryoplant & Not & & & \\ & & & & & & & \\ & & & & & & & \\ \end{tabular}
\end{table}
Table 2: Helium relieving requirements for CDS
## 5 Cryomodules protection from overpressure
This analysis applies to the five different CMs: HWR, SSR1, SSR2, LB650 and HB650 CM. CM positive pressure relieving is available at the BC, connected by the u-tubes. CM 2K relief devices will be directly mounted on each CM. For the 2K circuits, these require further analysis due to their dual pressure ratings; furthermore, these have a unique loss of vacuum failure mode due to the LOBV scenario. None of these LOV scenarios will take credit for metal warming.
### Non-2K circuit pressure relieving from loss of insulating vacuum
Geometry is taken from solid models of the five different styles. For the positive pressure circuits, the heat load will be simply and conservatively estimated as the peak constant heat flux values (shown in Table 1) applied to its surface area. The cryomodules have a 2K-to-4K subcooling heat exchanger, and an approximation is made for heat exchanger surface area, with the sides and lengths treated as a cube. All heat flux to heat exchanger is assumed to go to the 4.5K supply. HTTS geometry is looked at separately, but then the heat load is divided between the 80K and 40K circuits, since this piping is common volume to both reliefs and acts over gradient of temperature. Results shown in Table 3.
### Rupture Disk sizing for sub-atmospheric 2K circuit cold M4WP protection
The goal is to protect the CM SRF cavities vessel from overpressure for cold conditions where the MAWP is 410 kPa. Use HB650 CM cavity geometry, the worst case. Two LOV scenarios will be considered. For loss-of-insulating-vacuum (LOIV), assume air leaks in through open DN100 vacuum evacuation port. For loss-of- beam tube-vacuum (LOBV), assume air leaks in through 60 mm diameter coupler port. Applying approach in 3.1, neither orifice size limits energy; thus, a constant heat flux over the surface area is used to define LOV heat flux.
Apply peak heat flux values to surface area. When comparing LOIV and LOBV case, one sees LOBV is the worst case, and will be used to define the requirements. See results in Table 3. This leads to DN100 RD, a sizing result previously arrived at to protect CMs while undergoing testing and is in service on our CM test stands.
\begin{table}
\begin{tabular}{c c c c c c c} \hline Circuit & Set & Relieving & Worst case & Heat load & Helium flow & SV size, DN \\ & Press. & Temp. [K] & condition & [kW] & requirement, & \\ & [kPa] & & & & each relief [kg/s] & \\ \hline
**4.5k Supply** & 2000 & 12.7 & LOIV & 3.23 & 0.047 & 15 \\ LTTS Return & 2000 & 12.7 & LOIV & 10.6 & 0.16 & 15 \\ HTTS Supply & 2400 & 40. & LOIV & 43.9 & 0.20 & 15 \\ & & & & half of HTTS & & \\ HTTS Return \& 2400 & 80. & LOIV & 43.9 & 0.10 & 15 \\ shield & & & & half of HTTS & & \\ & & & & total & & \\
2k volume-cold & 410 & 7.0 & LOBV & 216 & 8.9 & 100 \\
2k volume-cold & 275 & 6.0 & LOBV & 166 & 8.5 & 100 \\
2k volume- & 205 & 300 & Oversupply & Not applicable & 0.27 & 80 x 100 \\ warm & & & with return & & \\ & & & closed & & & \\ \hline \end{tabular}
\end{table}
Table 3: Helium relieving requirements for Cryomodules
### Safety valve sizing for 2K circuit warm MAWP protection
While the CMs are warm, \(>\)80K, they can see pressure above allowable due to oversupply from the cryogenic transfer line. Consider HB650 CM, which is the largest and have the largest cooldown and JT supply valves. For worst case flow conditions, consider that the CDS TL is cold at nominal conditions. The CM is warm, but 4.5K He supply valve is mistakenly set full open. Assume both CM JT and CD valves with known sizing also open. Assume valve closed, isolating the gas return path. Results are shown in Table 3.
## 6 Venting the relief exhaust to atmosphere
A final function of the CDS is to provide for warm gas return to cryoplant compressor suction during purification, cooldown, and warm up operations. This Low Pressure (LP) Return piping will be DN200 pipe and is rated at 140 kPa. The Design Pressure is consistent with other systems at Fermilab.
### LP Return for CDS and CM reclosable relief devices
The CDS requires for safety valves to exhaust into the LP Return header. This will be accomplished with SVs manifolded into a collection header at TC and DVB, which is then flanged into the LP Return. A key constraint on this is to ensure backpressure is low enough to not impact the 2K Return reliefs, set at 410 kPa. This necessitates that the 2K Return reliefs are a balance bellows style, which can accommodate some backpressure variation, up to about 50% of the gauge set pressure. This guided the set pressure choice of the LP Return header at 140 kPa, to allow for pressure drop up to the outdoor low pressure relief. There is relief to atm at either end of the LP Return header. At the DVB, it's a short distance from inside the CP building through a hole in the wall. At the TC, its more complicated because of limited penetrations from the tunnel up to the surface. Calculations show we want to expand the LP from 8 to 10 to meet backpressure requirements.
There is relief to atmosphere at either end of the LP Return header. At the DVB, it's a short distance from the DVB inside the CP building through a hole in the wall. At the TC, its more complicated because of limited penetrations from the tunnel up to the surface. The closest available one is about 30 m away. Calculations show for this header run up to the outdoor relief want to expand the LP from DN200 to DN250 to meet backpressure requirements. The plan is to use Fermilab designed and fabricated parallel plate spring loaded reliefs, with known performance from previous testing and operations [9].
Figure 3: Schematic of LP Return and venting to atmosphere
### Tunnel vent-to-atmosphere header for CM 2K RD
Requirements call for RD flow to vent to outside, not into the tunnel. This will utilize a dedicated line open to atmosphere at the surface. Analyse the first HB650 in the tunnel, which is the worst case as it at the midway location between the two vent chimneys to atmosphere. This location is 150 m to the US linac penetration and 131 m to the DS linac tunnel penetration
Calculations are made using internal HB650 CM geometry. This includes flow from eight cavities, collecting into common DN100 pipe to the RD. Conservatively assume a check valve present; this is conservative since production CMs are not expected to include one. After exiting the vacuum jacket space from the CM, temperature rise is calculated using natural convection heat transfer. For vent piping thermal contraction, an axial bellows is assumed at each of twenty-five BC, and these are considered in the flow resistance analysis.
The goal is to ensure that open RD opening, the cavities do not exceed 121% of their cold MAWP, that is 496 kPa. A DN200 vent-to-atmosphere header accomplishes this. See Figure 4. This size matches that of the LP Return header, which has advantage of commonality in the design.
## 7 Vacuum vessel protection from overpressure
The CDS vacuum jacket is size DN700 and must be protected from overpressure. It has a MAWP of 150 kPa. There are three insulating vacuum space volumes. The lengths are 71 m for surface and vertical penetration, then 104 m and 106 m in the tunnel. Note, CM vacuum vessel protection will not be discussed here.
Consider the spontaneous rupture of one internal line as the source of overpressure. For the most conservative approach, the analysis assumes that the line ruptures along the entire circumference of the pipe. The following are calculated: volume of vacuum jacket space, fixed mass of He in the pipe which is suddenly spilled, new He density, temperature to reach 150 kPa, and heat inflow primarily from outside jacket (300K) and secondarily from thermal shield (80K). Capacity and sizing are done with standard methods [4, 10].
It was found that 4.5K Supply generated the worst case. It will contact warmer surfaces of the vacuum vessel (300K) and the HTTS shield (80K), absorb heat, and expand. Consider maximum cryoplant capacity of 200 g/s at 300 kPa as providing flow and use know geometry for cross-sectional area in the DN700 pipe; this is used to come up with Reynolds number for approximating heat transfer coefficient from external heating, using MacAdams equation, a basic relationship. For the three segments, see Table 5. A design choice is made for relief diameter of 70 mm, which leads each TTL-BC to have two reliefs.
Figure 4: vent to atm DP
The method above requires helium gas leaking into any one location to flow to adjacent piping for relieving. A final check was made to ensure that the fixed supports along the ID of the vacuum jacket are not serving as a restriction. The flow area through the fixed support is \(>>\) the orifice area of one RV.
## 8 Conclusions
The CDS and CM pressure safety requirements, methods, and results have been presented. Flow requirements are available for relief device sizing specifications for design. These results are for preliminary design; final selection will occur during detailed design and procurement process. Nominal room temperature piping requirements to meet system relieving needs have been established.
|
2301.02915 | SFP: Providing System Call Flow Protection against Software and Fault
Attacks | With the improvements in computing technologies, edge devices in the
Internet-of-Things have become more complex. The enabler technology for these
complex systems are powerful application core processors with operating system
support, such as Linux. While the isolation of applications through the
operating system increases the security, the interface to the kernel poses a
new threat. Different attack vectors, including fault attacks and memory
vulnerabilities, exploit the kernel interface to escalate privileges and take
over the system.
In this work, we present SFP, a mechanism to protect the execution of system
calls against software and fault attacks providing integrity to user-kernel
transitions. SFP provides system call flow integrity by a two-step linking
approach, which links the system call and its origin to the state of
control-flow integrity. A second linking step within the kernel ensures that
the right system call is executed in the kernel. Combining both linking steps
ensures that only the correct system call is executed at the right location in
the program and cannot be skipped. Furthermore, SFP provides dynamic CFI
instrumentation and a new CFI checking policy at the edge of the kernel to
verify the control-flow state of user programs before entering the kernel. We
integrated SFP into FIPAC, a CFI protection scheme exploiting ARM pointer
authentication. Our prototype is based on a custom LLVM-based toolchain with an
instrumented runtime library combined with a custom Linux kernel to protect
system calls. The evaluation of micro- and macrobenchmarks based on SPEC 2017
show an average runtime overhead of 1.9 % and 20.6 %, which is only an increase
of 1.8 % over plain control-flow protection. This small impact on the
performance shows the efficiency of SFP for protecting all system calls and
providing integrity for the user-kernel transitions. | Robert Schilling, Pascal Nasahl, Martin Unterguggenberger, Stefan Mangard | 2023-01-07T18:35:08Z | http://arxiv.org/abs/2301.02915v2 | # SFP: Providing System Call Flow Protection against Software and Fault Attacks
###### Abstract.
With the improvements in computing technologies, edge devices in the Internet-of-Things or the automotive area have become more complex. The enabler technology for these complex systems are powerful application core processors with operating system support, such as Linux, replacing simpler bare-metal systems. While the isolation of applications through the operating system increases the security, the interface to the kernel poses a new threat. Different attack vectors, including fault attacks and memory vulnerabilities, exploit the kernel interface to escalate privileges and take over the system.
In this work, we present SFP, a mechanism to protect the execution of system calls against software and fault attacks providing integrity to user-kernel transitions. SFP provides system call flow integrity by a two-step linking approach, which links the system call and its origin to the state of control-flow integrity. A second linking step within the kernel ensures that the right system call is executed in the kernel. Combining both linking steps ensures that only the correct system call is executed at the right location in the program and cannot be skipped. Furthermore, SFP provides dynamic CFI instrumentation and a new CFI checking policy at the edge of the kernel to verify the control-flow state of user programs before entering the kernel. We integrated SFP into FIPCAC, a CFI protection scheme exploiting ARM pointer authentication. Our prototype is based on a custom LLVM-based toolchain with an instrumented runtime library combined with a custom Linux kernel to protect system calls. The evaluation of micro- and macrobenchmarks based on SPEC 2017 show an average runtime overhead of 1.9 % and 20.6 %, which is only an increase of 1.8 % over plain control-flow protection. This small impact on the performance shows the efficiency of SFP for protecting all system calls and providing integrity for the user-kernel transitions.
System Call Flow Protection, Control-Flow Integrity, Fault Attacks. +
Footnote †: 2022 Copyright held by the owner/under(s).
+
Footnote †: 2022 Copyright held by the owner/under(s).
granularities, depending on which threat model is considered. In a classical software setting, only indirect branches are protected since those are the only ones an attacker can manipulate. Faults pose a more severe threat, thus requiring even more robust protection. Fine-grained instrumentation (Bartos et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019) protects the control-flow of a program on basic-block or even instruction-level (Krizhevsky et al., 2019; Krizhevsky et al., 2019). As a result, these countermeasures protect direct or indirect branches or even the whole instruction sequence. Instruction-granular protection requires intrusive hardware changes to deal with the performance penalty, which is unsuitable for commodity devices.
CFI can be enforced in different security domains. While traditionally, CFI was only used to protect user-space applications, different CFI protection schemes can also protect the kernel (Krizhevsky et al., 2019; Krizhevsky et al., 2019). However, currently, there are no CFI protection schemes available providing protection between different security domains, _i.e._, the transitions between the user-space program and the kernel. Thus, the large attack surface, the transitions between user programs to the kernel remain unprotected. Hence, there is a need for new countermeasures that protect the software interface to the kernel and provides system call flow integrity for commodity devices.
### Contribution
In this work, we solve the problem of the unprotected system call interface and provide system call flow protection on top of CFI, protecting the interface to the kernel against both software and fault attacks. SFP cryptographically links the system call itself and its origin to a global CFI state that is verified at runtime in the operating system. A second-stage linking mechanism within the kernel dynamically applies a second link to ensure that the correct system call was selected and executed.
To automatically protect arbitrary programs, we develop an LLVM-based toolchain to provide CFI and instrument all system calls. We provide an instrumented standard library, where all system calls are instrumented with our system call protection. Furthermore, we modify the Linux kernel to dynamically verify at runtime that the correct system call was executed.
We implement SFP on top of FIPAC, a software-based CFI scheme exploiting ARM pointer authentication. We evaluate the performance of SFP based on a microbenchmark to measure the impact of SFP on the system call latency, leading to an overhead of 1.9 %. To show the applicability to real-world programs, we perform macro-robenchmarks using the SPEC 2017 application benchmark. On average, we measure a runtime overhead of 20.6 % for protected applications. Summarized, we make the following contributions:
* We provide system call flow protection by linking the syscall and its origin to a global CFI state and verifying it at runtime.
* We provide a prototype implementation comprising an LLVM-based toolchain, an instrumented C-standard library, and a modified Linux kernel.
* We evaluate the performance based on a microbenchmark and on the application-grade SPEC 2017 benchmark.
## 2. Background
This section provides background to fault attacks, pointer authentication, and control-flow integrity.
### Fault Attacks
Injecting faults into a digital circuit is a powerful threat allowing adversaries to break the security of a system entirely. The effect of an induced fault at the electrical level includes timing violations and transient voltage and current changes (Krizhevsky et al., 2019). Typically, the effect of a fault is modeled at the bit-level with transient bit-flips and permanent stuck-at effects (Krizhevsky et al., 2019).
Common fault injection approaches include voltage or clock glitching, laser fault injection (LFI), and electromagnetic fault injection (EMFI) (Krizhevsky et al., 2019). While these methodologies require physical access to the device, recently, new techniques relaxing this constraint have been released (Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019). E.g., in Plundervolt (Krizhevsky et al., 2019), the attacker utilizes the dynamic voltage scaling interface of the CPU to induce faults remotely in software.
Independently of the injection technique, an attacker can exploit the effects of faults in various ways. E.g., fault attacks on encryption primitives enable the attacker to leak secret keys (Bartos et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019). Despite dedicated attacks on encryption, fault attacks are also actively used to bypass security features, such as secure boot, on embedded systems (Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019). By inducing targeted faults into the program counter of a processor, faults enable an adversary to arbitrarily hijack the control-flow of a program (Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019).
### Control-Flow Integrity
The control-flow of a program can be hijacked using software attacks, fault attacks, or combined software-fault attacks. Therefore, various countermeasures targeting different attacker models were proposed to protect programs from these attack vectors.
Software CFI SchemesSoftware-based control-flow attacks are typically performed by exploiting a memory vulnerability. By overwriting control-flow-related data, e.g., return addresses or function pointers, the adversary can arbitrarily manipulate the execution of the program (Bartos et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019). To mitigate these attacks, software control-flow integrity (SCFI) schemes (Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019; Krizhevsky et al., 2019) aim to provide pointer integrity using different mechanisms. E.g., PARTS (Krizhevsky et al., 2019) uses ARM pointer authentication (PA) to cryptographically seal and verify security-sensitive pointers to protect them while stored in memory.
Fault CFI SchemesSoftware CFI schemes only protect control-flow transfers the adversary can also manipulate in the software threat model, _i.e._, return addresses and function pointers. Faults also allow the attacker to tamper with static control-flow data stored in the program or even skip instructions. Therefore, fault control-flow integrity (FCFI) schemes enforce their protection at a finer granularity, e.g., at the instruction level (Krizhevsky et al., 2019; Krizhevsky et al., 2019). However, as these schemes usually require custom hardware changes to avoid tremendous runtime overheads, software-based FCFI schemes typically operate at the function or basic block level (Krizhevsky et al., 2019; Krizhevsky et al., 2019). These schemes track the execution of the program using a signature and compare this running signature with a precomputed signature during runtime.
Software Fault CFI SchemesAs most FCFI schemes (Krizhevsky et al., 2019; Krizhevsky et al., 2019) do not consider a software attacker in their threat model, software attacks allow the adversary to bypass most FCFI schemes. Here, the adversary uses a memory bug to overwrite the state maintained in software and arbitrarily hijacks the control-flow. Hence, mitigating software, fault, and software-fault combined attacks require even stronger countermeasures, _i.e._, software fault CFI (SFCFI) schemes.
### Fipac
FIPAC (FIPAC, 2017) is a software fault CFI (SFCFI) scheme mitigating software and fault-based control-flow attacks exploiting ARM pointer authentication. Internally, FIPAC maintains a global state through the entire program execution. When entering a basic block, _i.e._, a block of consecutive instructions without a control-flow transfer, FIPAC cryptographically updates the state. Depending on FIPAC's configured checking policy, the value of the state is compared to the expected value determined during the compilation of the program at the end of each basic block, function, or program. On control-flow merges, _i.e._, indirect calls, the state is updated using a justification signature to ensure that different valid control-flow paths yield an identical state. To prevent a software adversary from predicting and overwriting the state using a memory bug, a MAC is utilized for the state update. Moreover, the state update and check functions cryptographically derive and verify the running signature on program execution. FIPAC uses the pointer authentication instructions of modern ARMv8.6A architectures for the MAC computation.
_ARM Pointer Authentication_ARM pointer authentication is a hardware feature introduced with ARMv8.3A (Fipac, 2017) and updated in ARMv8.6A (Fipac, 2017). This extension provides new instructions to cryptographically sign and authenticate data. These instructions derive a message authentication code (MAC) using a secret key, a 64-bit modifier, and the value of a provided register, e.g., an address stored in a pointer. A fraction of this MAC, called the pointer authentication code (PAC), is then stored in the upper bits of the provided register. By using the authentication instructions, the authenticity of the MAC and the data in the register can then be verified.
### Linux and the System Call Interface
Linux (Linux, 2018) is a monolithic kernel used in billions of devices (Linux, 2018) and embedded systems. To retrieve a particular service or get a specific resource, e.g., reading and writing a file, or to get dynamic memory, the user program needs to request this from the kernel, _i.e._, via a system call. A system call changes the privilege and transfers the execution from the user-space program to the kernel of the operating system, which then grants or denies the requested service. A user-space program aiming to execute a certain system call invokes the corresponding system call wrapper routine provided by a library. This wrapper then initiates a control-flow and privilege transfer into the kernel space by using a dedicated instruction, _i.e._, the svc instruction for AArch64. The system call instruction requires the system call number of the requested service and additional optional parameters as arguments.
## 3. Threat Model and Attack Scenario
Our threat model considers a powerful adversary capable of performing software attacks, fault attacks, or combined software and fault attacks. This attacker can exploit memory vulnerabilities to arbitrarily read or modify data in memory. However, we assume that the code segment of the program cannot be modified by a software adversary by, for example, exploiting memory vulnerabilities. Nevertheless, by inducing faults, the attacker can flip bits in memory, the registers, the code segment, or the instruction pipeline of the processor. We assume that the control-flow of executed programs _and_ the kernel is protected using an SFCFI scheme, such as FIPAC.
Note that faults on the data, except the syscall register, are out of the scope of this work. It requires orthogonal schemes, e.g., redundancy encoding schemes for data (Borda et al., 2017), for their protection. We assume ARM PA to be cryptographically secure, and the attacker does not have access to the encryption keys. Furthermore, the operating system is assumed to be secure, providing isolation of the kernel task structure to the user program.
### Attack Scenario
Within this threat model, the adversary aims to hijack the program's interface to the Linux kernel. In the example shown in Figure 1, the user program invokes the system call **C** using the Linux system call interface. However, by using a fault attack or a software-fault combined attack, the adversary can either _(i)_ redirect the system call to **B** or _(ii)_ entirely skip the system call.
Listing 1 shows the instruction sequence to invoke the system call **C** on AArch64. The system call number is stored in register w8, and the system call arguments are stored in the remaining registers. By flipping bits in register w8 using faults, the adversary can redirect _(i)_ the execution to a different system call.
Moreover, the syscall gadget in Listing 1 is susceptible to combined attacks. A software-fault combined attacker utilizes a memory vulnerability to overwrite data at address memAddress. Afterward, in Line 4, the adversary hijacks the execution of the program by flipping bits in the program counter to redirect the control-flow to the svc instruction in Line 7, responsible for switching to the kernel. This attack enables the adversary to invoke arbitrary system calls. In addition to these attacks, a fault attacker can also corrupt the svc instruction to skip _(ii)_ the execution of the entire syscall.
SCFI schemes, such as FIPAC, currently _cannot_ mitigate these attacks as these countermeasures do not consider transitions between user-space and kernel space in their threat model. While they only protect the user-space application, they fail to provide protection for the kernel interface, posing a large threat surface for critical vulnerabilities. Furthermore, current SCFI protection schemes use static control-flow instrumentation, which is the same for subsequent calls to the program. As a result, an attacker with access to the code segment or to general-purpose registers can learn from subsequent program executions. Thus, it would be possible for an attacker to attempt multiple control-flow attacks until the hijack succeeds.
### FIPAC Intra Basic Block Protection
The authors of FIPAC describe a mechanism to extend the protection guarantees of FIPAC from inter to intra-basic block security (Fipac, 2017). By applying a state update after every instruction within a
Figure 1. Redirecting a system call using fault attacks.
basic block, the subsequently also update the CFI state continuously. Although this mechanism can be applied around syscalls, it does not add any protection. With a state update before and after the system call, an attacker can still fault the syscall number or manipulate the svc instruction to perform a _top_ instruction. Although this attack manipulates the execution of the system call, FIPAC's extended intra-basic protection does _not_ detect these attacks. Consequently, it requires a different protection scheme to provide call flow protection for system calls.
## 4. Design of SFP
In this section, we present SFP, a mechanism that provides system call flow protection by exploiting a stateful CFI protection scheme. While SFP is generic and compatible with different CFI protection schemes, our design exploits FIPAC as the underlying CFI protection scheme. Section 7.3 discusses the compatibility aspects and how SFP can be applied to different CFI schemes.
### Requirements for System Call Protection
The goal of SFP is to protect the system call interface to the kernel against software, fault, and combined attacks. Based on the attack scenario from Section 3, the protection of SFP must fulfill the following requirements.
1. [label=**R0**]
2. _System Call Number._ Prevent an attacker from manipulating the system call number to a different system call.
3. _System Call Execution._ Ensure that a syscall cannot be skipped.
4. _System Call Protection._ Ensure the system call dispatcher in the kernel executes the correct system call function.
5. _Dynamic CFI Instrumentation._ Provide a dynamic CFI instrumentation to ensure protection between consecutive program executions.
### System Call Protection
To fulfill requirements **R1** to **R3**, SFP introduces a two-step approach cryptographically linking the syscall to the state of the deployed SCFI scheme. First, at the system call caller site, we cryptographically link the system call origin and which system call we want to execute to the cryptographic CFI state. Second, at runtime, we perform a second-stage linking operation during the system call operation, confirming that the correct syscall gets executed.
First-Stage System Call Linking.We statically identify at compile-time which system call is getting executed for all locations in the program. To protect the system call, SFP binds the syscall to the CFI state, _i.e._, to perform a CFI state update with the system call number. The system call number is a monotonically increasing number, thus not providing a significant Hamming distance between different system calls. A single bit-flip on the system call number changes the system call to a different one. As a result, the system call number cannot safely be used to bind it to the CFI state since faults can easily manipulate the system call to a different one.
To overcome this limitation and perform a safe and secure state update, we need to compute a system call-dependent update value with a sufficiently large Hamming distance. In SFP, we exploit the cryptographic properties of ARM PA for this purpose. We use computation of a PACIA operation, with the system call and a random modifier as input, and compute a cryptographic 15-bit patch value for the particular system call. Due to the cryptographic MAC operations of ARM PA, the patch values for subsequent system call numbers have a large Hamming distance and cannot be computed without having access to the secret ARM PA key. The computation of those patch values occurs at compile-time or load time and replaces the empty patch values in the binary.
Before executing a system call and jumping to the kernel, we patch the CFI state with the statically computed system call patch, thus performing the _first-stage_ linking. At this point in time, we bind the future execution of the particular system call to the CFI state ahead of executing it. Performing first-stage linking already provides protection for requirements **R1** and partly **R2**.
Second-Stage System Call Linking.After linking the system call to the CFI state in the user-space of the program, the system call is executed, and the execution switches into the kernel. Via dispatching code and the selected system call in the general-purpose register wd, the kernel selects the correct system call function and executes it. At the end of each system call function, we apply a second patch, _i.e._, the _second-stage_ linking to the CFI state, confirming that the previously selected system call was really executed. This patch value is computed dynamically during the execution of the syscall. The second linking step ensures that both requirements **R2** and **R3** are fulfilled.
In Figure 2, we summarize SFP's system call protection. A user program performs the first-stage linking and patches the CFI state with a statically computed syscall patch to link the execution of a system call. The execution transitions to the kernel, which executes the desired system call function. At the end of the system call, the kernel performs the second-stage linking operation, followed by a CFI check operation. The later second-stage linking operation only succeeds when the correct system call is linked to the CFI state. As a result, SFP's approach translates system call errors, independent of how they occur, to CFI state errors, which eventually are detected through the checking policy of the selected CFI protection scheme. Note, Figure 2 includes CFI checks at the beginning and end of the syscall to immediately detect a wrong syscall when entering the kernel and after the syscall's execution.
### Dynamic Instrumentation
Existing SFCFI protection schemes (Kang et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019) use a static post-processing or encryption phase. A dedicated post-processing tool recovers the control-flow, computes the patch and check values, and modifies the program. The static approach with a single
Figure 2. System Call protection in SFP. Before a syscall, we cryptographically bind the syscall to the CFI state for later verification and second-stage linking in the kernel.
encryption key leads to the fact that all executions of the same program use the same CFI values, e.g., patches, updates, or checks. By observing the used CFI-related values, attackers can more easily craft valid CFI states to bypass the control-flow protection.
In SFP, we overcome this limitation by splitting up the toolchain and integrating the CFI instrumentation into the kernel. When starting a program, the ELF loader of the OS identifies a CFI instrumented program. If generates a random ARM PA encryption key and stores it in the process task structure. The ELF loader then performs the per-program call unique CFI instrumentation and computes the expected CFI state and all patch values needed to handle the control-flow. The CFI states are stored along with the process task structure within the kernel. With this mechanism, subsequent calls to the same program create different encryption keys. As a result, it guarantees that different CFI values are generated on each new program start, _i.e._, fulfilling requirement **R4**.
Kernel Checking PolicyIn SFP, we develop a novel CFI checking policy at the edge of the operating system. Due to dynamically instrumenting the program when starting it, the operating system exactly knows the expected CFI state for every location of the program. When a user program now enters the kernel, e.g., due to a system call instruction, the kernel, which has access to both the user program state and the expected CFI states, can verify them. If the current CFI state matches the expected state, the system call continues. However, if the CFI state deviates from the expected state, a CFI error is detected, and the operating system aborts the program execution. A CFI check at the end of the syscall confirms the execution of the right syscall. Apart from system calls, a user program can enter the operating system also via different execution paths. We include the same checking policy when a timer interrupt is raised, and the kernel is entered.
## 5. Implementation
The prototype implementation of SFP consists of two parts. First, we develop a toolchain to automatically compile and instrument arbitrary C-programs with CFI, including a custom runtime library. Second, we modify the kernel of the Linux operating system to include the system call verification, the new checking policy, and the dynamic instrumentation on the program start.
### Toolchain
CompilerWe base the toolchain on the modified compiler of FIPAC (Zhou et al., 2017), which is based on the LLVM (Levy et al., 2017) compiler framework. We adapt the AArch64 backend of the compiler to instrument the control-flow and embed control-flow meta information in a custom section of the ELF binary. The compiler inserts the updates for every basic block, inserts patches for control-flow merges, and also deals with call instructions. Our modified compiler emits a running ELF binary but leaves all patch values for control-flow merges and system calls to be zero. The necessary post-processing step is shifted to the operating system, which computes all patches at the program start. Note that the instrumented program does not contain any check instructions as they are part of the transition to the operating system and are performed in the kernel.
C Standard LibrarySystem calls are typically invoked via wrapper functions provided by the standard library of the programming language. This prototype toolchain uses a CFI-instrumented version of the _musl_(Levy et al., 2017) C standard library. The standard library provides wrapper functions for all system calls or uses system calls directly in different library functions. We identify every system call in the must standard library and insert the necessary patch sequence containing an immediate load and the xor-based state update ahead of executing the system call. Listing 2 summarizes the first-stage linking, where the immediate value for the mov instruction is zero. When starting the binary, the operating system computes the actual patch value for this system call and fills out the correct load value.
### Kernel Support
SFP requires minor modifications to the operating system. We base the prototype of SFP on the Linux kernel in version 5.15.32 (Blees et al., 2016).
Dynamic Instrumentation on Program StartOn program start, when an instrumented ELF binary is started, SFP performs the per-program instrumentation of the program. First, the kernel generates a random encryption key used for the PA instrumentation. With the help of control-flow metadata, which is stored along with the ELF binary in a metadata section, we compute the CFI state throughout the program and fill the necessary patch values for justifying signatures. Furthermore, we compute the syscall- and key-dependent patch values that are used to protect the system call interface. For every system call in the program, we compute its PAC based on the system call number and user-space program unique modifier. The resulting PAC value, which is not guessable by the attacker, is filling out the immediate patch value before the syscall.
As discussed, the instrumented program does not contain dedicated CFI check operations as they are performed when entering the kernel. Instead, we store the expected CFI state for each program location in the task's kernel structure. To reduce the storage overhead, we use a RangeMap, to only have one entry for a contiguous range of states, where it does not change.
System Call VerificationDuring the system call, the user program updates the CFI state with a statically computed cryptographic patch value that depends on the system call number. The verification that the correct system call gets executed happens in the kernel. After the system call jumps into the kernel, a dispatcher code selects the correct system call function to be executed. At the end of every system call function in the kernel, we perform the second-stage linking. Based on the system call number, we dynamically compute a second patch value dependent on the currently executed system call. In Listing 3, we summarize this operation sequence, where we perform the second-stage linking within the kernel. To retrieve a cryptographically secure patch value, we exploit ARM PA's PACIA instruction, which takes the system call and a modifier as input operands. Note that the modifier used for the kernel update of the CFI state is different from the one used for the first-stage linking in the user program. This property is essential to avoid attackers being able to skip system calls entirely since patching the CFI state twice with the same value would cancel out and has no permanent
effect on the CFI state. We finally apply the computed patch to the CFI state and clear the lower bits from the system call.
Checking Policy at the Kernel BoundaryWhenever a user program enters the kernel, SFP performs a CFI check to validate if the current CFI state still matches the expected state. We perform CFI checks on two entering points: During a system call and when a timer interrupt is raised. With the help of the CFI states stored in a RangeMap within the process structure and the knowledge of the program's current program counter, we look up the expected CFI state for the program location. If the current CFI state, stored in the register x28 of the user program state, diverges from the expected state, a CFI error is raised, and SFP stops the program execution. For syscalls, we perform a second CFI check at the end of the syscall function in the kernel to ensure the syscall was really executed.
## 6. Evaluation
In this section, we first evaluate the security of SFP and show how it provides protection and the defined threat model. Second, we evaluate the functionality and the performance overhead of the prototype implementation.
### Security Evaluation
We analyze the security guarantees of SFP and show how different attacks within the threat model are mitigated.
Control-Flow Hijacks in the User-Space or KernelSF provides CFI protection for the user-space application based on the selected underlying CFI protection scheme. The prototype uses FIPAC, a basic-block granular CFI scheme, protecting all direct/indirect branches as well as direct/indirect calls. The protection domain includes the C standard library, which is fully CFI instrumented. Consequently, an attacker cannot redirect syscalls in the user-space application by redirecting the control-flow to a different wrapper function of the standard library. Control-flow attacks in the kernel are detected via the kernel internal CFI protection scheme.
Skipping a System CallWhen skipping a system call instruction, _i.e._, the svc instruction, the first-stage linking already occurred. Subsequently, the skipped system call misses the second-stage linking from the kernel, which yields a wrong CFI state, which is detectable through the CFI checking policy. However, if the entire system call instruction sequence is skipped, _i.e._, first-stage patching and the syscall instruction are omitted, the hijack is still detectable. As both patch operations are missing on the CFI state, the state is wrong again, and a subsequent CFI check, e.g., when the program gets scheduled, detects the invalid state. In both cases, SFP transforms the skipped system call into a CFI error, which manifests itself in a wrong CFI state, which is detectable.
Changing a System CallA fault on the register containing the system call number, or a combined attack, in which the attacker controls the register used to execute the system call, redirects the system call to a different one. SFP protects against both attacks. By applying the first-stage linking to the CFI state, the correct system call is already bound to its future execution. Manipulating the system call register, e.g., due to a fault or software vulnerability, leads to applying the wrong system call patch to the CFI state. When the system call is executed, the CFI state for that program differs from the expected state, and the CFI check in the kernel detects the problem and aborts the program.
To bypass a system call, the attacker only has a single chance to change the system call number and manipulate the previous system call patch to correct one for this location. However, the system call patch is protected via the secret ARM PA key, which the attacker cannot access. Guessing the PAC leads to a probability of \(p=\frac{1}{2^{5}}=0.0031\,\%\) for getting the correct patch value, where 15 is the configured PAC size of our prototype implementation. Furthermore, due to the dynamic instrumentation on the program startup, the system call patches always differ between subsequent calls of the same program. As a result, the attacker cannot learn new patch information between subsequent program calls.
### Functional Evaluation
To validate the functional correctness of SFP, we emulate the execution on the functional simulator QEMU (Wang et al., 2017) in version 7.0.0. Since this simulator currently only supports ARM PA from ARMv8.3-A, we extend it to include ARM PA of ARMv8.6-A to support the CFI protection. The functional evaluation runs the modified Linux kernel from the prototype and can start and execute instrumented programs, where all system calls are protected. Within the kernel, the functional simulator performs the second-stage linking and a CFI check to verify the execution of the correct syscall.
To verify the functionality of the countermeasure, we emulated skipping a system call and modifying the system call number. In both cases, SFP detects the attack through the next CFI check since the CFI state became invalid and stops the program execution.
### Performance Evaluation
At the time of evaluation, there is no publicly available system supporting ARMv8.6-A needed to run FIPAC. However, to conduct the performance evaluation and to measure the performance impact of SFP, we emulate the runtime overhead of PA instructions. Therefore, we base the performance evaluation on a Raspberry Pi 4 Model B (Rasman et al., 2017) with 8 GB RAM configured with a fixed CPU frequency of 1.5 GHz. The Raspberry Pi contains an ARM Cortex-A72 CPU based on ARMv8-A but without Pointer Authentication. To emulate the overhead of PA instructions, we replace them with a PA-analogue instruction sequence, _i.e._, four consecutive XORs. Related work (Rasman et al., 2017; Rasman et al., 2017) evaluated this instruction sequence to mimic the timing behavior of a PA instruction.
MicrobenchmarkTo evaluate the overhead of SFP executing system calls, we perform a simple microbenchmark. Our benchmark measures the syscall latency of the getpid system call, which is a side-effect-free syscall and is used in related works to benchmark the syscall execution path (Brock et al., 2018; Brock et al., 2018). We execute the system call 10 million times and measure the system call latency via the processor's inbuilt cycle counter. Figure 3 summarizes our evaluation results, showing the syscall latency in different kernel configurations. On the plain unmodified Linux kernel, we measure an average
system call latency of 2131 cycles. When integrating the system call verification alone, the latency rises to 2144 cycles. Furthermore, with the CFI checks alone enabled, the latency increases to 2175 cycles. When both are active, we measure a system call latency of only 2185 cycles, impacting the system call latency by only 1.9 %.
_Macrobenchmark._ To demonstrate the applicability of SFP on a larger scale, we perform a macrobenchmark on real-world applications. We compiled the SPECspeed 2017 (Peters et al., 2017) benchmark with our toolchain, including only C-based programs. In Figure 4, we plot the runtime overheads in two different configurations compared to the plain uninstrumented code. First, we only include the dynamic verification, including the new CFI checking policy, that verifies the CFI state of user programs when entering the kernel. Second, we include the syscal protection based on the two-stage linking approach together with the previously evaluated CFI checking policy.
During the evaluation, we measure a geometric mean overhead of 18.8 % for the new CFI checking policy and 20.6 % with the system call protection and CFI checking policy in place. Based on the evaluation of the SPEC 2017 benchmark, we only measure a difference in the overhead of 1.8 % between the pure CFI protection and the full system call protection of SFP. This result shows that the dominating part of the overhead comes from the CFI instrumentation, not from the system call protection. Thus, reducing the overheads of the CFI protection directly influences the performance of SFP.
## 7. Discussion
This section discusses prototype limitations and shows how SFP is compatible with other CFI protection schemes.
### Dynamic System Call Instrumentation
In our prototype, we manually instrument all syscalls of the C standard library with the necessary patch instructions, consisting of a load of an immediate patch value followed by applying the patch value to the CFI state. The immediate value is zero and is set to its concrete value during the dynamic instrumentation of the startup phase of the program. In a future version of SFP, we could instrument the compiler to detect syscall instructions, _i.e._, svc, and then automatically insert the necessary patch sequence. This enhancement would also include cases where syscalls are invoked manually without the wrapper functions of the standard library.
### CFI Checking Policy Extension
SFP currently performs CFI checks when entering the kernel through a syscall or a timer interrupt. A future version of this work can extend the CFI checking policy to include all interrupts of the system. Our microbenchmark shows adding new CFI checks adds minimal overhead to the syscall latency. Thus, adding additional CFI checks for all interrupt handlers are expected to have minimal impact on the system performance.
### Compatibility
Although SFP uses FIPAC as the underlying CFI protection scheme, the design or the protection mechanism of SFP is generic and compatible with different CFI schemes. To apply the protection of SFP to a different protection scheme, two things are required. First, the CFI protection scheme must be stateful, and there must be a possibility to manipulate the state, e.g., via standard or custom instructions, to inject the system call patch. Second, it is necessary to be able to dynamically compute a second system call patch required for the second-stage linking in the kernel. With these requirements, SFP is compatible with existing CFI protection schemes such as SCFP, SOFIA, or any other state-based CFI protection scheme.
## 8. Related Work
SCFP (Zhu et al., 2017) and SOFIA (Peters et al., 2017) are hardware-assisted control-flow integrity schemes on the instruction level. They encrypt the program's instruction stream at compile-time, and perform a fine-granular decryption during runtime to retrieve the correct instruction sequence. In order to deal with the performance penalty, both protection schemes require intrusive hardware changes. This limits their applicability to small custom embedded processing cores but does not provide protection on a larger scale.
FIPAC (Zhu et al., 2017) is a software-based PCFI protection scheme that exploits the architectural features of recent ARM processors. This protection scheme instruments all basic blocks of a user program with a running CFI signature, thus providing control-flow integrity at that granularity. They present three checking policies, _i.e._, where to check whether the running CFI signature still matches the expected one. However, FIPAC only protects the control-flow of the user-space part of the program. Although FIPAC is developed for being used with operating systems, they miss the protection of the system call interface to the kernel.
SFIP (Bordes et al., 2017) implements coarse-grained syscall flow protection for user-space applications. They statically identify the possible transitions between different syscalls at compile-time and then enforce that at runtime. Since SFIP only considers software attackers in their threat model, they fail to protect against fault attacks.
## 9. Conclusion
In this work, we presented SFP, a protection mechanism that provides system call flow protection on top of ordinary CFI, protecting the interface to the kernel against both software and fault attacks. We show that an already employed CFI protection scheme
Figure 4. Macrobenchmark shows the performance impact of SFP on the SPEC 2017 benchmark. We evaluate the impact of CFI only and SFP, including the system call protection.
Figure 3. The microbenchmark shows the system call latency of the getpid system call for different kernel configurations. SFP increases the system call latency by 1.9 %.
can be used as a versatile tool to protect the system call interface to the kernel. Furthermore, we present a new CFI checking policy at the edge of the kernel to verify the CFI state for all transitions to the kernel. Combined with a dynamic CFI instrumentation on program startup, the attacker cannot learn CFI or system call-related information from subsequent program executions. We showed a prototype implementation comprising an LLVM-based toolchain to automatically instrument arbitrary programs and protect all system calls. A modified Linux kernel running on a Raspberry Pi evaluation setup is used to show the applicability of SFP to real-world programs. Our evaluation based on a microbenchmark and on the SPEC 2017 application benchmark shows an average runtime overhead of 20.6 %, which is only an increase of 1.8 % compared to plain CFI protection. This slight increase in the performance impact shows the effectiveness of SFP for protecting all system calls of a program.
## Acknowledgments
This work has been supported by the Austrian Research Promotion Agency (FFG) under grant number 888087 (SEIZE).
|
2305.01733 | Cross-view Action Recognition via Contrastive View-invariant
Representation | Cross view action recognition (CVAR) seeks to recognize a human action when
observed from a previously unseen viewpoint. This is a challenging problem
since the appearance of an action changes significantly with the viewpoint.
Applications of CVAR include surveillance and monitoring of assisted living
facilities where is not practical or feasible to collect large amounts of
training data when adding a new camera. We present a simple yet efficient CVAR
framework to learn invariant features from either RGB videos, 3D skeleton data,
or both. The proposed approach outperforms the current state-of-the-art
achieving similar levels of performance across input modalities: 99.4% (RGB)
and 99.9% (3D skeletons), 99.4% (RGB) and 99.9% (3D Skeletons), 97.3% (RGB),
and 99.2% (3D skeletons), and 84.4%(RGB) for the N-UCLA, NTU-RGB+D 60,
NTU-RGB+D 120, and UWA3DII datasets, respectively. | Yuexi Zhang, Dan Luo, Balaji Sundareshan, Octavia Camps, Mario Sznaier | 2023-05-02T19:04:29Z | http://arxiv.org/abs/2305.01733v1 | # Cross-view Action Recognition via Contrastive View-invariant Representation
###### Abstract
Cross view action recognition (CVAR) seeks to recognize a human action when observed from a previously unseen viewpoint. This is a challenging problem since the appearance of an action changes significantly with the viewpoint. Applications of CVAR include surveillance and monitoring of assisted living facilities where is not practical or feasible to collect large amounts of training data when adding a new camera. We present a simple yet efficient CVAR framework to learn invariant features from either RGB videos, 3D skeleton data, or both. The proposed approach outperforms the current state-of-the-art achieving similar levels of performance across input modalities: 99.4% (RGB) and 99.9% (3D skeletons), 99.4% (RGB) and 99.9% (3D Skeletons), 97.3% (RGB), and 99.2% (3D skeletons), and 84.4% (RGB) for the N-UCLA, NTU-RGB+D 60, NTU-RGB+D 120, and UWA3DII datasets, respectively.
## 1 Introduction
Human (single) action and activity recognition from video data have a wide range of applications including surveillance [42], human-computer interaction [17] and virtual reality [51]. Recent developments in deep learning and the release of general-purpose large scale datasets, such as the Kinetics Human Action Video Dataset [7, 6, 49] with up to 700 classes and ActivityNet [4] with 203 activity classes and untrimmed videos, have fostered a large body of research on both action and activity recognition.
Most of the action recognition literature [22, 63] do not explicitly address the effect of view changes. Instead, they either focus on single views, rely on very large datasets where different viewpoints are well represented, or use other modalities such as 3D motion capture data or depth information which are easier to relate across views but more expensive to capture and not always available.
In contrast, the focus of this paper is _Cross-view Action Recognition_ (CVAR), where the goal is to identify actions from videos captured from _views entirely unseen during training_. CVAR is a challenging problem since the appearance of the actions can change significantly between different viewpoints, as illustrated in Fig. 1. Because of this, many approaches incorporate or rely entirely on 3D data. However, being able to do CVAR using only RGB data (during training and/or inference), would open up the possibility of training with much smaller scale datasets (i.e. no need to have data from all possible views) and eliminates the need for camera synchronization and collection of expensive 3D data. Motivated by this, we propose a novel framework (Figs. 1,3) that captures dynamics-based information from skeleton sequences in order to perform cross view classification in a view-invariant feature space. The main contributions of this paper are:
* A flexible and lean invariance-based CVAR framework, suitable for a variety of input modalities: RGB alone, 3D skeletons alone, or a combination of both. The proposed model uses only 1.4M parameters and 11.0G FLOPS, 50% and 30% less than the previous state-of-the-art (SOTA), in the NTU-60 benchmark.
Figure 1: **Proposed framework for Cross-view Action Recognition (CVAR).** CVAR requires making inferences using data from previously unseen viewpoints during training. The problem is challenging since actions can look significantly different from different points of view. We propose a framework where the classification is done in a view-invariant feature space.
* Our method outperforms the current SOTA performance in four standard CVAR benchmark datasets for all input modalities and on the single-view action sub-JHMDB benchmark for RGB inputs. Furthermore, the level of CVAR performance is the same across all modalities, bridging a long standing performance gap between 2D and 3D based methods.
* We report extensive ablation studies evaluating different design choices, types of input data, and datasets to perform CVAR and the related problem of cross subject action recognition, where the actors in the testing data have not been seen during training.
## 2 Related Work
A comprehensive review of approaches to the general topic of action recognition can be found in the recent surveys [22, 63]. Here, we focus on the particular problem of CVAR, where the goal is to recognize actions from previously unseen view points.
Many recent methods incorporate depth data or 3D skeletons, since it is easier to relate this type of information across views. Amir et al. [48] used a structured sparsity learning machine to explore factorized components when RGB and Depth information are both available. Li et al. [28] proposed to use view-adversarial training to encourage view-invariant feature learning using only depth information. Wang et al. [57] extracted features from depth and RGB modalities as a joint entity through 3D scene flow to get spatio-temporal motion information. Varol et al. [53] proposed an approach to generate synthetic videos with action labels using 3D models. Yang et al. [62] learn skeleton representations from unlabeled skeleton sequence data using a cloud colorization technique. In [9], Chen et al. proposed a channel-wise topology refinement graph convolution. Friji et al. [18] used Kendall's shape analysis while Li et al. [30] used elastic semantic nets. Nguyen [41] proposed to represent skeleton sequences using sets of SPD matrices. Su et al. [50] used motion consistency and continuity to learn self-supervised representations.
Relatively few methods use only RGB data. Earlier approaches used epipolar geometry to perform coarse 3D reconstruction [52, 47], bag of words to get view invariant representations [32], dense feature tracking to obtain view invariant Hankelet features [26], or used a discretization of the viewing hemisphere to learn view invariant features using shape and pose [46]. More recent approaches use pose heatmaps [16], codebooks [25, 36, 45], attention mechanisms [1], adversarial training [39], view-based batch normalization [19], CNN [24, 34], RNN [14, 33, 3, 65] and GraphCNN [61, 27, 58] to learn view-invariant features. [59, 38, 60] also try to achieve view invariance by using information from enough views during training and Vyas et al. [54] proposed a method using representation learning to get a holistic representation across multiple views. There is also a stream of approaches [29, 67, 68, 44] that seek a view independent latent space to compare features from different views. In spite of these efforts, the performance gap between RGB-based approaches and other modalities-based approaches remains large.
## 3 Proposed Approach Overview
Inspired by studies by Johansson [23], which suggest that it is possible to understand human motions by only paying attention to a few moving points, our approach will leverage recent advances in computer vision that have developed efficient and accurate detectors of skeletons in 2D images.
In the CVAR setup, our goal is to identify actions from the motion of the human joints, captured from views entirely unseen during training. However, this is a challenging task as illustrated in Fig. 2. There, it can be seen that the raw trajectories of two corresponding joints can be significantly different when the action is observed for different amounts of time, from different viewpoints, and using asynchronous cameras. We address this challenge by learning viewpoint and initial condition dynamics-based invariant representations (DIR) that capture the underlying dynamics of the observed motions for the human joints, using only data from the training (source) views. The proposed (DIR), which is described in section 4, can be used with sequences of different lengths, from either 2D or 3D trajectories, or both.
While it is true that motion alone provides strong cues for action recognition, scene context also carries useful information. Thus, if RGB data is also available, we propose to use a two-stream approach where one branch captures the DIR from skeleton data and the other branch captures
Figure 2: **Challenges in understanding actions from skeletons:** The top and bottom frames depict two sequences (of different lengths) of the same action, observed from different viewpoints using asynchronous cameras. It is difficult to compare trajectories of corresponding keypoints when they have different lengths, and are seen from different view points using unsynchronized sensors.
the context information representation (CIR) from the RGB frames. The details of the CIR branch are given in section 5. A diagram of the complete architecture is shown in Fig. 3, and its implementation details are provided in section 7.
## 4 Dynamics-based Invariant Representation
Consider two trajectories of the same human joint while performing the same action, but observed unsynchronously from different view points (as illustrated in Fig. 2):
\[\mathbf{y}_{1:T_{1}}^{(1)}=[\mathbf{y}_{1}^{(1)},\mathbf{y}_{2}^{ (1)},\ldots,\mathbf{y}_{T_{1}}^{(1)}]^{t} \tag{1}\] \[\mathbf{y}_{1:T_{2}}^{(2)}=[\mathbf{y}_{1}^{(2)},\mathbf{y}_{2}^ {(2)},\ldots,\mathbf{y}_{T_{2}}^{(2)}]^{t} \tag{2}\]
where \(\mathbf{y}_{k}^{(i)}=(x_{k}^{(i)},y_{k}^{(i)},z_{k}^{(i)},1)^{t}\) or \(\mathbf{y}_{k}^{(i)}=(x_{k}^{(i)},y_{k}^{(i)},1)^{t}\) are the 3D or 2D joint's homogeneous coordinates of the \(k^{th}\) observation from viewpoint \(i\), respectively. These trajectories are observations of corresponding joints, and hence we can assume that they are related by a linear transformation, once they are temporally aligned:
\[\mathbf{y}_{k}^{(2)}=\mathbf{A}\mathbf{y}_{k+\delta}^{(1)} \tag{3}\]
where \(\delta\) is the (unknown) temporal delay between viewpoints, and the (unknown) matrix \(\mathbf{A}\) is a \(4\times 4\) rotation and translation transformation \(\mathbf{A}=[\mathbf{R}|\mathbf{T}]\) if both trajectories are 3D, a \(3\times 4\) affine matrix, if one of them is an affine 2D projection of the other, or a \(3\times 3\) affine matrix if both trajectories are 2D affine projections of the 3D motion.
In this paper we will model each trajectory as the impulse response of a discrete linear time invariant (LTI) system of (unknown) order \(n_{i}\), with transfer matrix in the frequency domain \(\mathcal{Y}^{(i)}(z)=\frac{\mathbf{N}^{(i)}(z)}{D^{(i)}(z)}\), where \(D^{(i)}(z)\) and the the entries of the vector \(\mathbf{N}^{(i)}(z)\) are polynomials of degree \(n_{i}\).
**Theorem 1:** Given two corresponding temporal sequences (1) and (2) satisfying (3), generated from observable LTI systems and such that \(T_{i}\geq 2n_{i}+1\), in the absence of noise, then, the denominators of their transfer matrices are the same, i.e. \(n_{1}=n_{2}\) and \(D^{(1)}(z)=D^{(2)}(z)\).
**Proof:** Please see supplemental material.
**Corollary 1:** Since the denominator of the transfer functions for both trajectories are identical, their roots, i.e. the poles of the corresponding systems, \(p_{1},p_{2},\ldots,p_{n}\) are also the same: \(D^{(1)}(z)=D^{(2)}(z)=\Pi_{i=1}^{n}(z-p_{i})\).
**Remark:** Comparing the raw sequences themselves is meaningless: they can be very different, even though they are from the same joint and they might have different lengths. The above corollary provides a principled way of comparing them by comparing instead the poles of the underlying dynamics, since they are invariant to affine viewpoint and to initial conditions changes and are independent of the sequence length. Both types of invariances are relevant to the CVAR problem. Affine invariance provides support for a view agnostic dynamic encoding of the input data, while initial condition invariance shows that this representation is valid, even when the data from different views are not synchronized, or might be of different lengths, for example with one view showing only a portion of the action.
Figure 3: **Proposed architecture. The DIR Stream learns invariant dynamics-based features from 2D or 3D skeleton sequences. The CIR Stream learns the appearance and context of the action when RGB data is available. When applying DIR only, ‘CLS1’ takes features \(\boldsymbol{F}\) to predict probabilities for each class; when applying CIR only, ‘CLS2’ will return action probabilities from \(\boldsymbol{F}^{*}\). When using the full 2-stream architecture, the action probabilities are predicted by fusing \(\boldsymbol{F}\) and \(\boldsymbol{F}^{*}\).**
### Design of the DIR branch
The input to the DIR branch is a set of motion sequences. For example, they can be \(2M\) sequences with the \(x\) and \(y\) coordinates for \(M\) joints as detected by an off-the-shelf pose estimator such as Openpose [5], or \(3M\) sequences with the \(x,y,z\) joint coordinates measured by a 3D motion capture sensor over time. This input is processed by three main modules (top Fig. 3), as described in detail next.
\(\bullet\)**RHS:** The RHS module encodes the input sequences using a Re-weighted Heuristic Sparsity optimization layer to find fixed length, sparse representations of the inputs. This is the first step towards identifying the invariant poles and it is motivated by the observation that the z-transform of the impulse response of each of the input sequences could be written as the sum of \(n\) impulse responses, one for each of its invariant poles, if the poles were known:
\[\mathcal{Y}(z)=\frac{\mathbf{N}(z)}{\Pi_{i=1}^{n}(z-p_{i})}=\sum_{i=1}^{n} \frac{\mathbf{c}_{i}z}{z-p_{i}}\]
Taking the inverse of the \(z\)-transform, we can write: \(\mathbf{y}_{k}=\sum_{i=1}^{n}p_{i}^{k-1}\mathbf{c}_{i}\). Collecting the equations for \(k=1,\ldots,T\):
\[\mathbf{y}_{1:T}=\left[\begin{array}{cccc}1&1&\cdots&1\\ p_{1}&p_{2}&\cdots&p_{n}\\ \vdots&\vdots&\cdots&\vdots\\ p_{1}^{T-1}&p_{2}^{T-1}&\cdots&p_{n}^{T-1}\end{array}\right]\left[\begin{array}[] {c}\mathbf{c}_{1}^{t}\\ \mathbf{c}_{2}^{t}\\ \vdots\\ \mathbf{c}_{n}^{t}\end{array}\right]=\mathbf{P_{y}}\mathbf{C_{y}} \tag{4}\]
where the matrix \(\mathbf{P_{y}}\) is invariant, since it is completely determined by the invariant poles.
However, neither the number of poles \(n\) nor the poles themselves are known a-priori. Thus, the RHS module uses an over complete (to be learned) dictionary of candidate poles \(\mathcal{D}_{N}=\{1,\rho_{1},\ldots,\rho_{N}\}\) with \(N>>n\) to select a subset \(\mathcal{D}_{n}\) of up to \(n\) poles to minimize the reconstruction error:
\[\{p_{1}^{*},\ldots,p_{n}^{*}\}=\arg\min_{\mathcal{D}_{n}\subset\mathcal{D}_{N }}\left\{\min_{\mathbf{C_{y}}}\|\mathbf{y}_{1:T}-\mathbf{P_{\mathcal{D}_{n}}} \mathbf{C_{y}}\|_{2}^{2}\right\}\]
where \(\mathbf{P_{\mathcal{D}_{n}}}\) is the matrix formed from the poles in \(\mathcal{D}_{n}\). Since the outer minimization is a combinatorial optimization problem (due to the need to select \(n\) poles out of the possible \(N\), where \(n\) is not known), the RHS module jointly solves for the poles and \(\mathbf{C_{y}}\) by optimizing:
\[\min_{\mathbf{C_{y}}}\|\mathbf{y}_{1:T}-\mathbf{P_{\mathcal{D}_{N}}}\mathbf{C _{y}}\|_{2}^{2}+\lambda\|\mathbf{C_{y}}\|_{1}\]
where the first term of the minimization objective penalizes the reconstruction error and the second term penalizes high order systems. Then, the order of the system \(n\) is given by the number of non-zero elements of \(\mathbf{C_{y}}\), while the poles \(\{p_{1}^{*},\ldots,p_{n}^{*}\}\) are those associated with the corresponding columns of \(\mathbf{P_{\mathcal{D}_{N}}}\). In [35], they solve a similar problem using the FISTA [2] algorithm. Our experiments show that in practice, using FISTA results on most of the elements of \(\mathbf{C_{y}}\) to be small but non-zero, leading to overfitting. We addressed this problem by further promoting sparsity by introducing a re-weighted heuristic approach [40] where we run the FISTA optimization module repeated times instead of only once. Each time, we increase a penalty applied to any small but non-zero coefficients from the previous iteration to push them closer to zero in the current iteration. This is easily accomplished by starting from the previous solution and using the inverse of the magnitude of the coefficient as its penalty. Moreover, since each iteration starts from the previous solution, the increased computational cost of running FISTA again is small. Finally, the loss function to learn \(\mathcal{D}_{N}\) is:
\[\mathcal{L}_{D}=\|\mathbf{Y}-\mathbf{P_{\mathcal{D}_{N}}}\mathbf{C}\|_{2}^{2} +\lambda\|\mathbf{C}\|_{1} \tag{5}\]
where \(\mathbf{Y}\) is a matrix with all the input joint trajectories.
\(\bullet\)**Binarization Module:** Different from [35], we are not interested on the matrix \(\mathbf{C}\) since it is not affine invariant. This is easy to see since, in general, \(\mathbf{A}\mathbf{Y}=\mathbf{A}\mathbf{P}\mathbf{C}\neq\mathbf{P}\mathbf{C}= \mathbf{Y}\). Instead, here we seek the poles selected by the non-zero elements of \(\mathbf{C}\). To this effect, DIR uses a binarization module to find an indicator vector \(\mathbf{b}\) for each input sequence of dimension \(N\). Its bit \(\mathbf{b}_{k}\) is turned "on" if the value of \(\mathbf{c}_{k}\) is non-zero to indicate that pole \(\rho_{k}\in\mathcal{D}_{N}\) is needed, and turned "off" otherwise. Note that an added benefit of using this representation is that while the order of the underlying system and number of selected poles \(n\) can change from sequence to sequence, the dimension of the indicator vector is fixed and set to the size \(N\) of the dictionary \(\mathcal{D}_{N}\).
We explored two approaches to threshold the latent features \(\mathbf{C}\). In one approach, inspired by [31], we mapped the features to +1/-1 by incorporating a binarization loss term:
\[\mathcal{L}_{BI}=\||\mathbf{b}|-\mathbf{1}\|_{1} \tag{6}\]
where, \(\mathbf{b}\in\{+1,-1\}^{N}\) and \(N\) is the number of bits of the binary code. This module consists of three blocks and two Fully Connected (FC) layers. The first block combines one Conv2D layer with a LeakyRelu followed by Maxpooling. Then, the last two blocks have the same pattern, combining Conv2D + LeakyRelu with Avgpooling. The output binary code \(\mathbf{b}\) remains the same size as \(\mathbf{C}\) but with discrete values.
As an alternative approach, we used the Gumbel re-parametrization trick on \(\mathbf{C}\), followed by a feature-wise sigmoid function \(\sigma(.)\) to change each element drawn from a Bernouilli distribution, to learn the categorical distribution of \(\mathbf{b}\), where \(\mathbf{b}\in\{0,1\}^{N}\). That is, we define: \(\mathbf{g}(\mathbf{C})\thicksim\text{Bern}(\sigma(\mathbf{Gumbel}(|\mathbf{C}|; \theta))\) where we use absolute value to take care of both positive and negative values, and \(\theta\) are the Gumbel parameters. Then, the binarization is done by setting \(\mathbf{b}(i)=1\) if \(\mathbf{g}(i)>\alpha\) and \(\mathbf{b}(i)=0\), otherwise, where \(\alpha\)
is the Gumbel threshold. Finally, we used the training loss function:
\[\mathcal{L}_{Gumbel}=\|\mathbf{b}\|_{1} \tag{7}\]
\(\bullet\)**Classification Head:** It takes the binary invariant features from the binarization module and outputs the features for the action classifier. It consists of three 1D-Conv blocks(Conv1D+BN+LeakyRelu), two 2D-Conv blocks(Conv2D+BN+LeakyRelu) and one MLP block. The first three 1D Conv-blocks capture the global and local features of the given input features, while the following two 2D-Conv blocks take the concatenation of global/local features. The MLP block outputs the final action class predictions. This module uses cross entropy to compute the classification loss \(\mathcal{L}_{class}\) for action recognition with \(c\) classes:
\[\mathcal{L}_{class}=-\sum_{i=1}^{c}t_{i}\log(p_{i}^{rhs}) \tag{8}\]
where, \(t_{i}\) is the true label and \(p_{i}\) is the probability of the \(i^{th}\) class. More details of this module can be found in the supplemental material.
**Training Loss:** The DIR branch is trained with a combination of the modules losses: \(\mathcal{L}_{DIR}=\lambda_{1}\mathcal{L}_{class}+\lambda_{2}\mathcal{L}_{B}+ \lambda_{3}\mathcal{L}_{D}\), where the binarization loss \(\mathcal{L}_{B}\) is either (6) or (7), depending on which binarization module is used.
### Enforcing Dependencies between Trajectories
A shortcoming of the DIR branch as described above is that equation (4) decouples the coordinates (\(x\), \(y\), and \(z\)) of each joint and ignores physical couplings between pairs of joints (i.e. shoulder and elbow are connected), potentially using significantly more poles than strictly needed. To address this issue, we propose two improvements. Firstly, we propose to use a contrastive learning strategy similar to [8] as illustrated in Fig. 4, to encourage the trajectories of the coordinates of each joint to share poles. Here, the positive augmented examples are obtained by applying random affine transformations to the input skeletons before passing them through the DIR branch and a projection head \(g(.)\). After training is completed, we throw away the projection head and fine tune the DIR branch. Secondly, to encourage the network to learn that the motion of the joints is constrained by the limbs connecting them, we augment the input data to also include the trajectories of the coordinates of the middle point of each of the limbs. The effectiveness of these approaches is evaluated in our ablation experiments.
## 5 Context Information Representation
While the skeletons provide critical view-invariant motion information, RGB data can also provide useful scene context. Thus, we incorporate an I3D [7] based RGB branch to capture a context information representation (CIR) from the RGB frames when they are available. A diagram showing the components of this stream is provided at the bottom of Fig. 3 and the details are described next.
We modified the original I3D architecture by using a two-branch design to better solve our problem. ROIs are cropped around each actor, resized to \(3\times 224\times 224\) and then fed into an 'I3D head' to get local features for each actor while another 'I3D head' takes raw images to capture global features through time, simultaneously. Both I3D heads use the same layers as [7]: a 'conv3-1a-7x7' layer followed by a 3D max pooling layer. The local and global features provided by the heads are concatenated channel-wise to enrich the information from RGB images. Then, they are fed into the 'I3D Blocks', consisting of the I3D layers from 'conv3-1a' until'mixed-4c'. Finally, the layer 'Temporal pooling', which is a block combining two Conv3D layers, pools features along the temporal domain.
The loss function used by CIR to learn the probabilities for each action class is defined as: \(\mathcal{L}_{CIR}=-\sum_{i=1}^{c}t_{i}\log(p_{i}^{rgb})\). The combined probability is defined as \(p_{i}=\mathcal{F}(\beta_{1}f_{i}^{rgb}+\beta_{2}f_{i}^{rhs})\)1, where \(\mathcal{F}\) is the last fusion layer combining features \(f_{i}^{rgb}\) and \(f_{i}^{rhs}\). Then, the overall loss function using the DIR and CIR streams is given by:
Footnote 1: In our experiments, we set \(\beta_{1}:\beta_{2}=1:1\)
\[\mathcal{L}_{2-stream}=-\lambda_{1}\sum_{i=1}^{c}t_{i}\log(p_{i}^{{}^{\prime} })+\lambda_{2}\mathcal{L}_{B}+\lambda_{3}\mathcal{L}_{D} \tag{9}\]
## 6 Sampling Strategies
The backbone of the network uses a Sampling Clip module to process shorter sequences. We explored two possible sampling strategies, which are described next.
**Multi-clips.** Consider the input image sequence \(\mathcal{I}_{1:L}\) and its skeleton sequence \(\mathcal{X}_{1:L}\), where \(L\) is the total length of the input. We first uniformly sample \(n\) anchor frames from the
Figure 4: **Training DIR stream with Contrastive Loss.** Assuming ’\(Y\)’ represents input a 2D/3D skeleton, \(\tau\) and \(\tau^{\prime}\) are two different random affine transformations to augment the data with positve pairs. ’\(g\)’ is a projection head to learn feature projections from the DIR representations \(h_{i}\) and \(h_{j}\). The contrastive loss [8] is used to maximize the agreement between the projected features \(z_{i}\) and \(z_{j}\).
sequence and extract \(t\) frames centered at each of these anchor frames. For instance, if the first anchor is the \(j^{th}\) input frame, the first image clip \(\mathcal{I}_{t,1}\) is made of frames \(\mathcal{I}_{j-\frac{1}{2}:j+\frac{1}{2}}\). Therefore, \(\mathcal{I}_{1:L}\) is sampled to \(\{I_{t,1},I_{t,2},...,I_{t,n}\}\) and the corresponding skeleton sequences \(\mathcal{X}_{1:L}\) are sampled to \(\{X_{t,1},X_{t,2},...,X_{t,n}\}\). Note that these clips may or may not overlap. The network learns the representation from each clip and outputs the final decision by combining all clips together: _Action Label_\(=\arg\max(\frac{1}{n}\sum_{i=1}^{n}P_{i})\) where, \(P_{i}\) is the combined probability for the \(i^{th}\) clip.
**Single-clip.** Alternatively, we tested sub-sampling the sequence into a single clip. Here, the sampled clip consists of only the uniformly sampled anchor frames. In this case, the action label is given by _Action Label_\(=\arg\max(P)\) where \(P\) is the final probability from the network.
## 7 Reproducibility and Implementation Details
A Pytorch implementation of our approach will be made available. Pseudo code is also provided in the supplemental material. The input skeletons were normalized by the mean and variance, which were computed over the entire training sets. We also resized the input images to 3x224x224 and normalized them using the mean (0.485,0.456,0.406) and the standard deviation (0.229,0.224,0.225). We use SGD optimizer and set the learning rate to 1e-4 for the RHS module and to 1e-3 for the rest of the modules (e.g classifier).
The hyper-parameter \(\lambda\) in (5) was chosen by using a greedy search between 0.1 and 100 and balancing reconstruction error versus sparsity. In the end, if training only the DIR branch, we set \(\lambda=0.2\) in (5), \(\lambda_{1}:\lambda_{2}:\lambda_{3}=\)2:1:0.1 in the loss \(\mathcal{L}_{DIR}\), and the Gumbel threshold to 0.51; if training DIR and CIR, we set \(\lambda=0.1\), \(\lambda_{1}:\lambda_{2}:\lambda_{3}=\)1:1:0.1 in (9), and the Gumbel threshold to 0.505. The Gumbel threshold was determined by drawing the distribution of dynamic representations across the entire training set. During inference, the Gumbel threshold was kept the same. Furthermore, since the binarization loss term is unsupervised in the sense that its ground truth is unknown, we found beneficial to pre-train a standalone binarization module using synthetic data and fine tune the pre-trained during the end-to-end training.
## 8 Experiments
We performed experiments using four benchmark datasets for CVAR (N-UCLA, NTU-RGB+D60, NTU-RGB+D 120, and UWA3D Multiview II) and one dataset for single view action detection (sub-JHMDB). These datasets are described in detail in the supplemental material.
### Ablation Studies
We conducted ablation studies on the N-UCLA and NTU 60 datasets, Cross-view(CV) setup, to evaluate the effec
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{11}{|c|}{\multicolumn{11}{|c|}{\multirow{2}{*}{DIR Stream}}} & \multicolumn{8}{c|}{DIR Stream} & \multicolumn{4}{c|}{CIR Stream} & \multicolumn{4}{c|}{DIR + CIR Streams} \\ \hline Architecture & Baseline & RHS & RHS+BI & RHS+Gumbel & Baseline & RHS & RHS+BI & RHS+Gumbel & *3D & *43D & RHS+BI+*3D & RHS+Gumbel+*3D \\ \hline Sampling & Single & Single & Single & Single & Multiple & Multiple & Multiple & Multiple & Single & Multiple & Multiple & Multiple \\ \hline Accuracy(\%) & 86.0 & 86.7 & 89.0 & 90.2 & 87.1 & 87.5 & 90.1 & **92.9** & 87.5 & **91.2** & 94.4 & **95.7** \\ \hline \hline \multicolumn{11}{|c|}{\multirow{4}{*}{DIR Stream}} & \multicolumn{8}{c|}{Abitation Study: NTU- RGB+D 60 Cross View} & \multicolumn{4}{c|}{DIR Stream} & \multicolumn{4}{c|}{DIR + CIR Streams} \\ \cline{2-11} \cline{2-11} Architecture & Baseline & RHS & RHS+BI & RHS+Gumbel & Baseline & RHS & RHS+BI & RHS+Gumbel & *3D & *43D & RHS+BI+*3D & RHS+Gumbel+*3D \\ \hline Sampling & Single & Single & Single & Single & Multiple & Multiple & Multiple & Multiple & Single & Multiple & Multiple & Multiple \\ \hline Accuracy(\%) & 83.3 & 84.8 & 87.6 & 89.5 & 84.1 & 85.8 & 90.0 & **91.3** & 84.7 & **90.2** & 93.1 & **95.0** \\ \hline \end{tabular}
\end{table}
Table 1: **Ablation study on different architecture configurations and sampling strategies.** Input data is **only RGB video**. ‘Baseline’ uses a vanilla DYAN encoder [35] without binarization, ‘BI’ and ‘Gumbel’ indicate the type Binarization Module, and ‘*I3D’ stands for our modified version from the original paper[7].
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{11}{|c|}{\multicolumn{11}{|c|}{\multirow{2}{*}{DIR Stream}}} & \multicolumn{4}{c|}{Input Variations on NTU-60} \\ \hline \multicolumn{11}{|c|}{\multirow{2}{*}{DIR Stream}} & \multicolumn{2}{c|}{\# of joints= limbs} & CV & CS & FLOPS(G) & \#Params(M) \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & 1\({}^{\text{p}}\) & 17 & 96.6 & 93.7 & 15.90 & 2.00 \\ \cline{2-7} & 1\({}^{\text{p}}\)+L & 17*8 & 97.1 & 94.1 & - & - \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & 17 & 96.8 & 92.9 & **98.0** & **1.16** \\ \cline{2-7} & 3 & 25 & 97.3 & 93.1 & 9.84 & 1.19 \\ \cline{2-7} & \({}^{\text{p}}\)+L & 17*8 & 98.3 & 94.5 & 10.51 & 1.21 \\ \cline{2-7} & 2*L & 25*8 & **98.4** & **94.5** & 11.00 & 1.38 \\ \hline \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } & \({}_{\text{jet}}\) & 20 & 98.1 & 93.7 & 9.90 & 1.18 \\ \cline{2-7} & \({}^{\text{p}}\)+L & 20*8 & **99.0** & **95.2** & 10.2 & 1.24 \\ \hline \end{tabular}
\end{table}
Table 3: **Ablation on DIR Input Sources.** ‘J’, ‘J*’, and ‘J*’ indicate the source of the 2D joints from RGB data: [5], [16], and ground truth, respectively. ‘J+L’ stands for joint and limb data. Followed with [16], there are eight limb keypoints.
tiveness of each component of our approach. Comparisons are made against a baseline vanilla DYAN [35] encoder.
**Architecture Variations and Sampling Strategies.** Table 1 shows that each of the proposed modules, RHS, binarization, and sampling increases performance. The largest improvements are observed when adding binarization and multiple sampling. We believe that the contribution of the binarization modules is to correctly identify the invariant features, and that using multiple clipping ensures that each clip captures well these invariants. The experiments also show that using the DIR stream alone has better performance than using the CIR stream alone, highlighting the benefits of using invariance. However, the two streams bring complementary features since using them together improves the overall performance.
**Training Strategies.** We evaluated the effect of pre-training the RHS module and of using contrastive learning to train the DIR branch. Here, pre-training means that the RHS dictionary is pre-trained on the input reconstruction loss. Table 2 shows that both strategies are beneficial, with contrastive learning providing the largest boost.
**DIR Input Data.** We evaluated the impact of using different skeleton input sources as well as the number of input sequences used on classification performance and computational costs. For input sources, we considered 2D skeletons from RGB provided by [16], computed with Openpose [5] and ground truth. Each of these sources provide a different number of joints. In addition, we evaluated the effect of adding sequences for the mid point of the limbs. A summary of these experiments is given in Table 3. The performances using either of the pose detectors are very similar, marginally better when using Openpose. Using ground truth skeletons also provides a bit of improvement. In all cases, adding limb data boosts performance. Finally, the average FLOPS and number of parameters are 10G and 1.23M, respectively. In comparison, the previous SOTA uses 15.9G FLOPS and 2M parameters.
**Additional ablation studies.** A summary of these experiments are included in the supplemental material: (1) We evaluated the benefits of using a re-weighted heuristic in conjunction with FISTA in the DIR stream, pre-training the binarization modules, and fusing the DIR and CIR streams. (2) The common protocol for cross-view on the N-UCLA dataset, calls for training on views 1 and 2 and testing on 3. We tested the performance of the proposed approach using all possible combinations training with two views and testing with the remaining one. This experiment showed that view 1 is the most challenging set up. We hypothesize that this is because view 1 has significant perspective distortion and our approach assumes affine invariance.
### Comparisons against the SOTA
We compared the performance of our architecture using multiple clipping, RHS, Gumbel binarization and the CIR stream (if using RGB data), against SOTA using different input modalities: RGB alone, 3D skeletons alone, and RGB together with 3D skeletons. For fair comparison, when comparing against RGB approaches, the DIR stream does not use the available skeleton ground truth information. Instead, it uses as input 2D skeletons detected using Openpose[5] on the given videos. When comparing against 3D approaches, the input to the DIR stream is the same as used by other approaches, i.e. the skeletons provided in the datasets. For 3D approaches, we reported performance with and without using the CIR stream. We also tested using 3D skeletons estimated from RGB videos [43]. However, (see Table 5), the skeletons are not accurate and performance suffered. As is traditional in the literature, in addition to the Cross-view (CV) setup, we also evaluated our approach by following the Cross-subject (CS) protocol for all datasets.
The results of these experiments are reported in Tables 4, 5, 6, and 7. Our approach consistently improves the CVAR SOTA on all four datasets, regardless of the input modality used (RGB alone, 3D skeletons alone, and RGB and 3D skeletons together). The largest improvements are observed when restricting the input data to RGB videos, with performance achieving comparable levels to the performance using 3D data. Indeed, our approach reduced the 2D-3D performance gap to 0.5%, 0.3% and 1.9% in the N-UCLA, NTU 60, and NTU 120 datasets. These experiments also show the flexibility of our architecture, since it can be used with different types of input modalities with minimal changes. Even though the proposed architecture was not designed for the cross-subject task, our experiments show that the proposed architecture outperforms the SOTA in this task for the N-UCLA, NTU-60, and NTU-120 using all in
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{11}{|c|}{Accuracy(\%) on the UWA3D dataset} \\ \hline Training views & V1\&V2 & V1\&V3 & V1\&V4 & V2\&V3 & V2\&V4 & V3\&V4 & Average \\ \hline Testing views & V3 & V4 & V2 & V4 & V2 & V3 & V1 & V4 & V1 & V3 & V1 & V2 & \\ \hline VA-fusion[64] & 80.9 & 84.3 & 78.7 & 86.2 & 75.2 & 73.3 & 87.6 & 84.3 & 86.0 & 74.9 & 86.4 & 79.5 & 81.4 \\ \hline VT+GARN[21] & 79.5 & 83.4 & 75.3 & 85.2 & 74.3 & **84.7** & 86.3 & 84.8 & 86.1 & 75.5 & 86.4 & 74.1 & 81.3 \\ \hline \hline
**Ours** (CL-DIR+CIR) & **84.2** & **86.9** & **80.8** & **87.1** & **77.7** & 80.2 & **88.3** & **87.9** & **88.5** & **80.1** & **88.9** & **82.7** & **84.4** \\ \hline \end{tabular}
\end{table}
Table 4: **Comparison of all setups on UWA3DII dataset**. RGB input modality.
put modalities.
Finally, we also tested our approach on single view action recognition with the sub-JHMDB dataset. The results of this experiment are summarized in the supplemental material. Our approach achieved 92.5% accuracy using the DIR and CIR streams, outperforming the current SOTA.
## 9 Conclusions
We introduced a two stream architecture that learns dynamics-based invariant features and context features for cross-view action recognition. The proposed framework is flexible and can be used with different types of input modalities: RGB, 3D Skeletons, or both. Our extensive ablation studies show that both streams contribute to boosting the performance. Comparisons of the proposed approach against the current state of the art methods, using four widely used benchmark datasets, show that our approach outperforms the state of the art in all input modalities and has closed significantly the existing performance gap between RGB and 3D skeleton based approaches. We attribute this significant improvement to the use of dynamics-based invariants in the DIR stream, which provide a way of capturing the dynamics of the 3D motion from its affine projections. Additionally, our experiments also showed that the framework works well in the related task of cross subject action recognition. This opens up the possibility of having widely deployable action recognition applications based on easily obtained video data, avoiding the need for special sensors which are required to collect 3D data.
\begin{table}
\begin{tabular}{|l||c|c|c|} \hline \multicolumn{4}{|c|}{Accuracy(\%) on NTU-10} \\ \hline Method & Modality & CS & CV \\ \hline CNN-LSTM[37] & RGB & 56.0 & - \\ DA-NET[55] & RGB & - & 75.3 \\ Att-LSTM[66] & RGB & 63.3 & 70.6 \\ CNN-BiLSTM[28] & RGB & 55.5 & 49.3 \\ UMVRL[54] & RGB & 82.3 & 86.3 \\ \hline
**Ours**(CLR) & RGB & **89.7** & **90.2** \\ \hline \hline HNCNP[12] & RGB+J\({}_{gt^{\prime}}\) & 95.7 & 98.8 \\ PoseConv3D[16] & RGB+(J+J*L) & 97.0 & **99.6** \\ \hline
**Ours**(CL-DIR+CIR) & RGB+(J*L) & 97.2 & 90.9 \\
**Ours**(CL-DIR+CIR) & RGB+(J*L) & 97.5 & **99.4** \\ \hline \hline GeomNet[41] & 3D Skeleton & 93.6 & 96.3 \\ Else-Net[30] & 3D Skeleton & 91.6 & 96.4 \\ CTR-GCN[9] & 3D Skeleton & 92.4 & 96.8 \\ ACFL-CTR-GCN[58] & 3D Skeleton & 92.5 & 97.1 \\ PVSKL[15] & 3D Skeleton & 92.6 & 97.4 \\ KShapeNet[18] & 3D Skeleton & 97.0 & 98.5 \\ \hline
**Ours** (CL-DIR) & 3D Skeleton & 96.8 & 99.6 \\
**Ours** (CL-DIR) & 3D Skeleton + L & **97.5** & **99.8** \\ \hline \hline VPN++[11] & RGB+3D Skeleton & 96.6 & 99.1 \\
**Ours** (CL-DIR+CIR) & RGB+3D Skeleton & 97.7 & 99.8 \\
**Ours** (CL-DIR+CIR) & RGB+(3D Skeleton+L) & **98.0** & **99.9** \\ \hline \end{tabular}
\end{table}
Table 6: **Comparison against SOTA Cross-Subject (CS) and Cross-View (CV) on NTU-60.** Note that for [12], 2D skeletons is projected from ground truth 3D skeletons.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Accuracy(\%) on NTU-120} \\ \hline Method & Modality & CS & C-setup \\ \hline \hline PoseConv3D[16] & RGB + (J+L) & 95.3 & 96.4 \\ \hline
**Ours** (CL-DIR + CIR) & RGB + (J*) & 94.2 & 96.6 \\
**Ours** (CL-DIR + CIR) & RGB + (J*L) & 95.0 & 96.7 \\
**Ours** (CL-DIR + CIR) & RGB + (J*+L) & **95.8** & **97.3** \\ \hline \hline GeomNet[41] & 3D Skeleton & 86.5 & 87.6 \\ CTR-GCN[9] & 3D Skeleton & 88.9 & 90.6 \\ PYSKL[15] & 3D Skeleton & 88.6 & 90.8 \\ ACFL-CTR-GCN[10] & 3D Skeleton & 89.7 & 90.9 \\ KShapeNet[18] & 3D Skeleton & 90.6 & 86.7 \\ \hline
**Ours** (CL-DIR) & 3D Skeleton & 92.7 & 93.5 \\
**Ours** (CL-DIR) & 3D Skeleton + L & **93.6** & **95.0** \\ \hline \hline VPN++[11] & RGB + 3D Skeleton & 90.7 & 92.5 \\ \hline
**Ours** (CL-DIR + CIR) & RGB + 3D Skeleton & 96.8 & 98.0 \\
**Ours** (CL-DIR + CIR) & RGB + (3D Skeleton+L) & **97.7** & **99.2** \\ \hline \end{tabular}
\end{table}
Table 7: **Comparison against SOTA Cross-Subject (CS), Cross-Setup(C-setup) on NTU-120.**
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{Accuracy(\%) on NTU-60} \\ \hline Method & Modality & CS & CV \\ \hline \hline CNN-LSTM[37] & RGB & 56.0 & - \\ DA-NET[55] & RGB & - & 75.3 \\ Att-LSTM[66] & RGB & 63.3 & 70.6 \\ CNN-BiLSTM[28] & RGB & 55.5 & 49.3 \\ UMVRL[54] & RGB & 82.3 & 86.3 \\ \hline
**Ours**(CIR) & RGB & **89.7** & **90.2** \\ \hline \hline HNCNP[12] & RGB+J\({}_{gt^{\prime}}\) & 95.7 & 98.8 \\ PoseConv3D[16] & RGB+(J*L) & 97.0 & **99.6** \\ \hline
**Ours**(CL-DIR+CIR) & RGB+(J*L) & 97.2 & 90.9 \\
**Ours**(CL-DIR+CIR) & RGB+(J*L) & 97.5 & **99.4** \\
**Ours**(CL-DIR+CIR) & RGB+(J) & 97.2 & 99.1 \\
**Ours**(CL-DIR+CIR) & RGB+(J+L) & **97.6** & **99.4** \\ \hline \hline GeomNet[41] & 3D Skeleton & 93.6 & 96.3 \\ Else-Net[30] & 3D Skeleton & 91.6 & 96.4 \\ CTR-GCN[9] & 3D Skeleton & 92.4 & 96.8 \\ ACFL-CTR-GCN[58] & 3D Skeleton & 92.5 & 97.1 \\ PVSKL[15] & 3D Skeleton & 92.6 & 97.4 \\ KShapeNet[18] & 3D Skeleton & 97.0 & 98.5 \\ \hline
**Ours** (CL-DIR) & 3D Skeleton & 96.8 & 99.6 \\
**Ours** (CL-DIR) & 3D Skeleton + L & **97.5** & **99.8** \\ \hline \hline VPN++[11] & RGB+3D Skeleton & 96.6 & 99.1 \\
**Ours** (CL-DIR+CIR) & RGB+3D Skeleton & 97.7 & 99.8 \\
**Ours** (CL-DIR+CIR) & RGB+(3D Skeleton+L) & **98.0** & **99.9** \\ \hline \end{tabular}
\end{table}
Table 8: **Comparison against SOTA Cross-Subject (CS) and Cross-View (CV) on NTU-60.** Note that for [12], 2D skeletons is projected from ground truth 3D skeletons.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{Accuracy(\%) on NTU-10A} \\ \hline Method & Modality & CS & CV \\ \hline \hline PoseConv3D[16] & RGB + (J+L) & 95.3 & 96.4 \\ \hline
**Ours** (CL-DIR + CIR) & RGB + (J*) & 94.2 & 96.6 \\
**Ours** (CL-DIR + CIR) & RGB + (J*L) & 95.0 & 96.7 \\
**Ours** (CL-DIR + CIR) & RGB + (J*+L) & **95.8** & **97.3** \\ \hline \hline GeomNet[41] & 3D Skeleton & 86.5 & 87.6 \\ CTR-GCN[9] & 3D Skeleton & 88.9 & 90.6 \\ PYSKL[15] & 3D Skeleton & 88.6 & 90.8 \\ ACFL-CTR-GCN[10] & 3D Skeleton & 89.7 & 90.9 \\ KShapeNet[18] & 3D Skeleton & 90.6 & 86.7 \\ \hline
**Ours** (CL-DIR) & 3D Skeleton & 92.7 & 93.5 \\
**Ours** (CL-DIR) & 3D Skeleton + L & **93.6** & **95.0** \\ \hline \hline VPN++[11] & RGB + 3D Skeleton & 90.7 & 92.5 \\ \hline
**Ours** (CL-DIR + CIR) & RGB + (3D Skeleton & 96.8 & 98.0 \\
**Ours** (CL-DIR + CIR) & RGB + (3D Skeleton+L) & **97.7** & **99.2** \\ \hline \end{tabular}
\end{table}
Table 9: **Comparison against SOTA Cross-Subject (CS), Cross-Setup(C-setup) on NTU-120.** |
2310.14836 | The role of intra- and inter-group Matthew effect in the social dilemma
of public goods games | The Matthew effect describes the phenomenon where the rich tend to get
richer. Such a success-driven mechanism has been studied in spatial public
goods games in an inter-group way, where each individual's social power is
enhanced across all groups. For instance, factors like knowledge can exert an
advantage across various social contexts. In contrast, certain factors,
especially local material goods, only enhance advantages within their current
group. Building on this, we further explore the intra-group Matthew effect
where the enhancement of social power is calculated separately in each group.
Our findings indicate that the intra-group Matthew effect sustains cooperation
more at high productivity, while the inter-group Matthew effect promotes
cooperation at low productivity. Moreover, the mixture of the intra- and
inter-group Matthew effect harms cooperation. This study provides insights into
addressing social dilemmas by adjusting wealth accumulation across diverse
social groups. | Chaoqian Wang | 2023-10-23T11:56:33Z | http://arxiv.org/abs/2310.14836v1 | # The role of intra- and inter-group Matthew effect in the social dilemma of public goods games
###### Abstract
The Matthew effect describes the phenomenon where the rich tend to get richer. Such a success-driven mechanism has been studied in spatial public goods games in an inter-group way, where each individual's social power is enhanced across all groups. For instance, factors like knowledge can exert an advantage across various social contexts. In contrast, certain factors, especially local material goods, only enhance advantages within their current group. Building on this, we further explore the intra-group Matthew effect where the enhancement of social power is calculated separately in each group. Our findings indicate that the intra-group Matthew effect sustains cooperation more at high productivity, while the inter-group Matthew effect promotes cooperation at low productivity. Moreover, the mixture of the intra- and inter-group Matthew effect harms cooperation. This study provides insights into addressing social dilemmas by adjusting wealth accumulation across diverse social groups.
## 1 Introduction
Matthew 25:29 (New Revised Standard Version) states, "For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away." In the real world, it is evident that wealthy individuals have greater opportunities to become richer, while the poor have limited prospects of improving their financial situations. This phenomenon is known as the Matthew effect [1, 2]. The role of the Matthew effect has been studied in the social dilemma of evolutionary public goods games [3].
Social dilemmas refer to the conflict between individual and collective interests [4]. While cooperation (also known as prosocial behavior) benefits everyone in a group, individual defection can benefit oneself at the expense of others, resulting in a reduction in average benefits for all group members, leading to the tragedy of the commons [5]. However, cooperation can thrive in human society [6], and several explanations have been proposed for this phenomenon [7], including network reciprocity, which has been studied by researchers from evolutionary biology to statistical physics [8] and from computer simulations [9, 10, 11] to theoretical analyses [12, 13, 14].
It is a natural idea to consider the role of the Matthew effect in multiplayer games instead of two-player games because the former involves the allocation of public goods and the enhancement of wealth. The emergence of cooperation in multiplayer games has become increasingly popular over the past few decades [15, 16, 17], and group interactions [18] play a significant role in higher-order population structures [19, 20, 21, 22]. The fundamental model type is the public goods game, in which cooperators contribute to the group, producing more than the cost, but defectors can share the fruits equally [23]. In other words, in a classic public goods game group, everyone shares an equal amount of fruits. However, introducing the Matthew effect mentioned earlier, those receiving more can share more. It was found that this can have an intriguing effect on the cooperation level: the Matthew effect in distributing fruits promotes cooperation but also preserves defection [3, 24]. Similar to this success-driven mechanism in distribution, other success-driven mechanisms in different aspects of evolutionary dynamics have been studied, such as success-driven migration [25, 26, 27, 28], success-driven participation [29], success-driven group formation [30], and success-driven multigames [31]. Opposite assumptions from the Matthew effect have also been investigated, such as the situation where rich people contribute more [32].
A widely accepted notion is that inter-group reciprocity, where an individual participates in multiple groups and engages in games in each, plays a crucial role in spatial public goods games [33, 34, 35]. When coupling groups, apparently similar things can differ [36]. Previous research has investigated various inter-group mechanisms in public goods games, such as group competition [37] and inter-group selection [38, 39]. Some studies have also examined scenarios where individuals use different strategies with different neighbors [40, 41].
Similarly, as an additional mechanism, the Matthew effect can also be considered separately in different groups. On the one hand, an individual can accumulate social power in different groups independently. On the other hand, an individual can also possess a unified social power through all its groups, as previous research has shown [3]. More generally, the intra- and inter-group social power can be blended, and the impact of their proportions on the role they play can be explored. The idea of differentiating the intra- and inter-group Matthew effects captures a detail of the real world: people are involved in multiple social groups and interact with members in each. For example, a student
attends different classes at a university, and their advantages accrue separately or jointly in various classes, depending on the relevance of the knowledge. Similarly, a multinational corporation operates in different countries, and its advantages in each country may accumulate separately or jointly, depending on the trade policies.
In this study, we establish a model to describe the intra- and inter-group Matthew effects in the spatial public goods game and investigate their roles separately and collectively. It should be noted that we use synchronous updates [9, 10, 42, 43] in this work. Although most previous studies adopt asynchronous updates, the synchronous approach is more intuitive in describing specific mechanisms [44, 45]. The results do not differ significantly in quality [46].
## 2 Model
The spatial public goods game is conducted on an \(L\times L\) square lattice, where each node is occupied by an agent. The agent set is denoted by \(\mathcal{N}\), and its size is the population \(N\), which yields \(|\mathcal{N}|\equiv N=L^{2}\). Each agent \(i\in\mathcal{N}\) is the center of a group \(\Omega_{i}\), which contains the centering agent \(i\) and its four nearest neighbors \(j\in\Omega_{i}\setminus\{i\}\). In this work, we set the group size to \(G=5\). Each agent \(i\) is involved in \(G\) groups, centered on itself and its neighbors, \(\Omega_{j}\) for \(j\in\Omega_{i}\).
The payoff received by agent \(i\) from the group centered on \(j\) at time \(t\) is denoted by \(\pi^{j}_{i,t}\), calculated as follows:
\[\pi^{j}_{i,t}=M_{i,j,t}r\sum_{k\in\Omega_{j}}s_{k,t}c-s_{i,t}c, \tag{1}\]
where \(s_{i,t}\) denotes the strategy of agent \(i\) at time \(t\). If \(s_{i,t}=1\), agent \(i\) cooperates by contributing \(c\) (\(c>0\)) to the group \(j\), while if \(s_{i,t}=0\), agent \(i\) defects and makes no contribution. The contributions to group \(j\) by players \(k\in\Omega_{j}\) are enlarged by the productivity factor \(r\) (\(r>1\)) and allocated according to the Matthew effect \(M_{i,j,t}\), which we will define later.
The average payoff of agent \(i\) received from its \(G\) groups at time \(t\) is denoted by \(\pi_{i,t}\), yielding
\[\pi_{i,t}=\frac{1}{G}\sum_{j\in\Omega_{i}}\pi^{j}_{i,t}. \tag{2}\]
At each time step \(t\), the agents' strategies \(s_{i,t}\) are updated synchronously. For each agent \(i\), one of its neighbors \(j\in\Omega_{i}\setminus\{i\}\) is selected randomly, and agent \(i\) adopts \(j\)'s strategy \(s_{j,t}\) with probability
\[W(s_{i,t+1}\gets s_{j,t})=\frac{1}{1+\mathrm{e}^{-(\pi_{i,t}-\pi_{i,t})/ \kappa}}, \tag{3}\]
where \(\kappa\) is the noise parameter, set to \(\kappa=0.1\) in this work. If the update is not successful, agent \(i\) keeps its current strategy, \(s_{i,t+1}\gets s_{i,t}\).
To define the Matthew effect, we introduce the concept of social power, which is determined by an agent's payoff, both within and between groups. First, an agent's intra-group social power varies across groups. Specifically, agent \(i\)'s intra-group power in group \(j\) at time \(t+1\) is based on its payoff from group \(j\) at time \(t\), as given by Eq. (4):
\[p^{(\mathrm{intra})}_{i,j,t+1}=\pi^{j}_{i,t}. \tag{4}\]
Second, an agent's inter-group social power is the average of its intra-group power across all of its \(G\) groups. In other words, it is simply the agent's payoff at time \(t\), as expressed by Eq. (5):
\[p^{(\mathrm{inter})}_{i,t+1}=\frac{1}{G}\sum_{j\in\Omega_{i}}p^{(\mathrm{intra })}_{i,j,t+1}=\pi_{i,t}. \tag{5}\]
The social power of agent \(i\) is a weighted combination of its intra- and inter-group power, as given by Eq. (6):
\[P_{i,j,t+1}=aP^{(\mathrm{intra})}_{i,j,t+1}+(1-a)P^{(\mathrm{inter})}_{i,t+1}, \tag{6}\]
where \(0\leq a\leq 1\) determines the relative weight of intra-group power versus inter-group power.
The Matthew effect, which determines the allocation of public goods to agent \(i\) in group \(j\), is proportional to the agent's social power. Specifically, the higher an agent's social power, the more public goods are allocated to it. This is captured by Eq. (7):
\[M_{i,j,t+1}=\frac{\mathrm{e}^{\mathrm{i}\boldsymbol{\omega}P_{i,j,t+1}}}{ \sum_{k\in\Omega_{j}}\mathrm{e}^{\mathrm{i}\boldsymbol{\omega}P_{k,j,t+1}}}, \tag{7}\]
where \(\boldsymbol{\omega}\geq 0\) determines the strength of the Matthew effect. A larger value of \(\boldsymbol{\omega}\) corresponds to a stronger Matthew effect, in which agents with higher payoffs receive a larger share of the public good. When \(\boldsymbol{\omega}=0\), Eq. (7) reduces to \(M_{i,j,t}\equiv 1/G\) for all \(i\in\mathcal{N}\), \(j\in\Omega_{i}\), and \(t\in\mathbb{N}^{*}\), which corresponds to the classic public goods game.
In our model, the Matthew effect is self-cumulative, as described by Eqs. (4)-(5). Specifically, the social power at time \(t\) affects the allocation of public goods at time \(t+1\), which in turn affects the social power at time \(t+1\) and the allocation at time \(t+2\), and so on. This automatic iteration process obviates the need for storing data from every previous step.
## 3 Results and discussion
The fixed secondary parameters of this study are population size \(N=100\times 100=10^{4}\), group size \(G=5\), and selection noise \(\kappa=0.1\). The main parameters are productivity \(r\), the strength of the Matthew effect \(\boldsymbol{\omega}\), and the proportion of the intra-group Matthew effects \(a\), whose values are flexible.
At the start of the simulation, each agent \(i\in\mathcal{N}\) is assigned a random strategy of cooperation (\(s_{i,0}=1\)) or defection (\(s_{i,0}=0\)), with the proportion of cooperators being approximately \(1/2\). The social power \(P_{i,j,0}\) for each agent in their respective groups \(j\in\Omega_{i}\) is initialized to \(0\), resulting in \(M_{i,j,0}=1/G\) and an equal distribution of public goods at the first time step.
The system is then allowed to evolve from \(t=0\) to \(t=10^{5}\), and the proportion of cooperators \(\sum_{i\in N^{\prime}}s_{i,l}/N\) is calculated at each time step \(t\). The average proportion of cooperators in the last \(2\times 10^{4}\) time steps is used to define the cooperation level (\(\rho_{C}\)) under the assigned parameter values, \(\rho_{C}=\sum_{t=8\times 10^{4}+1}^{10^{4}}\sum_{i\in N^{\prime}}s_{i,l}/N/(2 \times 10^{4})\).
### The role of the inter-group Matthew effect
We begin by examining a special case where only the inter-group Matthew effect is present. When \(a=0\), the level of cooperation, \(\rho_{C}\), is plotted as a function of productivity \(r\) and the strength of the Matthew effect \(w\) in Fig. 1. To facilitate a comprehensive presentation of the results, we have rescaled the \(w\)-axis logarithmically for \(w>1\), while the scale in \(0<w<1\) is linear. When \(w=0\), the results coincide with those of the classic synchronous spatial public goods game. However, when \(w>0\), we can observe the impact of the inter-group Matthew effect's strength by increasing \(w\).
As illustrated in Fig. 1, the inter-group Matthew effect promotes cooperation at low productivity (\(r\lesssim 4\)), with cooperation emerging at \(w\gtrsim 1\). However, at intermediate or high productivity (\(r\gtrsim 4\)), the inter-group Matthew effect inhibits cooperation. Interestingly, the inhibition of cooperation is most significant when the inter-group Matthew effect is moderate. As the inter-group Matthew effect becomes stronger, the cooperation level rebounds, albeit remaining lower than in the absence of the Matthew effect. Notably, for intermediate productivity (\(4\lesssim r\lesssim 6\)), cooperation disappears entirely when the effect strength is moderate. Our findings are consistent with previous work [3] that investigated the pure inter-group Matthew effect and concluded that it promotes cooperation at low productivity but sustains defection at high productivity, albeit for a different implementation of our approach in modeling.
### The role of the intra-group Matthew effect
Next, we analyze the impact of the intra-group Matthew effect by setting \(a=1\) and examine the relationship between the intra-group effect's strength and the cooperation level. Fig. 2 illustrates the cooperation level as a function of productivity and the strength of the intra-group Matthew effect, with the \(w\)-axis scaled similarly to Fig. 1. When the intra-group Matthew effect is present (\(w>0\)), we can observe its role by increasing \(w\).
In contrast to the inter-group Matthew effect, a certain level of intra-group Matthew effect (\(w\gtrsim 0.2\)) can only promote cooperation at high productivity (\(r\gtrsim 5\)). At moderate strength (\(0.2\lesssim w\lesssim 1\)), neither the intra-group nor the inter-group Matthew effect can promote cooperation at low productivity, but the intra-group Matthew effect can sustain higher levels of cooperation at high productivity. At higher strength (\(w\gtrsim 10^{9}\)), the intra-group Matthew effect cannot promote cooperation at low productivity as effectively as the inter-group effect, but it can maintain higher levels of cooperation at high productivity. In conclusion, while the intra-group Matthew effect cannot promote cooperation at low productivity, it can retain more cooperation at high productivity compared to the inter-group Matthew effect. This advantage is most pronounced at a moderate strength of the Matthew effect.
To explain how the intra-group Matthew effect maintains cooperation better than the inter-group effect at high productivity, we can imagine the interface between cooperation and defection clusters and consider the role of the intra-group Matthew effect. In a public goods game within a single group, defectors receive higher payoffs than cooperators because of the extra cost \(c\) incurred by cooperators. Consequently, when allocation in the next moment within a group depends solely on the current payoff, individuals who adopt the defective strategy are always allocated a larger proportion \(M_{i,j,l+1}\) of the public good than cooperative individuals. Under the high productivity of the classic model, cooperative clusters expand rapidly. Defectors who
Figure 1: Cooperation level \(\rho_{C}\) as a function of productivity \(r\) and the strength of inter-group Matthew effect \(w\), with fixed intra-group proportion \(a=0\). The \(w\)-axis scale is linear for \(0\leq w\leq 1\) and logarithmic for \(10^{9}\leq w\leq 10^{3}\). The results are qualitatively similar to the ones in [3]: a strong inter-group Matthew effect promotes the emergence of cooperation when productivity is low, but also preserves defection when productivity is high. Meanwhile, a moderate inter-group Matthew effect disfavors cooperation most.
can allocate more public goods in the next moment mostly become cooperators in the next moment. As a result, they are allocated the same proportion of public goods in the group facing the defection region and more public goods in the group facing the cooperation region due to the effect of defection in the previous moment. This transformation of defectors into cooperators encourages adjacent defectors to become more willing to transform into cooperators, which preserves the rapid expansion of cooperative clusters.
### The combination of the intra- and inter-group Matthew effect
Finally, we investigate the combined effects of the intra- and inter-group Matthew effects, where the intra-group social power is represented as a proportion of \(a\) and the inter-group social power as a proportion of \(1-a\). We use Fig. 3 to display the cooperation level as a function of productivity \(r\) and the proportion of the intra-group Matthew effect \(a\) at different strengths of the Matthew effect \(w\).
When the strength of the Matthew effect is moderate [Fig. 3(b), (c), and (d)], we find that the intra-group Matthew effect has a dominant range where the cooperation level is high, particularly in the region where \(a\) is large. The cooperation level varies continuously within this region. Similarly, the inter-group Matthew effect has its dominant range of smaller \(a\). The cooperation level varies sharply on the adjoining interface of the dominant range of both effects, which is the most prominent in Fig. 3(b).
In contrast, when the strength of the Matthew effect is large, we observe that the intra-group Matthew effect only manifests when it exists alone (\(a=1\)). In other cases where \(a<1\), the change in cooperation level is dominated by the inter-group Matthew effect, which varies continuously at low cooperation levels. Overall, in each panel of Fig. 3, the cooperation level is high at the ends of the \(a\)-axis and low in the middle. These results suggest that a combination of intra- and inter-group Matthew effects is detrimental to cooperation.
## 4 Conclusion
In conclusion, this study expands upon the concept introduced in [3] and examines the impact of the intra- and inter-group Matthew effect separately and in combination. Our findings indicate that the inter-group Matthew effect inhibits cooperation at a moderate level but can sustain it to some degree when it is strong. In comparison, the intra-group Matthew effect encourages cooperation, particularly when productivity is high, due to the rapid spread of cooperative strategies at cluster boundaries. However, the combination of both intra- and inter-group Matthew effects has a negative impact on cooperation.
The numerical simulations in this study have the potential to provide insights for policy analysis on wealth distribution. As previously noted, individuals participate in different games within various social relationships, and the Matthew effect is a factor that cannot be ignored in the real world. To address this issue, we suggest adjusting the role of the wealth enhancement effect based on the current level of social productivity. At low productivity, the Matthew effect should be transferable between different social relationships, and success in one group should facilitate success in all groups. At high productivity, the Matthew effect should not be correlated across different social relationships, and success in one group should only increase the likelihood of success in the same group. In any case, we recommend minimizing the mixing of these two approaches.
## Declaration of competing interest
None.
## Code availability
The Matlab code for the numerical simulations is available upon request from the corresponding author.
Figure 2: Cooperation level \(\rho_{C}\) as a function of productivity \(r\) and the strength of intra-group Matthew effect \(w\), with fixed intra-group proportion \(a=1\). The \(w\)-axis scale is linear for \(0\leq w\leq 1\) and logarithmic for \(10^{0}\leq w\leq 10^{3}\). A strong intra-group Matthew effect impedes cooperation, and, compared to this, a moderate intra-group Matthew effect can sustain cooperation more. |
2307.00254 | Efficient Algorithms for Euclidean Steiner Minimal Tree on Near-Convex
Terminal Sets | The Euclidean Steiner Minimal Tree problem takes as input a set $\mathcal P$
of points in the Euclidean plane and finds the minimum length network
interconnecting all the points of $\mathcal P$. In this paper, in continuation
to the works of Du et al. and Weng et al., we study Euclidean Steiner Minimal
Tree when $\mathcal P$ is formed by the vertices of a pair of regular,
concentric and parallel $n$-gons. We restrict our attention to the cases where
the two polygons are not very close to each other. In such cases, we show that
Euclidean Steiner Minimal Tree is polynomial-time solvable, and we describe an
explicit structure of a Euclidean Steiner minimal tree for $\mathcal P$. We
also consider point sets $\mathcal P$ of size $n$ where the number of input
points not on the convex hull of $\mathcal P$ is $f(n) \leq n$. We give an
exact algorithm with running time $2^{\mathcal{O}(f(n)\log n)}$ for such input
point sets $\mathcal P$. Note that when $f(n) = \mathcal{O}(\frac{n}{\log n})$,
our algorithm runs in single-exponential time, and when $f(n) = o(n)$ the
running time is $2^{o(n\log n)}$ which is better than the known algorithm
stated in Hwang et al. We know that no FPTAS exists for Euclidean Steiner
Minimal Tree unless P=NP, as shown by Garey et al. On the other hand FPTASes
exist for Euclidean Steiner Minimal Tree on convex point sets, as given by
Scott Provan. In this paper, we show that if the number of input points in
$\mathcal P$ not belonging to the convex hull of $\mathcal P$ is
$\mathcal{O}(\log n)$, then an FPTAS exists for Euclidean Steiner Minimal Tree.
In contrast, we show that for any $\epsilon \in (0,1]$, when there are
$\Omega(n^{\epsilon})$ points not belonging to the convex hull of the input
set, then no FPTAS can exist for Euclidean Steiner Minimal Tree unless P=NP. | Anubhav Dhar, Soumita Hait, Sudeshna Kolay | 2023-07-01T07:21:38Z | http://arxiv.org/abs/2307.00254v1 | # Efficient Algorithms for Euclidean Steiner Minimal Tree on Near-Convex Terminal Sets
###### Abstract
The Euclidean Steiner Minimal Tree problem takes as input a set \(\mathcal{P}\) of points in the Euclidean plane and finds the minimum length network interconnecting all the points of \(\mathcal{P}\). In this paper, in continuation to the works of [5] and [16], we study Euclidean Steiner Minimal Tree when \(\mathcal{P}\) is formed by the vertices of a pair of regular, concentric and parallel \(n\)-gons. We restrict our attention to the cases where the two polygons are not very close to each other. In such cases, we show that Euclidean Steiner Minimal Tree is polynomial-time solvable, and we describe an explicit structure of a Euclidean Steiner minimal tree for \(\mathcal{P}\).
We also consider point sets \(\mathcal{P}\) of size \(n\) where the number of input points not on the convex hull of \(\mathcal{P}\) is \(f(n)\leq n\). We give an exact algorithm with running time \(2^{\mathcal{O}(f(n)\log n)}\) for such input point sets \(\mathcal{P}\). Note that when \(f(n)=\mathcal{O}(\frac{n}{\log n})\), our algorithm runs in single-exponential time, and when \(f(n)=o(n)\) the running time is \(2^{o(n\log n)}\) which is better than the known algorithm in [9].
We know that no FPTAS exists for Euclidean Steiner Minimal Tree unless \(\mathrm{P}=\mathrm{NP}\)[6]. On the other hand FPTASes exist for Euclidean Steiner Minimal Tree on convex point sets [14]. In this paper, we show that if the number of input points in \(\mathcal{P}\) not belonging to the convex hull of \(\mathcal{P}\) is \(\mathcal{O}(\log n)\), then an FPTAS exists for Euclidean Steiner Minimal Tree. In contrast, we show that for any \(\epsilon\in(0,1]\), when there are \(\Omega(n^{\epsilon})\) points not belonging to the convex hull of the input set, then no FPTAS can exist for Euclidean Steiner Minimal Tree unless \(\mathrm{P}=\mathrm{NP}\).
Steiner minimal tree, Euclidean Geometry, Almost Convex point sets, FPTAS, strong NP-completeness 2012 ACM Subject Classification: Theory of computation Computational geometry
## 1 Introduction
The Euclidean Steiner Minimal Tree problem asks for a network of minimum total length interconnecting a given finite set \(\mathcal{P}\) of \(n\) points in the Euclidean plane. Formally, we define the problem as follows, taken from [2]:
Euclidean Steiner Minimal Tree
**Input:** A set \(\mathcal{P}\) of \(n\) points in the Euclidean plane
**Question:** Find a connected plane graph \(\mathcal{T}\) such that \(\mathcal{P}\) is a subset of the vertex set \(V(\mathcal{T})\), and for the edge set \(E(\mathcal{T})\), \(\Sigma_{e\in E(\mathcal{T})}\overline{e}\) is minimized over all connected plane graphs with \(\mathcal{P}\) as a vertex subset.
Note that the metric being considered is the Euclidean metric, and for any edge \(e\in E(\mathcal{T})\), \(\overline{e}\) denotes the Euclidean length of the edge. Here, the input set \(\mathcal{P}\) of points is often called a set of _terminals_, the points in \(\mathcal{S}=V(\mathcal{T})\setminus\mathcal{P}\) are called _Steiner points_. A solution graph \(\mathcal{T}\) is referred to as a _Euclidean Steiner minimal tree_, or simply an SMT.
The Euclidean Steiner Minimal Tree problem is a classic problem in the field of Computational Geometry. The origin of the problem dates back to Fermat (1601-1665) who proposed the problem of finding a point in the plane such that the sum of its distance
from three given points is minimized. This is equivalent to finding the location of the Steiner point when given three terminals as input. Torricelli proposed a geometric solution to this special case of 3 terminal points. The idea was to construct equilateral triangles outside on all three sides of the triangle formed by the terminals, and draw their circumcircles. The three circles meet at a single point, which is our required Steiner point. This point came to be known as the _Torricelli point_. When one of the angles in the triangle is at least \(120^{\circ}\), the minimizing point coincides with the obtuse angle vertex of the triangle. In this case, the Torricelli point lies outside the triangle and no longer minimizes the sum of distances from the vertices. However, when vertices of polygons with more than 3 sides are considered as a set of terminals, a solution to the Fermat problem does not in general lead to a solution to the Euclidean Steiner Minimal Tree problem. For a more detailed survey on the history of the problem, please refer to [2, 9]. For convenience, we refer to the Euclidean Steiner Minimal Tree problem as ESMT.
ESMT is NP-hard. In [6], Garey et al. prove a discrete version of the problem (Discrete ESMT) to be strongly NP-complete via a reduction from the Exact Cover by 3-Sets (X3C) problem. Although it is not known if the ESMT problem is in NP, it is at least as hard as any NP-complete problem. So, we do not expect a polynomial time algorithm for it. A recursive method using only Euclidean constructions was given by Melzak in [11] for constructing all the Steiner minimal trees for any set of \(n\) points in the plane by constructing full Steiner trees of subsets of the points. Full Steiner trees are interconnecting trees having the maximum number of newly introduced points (Steiner points) where all internal junctions are of degree 3. Hwang improved the running time of Melzak's original exponential algorithm for full Steiner tree construction to linear time in [8]. Using this, we can construct an Euclidean Steiner minimal tree in \(2^{\mathcal{O}(n\log n)}\) time for any set of \(n\) points. This was the first algorithm for Euclidean Steiner Minimal Tree. The problem is known to be NP-hard even if all the terminals lie on two parallel straight lines, or on a bent line segment where the bend has an angle of less than \(120^{\circ}\)[13]. Since the above sets of terminals all lie on the boundary of a convex polygon (or, are in convex position), this shows that ESMT is NP-hard when restricted to a set of points that are in weakly convex position.
Although the ESMT problem is NP-hard, there are certain arrangements of points in the plane for which the Euclidean Steiner minimal tree can be computed efficiently, say in polynomial time. One such arrangement is placing the points on the vertices of a regular polygon. This case was solved by Du et al. [5]. Their work gives exact topologies of the Euclidean Steiner minimal trees. Weng et al. [16] generalized the problem by incorporating the centre point of the regular polygon as part of the terminal set, along with the vertices. This case was also found to be polynomial time solvable.
Tractability in the form of approximation algorithms for ESMT has been extensively studied. It was proved in [6] that a fully polynomial time approximation scheme (FPTAS) cannot exist for this problem unless \(\mathrm{P}=\mathrm{NP}\). However, we do have an FPTAS when the terminals are in convex position [14]. Arora's celebrated polynomial time approximation scheme (PTAS) for the ESMT and other related problems is described in [1]. Around the same time, Rao and Smith gave an efficient polynomial time approximation scheme (EPTAS) in [12]. In recent years, an EPTAS with an improved running time was designed by Kisfaludi-Bak et al. [10].
### Our Results.
In this paper, we first extend the work of [5] and [16]. We state this problem as ESMT on \(k\)-Concentric Parallel Regular \(n\)-gons.
**Definition 1** (\(k\)-Concentric Parallel Regular \(n\)-gons).: \(k\)_-Concentric Parallel Regular \(n\)-gons are \(k\) regular \(n\)-gons that are concentric and where the corresponding sides of polygons are parallel to each other._
Please refer to Figure 1(a) for an example of a 2-Concentric Parallel Regular 12-gon. We call \(k\)-Concentric Parallel Regular \(n\)-gons as \(k\)-CPR \(n\)-gons for short.
We consider terminal sets where the terminals are placed on the vertices of 2-CPR \(n\)-gons. In the case of \(k=2\), the \(n\)-gon with the smaller side length will be called the inner \(n\)-gon and the other \(n\)-gon will be called the outer \(n\)-gon. Also, let \(a\) be the side length of the inner \(n\)-gon, and \(b\) be the side length of the outer \(n\)-gon. We define \(\lambda=\frac{b}{a}\) and refer to it as the _aspect ratio_ of the two regular polygons. In Section 3, we derive the exact structures of the SMTs for 2-CPR \(n\)-gons when the aspect ratio \(\lambda\) of the two polygons is greater than \(\frac{1}{1-4\sin{(\pi/n)}}\) and \(n\geq 13\).
Next, we consider ESMT on an \(f(n)\)-Almost Convex Point Set.
**Definition 2** (\(f(n)\)-Almost Convex Point Set).: _An \(f(n)\)-Almost Convex Point Set \(\mathcal{P}\) is a set of \(n\) points in the Euclidean plane such that there is a partition \(\mathcal{P}=\mathcal{P}_{1}\uplus\mathcal{P}_{2}\) where \(\mathcal{P}_{1}\) forms the convex hull of \(\mathcal{P}\) and \(|\mathcal{P}_{2}|=f(n)\)._
Please refer to Figure 1(b) for an example of a 5-Almost Convex Set of 13 points. In Section 4, we give an exact algorithm for ESMT on \(f(n)\)-Almost Convex Sets of \(n\) terminals. The running time of this algorithm is \(2^{\mathcal{O}(f(n)\log n)}\). Thus, when \(f(n)=\mathcal{O}(\frac{n}{\log n})\), then our algorithm runs in \(2^{\mathcal{O}(n)}\) time, and when \(f(n)=o(n)\) then the running time is \(2^{o(n\log n)}\). This is an improvement on the best known algorithm for the general case [9].
Next, for \(f(n)=\mathcal{O}(\log n)\), we give an FPTAS in Section 5.1. On the other hand in Section 5.2 we show that, for all \(\epsilon\in(0,1]\), when \(f(n)\in\Omega(n^{\epsilon})\), there cannot exist any FPTAS unless \(\mathrm{P}=\mathrm{NP}\).
## 2 Preliminaries
Notations.For a given positive integer \(k\in\mathbb{N}\), the set of integers \(\{1,2,\ldots,k\}\) is denoted for short as \([k]\). Given a graph \(G\), the vertex set is denoted as \(V(G)\) and the edge set as \(E(G)\)
Figure 1: Examples for Definition 1 and Definition 2
Given two graphs \(G_{1}\) and \(G_{2}\), \(G_{1}\cup G_{2}\) denotes the graph \(G\) where \(V(G)=V(G_{1})\cup V(G_{2})\) and \(E(G)=E(G_{1})\cup E(G_{2})\).
In this paper, a regular \(n\)-gon is denoted by \(A_{1}A_{2}A_{3}...A_{n}\) or \(B_{1}B_{2}B_{3}...B_{n}\). For convenience, we define \(A_{n+1}:=A_{1}\), \(B_{n+1}:=B_{1}\), \(A_{0}:=A_{n}\) and \(B_{0}:=B_{n}\). We use the notation \(\{A_{i}\}\) to denote the polygon \(A_{1}A_{2}A_{3}\ldots A_{n}\) and \(\{B_{i}\}\) to denote the polygon \(B_{1}B_{2}B_{3}\ldots B_{n}\). For any regular polygon \(A_{1}A_{2}A_{3}...A_{n}\), the circumcircle of the polygon is denoted as \((A_{1}A_{2}A_{3}...A_{n})\). Given any \(n\)-vertex polygon in the Euclidean plane with vertices \(\mathcal{P}=P_{1}P_{2}P_{3}\ldots P_{n}\), and interval in \(\mathcal{K}\) is a subset of consecutive vertices \(P_{i}F_{i+1...P_{j}}\), \(i,j\in[n]\), also denoted as \([P_{i},P_{j}]\). Here \(P_{i}\) is considered the starting vertex of the interval and \(P_{j}\) the ending vertex. For any \(P_{k}\), \(i\leq k\leq j\) in the interval we will also use the notation \(P_{i}\leq P_{k}\leq P_{j}\).
Given two points \(P\), \(Q\) in the Euclidean plane, we denote by \(\mathsf{dist}(P,Q)\) the Euclidean distance between \(P\) and \(Q\). Given a line segment \(AB\) in the Euclidean plane, \(\overrightarrow{AB}=\mathsf{dist}(A,B)\). For two distinct points \(A\) and \(B\), \(L_{AB}\) denotes the line containing \(A\) and \(B\); and \(\overrightarrow{AB}\) denotes the ray originating from \(A\) and containing \(B\).
When we refer to a graph \(\mathcal{G}\) in the Euclidean plane then \(V(\mathcal{G})\) is a set of points in the Euclidean plane, and \(E(\mathcal{G})\) is a subset of the family of line segments \(\{P_{1}P_{2}|P_{1},P_{2}\in V(\mathcal{G})\}\). For any tree \(\mathcal{T}\) in the Euclidean plane, we denote by the notation \(|\mathcal{T}|\) the value of \(\Sigma_{e\in E(\mathcal{T})}\overline{e}\). A path in a tree \(\mathcal{T}\) is uniquely specified by the sequence of vertices on the path; therefore, \(P_{1}\), \(P_{2}\), \(P_{3}\),..., \(P_{k}\) (where \(P_{i}\in V(\mathcal{T}),\forall i\in[k]\) and \(P_{i}P_{i+1}\in E(\mathcal{T}),\forall i\in[k-1]\)) denotes the path starting from the vertex \(P_{1}\), going through the vertices \(P_{2}\), \(P_{3}\),..., \(P_{k-1}\) and finally ending at \(P_{k}\). Equivalently, we can specify the same path as _the path from \(P_{1}\) to \(P_{k}\)_, since \(\mathcal{T}\) is a tree. Consider the graph \(T\) such that \(V(T)=\{v_{P}|P\in V(\mathcal{T})\}\), \(E(T)=\{v_{P_{1}}v_{P_{2}}|P_{1}P_{2}\) is a line segment in \(E(\mathcal{T})\}\). Then \(T\) is said to be the topology of \(\mathcal{T}\) while \(\mathcal{T}\) is said to realize the topology \(T\). Given two trees \(\mathcal{T}_{1}\), \(\mathcal{T}_{2}\) in the Euclidean plane, \(\mathcal{T}^{\prime}=\mathcal{T}_{1}\cup\mathcal{T}_{2}\) is the graph where \(V(\mathcal{T}^{\prime})=V(\mathcal{T}_{1})\cup V(\mathcal{T}_{2})\) and \(E(\mathcal{T}^{\prime})=E(\mathcal{T}_{1})\cup E(\mathcal{T}_{2})\).
Given any graph \(G\), a Steiner minimal tree or SMT for a terminal set \(\mathcal{P}\subseteq V(G)\) is the minimum length connected subgraph \(G^{\prime}\) of \(G\) such that \(\mathcal{P}\subseteq V(G^{\prime})\). The Steiner Minimal Tree problem on graphs takes as input a set \(\mathcal{P}\) of terminals and aims to find a minimum length SMT for \(\mathcal{P}\). For the rest of the paper, we also refer to a Euclidean Steiner minimal tree as an SMT. Given a set of points \(\mathcal{P}\) in the Euclidean plane, the convex hull of \(\mathcal{P}\) is denoted as \(\mathrm{CH}(\mathcal{P})\).
Euclidean Minimum Spanning Tree (MST).Given a set \(\mathcal{P}\) of \(n\) points in the Euclidean plane, let \(G\) be a graph where \(V(G)=\{v_{P}|P\in\mathcal{P}\}\) and \(E(G)=\{v_{P_{i}}v_{P_{j}}|P_{i},P_{j}\in\mathcal{P}\}\). Also, a weight function \(w_{G}:E(T)\rightarrow\mathbb{R}\) is defined such that for each edge \(v_{P_{1}}v_{P_{2}}\in E(T)\), \(w_{G}(v_{P_{1}}v_{P_{2}})=\overline{P_{1}P_{2}}\). The Euclidean minimum spanning tree of a set \(\mathcal{P}\) is the minimum spanning tree of the graph \(G\) with edge weights \(w_{G}\). Note that a Steiner tree may have shorter length than a minimum spanning tree of the point set \(\mathcal{P}\).
In the plane, the Euclidean minimum spanning tree is a subgraph of the Delaunay triangulation. Using this fact, the Euclidean minimum spanning tree for a given set of points in the Euclidean plane can be found in \(\mathcal{O}(n\log n)\) time as discussed in [15].
Properties of a Euclidean Steiner minimal tree.A Euclidean Steiner minimal tree (SMT) has certain structural properties as given in [3]. We state them in the following Proposition.
Consider an SMT on \(n\) terminals.
No two edges of the SMT intersect with each other.
2. _Each Steiner point has degree exactly_ \(3\) _and the incident edges meet at_ \(120^{\circ}\) _angles. The terminals have degree at most_ \(3\) _and the incident edges form angles that are at least_ \(120^{\circ}\)_._
3. _The number of Steiner points is at most_ \(n-2\)_, where_ \(n\) _is the number of terminals._
A full Steiner tree (FST) is a Steiner tree (need not be minimal, but may include Steiner points) having exactly \(n-2\) Steiner points, where \(n\) is the number of terminals. In an FST, all terminals are leaves and Steiner points are interior nodes. When the length of an FST is minimized, it is called a minimum FST.
All SMTs can be decomposed into FST components such that, in each component a terminal is always a leaf. This decomposition is unique for a given SMT [9]. A topology for an FST is called a full Steiner topology and that of a Steiner tree is called a Steiner topology.
#### Steiner Hulls.
A Steiner hull for a given set of points is defined to be a region which is known to contain an SMT. We get the following propositions from [9].
For a given set of terminals, every SMT is always contained inside the convex hull of those points. Thus, the convex hull is also a Steiner hull.
The next two propositions are useful in restricting the structure of SMTs and the location of Steiner points.
[The Lune property] Let \(\mathrm{UV}\) be any edge of an SMT. Let \(L(\mathrm{U},\mathrm{V})\) be the lune-shaped intersection of circles of radius \(|\mathrm{UV}|\) centered on \(\mathrm{U}\) and \(\mathrm{V}\). No other vertex of the SMT can lie in \(L(\mathrm{U},\mathrm{V})\), except \(U\) and \(V\) themselves.
[The Wedge property] Let \(W\) be any open wedge-shaped region having angle \(120^{\circ}\) or more and containing none of the points from the input terminal set \(\mathcal{P}\). Then \(W\) contains no Steiner points from an SMT of \(\mathcal{P}\).
#### Approximation Algorithms.
We define all the necessary terminology required in terms of a minimization problem, as ESMT is a minimization problem.
[Efficient Polynomial Time Approximation Scheme (EPTAS)] An algorithm is called an efficient polynomial time approximation scheme (EPTAS) for a problem if it takes an input instance and a parameter \(\epsilon>0\), and outputs a solution with approximation factor \((1+\epsilon)\) for a minimization problem in time \(f(1/\epsilon)n^{\mathcal{O}(1)}\) where \(n\) is the input size and \(f(1/\epsilon)\) is any computable function.
[Fully Polynomial Time Approximation Scheme (FPTAS)] An algorithm is called a fully polynomial time approximation scheme (FPTAS) for a problem if it takes an input instance and a parameter \(\epsilon>0\), and outputs a solution with approximation factor \((1+\epsilon)\) for a minimization problem in time \((1/\epsilon)^{\mathcal{O}(1)}n^{\mathcal{O}(1)}\) where \(n\) is the input size.
## 3 Polynomial cases for Euclidean Steiner Minimal Tree
In this section, we consider the Euclidean Steiner Minimal Tree problem for 2-CPR \(n\)-gons. Throughout the section, we denote the inner \(n\)-gon as \(\{A_{i}\}\) and the outer \(n\)-gon as \(\{B_{i}\}\). First, we consider the configuration of an Euclidean Steiner minimal tree in a subsection of the annular area between \(\{A_{i}\}\) and \(\{B_{i}\}\), which will form an isosceles trapezoid. Next, we consider the simple but illustrative case of \(n=3\). Finally we prove our result for all \(n\).
### Isosceles Trapezoids and Vertical Forks
In this section, we discuss one particular Steiner topology when the terminal set is formed by the four corners of a given isosceles trapezoid. However, we will limit the discussion to only the isosceles trapezoids such that the angle between the non-parallel sides is of the form \(\frac{2\pi}{n}\) where \(n\in\mathbb{N}\), \(n\geq 4\). The reason is that given 2-CPR \(n\)-gons \(\{A_{i}\}\), \(\{B_{i}\}\), for \(n\geq 4\) and for any \(j\in\{1,\ldots,n-1\}\), the region \(A_{j}A_{j+1}B_{j}B_{j+1}\) is an isosceles trapezoid such that the angle between the non-parallel sides is of the form \(\frac{2\pi}{n}\).
Let \(ABQP\) be an isosceles trapezoid with \(AB\), \(PQ\) as the parallel sides, and \(AP\), \(BQ\) as the non-parallel sides. Assume without loss of generality that \(AB\) is shorter in length than \(PQ\). Let \(\overline{AB}=1\) and \(\overline{PQ}=\lambda\), where \(\lambda\geq\frac{\sqrt{3}+\tan\frac{\pi}{n}}{\sqrt{3}-\tan\frac{\pi}{n}}\). For brevity, we say \(\lambda_{v}=\frac{\sqrt{3}+\tan\frac{\pi}{n}}{\sqrt{3}-\tan\frac{\pi}{n}}\). Let \(L_{PA}\) and \(L_{QB}\) be the lines containing the line segments \(PA\) and \(QB\) respectively. Also let \(O\) be the point of intersection of \(L_{PA}\) and \(L_{QB}\). Further, let \(M\) and \(N\) be the midpoints of \(AB\) and \(PQ\) respectively (as in Figure 2). As mentioned earlier, \(\angle AOB=\frac{2\pi}{n}\) where \(n\in\mathbb{N}\), \(n\geq 4\).
Now, we explore the following Steiner topology of the terminal set \(\{A,B,P,Q\}\):
1. \(A\) and \(B\) are connected to a Steiner point \(S_{1}\).
2. \(P\) and \(Q\) are connected to another Steiner point \(S_{2}\).
3. \(S_{1}\) and \(S_{2}\) are directly connected (Please see Figure 3).
We call such a topology a _vertical fork topology_ and the Steiner tree realising such a topology, the _vertical fork_. Note that in a vertical fork topology the only unknowns are the locations of the two Steiner points \(S_{1},S_{2}\). Therefore, we have the vertical fork topology as \(T_{vf}\), with \(E(T_{vf})=\{AS_{1},BS_{1},S_{1}S_{2},S_{2}P,S_{2}Q\}\).
We show the existence of a vertical fork and calculate its total length in the following lemma.
A vertical fork \(\mathcal{T}_{vf}\) can be constructed for any \(n\geq 4\) and for any \(\lambda\geq\lambda_{v}\), where
\[\lambda_{v}=\frac{\sqrt{3}+\tan\frac{\pi}{n}}{\sqrt{3}-\tan\frac{\pi}{n}}\]
such that the length of the vertical fork
\[|\mathcal{T}_{vf}|=\frac{(\lambda-1)}{2\tan\frac{\pi}{n}}+\frac{\sqrt{3}( \lambda+1)}{2}\]
Proof.: First, we construct the Steiner points \(S_{1}\), \(S_{2}\) and then prove that the construction works.
In the following construction, we describe how to find the locations of \(S_{1}\) and \(S_{2}\) for the vertical fork:
* We construct equilateral triangles \(ABE\) and \(PQF\) where both points \(E\) and \(F\) lie outside the trapezoid \(ABQP\).
* We construct the circumcircles (\(ABE\)) and (\(PQF\)) of \(ABE\) and \(PQF\), respectively.
* Recall that \(L_{MN}\) is the line segment containing \(M\) and \(N\). Define \(S_{1}\) to be the point of intersection of \(L_{MN}\) and the circle (\(ABE\)) distinct from \(E\); similarly, \(S_{2}\) is the point of intersection of the \(L_{MN}\) and (\(PQF\)) distinct from \(F\). Therefore, by construction, \(S_{2}\), \(M\) must lie on the same side of \(N\) on \(L_{MN}\), and \(S_{1}\), \(N\) must lie on the same side of \(M\) on \(L_{MN}\). Further, \(\angle AS_{1}B=\angle PS_{2}Q=\frac{2\pi}{3}\) by construction.
We now show that the points \(S_{1}\) and \(S_{2}\) indeed lie inside the line segment \(MN\) and the points appear in the order: \(M\), \(S_{1}\), \(S_{2}\), \(N\). We prove the following claim to serve this purpose.
\(\rhd\) Claim 10. \(\overline{S_{1}M}+\overline{S_{2}N}\leq\overline{MN}\)
Proof.: We have \(\overline{MN}=\overline{ON}-\overline{OM}=\frac{\lambda}{2\tan\frac{\pi}{n}}- \frac{1}{2\tan\frac{\pi}{n}}=\frac{(\lambda-1)}{2\tan\frac{\pi}{n}}\)
Again \(\overline{S_{1}M}=\frac{1}{2\sqrt{3}}\) and \(\overline{S_{2}N}=\frac{\lambda}{2\sqrt{3}}\).
Therefore we have
\[\overline{MN}-(\overline{S_{1}M}+\overline{S_{2}M})\] \[= \frac{(\lambda-1)}{2\tan\frac{\pi}{n}}-\frac{1}{\sqrt{3}}-\frac{ \lambda}{2\sqrt{3}}\] \[= \frac{(\lambda-1)}{2\tan\frac{\pi}{n}}-\frac{(\lambda+1)}{2\sqrt {3}}\] \[= \frac{\lambda(\sqrt{3}-\tan\frac{\pi}{n})-(\sqrt{3}+\tan\frac{ \pi}{n})}{2\sqrt{3}\tan\frac{\pi}{n}}\] \[= \frac{\sqrt{3}-\tan\frac{\pi}{n}}{2\sqrt{3}\tan\frac{\pi}{n}}\cdot (\lambda-\lambda_{v})\] Therefore \(\lambda\geq\lambda_{v}\) implies \(\overline{MN}\geq(\overline{S_{1}M}+\overline{S_{2}M})\). This proves Claim 10.
Figure 3: The Vertical Fork, \(\mathcal{T}_{vf}\)
From Claim 10, we get \(\overline{S_{1}M},\overline{S_{2}N}\leq\overline{MN}\). As \(S_{2}\), \(M\) lie on the same side of \(N\) on \(L_{MN}\) and \(S_{1}\), \(N\) lie on the same side of \(M\) on \(L_{MN}\), this implies that \(S_{1}\), \(S_{2}\) lie on the line segment \(MN\). Further, \(\overline{S_{1}M}+\overline{S_{2}N}\leq\overline{MN}\) implies \(\overline{S_{1}M}\leq\overline{S_{2}M}\leq\overline{NM}\), which in turn implies that the points appear in the order: \(M\), \(S_{1}\), \(S_{2}\). \(N\).
Now, we calculate the total length of the vertical fork, \(|\mathcal{T}_{v}f|\):
\[|\mathcal{T}_{v}f|\] \[= \overline{AS_{1}}+\overline{BS_{1}}+\overline{S_{1}S_{2}}+ \overline{PS_{2}}+\overline{QS_{2}}\] \[= 2\overline{PS_{2}}+2\overline{AS_{1}}+\overline{S_{1}S_{2}}\] \[= \frac{2\lambda}{\sqrt{3}}+\frac{2}{\sqrt{3}}+\left(\frac{( \lambda-1)}{2\tan\frac{\pi}{n}}-\frac{(\lambda+1)}{2\sqrt{3}}\right)\] \[= \frac{(\lambda-1)}{2\tan\frac{\pi}{n}}+\frac{\sqrt{3}(\lambda+1) }{2}\] This completes the proof of Lemma 9.
### Euclidean Steiner Minimal Tree for \(2\)-Concentric Parallel Regular \(3\)-gons
Note that a regular \(3\)-gon is an equilateral triangle and therefore, for the rest of this section we call a regular \(3\)-gon as an equilateral triangle. We describe a minimal solution for Euclidean Steiner Minimal Tree for \(2\)-CPR equilateral triangles.
**Lemma 11**.: _Consider two concentric and parallel equilateral triangles \(A_{1}A_{2}A_{3}\) and \(B_{1}B_{2}B_{3}\), where \(B_{1}B_{2}B_{3}\) has side length \(\lambda>1\) and \(A_{1}A_{2}A_{3}\) has side length \(1\). An SMT of \(\{A_{1},A_{2},A_{3},B_{1},B_{2},B_{3}\}\) is an SMT of \(\{B_{1},B_{2},B_{3}\}\), and has length \(\sqrt{3}\cdot\lambda\)._
Proof.: It is to be noted that the centre \(O\) of both \(A_{1}A_{2}A_{3}\) and \(B_{1}B_{2}B_{3}\) is also the Torricelli point of both \(A_{1}A_{2}A_{3}\) and \(B_{1}B_{2}B_{3}\)[5]. On taking \(O\) as the only Steiner point, the SMT for \(\{B_{1},B_{2},B_{3}\}\) is \(\mathcal{T}_{3}=\{OB_{1},OB_{2},OB_{3}\}\)[5]. However, the edges of \(\mathcal{T}_{3}\) already pass through \(A_{1}\), \(A_{2}\) and \(A_{3}\). Therefore, \(\mathcal{T}_{3}\) with \(E(\mathcal{T}_{3})=\{OA_{1},OA_{2},OA_{3},A_{1}B_{1},A_{2}B_{2},A_{3}B_{3}\}\) is also the SMT for \(\{A_{1},A_{2},A_{3},B_{1},B_{2},B_{3}\}\) as shown in Figure 4.
From the definition of \(\mathcal{T}_{3}\), we have the length of the SMT for \(\{A_{i}\}\cup\{B_{i}\}\), for \(n=3\) as
\[|\mathcal{T}_{3}|=\sqrt{3}\cdot\lambda\]
### Euclidean Steiner Minimal Tree and Large Polygons with Large Aspect Ratios
In this section, we consider the Euclidean Steiner Minimal Tree problem when the terminal set is formed by the vertices of \(2\)-CPR \(n\)-gons, namely \(\{A_{i}\}\) and \(\{B_{i}\}\). As mentioned earlier, \(\{A_{i}\}\) is the inner polygon and \(\{B_{i}\}\) is the outer polygon of this set of \(2\)-CPR \(n\)-gons. In particular, we consider the case when \(n\geq 13\); for \(n\leq 12\) these are constant sized input instances and can be solved using any brute-force technique. We also require that the aspect ratio \(\lambda\) has a lower bound \(\lambda_{1}\), i.e. we do not want the two polygons to have sides of very similar length. The exact value of \(\lambda_{1}\) will be clear during the description of the algorithm. Intuitively, when \(\lambda\) is _very large_, the SMT should look similar to what was derived in [16]. In other words, (please refer to Figure 5):
1. for some \(j\in[n]\), there is a vertical fork connecting the two consecutive inner polygon points \(A_{j},A_{j+1}\) with the two consecutive outer polygon points \(B_{j},B_{j+1}\) - we refer to this vertical fork as the _vertical gadget_ for the SMT,
2. the other points in \(\{B_{i}\}\) are connected directly via \((n-2)\) outer polygon edges,
3. the other points in \(\{A_{i}\}\) are connected via \((n-2)\) inner polygon edges.
We call such a topology, a _singly connected topology_ as in Figure 5. For the rest of this section, we consider the SMTs for a large enough aspect ratio, \(\lambda\) and show that there is an SMT that must be a realisation of a _singly connected topology_. We refer to an SMT for the terminal set defined by the vertices of \(\{A_{i}\}\) and \(\{B_{i}\}\) as the SMT for \(\{A_{i}\}\cup\{B_{i}\}\).
Without loss of generality, we consider the edge length of any side \(A_{i}A_{i+1}\) in \(\{A_{i}\}\) to be \(1\). As we defined the aspect ratio to be \(\lambda\), any side \(B_{i}B_{i+1}\) of \(\{B_{i}\}\) must have a side length of \(\lambda\). Further, we observe that for any SMT \(\mathcal{T}\), specifying \(E(\mathcal{T})\) sufficiently determines the entire tree, as \(V(\mathcal{T})=\{P\ |\ \exists Q\text{ such that }PQ\in E(\mathcal{T})\}\).
We start with the following formal definitions:
A Steiner topology of \(\{A_{i}\}\cup\{B_{i}\}\) is a **singly connected topology**, if it has the following structure:
1. [label=]
2. A vertical gadget i.e. five edges \(\{A_{j}S_{a},A_{j+1}S_{a},S_{a}S_{b},S_{b}B_{j},S_{b}B_{j+1}\}\) for some \(1\leq j\leq n\), where \(S_{a}\) and \(S_{b}\) are newly introduced Steiner points contained in the isosceles trapezoid \(\{A_{j},A_{j+1},B_{j},B_{j+1}\}\).
3. All \((n-2)\) polygon edges of \(\{A_{i}\}\) excluding the edge \(A_{j}A_{j+1}\)
4. All \((n-2)\) polygon edges of \(\{B_{i}\}\) excluding the edge \(B_{j}B_{j+1}\)
We define the notion of a path in an SMT for the vertices of \(\{A_{i}\}\) and \(\{B_{i}\}\) where the starting point is in \(\{A_{i}\}\) and the ending point is in \(\{B_{i}\}\).
An **A-B path** is a path in a Steiner tree of \(\{A_{i}\}\cup\{B_{i}\}\) which starts from a vertex in \(\{A_{i}\}\) and ends at a vertex in \(\{B_{i}\}\) with all intermediate nodes (if any) being Steiner points.
The following Definition and Figure 6 is useful for the design of our algorithm.
A **counter-clockwise path** is a path \(P_{1},P_{2},...P_{m}\) in a Steiner tree such that for all \(i\in\{2,\ldots,m-1\},\angle P_{i-1}P_{i}P_{i+1}=\frac{2\pi}{3}\) in the counter-clockwise direction. Similarly, a **clockwise path** is a path \(P_{1},P_{2},...P_{m}\) in a Steiner tree such that for all \(i\in\{2,\ldots,m-1\},\angle P_{i-1}P_{i}P_{i+1}=\frac{4\pi}{3}\) in the counter-clockwise direction.
Figure 4: SMT for 2-CPR 3-gons
Now, we consider any Steiner point \(S\) in any SMT. Let \(P\) and \(Q\) be two neighbours of \(S\). We now prove that there is no point of the SMT inside the triangle \(PSQ\).
Let \(S\) be a Steiner point in any SMT for \(\{A_{i}\}\cup\{B_{i}\}\), with neighbours \(P\) and \(Q\). Then, no point of the SMT lies inside the triangle \(PSQ\).
Proof.: By the _lune property_ (Proposition 5), for any edge \(P_{1}Q_{1}\) in an SMT, for the two circles centred at \(P_{1}\) and at \(Q_{1}\), respectively and both having a radius of \(\overline{P_{1}Q_{1}}\), the intersection region does not contain any point of the SMT.
Let \(E\) and \(F\) be points on the internal angle bisector of \(\angle PSQ\), such that \(\angle SPE=\angle SQF=\frac{\pi}{3}\) as shown in Figure 7. Since \(E\) and \(F\) are points on the angle bisector of \(\angle PSQ\), \(\angle PSE=\angle QSF=\frac{\pi}{3}\). Hence, triangles \(PSE\) and \(QSF\) are equilateral triangles.
Since \(PS\) is an edge in the SMT, by the lune property, the intersection of the circles centred at \(P\) and \(S\), both with radius \(\overline{PS}\) contain no point inside which is a part of the SMT. Since the lune contains the entire equilateral triangle \(PSE\), no point of the SMT lies inside triangle \(PSE\). Similarly, no point of the SMT lies inside the triangle \(QSF\).
Further, as \(\angle PSQ=\frac{2\pi}{3}\), \(\angle SPQ+\angle SQP=\frac{\pi}{3}\). This means \(\angle SPQ,\angle SQP<\frac{\pi}{3}\). Therefore, as \(\angle SPE=\angle SQF=\frac{\pi}{3}\), \(E\) and \(F\) must lie outside the triangle \(PSQ\). This implies that the triangle \(PSQ\) is covered by the union of the triangles \(PSE\) and \(QSF\). As no point of the SMT lies in triangles \(PSE\) and \(QSF\), triangle \(PSQ\) must contain no points of the SMT as well.
Next, we show that in an SMT for \(\{A_{i}\}\cup\{B_{i}\}\) there cannot be any Steiner point, in the interior of the polygon \(\{A_{i}\}\), that is a direct neighbour of some point \(B_{k}\) in the polygon \(\{B_{i}\}\).
**Observation 16**.: _For any SMT for \(\{A_{i}\}\cup\{B_{i}\}\), there cannot exist a Steiner point \(S\) lying in the interior of the polygon \(\{A_{i}\}\) such that \(SB_{k}\) is an edge in an SMT for some \(B_{k}\in\{B_{i}\}\)._
Proof.: For the sake of contradiction, we assume that for some SMT for \(\{A_{i}\}\cup\{B_{i}\}\) there exists a Steiner point \(S\) lying in the interior of the polygon \(\{A_{i}\}\) such that \(SB_{k}\) is an edge in the SMT for some \(B_{k}\in\{B_{i}\}\). Let \(A_{m}A_{m+1}\) be the edge such that \(SB_{k}\) intersects \(A_{m}A_{m+1}\). Without loss of generality, assume that \(A_{m}\) is closer to \(B_{k}\) than \(A_{m+1}\). Therefore \(\angle B_{k}A_{m}S>\angle B_{k}A_{m}A_{m+1}\geq\frac{\pi}{2}+\frac{\pi}{n}> \frac{\pi}{2}\). This means that \(B_{k}S\) is the longest edge in the triangle \(B_{k}SA_{m}\). Therefore we can remove the edge \(B_{k}S\) from the SMT and replace it with either \(B_{k}A_{m}\) or \(SA_{m}\) to get another tree connecting the terminal set with a shorter total length than what we started with, which is a contradiction.
We further analyze SMTs for \(\{A_{i}\}\cup\{B_{i}\}\).
**Observation 17**.: _Let \(\mathcal{V}=\{A_{j},A_{j+1},\ldots,A_{k}\}\) be the interval of consecutive vertices of \(\{A_{i}\}\) lying between \(A_{j}\) and \(A_{k}\) (which includes \(A_{j+1}\)) such that \(A_{j}\) is distinct from \(A_{k+1}\). Let \(U\) be any point on the line segment \(A_{k}A_{k+1}\). Then an SMT of \(\mathcal{V}\cup\{U\}\) is \(\mathcal{T}\), with \(E(\mathcal{T})=\{A_{j}A_{j+1},A_{j+1}A_{j+2},\ldots,A_{k-1}A_{k}\}\cup\{A_{k}U\}\)._
Proof.: For the sake of contradiction, assume that there exists an SMT \(\mathcal{T}^{\prime}\) of \(\mathcal{V}\cup\{U\}\) such that \(|\mathcal{T}^{\prime}|<|\mathcal{T}|\).
From [5], we know that \(\mathcal{T}_{A}\), with \(E(\mathcal{T}_{A})=\{A_{j}A_{j+1},A_{j+1}A_{j+2},\ldots,A_{j-2}A_{j-1}\}\) (_i.e._ all edges of polygon \(\{A_{i}\}\) except \(A_{j-1}A_{j}\)) is an SMT of \(\{A_{i}\}\). Since \(U\in A_{k}A_{k+1}\in E(\mathcal{T}_{A})\), \(\mathcal{T}_{A}\) must also be an SMT of \(\{A_{i}\}\cup\{U\}\). However, \(\mathcal{T}_{A}\) can be partitioned as \(\mathcal{T}_{A}=\mathcal{T}\uplus\mathcal{T}_{1}\), where \(E(\mathcal{T}_{1})=\{UA_{k+1}\}\cup\{A_{k+1}A_{k+2},\ldots,A_{j-2}A_{j-1}\}\). However, as \(\mathcal{T}^{\prime}\) is assumed to be of shorter total length than \(\mathcal{T}\), \(\mathcal{T}^{\prime}\cup\mathcal{T}_{1}\) is a tree, containing \(\{A_{i}\}\) as a vertex subset, which has a shorter total length than \(\mathcal{T}_{A}\), contradicting the optimality of \(\mathcal{T}_{A}\).
We proceed by showing that in any SMT for \(\{A_{i}\}\cup\{B_{i}\}\) there exists at least one A-B path which is also a counter-clockwise path. Symmetrically, we also show that for any SMT for \(\{A_{i}\}\cup\{B_{i}\}\) there exists another clockwise A-B path which consists of only clockwise turns. We can intuitively see that this is true because, if all clockwise paths starting at a vertex in \(\{A_{i}\}\) also ended in a vertex in \(\{A_{i}\}\), there would be enough paths to form a cycle, which is not possible in a tree.
Figure 7: Observation 15: Given triangle \(PSQ\), equilateral triangles \(PSE\) and \(QSF\) are constructed
**Lemma 18**.: _In any SMT for \(\{A_{i}\}\cup\{B_{i}\}\), there exists an \(A\)-\(B\) path which is also a clockwise path and there exists an \(A\)-\(B\) path which is also a counter-clockwise path._
Proof.: For the sake of contradiction, assume that for some SMT for \(\{A_{i}\}\cup\{B_{i}\}\) there is no A-B path which is a counter-clockwise path. We pick an arbitrary vertex \(A_{i_{1}}\in\{A_{i}\}\) such that it is connected to at least one Steiner point (say \(S_{i_{1}}\)). We consider the counter-clockwise path, \(\mathcal{C}_{1}\) starting from \(A_{i_{1}}S_{i_{1}}\) and ending at the first terminal point in the counter-clockwise path. By assumption, there can be no vertex of \(\{B_{i}\}\) in \(\mathcal{C}_{1}\), hence the endpoint must be a vertex in \(\{A_{i}\}\). Let \(A_{i_{2}}\) be the other endpoint of \(\mathcal{C}_{1}\). By definition, the penultimate vertex in this counter-clockwise path must be a Steiner point, we call it \(S_{i_{2}}\). We again consider the counter-clockwise path starting from \(A_{i_{2}}S_{i_{2}}\), and similarly, let \(A_{i_{3}}\) be the first terminal that is encountered in this path. The penultimate vertex in this counter-clockwise path must be a Steiner point, we call it \(S_{i_{3}}\). We can repeat this procedure indefinitely to obtain \(A_{i_{4}},A_{i_{5}},A_{i_{6}},\ldots\) as there are no counter-clockwise A-B paths. However, as \(\{A_{i}\}\) has \(n\) vertices, there must be a repetition of vertices among \(A_{i_{1}},A_{i_{2}},A_{i_{3}},\ldots,A_{i_{n+1}}\), implying the existence of a cycle in the SMT, which is a contradiction.
This symmetrically implies that there must also be a clockwise A-B path.
Our next step is to bound the number of 'connections' that connect the inner polygon \(\{A_{i}\}\) and the outer polygon \(\{B_{i}\}\) for a large aspect ratio, \(\lambda\). As \(\lambda\) increases, the area of the annular region between the two polygons increases as well. Therefore, an increase in the number of connections would lead to a longer total length of the SMT considered. Consequently, we will prove that after a certain positive constant \(\lambda_{1}\), for \(\lambda>\lambda_{1}\) any SMT for \(\{A_{i}\}\cup\{B_{i}\}\) will have a single 'connection' between the two polygons. Moreover, [16] gives us an evidence that as \(\lambda\to\infty\), there will indeed be a single connection connecting the outer polygon and the inner polygon for \(n\geq 12\). We can formalize this notion of existence of a single 'connection' with the following lemma.
**Lemma 19**.: _For any SMT for \(\{A_{i}\}\cup\{B_{i}\}\) with \(n\geq 13\) and \(\lambda>\lambda_{1}\), the number of edges needed to be removed in order to disconnect \(\{A_{i}\}\) and \(\{B_{i}\}\) is 1, where_
\[\lambda_{1}=\frac{1}{1-4\sin\frac{\pi}{n}}\]
Proof.: For the sake of contradiction, assume that for some SMT for \(\{A_{i}\}\cup\{B_{i}\}\), there are at least two distinct edges in that SMT, which are needed to be removed in order to disconnect \(\{A_{i}\}\) and \(\{B_{i}\}\). We start with a claim.
\(\rhd\)Claim 20.: A counter-clockwise A-B path in any SMT of \(\{A_{i}\}\cup\{B_{i}\}\) must have an edge of length greater than \(\lambda\).
Proof.: We consider a generic setting, where \(\mathcal{T}\) is an SMT of some set of terminal points \(\mathcal{P}\). Let \(H\in V(\mathcal{T})\) be vertex of \(\mathcal{T}\). If \(\mathcal{C}\) be a counter-clockwise path starting from \(H\) such that no edge in the counter clockwise path has a length of more than \(r\), for some \(r\in\mathbb{R}^{+}\). Due to Lemma 2.4 (1) of [16], we know that \(\mathcal{C}\) is contained entirely in the circle centred at \(H\) with radius \(2r\).
In our case, any vertex in \(\{A_{i}\}\) and any vertex in \(\{B_{i}\}\) are separated by the distance of at least \(\frac{\lambda-1}{2\sin\frac{\pi}{n}}\). Therefore, by the above fact, the maximum edge length in a counter-clockwise A-B path of any SMT of \(\{A_{i}\}\cup\{B_{i}\}\) must be at least \(\frac{\lambda-1}{4\sin\frac{\pi}{n}}\). Moreover, we have
\(\lambda>\lambda_{1}\). Therefore \(\lambda>\frac{1}{1-4\sin\frac{\pi}{n}}\implies\frac{\lambda-1}{4\sin\frac{\pi}{n}}>\lambda\). Hence, a counter-clockwise A-B path in any SMT of \(\{A_{i}\}\cup\{B_{i}\}\) must have one edge greater than \(\lambda\). This proves the claim.
Now, for any SMT of \(\{A_{i}\}\cup\{B_{i}\}\), let \(\mathcal{C}\) be a counter-clockwise A-B path (this exists due to Lemma 3.2). From Claim 3.2, we know that there is an edge \(e\) in \(\mathcal{C}\), with a length greater than \(\lambda\). On removing the edge \(e\), the SMT splits into a forest of two trees. Let the trees be \(\mathcal{T}_{x}\) and \(\mathcal{T}_{y}\). As we assumed that there are two edges required to disconnect \(\{A_{i}\}\) and \(\{B_{i}\}\), there must exist an A-B path in either \(\mathcal{T}_{x}\) or \(\mathcal{T}_{y}\). Without loss of generality, let \(\mathcal{T}_{x}\) contain an A-B path, and hence \(\mathcal{T}_{x}\) contains at least one point from \(\{A_{i}\}\) and at least one point from \(\{B_{i}\}\). Further, \(\mathcal{T}_{y}\) must contain at least one terminal point (as it must contain all terminal points in one of the sides of the removed edge \(e\)). If \(\mathcal{T}_{y}\) contains a point from \(\{A_{i}\}\), then the polygon \(\{A_{i}\}\) has vertices both from \(\mathcal{T}_{x}\) and \(\mathcal{T}_{y}\); otherwise, if \(\mathcal{T}_{y}\) contains a point from \(\{B_{i}\}\), then the polygon \(\{B_{i}\}\) has vertices both from \(\mathcal{T}_{x}\) and \(\mathcal{T}_{y}\).
This means that either the polygon \(\{A_{i}\}\) or the polygon \(\{B_{i}\}\) will contain at least one node from each of \(\mathcal{T}_{x}\) and \(\mathcal{T}_{y}\). Further, as any given vertex must be either in \(\mathcal{T}_{x}\) or in \(\mathcal{T}_{y}\), either \(\{A_{i}\}\) or \(\{B_{i}\}\) must contain two consecutive vertices \(U_{i}\) and \(U_{i+1}\) such that one of them is in \(\mathcal{T}_{x}\) and the other is in \(\mathcal{T}_{y}\). We simply connect \(U_{i}\) and \(U_{i+1}\) by the polygon edge which is of length 1 (if \(U_{i},U_{i+1}\in\{A_{i}\}\)) or of length \(\lambda\) (if \(U_{i},U_{i+1}\in\{B_{i}\}\)), giving us back a tree \(\mathcal{T}^{\prime}\)containing all the terminals. However we discarded an edge of length greater than \(\lambda\) and added back an edge of length at most \(\lambda\) in this process, which means that the total length of \(\mathcal{T}^{\prime}\) is strictly less than the SMT we started with. This is a contradiction.
We now proceed to further investigate the connectivity of \(\{A_{i}\}\) and \(\{B_{i}\}\). Consider an SMT for \(\{A_{i}\}\cup\{B_{i}\}\) for \(n\geq 13\) and \(\lambda\geq\lambda_{1}\). There must exist \(j\in[n]\) and a Steiner point \(S_{1}\), such that terminals \(A_{j},A_{j+1}\) form a path \(A_{j}\), \(S_{1}\), \(A_{j+1}\) in the SMT and each A-B path passes through \(S_{1}\); where
\[\lambda_{1}=\frac{1}{1-4\sin\frac{\pi}{n}}\]
Proof.: From Lemma 3.2, we know that there exists one clockwise A-B path and one counterclockwise A-B path in any SMT of \(\{A_{i}\}\cup\{B_{i}\}\). Let a clockwise A-B path start from \(A_{r}\) and a counter-clockwise A-B path start from \(A_{l}\). Further following from Lemma 3.2, as there is one edge common to all A-B paths, the clockwise A-B path from \(A_{r}\) and the counter-clockwise A-B path from \(A_{l}\) must share a common edge \(S_{1}S_{2}\). Therefore, each A-B path must pass through \(S_{1}\) and \(S_{2}\). Without loss of generality we assume that point \(S_{1}\) is closer to the polygon \(\{A_{i}\}\) than \(S_{2}\). This means \(S_{1}\) is either a Stiener point or a terminal vertex of \(\{A_{i}\}\). \(\rhd\) Claim 3.2. \(S_{1}\) is not a vertex in \(\{A_{i}\}\)
Proof.: For the sake of contradiction, we assume that \(S_{1}\) to be a vertex in \(\{A_{i}\}\), let \(S_{1}=A_{k}\) in some SMT \(\mathcal{T}_{0}\). We disconnect the edge \(S_{1}S_{2}\) from \(\mathcal{T}_{0}\), which results in the formation of a forest of two trees \(\mathcal{T}_{x}\) and \(\mathcal{T}_{y}\) such that \(S_{1}=A_{k}\in\mathcal{T}_{x}\) and \(S_{2}\in\mathcal{T}_{y}\).
\(\mathcal{T}_{x}\) must contain all vertices of \(\{A_{i}\}\) and \(\mathcal{T}_{y}\) must contain all vertices of \(\{B_{i}\}\), as there would be an A-B path in the graph otherwise (contradicting that \(S_{1}S_{2}\) disconnects \(\{A_{i}\}\) and \(\{B_{i}\}\)). We replace \(\mathcal{T}_{x}\) with the SMT of \(\{A_{i}\}\), which is also an MST (from [16]). Since all MST's are of the same length, we choose such an MST in which \(A_{k}\) is not a leaf node. This means \(A_{k-1}A_{k}\) and \(A_{k}A_{k+1}\) are edges in the chosen MST of \(\{A_{i}\}\). We now add back the edge \(A_{k}S_{2}\), resulting in a connected tree \(\mathcal{T}_{0}^{\prime}\) of \(\{A_{i}\}\cup\{B_{i}\}\). Since we replaced the tree
with an SMT of \(\{A_{i}\}\), the total length of the \(\mathcal{T}^{\prime}_{0}\) must not be more than the total length of the \(\mathcal{T}_{0}\).
However, we observe that \(A_{k}\) has three neighbours in \(\mathcal{T}^{\prime}_{0}\), which are \(A_{k+1},A_{k-1},S_{2}\). However \(\angle A_{k-1}A_{k}A_{k+1}>\frac{2\pi}{3}\). This means either \(\angle S_{2}A_{k}A_{k+1}<\frac{2\pi}{3}\) or \(\angle A_{k-1}A_{k}S_{2}<\frac{2\pi}{3}\). But due to Proposition 3, this cannot form an SMT. Therefore \(\mathcal{T}^{\prime}_{0}\) is not optimal; and hence, \(\mathcal{T}_{0}\) cannot be optimal as well. This proves the claim.
Therefore, \(S_{1}\) must be a Steiner point. Let \(P\) and \(Q\) be the neighbours of \(S_{1}\) other than \(S_{2}\), such that \(\angle PS_{1}S_{2}\) is a clockwise turn while \(\angle QS_{1}S_{2}\) is a counter-clockwise turn. This means that the clockwise A-B path from \(A_{r}\) passes through \(P\) and the counter-clockwise A-B path from \(A_{l}\) passes through \(Q\). We prove that \(P\) and \(Q\) are consecutive vertices of \(\{A_{i}\}\) in some SMT of \(\{A_{i}\}\cup\{B_{i}\}\).
\(\rhd\) Claim 23. \(P\) and \(Q\) cannot simultaneously lie in the interior of the polygon \(\{A_{i}\}\).
Proof.: We assume for the sake of contradiction that both \(P\) and \(Q\) lie in the interior of the polygon \(\{A_{i}\}\). On deleting the edge \(S_{1}S_{2}\), the SMT of \(\{A_{i}\}\cup\{B_{i}\}\) splits into two trees \(\mathcal{T}_{1}\) (rooted at \(S_{1}\)) and \(\mathcal{T}_{2}\) (rooted at \(S_{2}\)). Further, as all A-B paths pass through \(S_{1}S_{2}\), all vertices of \(\{A_{i}\}\) must be in \(\mathcal{T}_{1}\) whereas all vertices of \(\{B_{i}\}\) must lie in \(\mathcal{T}_{2}\). Further, \(\mathcal{T}_{1}\) must be the SMT of \(\{A_{i}\}\cup\{S_{1}\}\) and \(\mathcal{T}_{2}\) must be the SMT of \(\{B_{i}\}\cup\{S_{2}\}\).
* _Case I: One point in \(\{S_{1},S_{2}\}\) lies in the interior of \(\{A_{i}\}\) and the other point lies in the exterior of \(\{A_{i}\}\):_ This means that the edge \(S_{1}S_{2}\) crosses some polygon edge of \(\{A_{i}\}\), call it \(A_{m}A_{m+1}\). Let \(D\) be the intersection of \(A_{m}A_{m+1}\) and \(S_{1}S_{2}\). We replace \(\mathcal{T}_{1}\) with an MST of \(\{A_{i}\}\) that contains the edge \(A_{m}A_{m+1}\) (this can never lead to increase in total tree length due to [16]) and remove the line segment \(S_{1}D\) from \(\mathcal{T}_{2}\). This forms a tree connecting the terminal set \(\{A_{i}\}\cup\{B_{i}\}\) which has a total length smaller than the SMT we started with, which is a contradiction.
* _Case II: Both \(S_{1}\) and \(S_{2}\) lie in the interior of polygon \(\{A_{i}\}\):_ We further consider two cases for this:
* Consider that there is at least one polygon edge \(A_{m}A_{m+1}\) of \(\{A_{i}\}\) such that it does not intersect with \(\mathcal{T}_{2}\). Then we can replace \(\mathcal{T}_{1}\) by the MST of \(\{A_{i}\}\) which does not contain the edge \(A_{m}A_{m+1}\) without reducing the total edge. However, this will be a connecting tree of \(\{A_{i}\}\cup\{B_{i}\}\) with a smaller total length than the tree we started with (as we had removed the edge \(S_{1}S_{2}\) previously), which is a contradiction.
* Now, consider that all polygon edges of \(\{A_{i}\}\) intersect with some edge in \(\mathcal{T}_{2}\). Since \(\mathcal{T}_{2}\) is rooted at \(S_{2}\) which lies in the interior of \(\{A_{i}\}\), there must be \(n\) distinct edges crossing the polygon \(\{A_{i}\}\). However, \(\mathcal{T}_{2}\) must be the SMT of the points \(\{S_{2}\}\cup\{B_{i}\}\), which means there can be at most \((n-1)\) Steiner points other than \(S_{2}\) (from Proposition 3). Therefore, one of these \(n\) edges must have a point in \(\{B_{i}\}\) as one of its endpoints, contradicting Observation 16.
* _Case III: Both \(S_{1}\) and \(S_{2}\) lie in the exterior of polygon \(\{A_{i}\}\):_ This means that the edges \(S_{1}P\) and \(S_{1}Q\) intersect the polygon edges of \(\{A_{i}\}\). Further, from Observation 15, there cannot be any terminal inside the triangle \(S_{1}PQ\). Hence, \(S_{1}P\) and \(S_{1}Q\) must intersect the same polygon edge of \(\{A_{i}\}\) (otherwise intermediate vertices from \(\{A_{i}\}\) would lie in the triangle \(S_{1}PQ\)). Let this edge be \(A_{t}A_{t+1}\). Let \(P_{1}\) and \(Q_{1}\) be the points of intersection of \(A_{t}A_{t+1}\) with \(S_{1}P\) and \(S_{1}Q\) respectively.
We now remove the line segments \(S_{1}P_{1}\) and \(S_{1}Q_{1}\) from \(\mathcal{T}_{1}\). This results in another split into two connected trees \(\mathcal{T}_{P}\) (containing \(P_{1}\), \(P\) and a subset of \(\{A_{i}\}\)) and \(\mathcal{T}_{Q}\) (containing \(Q_{1}\), \(Q\) and the remaining vertices of \(\{A_{i}\}\)).
We observe that the terminals in \(\mathcal{T}_{P}\) and \(\mathcal{T}_{Q}\) form consecutive intervals of the edges in \(\{A_{i}\}\). To see why, consider the opposite, _i.e._ there are vertices \(A_{i_{1}},A_{i_{2}},A_{i_{3}},A_{i_{4}}\) appearing in that order in \(\{A_{i}\}\) such that \(A_{i_{1}},A_{i_{3}}\in\mathcal{T}_{P}\) whereas \(A_{i_{2}},A_{i_{4}}\in\mathcal{T}_{Q}\). As \(\mathcal{T}_{P}\) and \(\mathcal{T}_{Q}\) lie in the interior of \(\{A_{i}\}\), the path from \(A_{i_{1}}\) to \(A_{i_{3}}\) in \(\mathcal{T}_{P}\) must cross the path from \(A_{i_{2}}\) to \(A_{i_{4}}\) in \(\mathcal{T}_{Q}\). However, there cannot be crossing paths in the original SMT of \(\{A_{i}\}\cup\{B_{i}\}\) (due to Proposition 3).
Let \(\{A_{w},A_{w+1},\ldots,A_{t}\}\) be the terminals in \(\mathcal{T}_{P}\) and the remaining terminals in \(\{A_{i}\}\) are in \(\mathcal{T}_{Q}\). Again, let \(\mathcal{T}_{P}^{\prime}\), \(\mathcal{T}_{Q}^{\prime}\) be defined as:
\[E(\mathcal{T}_{P}^{\prime})=\{A_{w}A_{w+1},A_{w+1}A_{w+2},\ldots,A_{t-1}A_{t} \}\cup\{A_{t}P_{1}\}\]
and
\[E(\mathcal{T}_{Q}^{\prime})=\{A_{t+1}A_{t+2},\ldots,A_{w-2}A_{w-1}\}\cup\{Q_{1 }A_{t-1}\}\]
From Observation 17, we know that \(\mathcal{T}_{P}^{\prime}\) is the SMT of \(\{A_{w},A_{w+1},\ldots,A_{t}\}\cup\{P_{1}\}\) and \(\mathcal{T}_{Q}^{\prime}\) is the SMT of \(\{A_{t+1},A_{t+2},\ldots,A_{w-1}\}\cup\{Q_{1}\}\). Therefore, \(|\mathcal{T}_{P}|\leq|\mathcal{T}_{P}|\) and \(|\mathcal{T}_{Q}^{\prime}|\leq|\mathcal{T}_{Q}|\). This means that \(|\mathcal{T}_{1}|\geq|\mathcal{T}_{1}|\), where \(\mathcal{T}_{1}^{\prime}=\{S_{1}P_{1},S_{1}Q_{1}\}\cup\mathcal{T}_{P}^{\prime} \cup\mathcal{T}_{Q}^{\prime}\). Further, \(\mathcal{T}_{1}^{\prime}\) is also a connecting tree of \(\{S_{1}\}\cup\{A_{i}\}\) and as \(\mathcal{T}_{1}\) is an SMT of \(\{S_{1}\}\cup\{A_{i}\}\), then \(\mathcal{T}_{1}^{\prime}\) must also be an SMT with \(|\mathcal{T}_{1}^{\prime}|=|\mathcal{T}_{1}|\).
However, We can remove \(S_{1}P_{1}\) and \(P_{1}A_{t}\) from \(\mathcal{T}_{1}^{\prime}\) and add \(S_{1}A_{t}\) to get another connecting tree of \(\{S_{1}\}\cup\{A_{i}\}\), but with shorter total length (as \(\overline{S_{1}P_{1}+\overline{P_{1}A_{t}}}>\overline{S_{1}A_{t}}\) from triangle inequality). This contradicts the optimality of \(\mathcal{T}_{1}^{\prime}\) which was derived to be an SMT of \(\{S_{1}\}\cup\{A_{i}\}\).
This proves the claim.
We proceed to prove a stronger claim regarding \(P\) and \(Q\).
\(\rhd\) Claim 24. \(P\) and \(Q\) are consecutive vertices of \(\{A_{i}\}\) in any SMT of \(\{A_{i}\}\cup\{B_{i}\}\).
Proof.: We first prove that \(P,Q\) are vertices of \(\{A_{i}\}\).
From Claim 23, we know that at least one among \(P\) and \(Q\) must not be in the interior of polygon \(\{A_{i}\}\). Without loss of generality, let it be \(P\). We now show that \(P\) is a vertex of \(\{A_{i}\}\). For the sake of contradiction we assume that \(P\) is not a vertex of \(\{A_{i}\}\)_i.e._\(P\) is a Steiner point. Let \(\overrightarrow{PF_{1}}\) and \(\overrightarrow{PF_{2}}\) be tangents from \(P\) to \(\{A_{i}\}\) where \(F_{1},F_{2}\) are the points of tangency on \(\{A_{i}\}\). As \(\overrightarrow{PF_{1}}\) and \(\overrightarrow{PF_{2}}\) are tangents, \(\angle F_{1}PF_{2}<\pi\). We denote the region between the tangents \(\overrightarrow{PF_{1}}\) and \(\overrightarrow{PF_{2}}\) which contains the all the points in \(\{A_{i}\}\) as \(\mathcal{R}\).
From any Steiner point \(H\), which lies outside \(\mathcal{R}\), we can choose a neighbour \(H_{1}\) of \(H\) such that \(\overrightarrow{HH_{1}}\) is not directed towards \(\mathcal{R}\). Further we now show that there is one neighbour \(P_{1}\) of \(P\) such that \(P_{1}\) is not in \(\mathcal{R}\) and \(P_{1}\neq S_{1}\).
_Case I: \(S_{1}\) lies in \(\mathcal{R}\)._ However there must be another neighbour \(P_{1}\) of \(P\) not in \(\mathcal{R}\) (as \(\angle F_{1}PF_{2}<\pi\)) but as \(P_{1}\) is outside \(\mathcal{R}\), we must have \(P_{1}\neq S_{1}\).
_Case II: \(S_{1}\) does not lie in \(\mathcal{R}\) (Figure 8)._ As the counter-clockwise A-B path from \(A_{l}\) passes through \(Q\), \(A_{l}\) must be to the left of the line \(L_{QS_{i}}\) if the line is given a orientation from \(Q\) to \(S_{1}\). This means that one of the tangents from \(P\) (Without loss of generality assume it to be \(\overrightarrow{PF_{1}}\)) intersects with the line \(L_{QS_{1}}\). Therefore, taking angles in counter-clockwise order, we have:
\[\angle S_{1}PF_{1} <\frac{\pi}{3} [\text{as }\overrightarrow{PF_{1}}\text{ intersects }L_{QS_{1}}]\] \[\implies\angle S_{1}PF_{2} =\angle S_{1}PF_{1}+\angle F_{1}PF_{2}<\frac{\pi}{3}+\pi=\frac{4 \pi}{3}\] \[\implies\angle F_{2}PS_{1} =2\pi-\angle S_{1}PF_{2}>2\pi-\frac{4\pi}{3}=\frac{2\pi}{3}\]
Hence there must exist one neighbour \(P_{1}\) of \(P\) lying outside the \(\mathcal{R}\), precisely in the region bounded by the rays \(\overrightarrow{PF_{2}}\) and \(\overrightarrow{PS_{1}}\) with \(P_{1}\neq S_{1}\).
Further we can choose a neighbour \(P_{2}\) of \(P_{1}\) such that \(\overrightarrow{P_{1}P_{2}}\) is directed away from \(\mathcal{R}\). We can continue choosing \(P_{2},P_{3},\ldots\) such that \(\overrightarrow{P_{i}P_{i+1}}\) is directed away from the region \(\mathcal{R}\). Moreover, the path \(P,P_{1},P_{2},\ldots\) must end at some point \(B_{k}\) as it cannot end in any vertex of \(\{A_{i}\}\) (since all vertices of \(\{A_{i}\}\) are in \(\mathcal{R}\)). Now, let \(\mathcal{C}_{1}\) be the path from \(A_{r}\) to \(B_{k}\) (which passes through \(P\) and \(P_{1}\)) and let \(\mathcal{C}_{2}\) be the counter-clockwise A-B path from \(A_{l}\) (passing through \(Q\), \(S_{1}\) and \(S_{2}\)). We observe that \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are two edge disjoint A-B paths, which is a contradiction to Lemma 19. This proves that \(P\) is indeed a vertex in \(\{A_{i}\}\). Therefore by Claim 23, \(Q\) does not lie inside \(\{A_{i}\}\) and repeating this same argument on \(Q\) yields that \(Q\) is also a vertex of \(\{A_{i}\}\).
Now, to prove that \(P\) and \(Q\) are consecutive vertices of \(\{A_{i}\}\), we use Observation 15. Observation 15 implies that there must not be any other point of the SMT in the triangle \(PS_{1}Q\). This means that \(P\) and \(Q\) must be consecutive vertices of \(\{A_{i}\}\), otherwise all polygon vertices of \(\{A_{i}\}\) occurring in between \(P\) and \(Q\) would be inside the triangle \(PS_{1}Q\) (as \(\angle PS_{1}Q=\frac{2\pi}{3}\) and \(n\geq 13\)). This proves the claim.
Therefore, \(P\) and \(Q\) are consecutive vertices \(A_{j},A_{j+1}\) of the polygon \(\{A_{i}\}\), for some \(j\in[n]\) such that \(A_{j}\), \(S_{1}\), \(A_{j+1}\) is a path in the SMT, where \(S_{1}\) is a Steiner point lying on all A-B paths.
Our next step is to investigate some more structural properties of an SMT for \(\{A_{i}\}\cup\{B_{i}\}\). From [5], we may guess that there would be a lot of polygon edges of both \(\{A_{i}\}\) and \(\{B_{i}\}\) in
Figure 8: Case II of Claim 24
an SMT. We prove the following Lemma, stating that there is an SMT of \(\{A_{i}\}\cup\{B_{i}\}\) which contains \((n-2)\) polygon edges of \(\{A_{i}\}\).
For an SMT for \(\{A_{i}\}\cup\{B_{i}\}\) with aspect ratio \(\lambda\), \(\lambda>\lambda_{1}=\frac{1}{1-4\sin\frac{\pi}{n}}\), let \(S_{1}\) be the Steiner point such that all A-B paths pass through \(S_{1}\). Let \(A_{j}\) and \(A_{j+1}\) be vertices of \(\{A_{i}\}\) which are connected to \(S_{1}\). Then, there exists an SMT of \(\{A_{i}\}\cup\{B_{i}\}\) having \((n-2)\) polygon edges of \(\{A_{i}\}\) other than \(A_{j}A_{j+1}\).
Proof.: Let \(\mathcal{T}_{0}\) be any SMT of \(\{A_{i}\}\cup\{B_{i}\}\). From Lemma 4, we know that there exists \(S_{1}\), \(A_{j}\) and \(A_{j+1}\) such that \(S_{1}\) is a Steiner point which is a part of all A-B paths, and \(A_{j}S_{1}A_{j+1}\) is a path \(\mathcal{T}_{0}\).
From \(\mathcal{T}_{0}\), we remove the edges \(S_{1}A_{j}\) and \(S_{1}A_{j+1}\) and add the edge \(A_{j}A_{j+1}\). This results in a forest of two disjoint trees \(\mathcal{T}_{x}\) and \(\mathcal{T}_{y}\). One of these trees (say \(\mathcal{T}_{x}\)) must contain all terminal points from \(\{A_{i}\}\) and the other tree must contain all terminals from \(\{B_{i}\}\), as no more A-B paths exist after we removed the edges \(S_{1}A_{j}\) and \(S_{1}A_{j+1}\). Therefore we have \(|\mathcal{T}_{0}|=|\mathcal{T}_{x}|+|\mathcal{T}_{y}|-1+\overline{S_{1}A_{j}} +\overline{S_{1}A_{j+1}}\).
We further replace \(\mathcal{T}_{x}\) with a Euclidean minimum spanning tree \(\mathcal{T}_{x}^{\prime}\) of \(\{A_{i}\}\) such that the edge \(A_{j}A_{j+1}\) is present in \(\mathcal{T}_{x}^{\prime}\). From [5], we know that \(|\mathcal{T}_{x}^{\prime}|\leq|\mathcal{T}_{x}|\). We now remove the edge \(A_{j}A_{j+1}\) and add back the edges \(S_{1}A_{j}\) and \(S_{1}A_{j+1}\) which gives a connected tree \(\mathcal{T}_{0}^{\prime}\) of \(\{A_{i}\}\cup\{B_{i}\}\). Therefore we have:
\[|\mathcal{T}_{0}^{\prime}|=|\mathcal{T}_{x}^{\prime}|+|\mathcal{T}_{y}|-1+ \overline{S_{1}A_{j}}+\overline{S_{1}A_{j+1}}\leq|\mathcal{T}_{x}|+|\mathcal{ T}_{y}|-1+\overline{S_{1}A_{j}}+\overline{S_{1}A_{j+1}}=|\mathcal{T}_{0}|\]
This means \(\mathcal{T}_{0}^{\prime}\) must be an SMT. However, all polygon edges of polygon \(\{A_{i}\}\) appearing in \(\mathcal{T}_{x}^{\prime}\) also appear in \(\mathcal{T}_{0}^{\prime}\) as well, except \(A_{j}A_{j+1}\). Therefore, the SMT \(\mathcal{T}_{0}^{\prime}\) has \((n-2)\) polygon edges of the polygon \(\{A_{i}\}\).
With these set of results in hand, we can now show that there exists an SMT of \(\{A_{i}\}\cup\{B_{i}\}\) following a _singly connected topology_. To show this, we start with any SMT for \(\{A_{i}\}\cup\{B_{i}\}\), \(\mathcal{T}_{0}\), that satisfies all the results derived so far and transform it into a Steiner tree of _singly connected topology_ having total length not longer than the initial Steiner tree \(\mathcal{T}_{0}\).
There exists an SMT for \(\{A_{i}\}\cup\{B_{i}\}\) following a singly connected topology for \(n\geq 13\) and \(\lambda\geq\lambda_{1}\), where
\[\lambda_{1}=\frac{1}{1-4\sin\frac{\pi}{n}}\]
Proof.: Let \(\mathcal{T}_{0}\) be any SMT of \(\{A_{i}\}\cup\{B_{i}\}\) which satisfies the properties of Lemma 4. Further, from Lemma 4, there is a Steiner point \(S_{1}\) which lies on all A-B paths, and there are two consecutive vertices \(A_{j}\), \(A_{j+1}\) such that \(A_{j}\), \(S_{1}\), \(A_{j+1}\) is a path in \(\mathcal{T}_{0}\). As \(\mathcal{T}_{0}\) satisfies the property of Lemma 4, \(\mathcal{T}_{0}\) has \((n-2)\) polygon edges of \(\{A_{i}\}\) excluding the edge \(A_{j}A_{j+1}\).
Let \(H\) be the point in the interior of the polygon \(\{A_{i}\}\) such that \(HA_{j}A_{j+1}\) form an equilateral triangle. As \(n>6\), the common centre \(O\) of \(\{A_{i}\}\) and \(\{B_{i}\}\) does not lie inside the triangle \(HA_{j}A_{j+1}\). Now, we modify \(\mathcal{T}_{0}\) as follows:
1. Remove edges \(A_{j}S_{1}\), \(S_{1}A_{j+1}\) and add edge \(S_{1}H\) to get the forest \(\mathcal{T}_{1}\). We know from [9] that \(S_{1}\), \(S_{2}\) and \(H\) are collinear and this transformation does not change the total length. Therefore \(|\mathcal{T}_{0}|=|\mathcal{T}_{1}|\). Here, \(|\mathcal{T}_{1}|\) denotes the sum of the lengths of edges present in \(\mathcal{T}_{1}\).
2. Add edge \(HO\) and remove all polygon edges of \(\{A_{i}\}\) to get \(\mathcal{T}_{2}\). Therefore \(|\mathcal{T}_{2}|=|\mathcal{T}_{1}|+\overline{HO}-(n-2)=|\mathcal{T}_{0}|+ \overline{HO}-(n-2)\). We observe that \(\mathcal{T}_{2}\) is a tree connecting the points in \(\{B_{i}\}\cup\{O\}\).
3. Let \(S_{0}\) be the Torricelli point of the triangle \(OB_{j}B_{j+1}\). Let \(\mathcal{T}_{3}\) be the Steiner tree of \(\{B_{i}\}\cup\{O\}\) with edges \(S_{0}O\), \(S_{0}B_{j}\), \(S_{0}B_{j+1}\) and other points in \(\{B_{i}\}\) connected through \((n-2)\) polygon edges of the polygon \(\{B_{i}\}\). From [16], we know that \(\mathcal{T}_{3}\) is the SMT of \(\{B_{i}\}\cup\{O\}\). Therefore \(|\mathcal{T}_{3}|\leq|\mathcal{T}_{2}|=|\mathcal{T}_{0}|+\overline{Hom}-(n-2)\). Further we know that \(H\) lies on the edge \(OS_{0}\) (as \(O\), \(S_{0}\) and \(H\) lie on the perpendicular bisector of \(B_{j}\) and \(B_{j+1}\)).
4. Remove edge \(S_{0}O\) and add edge \(S_{0}H\) to get \(\mathcal{T}_{4}\). As \(H\) lies on the edge \(OS_{0}\), we have \(|\mathcal{T}_{4}|=|\mathcal{T}_{3}|-\overline{Hom}\leq|\mathcal{T}_{0}|-(n-2)\).
5. Let \(S_{3}\) be the intersection of the circumcircle of triangle \(A_{j}HA_{j+1}\) (from Lemma 9 the intersection exists as \(\lambda_{1}\geq\lambda_{v}\) for \(n\geq 13\)). Remove the edge \(S_{3}H\) and add the edges \(S_{3}A_{j}\) and \(S_{3}A_{j+1}\) to get \(\mathcal{T}_{5}\). Again, from [9] we know that this transformation does not change the total length. Hence \(|\mathcal{T}_{5}|=|\mathcal{T}_{5}|\leq|\mathcal{T}_{0}|-(n-2)\). Moreover, as \(\lambda>\lambda_{v}\), we observe that \(\{A_{j},B_{j},A_{j+1},B_{j+1},S_{3},S_{0}\}\) form the vertices of the vertical gadget and points \(O\), \(H\), \(S_{3}\), \(S_{0}\) appear in that order on the perpendicular bisector of \(B_{j}\) and \(B_{j+1}\).
6. Add back the \((n-2)\) polygon edges of \(\{A_{i}\}\) which were removed in the second step to get \(\mathcal{T}_{6}\). Therefore \(|\mathcal{T}_{4}|=|\mathcal{T}_{5}|+(n-2)\leq|\mathcal{T}_{0}|\). We further observe that \(\mathcal{T}_{6}\) is a Steiner tree connecting the points \(\{A_{i}\}\cup\{B_{i}\}\) with a singly connected topology.
Therefore we started with an arbitrary SMT \(\mathcal{T}_{0}\) and transformed it into a Steiner tree \(\mathcal{T}_{6}\) with a singly connected topology (where \(\{A_{j},B_{j},A_{j+1},B_{j+1},S_{3},S_{0}\}\) form the vertices of the vertical gadget) which has a total length not worse than \(\mathcal{T}_{0}\). Hence \(\mathcal{T}_{6}\) must be an SMT of \(\{A_{i}\}\cup\{B_{i}\}\). This proves the theorem.
Theorem 26 determines the exact structure of the SMT for \(\{A_{i}\}\cup\{B_{i}\}\). Further from Section 3.1 we determine the exact method to construct the two additional Steiner points in \(\mathcal{O}(1)\) steps - note that this construction time is independent of the integer \(n\) or the real number \(\lambda\). Therefore, SMT for \(\{A_{i}\}\cup\{B_{i}\}\) for \(n\geq 13\) and \(\lambda\geq\lambda_{1}\) is solvable in polynomial time.
Note that the total length of any SMT for \(\{A_{i}\}\cup\{B_{i}\}\), when \(n\geq 13\) and \(\lambda\geq\lambda_{1}\), is
\[|\mathcal{T}_{6}| =|\text{vertical gadget}|+|(n-2)\text{ edges of }\{B_{i}\}|+|(n-2)\text{ edges of }\{A_{i}\}|\] \[\Longrightarrow|\mathcal{T}_{6}| =\bigg{(}\frac{(\lambda-1)}{2\tan\frac{\pi}{n}}+\frac{\sqrt{3}( \lambda+1)}{2}\bigg{)}+(n-2)\cdot\lambda+(n-2)\] \[\Longrightarrow|\mathcal{T}_{6}| =\frac{(\lambda-1)}{2\tan\frac{\pi}{n}}+\bigg{(}n-2+\frac{\sqrt{3 }}{2}\bigg{)}(\lambda+1)\]
Further, \(\lambda_{1}\) converges to \(1\) very quickly with increasing \(n\) (plotted in Figure 9):
\begin{tabular}{|c|c|c|c|c|c|} \hline \(n\) & 13 & 20 & 40 & 100 & 500 \\ \hline \(\lambda_{1}\) & 23.3987 & 2.6719 & 1.4574 & 1.1437 & 1.0258 \\ \hline \end{tabular}
This means for large sized \(n\) and for ratios that are not too small, the SMT will follow a _singly connected topology_.
## 4 Euclidean Steiner Minimal Tree on \(f(n)\)-Almost Convex Point Sets
In this section, we design an exact algorithm for Euclidean Steiner Minimal Tree on \(f(n)\)-Almost Convex Point Sets running in time \(2^{\mathcal{O}(f(n)\log n)}\). Note that \(f(n)\leq n\) is always
true. Therefore, we are given as input a set \(\mathcal{P}\) of \(n\) points in the Euclidean Plane such that \(\mathcal{P}\) can be partitioned as \(\mathcal{P}=\mathcal{P}_{1}\uplus\mathcal{P}_{2}\), where \(\mathcal{P}_{1}\) is the convex hull of \(\mathcal{P}\) and \(|\mathcal{P}_{2}|=f(n)\).
First, we look into some mathematical results and computational results to finally arrive at the algorithm for solving Euclidean Steiner Minimal Tree on \(f(n)\)-Almost Convex Point Sets.
We know that the SMT of \(\mathcal{P}\) can be decomposed uniquely into one or more full Steiner subtrees, such that two full Steiner subtrees share at most one node [9]. In the following lemma, we further characterize one full Steiner subtree.
Let \(\mathbb{F}\) be the full Steiner decomposition of an SMT of \(\mathcal{P}\). Then there exists a full Steiner subtree \(\mathcal{F}\in\mathbb{F}\) such that \(\mathcal{F}\) has at most one common node with at most one other full Steiner subtree in \(\mathbb{F}\).
Proof.: If the SMT of \(\mathcal{P}\) is a full Steiner tree, then the statement is trivially true.
Otherwise, we assume that the SMT of \(\mathcal{P}\) has full Steiner subtrees, \(\mathbb{F}=\{\mathcal{F}_{1},\mathcal{F}_{2},\ldots,\mathcal{F}_{m}\}\), \(m\geq 2\). Now, for the sake of contradiction, we assume that for each full Steiner subtree \(\mathcal{F}_{j}\) there are atleast two other full Steiner subtrees \(\mathsf{Neitree}(\mathcal{F}_{j})^{1},\mathsf{Neitree}(\mathcal{F}_{j})^{2} \in\mathbb{F}\) and two terminals \(P_{1}(\mathcal{F}_{j}),P_{2}(\mathcal{F}_{j})\in\mathcal{P}\) such that \(P_{i}(\mathcal{F}_{j})\in V(\mathcal{F}_{j})\cap V(\mathsf{Neitree}(\mathcal{F} _{j})^{i})\), \(i\in\{1,2\}\). Now, let us construct a walk \(W\) in the SMT of \(\mathcal{P}\). Starting from \(P_{1}(\mathcal{F}_{1})\) of the full Steiner subtree \(\mathcal{F}_{1}\in\mathbb{F}\), we include the path in \(\mathcal{F}_{1}\) connecting to \(P_{2}(\mathcal{F}_{1})\). Note that \(P_{2}(\mathcal{F}_{1})\) is also contained in \(\mathsf{Neitree}(\mathcal{F}_{1})^{2}\). Let \(\mathsf{Neitree}(\mathcal{F}_{1})^{2}=\mathcal{F}_{w_{1}}\) for some \(w_{1}\in[m],w_{1}\neq 1\). Also let \(P_{2}(\mathcal{F}_{1})=P_{1}(\mathcal{F}_{w_{1}})\). Then, we know that there is a \(P_{2}(\mathcal{F}_{w_{1}})\). In \(W\), we include the path in \(\mathcal{F}_{w_{1}}\) connecting \(P_{1}(\mathcal{F}_{w_{1}})\) to \(P_{2}(\mathcal{F}_{w_{1}})\). In general, suppose the \(i^{th}\) full Steiner subtree to be considered in building the walk is \(\mathcal{F}_{w_{i-1}}\) which was reached via point \(P_{1}(\mathcal{F}_{w_{i-1}})\). Then we include in \(W\) the path in \(\mathcal{F}_{w_{i-1}}\) connecting \(P_{1}(\mathcal{F}_{w_{i-1}})\) and \(P_{2}(\mathcal{F}_{w_{i-1}})\). Thus, we can indefinitely keep constructing the walk \(W\) as for each \(\mathcal{F}_{w_{i-1}}\) both \(P_{1}(\mathcal{F}_{w_{i-1}}),P_{2}(\mathcal{F}_{w_{i-1}})\) always exist. However, since there are \(m\) full Steiner subtrees this means that there is an \(\mathcal{F}_{k}\in\mathbb{F}\) and two indices \(i\neq j\) such that \(\mathcal{F}_{k}=\mathcal{F}_{w_{i}}=\mathcal{F}_{w_{j}}\). Thus, there exists a cycle in \(W\), which implies that there is a cycle in the
SMT of \(\mathcal{P}\) (contradiction). Therefore, there must be at least one full Steiner subtree that has at most one common terminal with at most one other full Steiner subtree.
A full Steiner subtree of the SMT of \(\mathcal{P}\) has the topology of a tree. Thus, from Lemma 4.1, we conclude that a full Steiner subtree, that has at most one common terminal with at most one other full Steiner subtree, has at least one leaf of the SMT.
Let the full Steiner subtrees, that have at most one terminal shared with at most one other full Steiner subtree, be called **leaf full Steiner subtrees**. Let the terminal which is shared be called the **pivot** of the leaf full Steiner subtree.
Let \(\mathcal{F}\) be a leaf full Steiner Subtree of the SMT of \(\mathcal{P}\), with terminal points \(\mathcal{P}_{\mathcal{F}}\subseteq\mathcal{P}\) and having pivot \(P_{\mathcal{F}}\). Deleting \(\mathcal{F}\setminus\{P_{\mathcal{F}}\}\) from the SMT of \(\mathcal{P}\) gives us an SMT of the terminal points \(((\mathcal{P}-\mathcal{P}_{\mathcal{F}})\cup\{P_{\mathcal{F}}\})\).
Proof.: Firstly, we observe that deleting \(\mathcal{F}\setminus\{P_{\mathcal{F}}\}\) from the SMT of \(\mathcal{P}\) will indeed give us a tree, as \(\mathcal{F}\) is a leaf full Steiner subtree. Let us call this tree \(\mathcal{Y}\).
Now for the sake of contradiction, we assume that the total length of \(\mathcal{Y}\) is strictly larger than the SMT \(\mathcal{F}^{\prime}\) of \(((\mathcal{P}-\mathcal{P}_{\mathcal{F}})\cup\{P_{\mathcal{F}}\})\). However, this means, the total length of \(\mathcal{F}^{\prime}\cup\mathcal{F}\) is strictly smaller than that of the SMT of \(\mathcal{P}\). As \(\mathcal{F}^{\prime}\cup\mathcal{F}\) is also a Steiner tree of \(\mathcal{P}\), this contradicts the minimality of the initial SMT of \(\mathcal{P}\).
Now we are ready to describe the algorithm. Recall that \(\mathcal{P}\) is partitioned as \(\mathcal{P}=\mathcal{P}_{1}\uplus\mathcal{P}_{2}\), where \(\mathcal{P}_{1}\) is the convex hull of \(\mathcal{P}\) and \(\mathcal{P}_{2}\) is the set of \(f(n)\) points lying in the interior of \(\mathcal{P}_{1}\). For the sake of brevity of notations let \(|\mathcal{P}_{2}|=k\).
Let \(\mathcal{P}\) be a \(k\)-Almost Convex Point Set. A minimum FST of a subset \(\mathcal{S}\) of \(\mathcal{P}\) can be found in \(\mathcal{O}(4^{|\mathcal{S}|}\cdot|\mathcal{S}|^{k})\) time.
Proof.: We observe that for any \(\mathcal{S}\subseteq\mathcal{P}\), \(\mathcal{S}\) forms a convex polygon with at most \(k\) points lying in the interior. For \(|\mathcal{S}|\leq 2\), the statement of the lemma is trivially true. Hence we assume that \(|\mathcal{S}|>2\).
From [9], the number of full Steiner topologies of \(\mathcal{S}\) is
\[\frac{|\mathcal{S}|!}{|\mathrm{CH}(\mathcal{S})|!}\cdot\frac{\binom{2|\mathcal{ S}|-4}{|\mathcal{S}|-2}}{|\mathcal{S}|-1}\]
However, we know that:
Figure 10: Leaf full Steiner subtrees enclosed in ellipses, other full Steiner subtrees enclosed in rectangles, pivots of leaf full Steiner subtrees encircled
\[\frac{(\sum\limits_{|\mathcal{S}|-2}^{|\mathcal{S}|-4})}{|\mathcal{S}|-1}<\frac{ (\sum\limits_{|\mathcal{S}|}^{|\mathcal{S}|})}{|\mathcal{S}|}<\frac{\sum\limits _{r=0}^{2|\mathcal{S}|}{2^{|\mathcal{S}|}_{k}}}{|\mathcal{S}|}=\frac{2^{2| \mathcal{S}|}}{|\mathcal{S}|}=\frac{4^{|\mathcal{S}|}}{|\mathcal{S}|}\]
And,
\[\frac{|\mathcal{S}|!}{|\mathrm{CH}(\mathcal{S})|!}<\frac{|\mathcal{S}|!}{(| \mathcal{S}|-k)!}<|\mathcal{S}|^{k}\]
Therefore, the number of full Steiner topologies of \(\mathcal{S}\) is at most \(4^{|\mathcal{S}|}|\mathcal{S}|^{k-1}\). Each of these topologies can be enumerated and using _Melzak's FST Algorithm_, we can also find the SMT realizing each such full Steiner topology in linear time, as given in [9]. Therefore to iterate over all topologies and find a minimum takes at most time:
\[(4^{|\mathcal{S}|}\cdot|\mathcal{S}|^{k-1})\cdot\mathcal{O}(|\mathcal{S}|)=O( 4^{|\mathcal{S}|}\cdot|\mathcal{S}|^{k})\]
Now, we find the time required for extending the results of Lemma 32 to all subsets of \(\mathcal{P}\).
Let \(\mathcal{P}\) be a \(k\)-Almost Convex Set. Computing a minimum FST **for all** subsets \(\mathcal{S}\subseteq\mathcal{P}\) can be done in \(\mathcal{O}(n^{k}\cdot 5^{n})\) time.
Proof.: Using Lemma 32, we can get a minimum FST for a single subset \(\mathcal{S}\subseteq\mathcal{P}\) in \(\mathcal{O}(4^{|\mathcal{S}|}|\mathcal{S}|^{k})\) time. Moreover, we know that the number of subsets of \(\mathcal{P}\) that are of size \(r\) is \(\binom{n}{r}\). This means that the total time to compute a minimum FST **for all** subsets \(\mathcal{S}\subseteq\mathcal{P}\), time taken is:
\[\sum\limits_{r=0}^{n}\binom{n}{r}\cdot\mathcal{O}(4^{r}\cdot r^{k})=\sum \limits_{r=0}^{n}\binom{n}{r}\cdot\mathcal{O}(n^{k}\cdot 4^{r})=\mathcal{O}(n^{k} \cdot(1+4)^{n})=\mathcal{O}(n^{k}\cdot 5^{n})\]
For each \(\mathcal{S}\subseteq\mathcal{P}\), we denote by \(\mathcal{F}_{\mathcal{S}}\) a minimum FST of \(\mathcal{S}\) and by \(\mathcal{T}_{\mathcal{S}}\) the SMT of \(\mathcal{S}\).
The SMT of subset \(\mathcal{S}\subseteq\mathcal{P}\), \(\mathcal{T}_{\mathcal{S}}\), can be found in \(\mathcal{O}(|\mathcal{S}|\cdot 2^{|\mathcal{S}|})\) time, given that we have pre-computed \(\mathcal{T}_{\mathcal{R}}\) and \(\mathcal{F}_{\mathcal{R}}\), \(\forall\mathcal{R}\subseteq\mathcal{S}\).
Proof.: If \(\mathcal{T}_{\mathcal{S}}\) was a full Steiner tree then it would be \(\mathcal{F}_{\mathcal{S}}\). Otherwise, \(\mathcal{T}_{\mathcal{S}}\) contains multiple full Steiner subtrees.
Let \(\mathcal{F}\) be a leaf full Steiner subtree of \(\mathcal{T}_{\mathcal{S}}\) with pivot \(P_{\mathcal{F}}\). Therefore from Lemma 31 we have \(\mathcal{T}_{\mathcal{S}}=\mathcal{T}_{((\mathcal{S}-V(\mathcal{F}))\cup(P_{ \mathcal{F}}))}\cup\mathcal{F}\). Therefore we can iterate over all subsets \(\mathcal{R}\subset\mathcal{S}\) and all terminals \(P\in\mathcal{R}\), and take the minimum-length tree among \(\mathcal{T}_{((\mathcal{S}-\mathcal{R})\cup(P))}\cup\mathcal{F}_{\mathcal{R}}\). Since we are iterating over all \(\mathcal{R}\subset\mathcal{S}\), and all \(P\in\mathcal{R}\), we are guaranteed to get \(\mathcal{R}=V(\mathcal{F})\cap\mathcal{S}\) and \(P=P_{\mathcal{F}}\) on one such iteration.
Now, as there are \(\mathcal{O}(2^{|\mathcal{S}|})\) possibilities of \(\mathcal{R}\subset\mathcal{S}\) and \(\mathcal{O}(|\mathcal{S}|)\) possibilities of \(P\in\mathcal{R}\), we have \(\mathcal{O}(|\mathcal{S}|\cdot 2^{|\mathcal{S}|})\) possibilities of the pair \((\mathcal{R},P)\). Therefore the total time required for iterating is \(\mathcal{O}(|\mathcal{S}|\cdot 2^{|\mathcal{S}|})\).
SMTs for all subsets \(\mathcal{S}\subseteq\mathcal{P}\), \(\mathcal{T}_{\mathcal{S}}\) can be found in \(\mathcal{O}(n\cdot 3^{n})\) time, given that we have precomputed a minimum FST \(\mathcal{F}_{\mathcal{S}}\)\(\forall\mathcal{S}\subseteq\mathcal{P}\).
Proof.: Using Lemma 34, we can get the SMT \(\mathcal{T}_{\mathcal{S}}\), for a single subset \(\mathcal{S}\subseteq\mathcal{P}\) in \(\mathcal{O}(r\cdot 2^{r})\) time, where \(|\mathcal{S}|=r\). Moreover, we know that the number of subsets of \(\mathcal{P}\) that are of size \(r\) is \(\binom{n}{r}\). This means that the total time to compute the SMT for all subsets \(\mathcal{S}\subseteq\mathcal{P}\), time taken is:
\[\sum_{k=0}^{n}\binom{n}{k}\cdot\mathcal{O}(k\cdot 2^{k})=\mathcal{O}(n\cdot(1+ 2)^{n})=\mathcal{O}(n\cdot 3^{n})\]
However, to apply Lemma 34 on some subset \(\mathcal{S}\subseteq\mathcal{P}\) for computing \(\mathcal{T}_{\mathcal{S}}\), we must also have \(\mathcal{T}_{\mathcal{R}}\) precomputed for all \(\mathcal{R}\subseteq\mathcal{S}\). This can be guaranteed by computing \(\mathcal{T}_{\mathcal{S}}\) and \(\mathcal{F}_{\mathcal{S}}\), for all subsets \(\mathcal{S}\subseteq\mathcal{P}\) in an increasing order of \(|\mathcal{S}|\) (or any order which guarantees that the subsets of \(\mathcal{S}\) are processed before \(\mathcal{S}\)).
Finally, we state our algorithm.
**Theorem 36**.: _An SMT \(\mathcal{T}_{\mathcal{P}}\) of a \(k\)-Almost Convex Set \(\mathcal{P}\) of terminals can be computed in \(\mathcal{O}(n^{k}\cdot 5^{n})\) time._
Proof.: Consider the following algorithm:
```
1:for all \(\mathcal{S}\subseteq\mathcal{P}\)do
2: Compute \(\mathcal{F}_{\mathcal{S}}\)\(\triangleright\) Using Lemma 33
3:endfor\(\triangleright\) This takes \(\mathcal{O}(n^{k}\cdot 5^{n})\) time
4:for all \(\mathcal{S}\subseteq\mathcal{P}\)do
5: Compute \(\mathcal{T}_{\mathcal{S}}\)\(\triangleright\) Using Lemma 35
6:endfor\(\triangleright\) This takes \(\mathcal{O}(n\cdot 3^{n})\) time
7:return\(\mathcal{T}_{\mathcal{P}}\)\(\triangleright\) Total runtime is \(\mathcal{O}(n^{k}\cdot 5^{n}+n\cdot 3^{n})=\mathcal{O}(n^{k}\cdot 5^{n})\)
```
**Algorithm 1** Computation of \(\mathcal{T}_{\mathcal{P}}\) **Input:**\(\mathcal{P}\)
Hence we have an SMT of a \(k\)-Almost Convex Point Set \(\mathcal{P}\) in \(\mathcal{O}(n^{k}\cdot 5^{n})\) time.
The above theorem gives us several improvements in special classes of inputs, based on the number of input points lying inside the convex hull of the input set, as described in the following corollary. Let there be an \(f(n)\)-Almost Convex Point Set \(\mathcal{P}\) containing \(n\) points. Recall that \(\mathcal{P}=\mathcal{P}_{1}\uplus\mathcal{P}_{2}\), \(\mathcal{P}_{1}\) containing the points on the convex hull of \(\mathcal{P}\), and \(|\mathcal{P}_{2}|=f(n)\). It is only possible that \(f(n)\leq n\).
**Corollary 37**.: _Let \(\mathcal{P}\) be a \(f(n)\)-Almost Convex Point Set. Then, then there is an algorithm \(\mathcal{A}\) for Euclidean Steiner Minimal Tree such that, \(\mathcal{A}\) runs in \(2^{\mathcal{O}(n+f(n)\log n)}\) time. In particular,_
1. _When_ \(f(n)=\mathcal{O}(n)\)_,_ \(\mathcal{A}\) _runs in_ \(2^{\mathcal{O}(n\log n)}\) _time._
2. _When_ \(f(n)=\Omega(\frac{n}{\log n})\) _and_ \(f(n)=o(n)\)_,_ \(\mathcal{A}\) _runs in_ \(2^{o(n\log n)}\)_._
3. _When_ \(f(n)=\mathcal{O}(\frac{n}{\log n})\)_,_ \(\mathcal{A}\) _runs in_ \(2^{\mathcal{O}(n)}\) _time._
Therefore, for \(f(n)=o(n)\), our algorithm for Euclidean Steiner Minimal Tree does better on \(f(n)\)-Almost Convex Points Sets than the current best known algorithm [8].
## 5 Approximation Algorithms for Euclidean Steiner Minimal Tree
The Euclidean Steiner Minimal Tree problem is NP-hard as shown by Garey et al. in [6]. Garey et al. also prove that there cannot be an FPTAS (fully polynomial time approximation scheme) for this problem unless \(P=NP\). At the same time, the case when all the terminals lie on the boundary of a convex region admits an FPTAS as given in [14]. We aim to conduct a more fine-grained analysis for the problem by considering \(f(n)\)-Almost Convex Point Sets of \(n\) terminals and studying the existence of FPTASes for different functions \(f(n)\). First, we present an FPTAS for Euclidean Steiner Minimal Tree on \(f(n)\)-Almost Convex Sets of \(n\) terminals, when \(f(n)=\mathcal{O}(\log n)\). Next, we prove that no FPTAS exists for the case when \(f(n)=\Omega(n^{\epsilon})\), where \(\epsilon\in(0,1]\).
### FPTAS for Euclidean Steiner Minimal Tree on Cases of Almost Convex Point Sets
We first propose an algorithm for computing the SMT of a planar graph \(G\) having \(N\) vertices and \(n\) terminals, out of which \(k\) terminals lie on the outer face of \(G\) and the remaining terminals lie within the boundary. Next, following the procedure in [14] we get an FPTAS for Euclidean Steiner Minimal Tree on \(f(n)\)-Almost Convex Sets of \(n\) terminals, where \(f(n)=\mathcal{O}(\log n)\).
We state the following proposition from Theorem 1 in [14]:
Let \(\mathcal{P}\) be the vertices of any polygon in the plane, \(\mathcal{K}\) a subset of \(\mathcal{P}\), and \(\mathcal{T}\) a tree consisting of all the vertices of \(\mathcal{K}\) (and possibly some other vertices as well) and contained entirely inside \(\mathcal{P}\). Then on removing any edge of the tree, we get two disjoint trees \(\mathcal{T}_{1},\mathcal{T}_{2}\), such that the vertices of \(\mathcal{K}\) in each tree \(\mathcal{T}_{i},i\in\{1,2\}\) form an interval in \(\mathcal{K}\).
Using Proposition 38 and the Dreyfus-Wagner algorithm [4], we give an algorithm for obtaining the SMT of a planar graph \(G\).
Let \(\mathcal{K}\) represent the set of terminals lying on the outer face of \(G\) and \(\mathcal{R}\) be the set of terminals lying inside the outer face of \(G\). We have \(|V(G)|=N\), \(|\mathcal{K}\cup\mathcal{R}|=n\), and \(|\mathcal{K}|=k\). Let \(C(\mathcal{L})\) denote the SMT in \(G\) for a terminal subset \(\mathcal{L}\subseteq V(G)\). Let \(B(v,\mathcal{L},[a,b))\) denote the SMT in \(G\) for the terminal set \(\{v\}\cup\mathcal{L}\cup[a,b)\), where \(\mathcal{L}\subseteq\mathcal{R}\), \([a,b)\) is the set of vertices in \(\mathcal{K}\) forming an interval from vertex \(a\) to \(b\) in counterclockwise direction along the outer boundary of \(G\) including \(a\) but excluding \(b\), \(v\in V(G)\setminus(\mathcal{L}\cup[a,b))\), and the degree of \(v\) is at least \(2\) in \(B(v,\mathcal{L},[a,b))\). Let \(A(v,\mathcal{L},[a,b))\) denote the SMT in \(G\) for the terminal set \(\{v\}\cup\mathcal{L}\cup[a,b)\), where \(\mathcal{L}\), \([a,b)\), and \(v\) are as defined in the previous case, and the degree of \(v\) is at most \(1\) in \(A(v,\mathcal{L},[a,b))\).
Splitting the SMT at a vertex \(v\) of degree at least \(2\) gives rise to two smaller instances of the Steiner Minimal Tree problem on graphs.
\[B(v,\mathcal{L},[a,b))=\min_{\Pi_{1},\Pi_{2},\Pi_{3}}\{C(\{v\}\cup\mathcal{L }^{\prime}\cup[a,x))+C(\{v\}\cup(\mathcal{L}\setminus\mathcal{L}^{\prime}) \cup[x,b))\} \tag{1}\]
where the conditions on \(\mathcal{L}^{\prime}\) and \(x\) are \(\Pi_{1}:\mathcal{L}^{\prime}\subseteq\mathcal{L}\), \(\Pi_{2}:x\in\mathcal{K},a<x<b\), and \(\Pi_{3}:\emptyset\subset\mathcal{L}^{\prime}\cup[a,x)\subset\mathcal{L}\cup[ a,b)\).
The intuition is to root the tree at an internal terminal vertex and start growing the Steiner tree from there. Observe that on removing one of the internal vertices \(v\) in the tree \(\mathcal{T}\), we get one, two or three disjoint subtrees. They induce a partition over the terminals. The terminals in \(\mathcal{K}\) in each of the subtrees form intervals in \(\mathcal{K}\), according to Proposition 38.
Moreover, the terminals in \(\mathcal{R}\) can be partitioned in any way, not necessarily maintaining the interval structure. This is captured in the following recurrence relation:
\[C(\{v\}\cup\mathcal{L}\cup[a,b))=\min_{\Pi_{1},\Pi_{2},\Pi_{3}}\{A(v,\mathcal{L} _{1},[a,c))+A(v,\mathcal{L}_{2},[c,d))+A(v,\mathcal{L}_{3},[d,b))\} \tag{2}\]
where the conditions on \(\mathcal{L}_{1}\), \(\mathcal{L}_{2}\), \(\mathcal{L}_{3}\), \(c\), and \(d\) are \(\Pi_{1}:\mathcal{L}_{1},\mathcal{L}_{2},\mathcal{L}_{3}\subseteq\mathcal{L}\), \(\Pi_{2}:\mathcal{L}_{1}\cup\mathcal{L}_{2}\cup\mathcal{L}_{3}=\mathcal{L}\), and \(\Pi_{3}:c,d\in\mathcal{K},a\leq c\leq d\leq b\) and we have
\[A(v,\mathcal{L}^{\prime},[p,q))=\min\{\min_{u\notin\mathcal{L}^{\prime}}\{B(u,\mathcal{L}^{\prime},[p,q))+d(u,v)\},\min_{u\in\mathcal{L}^{\prime}\cup[p,q) }\{C(\mathcal{L}^{\prime}\cup[p,q))+d(u,v)\}\} \tag{3}\]
Our aim is to compute \(C(\{v\}\cup(\mathcal{R}\setminus\{v\})\cup\mathcal{K})\), where \(v\in\mathcal{R}\). We precompute the shortest distance between all pairs of vertices. We then compute the values of \(C(.)\) and \(B(.)\) in increasing order of cardinality of subsets of vertices in \(\mathcal{K}\) and \(\mathcal{R}\). Let \(d(u,v)\) denote the shortest path length between \(u\) and \(v\). The base cases are \(C(\{v\}\cup\{a\})=d(v,a)\) for all \(v\in V(G)\) and \(a\in\mathcal{K}\cup\mathcal{R}\).
```
1:Compute the shortest distance between all pairs of vertices
2:for all \(u\in V(G)\) and \(a\in K\cup R\)do
3: Set \(C(\{u\}\cup\{a\})=d(u,a)\)
4:endfor
5:Select a vertex \(v\in\mathcal{R}\)
6:for\(i=1,\ldots,n-k-1\)do
7:for each \(\mathcal{L}\subseteq\mathcal{R}\setminus\{v\}\) of size \(i\)do
8:for\(j=1,\ldots,k\)do
9:if j=k then
10: Compute \(B(v,\mathcal{L},\mathcal{K})\) using Equation (1)
11: Compute \(C(\{v\}\cup\mathcal{L}\cup\mathcal{K})\) using Equation (2)
12:else
13:for each \([a,b)\subseteq\mathcal{K}\) of size \(j\)do
14: Compute \(B(v,\mathcal{L},[a,b))\) using Equation (1)
15: Compute \(C(\{v\}\cup\mathcal{L}\cup[a,b))\) using Equation (2)
16:endfor
17:endif
18:endfor
19:endfor
20:endfor
21:return\(C(\{v\}\cup(\mathcal{R}\setminus\{v\})\cup\mathcal{K})\)
```
**Algorithm 2** Computation of SMT of planar graph \(G\) with terminal set \(\mathcal{K}\cup\mathcal{R}\) **Input:**\(G,\mathcal{K}\), \(\mathcal{R}\)
We analyse the correctness and running time of Algorithm 2.
**Theorem 39**: _Consider a planar graph \(G\) on \(N\) vertices and a set \(\mathcal{K}\uplus\mathcal{R}\subseteq V(G)\) of \(n\) terminals such that \(\mathcal{K}\) is defined as the terminals lying on the outer face of \(G\). Moreover, let \(|\mathcal{K}|=k\). Then Algorithm 2 computes the SMT for \(\mathcal{K}\uplus\mathcal{R}\) in \(G\) in time \(\mathcal{O}(N^{2}k^{4}4^{n-k}+Nk^{3}3^{n-k}+N^{3})\)._
**Correctness of Algorithm 2.** In order to prove the correctness of Algorithm 2, we need to show that the Equations (1) and (2) are valid.
In Equation (1), \(B(v,\mathcal{L},[a,b))\) denotes an SMT for the terminal set \(\{v\}\cup\mathcal{L}\cup[a,b)\), conditioned on the fact that the degree of \(v\) is at least 2 in it. Let us split the SMT at vertex \(v\) into two smaller subtrees. This must also split the terminals in \([a,b)\) in two intervals \([a,x)\) and \([x,b)\), respectively. Otherwise it would mean that the SMT has crossing edges, which is not possible. The vertices in \(\mathcal{L}\) can be present in any of the two subtrees, hence we consider all possible partitions of \(\mathcal{L}\) into two subsets \(\mathcal{L}^{\prime}\) and \(\mathcal{L}\setminus\mathcal{L}^{\prime}\). Thus for Equation (1), \(\mathrm{LHS}\geq\mathrm{RHS}\). On the other hand, the expression in the RHS of Equation (1) is a tree containing the vertex subset \(\{v\}\cup\mathcal{L}\cup[a,b)\). Since, \(B(v,\mathcal{L},[a,b))\) is an SMT for the same vertex subset, in Equation (1) \(\mathrm{LHS}\geq\mathrm{RHS}\). Therefore, Equation (1) is valid.
For Equation (2), we take \(v\) as the root of the SMT \(C(\{v\}\cup\mathcal{L}\cup[a,b))\). The degree of \(v\) in the SMT can be 1, 2, or 3. Accordingly, on removing \(v\), we will get 1, 2, or 3 subtrees, with the condition that in each subtree the degree of \(v\) is 1. Again, the terminals in \([a,b)\) are divided into smaller intervals \([a,c)\), \([c,d)\) and \([d,b)\) for some \(c,d\in\mathcal{K}\) satisfying \(a<c<d<b\). The terminals in \(\mathcal{L}\) are divided among the subtrees in any combination. The number of such intervals and partitions is equal to the degree of \(v\) in \(C(\{v\}\cup\mathcal{L}\cup[a,b))\). The term \(A(v,\mathcal{L}^{\prime},[p,q))\) is obtained by minimizing across all SMTs satisfying the condition that degree of \(v\) in the SMT is 1. Thus in Equation (2), \(\mathrm{LHS}\geq\mathrm{RHS}\). On the other hand, the RHS of Equation (2) given a tree containing vertices \(\{v\}\cup\mathcal{L}\cup[a,b)\). Since \(C(\{v\}\cup\mathcal{L}\cup[a,b))\) is an SMT on the terminal set \(\{v\}\cup\mathcal{L}\cup[a,b)\), in Equation (2) \(\mathrm{LHS}\leq\mathrm{RHS}\). Thus, Equation (2) is valid.
**Running time of Algorithm 2.** All pairs shortest paths can be calculated in \(\mathcal{O}(N^{3})\) time. The time complexity of the dynamic program has two components to it. One is due to computation of the \(\mathrm{B}(.)\) values using Equation (1) and the other is for calculating the \(\mathrm{C}(.)\) values using Equation (2).
1. The number of computational steps for calculating \(B(v,\mathcal{L},[a,b))\) using Equation (1) is of the order of the number of choices of \(v\), \(\mathcal{L}\), \(\mathcal{L}^{\prime}\), \([a,b)\), and \([a,x)\) such that \(\mathcal{L}\subset\mathcal{R}\), \(\mathcal{L}^{\prime}\subseteq\mathcal{L}\), \(a,x,b\in\mathcal{K}\), \(a\leq x\leq b\), and \(v\in V(G)\setminus(L\cup[a,b))\). Each vertex in \(\mathcal{R}\) belongs to exactly one of the sets \(\mathcal{L}^{\prime}\), \(\mathcal{L}\setminus\mathcal{L}^{\prime}\), or \(V(G)\setminus\mathcal{L}\). The vertices in \(\mathcal{K}\) are partitioned into three intervals, \([a,x)\), \([x,b)\), and \([b,a)\). There are at most \(N\) possibilities for \(v\). This gives us a running time of \(\mathcal{O}(Nk^{3}3^{n-k})\).
2. The number of computational steps for calculating \(C(\{v\}\cup\mathcal{L}\cup[a,b))\) using Equation (2) is \(3N\) times the order of the number of choices of \(v\), \(\mathcal{L}\), \(\mathcal{L}_{1}\), \(\mathcal{L}_{2}\), \(a\), \(b\), \(c\), and \(d\) such that \(\mathcal{L}\subset\mathcal{R}\), \(\mathcal{L}_{1}\subseteq\mathcal{L}\), \(\mathcal{L}_{2}\subseteq\mathcal{L}\), \(\{a,c,d,b\}\subseteq\mathcal{K}\), \(a\leq c\leq d\leq b\), and \(v\in V(G)\setminus(\mathcal{L}\cup[a,b))\). Each vertex in \(\mathcal{R}\) belongs to exactly one of the sets \(\mathcal{L}_{1}\), \(\mathcal{L}_{2}\), \(\mathcal{L}_{3}\) or \(V(G)\setminus\mathcal{L}\). The vertices in \(\mathcal{K}\) are partitioned into at most four intervals, \([a,c)\), \([c,d)\), \([d,b)\) and \([b,a)\). There are at most \(N\) possibilities for \(v\). The \(3N\) factor is because calculating each of the \(C(.)\) values involves minimization over at most \(3N\) terms. This gives us a running time of \(\mathcal{O}(N^{2}k^{4}4^{n-k})\). Thus, the time complexity of the algorithm is \(\mathcal{O}(N^{2}k^{4}4^{n-k}+Nk^{3}3^{n-k}+N^{3})\).
We obtain the following corollary from the above theorem.
**Corollary 40**.: _Consider a planar graph \(G\) on \(N\) vertices and a set \(\mathcal{K}\uplus\mathcal{R}\subseteq V(G)\) of \(n\) terminals such that \(\mathcal{K}\) is defined by the terminals lying on the outer face of \(G\). Moreover, let \(|\mathcal{K}|=k\) and let \(|\mathcal{R}|=n-k=\mathcal{O}(\log n)\). Then Algorithm 2 computes the SMT for \(\mathcal{K}\uplus\mathcal{R}\) in \(G\) in time \(N^{3}k^{4}n^{\mathcal{O}(1)}\)._
Next we state the FPTAS for Euclidean Steiner Minimal Tree on \(f(n)\)-Almost Convex Sets of \(n\) terminals. This is achieved by converting the instance of Euclidean Steiner Minimal Tree into an instance of Steiner Minimal Tree on graphs. The Steiner Minimal Tree problem shall be solved using Algorithm 2. For this, we use the
Algorithm 2 in [14]. We restate the Algorithm 2 in [14] for our problem instance. We denote the set of terminals with \(\mathcal{P}\).
```
1:Compute the convex hull of the set of terminals \(\mathcal{P}\). Let the region enclosed by the convex hull \(\mathrm{CH}(\mathcal{P})\) be denoted by \(\mathbb{R}_{\mathrm{CH}(\mathcal{P})}\). Let the points in \(\mathcal{P}\) lying on \(\mathrm{CH}(\mathcal{P})\) be \(\mathcal{K}\) and \(\mathcal{R}=\mathcal{P}\setminus\mathcal{K}\).
2:We enclose the set of terminals \(\mathcal{P}\) with the smallest axis-parallel bounding square. Let its side length be \(D\). We divide the bounding square into same sized grids of side length \(\frac{D\epsilon}{8n-12}\), where \(\epsilon\) is the approximation factor.
3:Let \(\mathcal{V}_{0}\) be the set of all lattice points introduced in the previous step, and \(\mathcal{V}_{1}\) be the set of all lattice points lying on the edges of \(\mathrm{CH}(\mathcal{P})\). We define the weighted graph \(G_{f,\epsilon}\) to be the complete graph with vertex set \(V(G_{f,\epsilon})=\mathcal{K}\cup\mathcal{R}\cup(\mathcal{V}_{0}\cap\mathbb{R }_{\mathrm{CH}(\mathcal{P})})\cup\mathcal{V}_{1}\). The edge weights are equal to the Euclidean distance between the two end points.
4:Return the SMT \(\mathcal{T}\) for the graph \(G_{f,\epsilon}\) with \(\mathcal{K}\cup\mathcal{R}\) as the terminal set using Algorithm 2.
```
**Algorithm 3** Computation of \((1+\epsilon)\)-approximate SMT of \(\mathcal{P}\) **Input:**\(\mathcal{P},\epsilon\)
We analyse the correctness and running time of Algorithm 3.
**Theorem 41**.: _Consider a set \(\mathcal{P}\) of \(n\) points such that \(\mathcal{K}\) is defined as the points lying on the convex hull of \(\mathcal{P}\), i.e. \(\mathrm{CH}(\mathcal{P})\), and \(\mathcal{R}=\mathcal{P}\setminus\mathcal{K}\). Moreover, let \(|\mathcal{K}|=k\). Then Algorithm 3 computes a \((1+\epsilon)\)-approximate SMT for \(\mathcal{P}\) in time \(\mathcal{O}(\frac{n^{4}k^{4}}{\epsilon^{4}}4^{n-k})\)._
Proof.: **Correctness of Algorithm 3.** In order to prove the correctness of Algorithm 3, we need to show that \(\mathcal{T}\) is a \((1+\epsilon)\)-approximation of the SMT of the terminal set \(\mathcal{P}\), and \(\mathcal{T}\) is indeed the SMT for \(\mathcal{K}\cup\mathcal{R}\) in the complete weighted graph \(G_{f,\epsilon}\).
In [14], the concept of weight planar graphs is used. A graph \(G\) is called weight planar if it is a non-planar graph embedded on the Euclidean plane, having non-negative edge weights, such that every pair of edges \((u,v)\) and \((u^{\prime},v^{\prime})\) which intersect in this embedding of \(G\), satisfy the inequality: \(w(u,v)+w(u^{\prime},v^{\prime})>d(u,u^{\prime})+d(v,v^{\prime})\), where \(w(u,v)\) is the weight of the edge between vertices \(u\) and \(v\) and \(d(x,y)\) is the length of the shortest path between vertices \(x\) and \(y\). Since the edge weights of \(G_{f,\epsilon}\) are the Euclidean distances between the points, \(G_{f,\epsilon}\) is a weight planar graph.
From Theorem 5 of [14], we get that the SMT of a weight planar graph does not contain any crossing edges even though the input graph is non-planar. Because the SMT does not contain any crossing edges and lies completely inside the convex hull of the terminal pointset, the terminals on the outer boundary of \(G_{f,\epsilon}\), i.e. \(\mathcal{K}\), follow the interval pattern as stated in Proposition 38. Therefore, Algorithm 2 designed for planar graphs can be applied in the case of weight planar graphs as well. So, \(\mathcal{T}\) is the SMT for \(\mathcal{K}\cup\mathcal{R}\) in the complete weighted graph \(G_{f,\epsilon}\).
Finally, from Theorem 12 in [14], we get that the length of the Steiner tree obtained from Algorithm 3 is at most \((1+\epsilon)\) times the length of the SMT \(\mathcal{T}^{*}\) of \(\mathcal{P}\), i.e. \(|\mathcal{T}|\leq(1+\epsilon)|\mathcal{T}^{*}|\). Thus, we are done.
**Running time of Algorithm 3.** Constructing the convex hull takes \(\mathcal{O}(n\log n)\) time. The number of lattice points contained in the bounding box is \(\left(\frac{8n-12}{\epsilon}\right)^{2}=\mathcal{O}(\frac{n^{2}}{\epsilon^{2}})\). Thus, the number of vertices in the resultant graph \(G_{f,\epsilon}\) is \(N=\mathcal{O}(\frac{n^{2}}{\epsilon^{2}})+n=\mathcal{O}(\frac{n^{2}}{\epsilon^ {2}})\). The time complexity of Algorithm 2 is \(\mathcal{O}(N^{2}k^{4}4^{n-k}+Nk^{3}3^{n-k}+N^{3})=\mathcal{O}(\frac{n^{4}k^{4} }{\epsilon^{4}}4^{n-k})\). This step dominates the running time resulting in the complexity of Algorithm 3 being \(\mathcal{O}(\frac{n^{4}k^{4}}{\epsilon^{4}}4^{n-k})\)
**Theorem 42**.: _There exists an FPTAS for Euclidean Steiner Minimal Tree on an \(f(n)\)-Almost Convex Set of \(n\) terminals, where \(f(n)=\mathcal{O}(\log n)\)._
Proof.: From Theorem 41, we get a \((1+\epsilon)\)-approximate SMT for any \((n-k)\)-Almost Convex Set of \(n\) terminals in time \(\mathcal{O}(\frac{n^{4}k^{4}}{c^{4}}4^{n-k})\). For \(n-k=\mathcal{O}(\log n)\), we get the running time of Algorithm 3 to be \(\mathcal{O}(\frac{n^{4}k^{4}}{c^{4}}n\mathcal{O}^{(1)})\). Thus, Algorithm 3 is an FPTAS for the Euclidean Steiner Minimal Tree problem on an \(\mathcal{O}(\log n)\)-Almost Convex Set of \(n\) terminals.
### Hardness of Approximation for Euclidean Steiner Minimal Tree on Cases of Almost Convex Sets
In this section, we consider the Euclidean Steiner Minimal Tree problem on \(f(n)\)-Almost Convex Sets of \(n\) terminal points, where \(f(n)=\Omega(n^{\epsilon})\) for some \(\epsilon\in(0,1]\). We show that this problem cannot have an FPTAS. The proof strategy is similar to that in [6]. First, we give a reduction for the problem Exact Cover by 3-Sets (defined below) to our problem to show that our problem is NP-hard. Next, we consider a discrete version of our problem and reduce our problem to the discrete version. The discrete version is in NP. Therefore, this chain of reductions imply that the discrete version of our problem is Strongly NP-complete and therefore cannot have an FPTAS, following from [6]. Similar to the arguments in [6], this also implies that our problem cannot have an FPTAS.
Before we describe our reductions, we take a look at the NP-hardness reduction of the Euclidean Steiner Minimal Tree problem from the Exact Cover by 3-Sets (X3C) problem in [6]. In the X3C problem, we are given a universe of elements \(U=\{1,2,\ldots,3n\}\) and a family \(\mathbb{F}\) of 3-element subsets \(F_{1},F_{2},\ldots,F_{t}\) of the \(3n\) elements. The objective is to decide if there exists a subcollection \(\mathbb{F}^{\prime}\subseteq\mathbb{F}\) such that: (i) the elements of \(\mathbb{F}^{\prime}\) are disjoint, and (ii) \(\bigcup_{F^{\prime}\in\mathbb{F}^{\prime}}F^{\prime}=U\). The X3C problem is NP-complete [7].
In [6], various gadgets are constructed, i.e. particular arrangements of a set of points. These are then arranged on the plane in a way corresponding to the given X3C instance. Figure 11 shows the reduced ESMT instance obtained for \(U=\{1,2,3,4,5,6\}\) and \(\mathbb{F}=\{\{1,2,4\},\{2,3,6\},\{3,5,6\}\}\) (taken from [6]). The squares, hexagons (crossovers), shaded circles (terminators) and lines (rows) all represent specific arrangements of a subset of points. Let \(X(\mathbb{F})\) denote the reduced instance. The number of points in \(X(\mathbb{F})\) is bounded by a polynomial in \(n\) and \(t\). Let this polynomial be \(\mathcal{O}(t^{\gamma})\), as we can assume \(t\geq n\) since otherwise it trivially becomes a NO instance. Here \(\gamma\) is some constant.
We restate Theorem 1 in [6].
**Proposition 43**.: _Let \(\mathcal{S}^{*}\) denote an SMT of \(X(\mathbb{F})\), the instance obtained by reducing the X3C instance \((n,\mathbb{F})\), and \(|\mathcal{S}^{*}|\) denote its length. If \(\mathbb{F}\) has an exact cover, then \(|\mathcal{S}^{*}|\leq f(n,t,\hat{C})\), otherwise \(|\mathcal{S}^{*}|\geq f(n,t,\hat{C})+\frac{1}{200nt}\), where \(t=|\mathbb{F}|\), \(\hat{C}\) is the number of crossovers, i.e. hexagonal gadgets, and \(f\) is a positive real-valued function of \(n,t,\hat{C}\)._
We extend this construction to prove NP-hardness for instances of Euclidean Steiner Minimal Tree where the terminal set \(\mathcal{P}\) has \(\Omega(n^{\epsilon})\) points inside \(\text{CH}(\mathcal{P})\). Here, \(\epsilon\in(0,1]\) and \(n\) is the number of terminals.
Let us call the _length_ of a gadget to be the maximum horizontal distance between any two points in that gadget. Similarly, we define the _breadth_ of a gadget to be the maximum vertical distance between any two points in that gadget.
The _terminator_ gadget used is shown in Figure 12. The straight lines represent a row of at least 1000 points separated at distances of 1/10 or 1/11. The angles between them are as shown. The upward terminator has the point \(A\) above the other points in the terminator,
whereas the downward terminator has the point \(A\) below the other points. Firstly, we adjust the number of points in the long rows, such that the length and breadth of the terminators is same as that of the hexagonal gadgets (crossovers). We can fix this length and breadth to be some constants, such that the number of points in each gadget is also bounded by some constant. In our construction, we modify the terminators \(\Omega_{0}\), \(\Omega_{1}\), and \(\Omega_{2}\) as shown in Figure 11 enclosed in squares. \(\Omega_{1}\) is the terminator corresponding to the first occurrence of the element \(3n\in U\) in some set in \(\mathbb{F}\) and \(\Omega_{2}\) is the terminator corresponding to the last occurrence of \(3n\) in some set in \(\mathbb{F}\) (if there are more than one occurrences of \(3n\)). If there are no occurrences of \(3n\), then it is trivially a no-instance. The modified gadgets are shown in Figure 14. All the other gadgets remain unaltered.
We call a set of points arranged as shown in Figure 13, as a Conic Set.
**Definition 44**: _A Conic Set is a set of points consisting of a point \(T\), called the tip of the cone, and the remaining points denoted by \(\mathcal{S}\). Let \(\mathcal{C}\) be the circle with \(T\) as centre and radius \(r\). All the points in \(\mathcal{S}\) lie on \(\mathcal{C}\), such that the angle at the tip formed by the two extreme
Figure 11: Reduced instance of ESMT from X3C (taken from [6])
Figure 12: The Terminator gadget symbol and arrangement of points
points \(L,R\in\mathcal{S}\), i.e. \(\angle LTR=30^{\circ}\) in the anticlockwise direction. So, we have \(\overline{TL}=\overline{TR}=r\). The distance between any two consecutive points in \(\mathcal{S}\) is the same, say \(d\). Let the number of points in \(\mathcal{S}\) be \(n\). We denote the Conic Set as \(\mathrm{Cone}(T,r,n)\) and \(\mathcal{S}\) as \(\mathrm{Circ}(T,r,n)\). We call \(TL\) as the left slope of the Conic Set and \(TR\) as the right slope of the Conic Set._
We use the Conic Set in the reduction for our problem. Now, we state the reduction of an X3C instance \((n,\mathbb{F})\) to an instance \(X^{\prime}(\mathbb{F},\epsilon)\) of Euclidean Steiner Minimal Tree. Later, we show that the instance will satisfy the desired properties on the number of terminals inside the convex hull of the terminal set.
**Algorithm \(\mathcal{A}\) for construction of an ESMT instance \(X^{\prime}(\mathbb{F},\epsilon)\) from an X3C instance \((n,\mathbb{F})\):**
Figure 14: The modified terminator gadgets
Figure 13: Conic Set: \(\mathrm{Cone}(T,r,n)\)
* Reduce the input X3C instance to the points configuration \(X(\mathbb{F})\) according to the reduction given in [6].
* Modify the terminators \(\Omega_{0}\), \(\Omega_{1}\), and \(\Omega_{2}\) to as shown in Figure 14 and call them \(\Omega_{0}^{\prime}\), \(\Omega_{1}^{\prime}\), and \(\Omega_{2}^{\prime}\). Let \(DQCP\) be the smallest axis-parallel rectangle bounding \(X(\mathbb{F})\) after modifying the terminators, where \(D\) is the bottom leftmost point of \(\Omega_{0}^{\prime}\).
* Take \(\alpha=\frac{1}{\epsilon}\). Define \(r=ct^{\alpha}=\mathcal{O}(t^{\alpha})\) and \(n^{\prime}=c^{\prime}t^{\gamma\alpha}=\mathcal{O}(t^{\gamma\alpha})\), where \(t=|\mathbb{F}|\) and \(c\) and \(c^{\prime}\) are constants. Add the \(\operatorname{Cone}(D,r,n^{\prime})\), such that \(D\) is the tip of the Conic Set, and the left slope \(DE\) makes an angle of \(120^{\circ}\) with \(DP\). The right slope \(DF\) also makes an angle of \(120^{\circ}\) with \(DQ\).
Now we prove a few properties of the constructed instance \(X^{\prime}(\mathbb{F},\epsilon)\).
All the points in \(\operatorname{Circ}(D,r,n^{\prime})\) (according to Definition 4.2) lie on the convex hull of the reduced ESMT instance \(X^{\prime}(\mathbb{F},\epsilon)\) constructed by Algorithm \(\mathcal{A}\), where \(\epsilon\in(0,1]\).
Proof.: By the construction of \(\operatorname{Cone}(D,r,n^{\prime})\) in Algorithm \(\mathcal{A}\), let \(\mathcal{C}\) be the circle on which all the points in \(\operatorname{Circ}(D,r,n^{\prime})\) lie. If we draw a tangent to \(\mathcal{C}\) at any of the points in \(\operatorname{Circ}(D,r,n^{\prime})\), then all the remaining points in the configuration \(X^{\prime}(\mathbb{F},\epsilon)\) lie towards one side of the tangent. We know that if we can find a line passing through a point such that all the other points in the plane lie on one side of the line, then the point lies on the convex hull of the points in the plane. Therefore, all the points in \(\operatorname{Circ}(D,r,n^{\prime})\) lie on the convex hull of the reduced instance \(X^{\prime}(\mathbb{F},\epsilon)\).
Let us denote the convex hull of \(X^{\prime}(\mathbb{F},\epsilon)\) by \(\operatorname{CH}(X^{\prime}(\mathbb{F},\epsilon))\) and that of the points lying inside or on the bounding rectangle PDQC, i.e. \(X^{\prime}(\mathbb{F},\epsilon)\setminus\operatorname{Circ}(D,r,n^{\prime})\) by \(\operatorname{CH}(X^{\prime}(\mathbb{F},\epsilon)\setminus\operatorname{Circ }(D,r,n^{\prime}))\).
**Lemma 46**.: _The reduced ESMT instance \(X^{\prime}(\mathbb{F},\epsilon)\) constructed by Algorithm \(\mathcal{A}\) has \(\Omega(N^{\epsilon})\) points inside the convex hull, where \(\epsilon\in(0,1]\) and \(N\) is the total number of terminals in \(X^{\prime}(\mathbb{F},\epsilon)\)._
Proof.: CH\((X^{\prime}(\mathbb{F},\epsilon))\) contains all the points in \(\operatorname{Circ}(D,r,n^{\prime})\) by Lemma 45. \(\operatorname{Circ}(D,r,n^{\prime})\) contains \(n^{\prime}=\mathcal{O}(t^{\gamma\alpha})\) points.
Now we need to analyze the number of points on \(\operatorname{CH}(X^{\prime}(\mathbb{F},\epsilon)\setminus\operatorname{Circ }(D,r,n^{\prime}))\). The remaining points in \(X(\mathbb{F})\), i.e. \(X(\mathbb{F})\setminus\operatorname{CH}(X^{\prime}(\mathbb{F},\epsilon) \setminus\operatorname{Circ}(D,r,n^{\prime}))\) must lie within the convex hull of the entire construction, i.e. \(X^{\prime}(\mathbb{F},\epsilon)\). From the construction in Algorithm \(\mathcal{A}\), no point on the connecting rows can be a part of \(\operatorname{CH}(X^{\prime}(\mathbb{F},\epsilon)\setminus\operatorname{Circ }(D,r,n^{\prime}))\) as there is no line passing through it, which contains all terminals on one side of it. The same thing holds for the square and hexagonal gadgets as well, except the hexagonal gadgets corresponding to the last element of the last set in the family, i.e. \(F_{t}\). Thus, only the terminators and the hexagonal gadgets corresponding to the last element of \(F_{t}\) contribute points to \(\operatorname{CH}(X^{\prime}(\mathbb{F},\epsilon)\setminus\operatorname{Circ }(D,r,n^{\prime}))\).
If we look at the arrangement of points in the terminators (modified as well as those left unchanged) and the hexagonal gadgets as shown in Figures 12 and 16, the convex hull of each of these gadgets consists of constantly many points. Therefore, the number of points each of these gadgets contribute to \(\operatorname{CH}(X^{\prime}(\mathbb{F},\epsilon)\setminus\operatorname{Circ }(D,r,n^{\prime}))\) is bounded by some constant. The number of terminators is \(6t+2\) and the number of hexagonal gadgets corresponding to the last element of \(F_{t}\) is at most \(3n\). Therefore, the number of points on \(\operatorname{CH}(X^{\prime}(\mathbb{F},\epsilon)\setminus\operatorname{Circ }(D,r,n^{\prime}))\) is \(\mathcal{O}(t+n)=\mathcal{O}(t)\) as \(n\leq t\).
The instance \(X(\mathbb{F})\) obtained via reduction from X3C has \(6t+2\) terminators, \(t\) squares, at most \(9nt\) crossovers (hexagonal gadgets), and \(\mathcal{O}(nt)\) connecting rows of points. The number of gadgets is \(\mathcal{O}(nt)\). Therefore, the total number of points in \(X(\mathbb{F})\) is \(\Omega(nt)=\omega(t)\). The modified terminators result in a constantly many increase in the number of points. So, we have \(\gamma>1\).
Thus, the number of points inside the convex hull is \(\Omega(t^{\gamma})\) and those on the convex hull is \(\mathcal{O}(t^{\gamma\alpha})\). So, the total number of terminals is \(N=\mathcal{O}(t^{\gamma\alpha})+\mathcal{O}(t^{\gamma})=\mathcal{O}(t^{\gamma \alpha})\), and those inside the convex hull is \(\Omega(t^{\gamma})=\Omega(N^{1/\alpha})=\Omega(N^{\epsilon})\) as \(\alpha=\frac{1}{\epsilon}\).
We further prove structural properties of SMTs of the reduced instance \(X^{\prime}(\mathbb{F},\epsilon)\) when considering the modified gadgets \(\Omega^{\prime}_{0}\), \(\Omega^{\prime}_{1}\), and \(\Omega^{\prime}_{2}\).
**Lemma 47**.: _Consider an SMT \(\mathcal{S}^{*}\) of the ESMT instance \(X(\mathbb{F})\) obtained via reduction from the X3C instance \((n,\mathbb{F})\) as per [6]. Consider a tree \(\mathcal{S^{\prime}}^{*}\) on the terminal set of \(X^{\prime}(\mathbb{F},\epsilon)\)
Figure 16: The hexagonal gadget (crossover), the convex hull of the gadget is the quadrilateral abcd (taken from [6])
obtained from \(\mathcal{S}^{*}\) as follows: Consider the modified terminator gadgets \(\Omega^{\prime}_{i},\ i\in\{0,1,2\}\) as in Algorithm \(\mathcal{A}\). For each \(i\in\{0,1,2\}\), the edge \(B_{i}O_{i}\) is excluded from \(\mathcal{S}^{*}\) and the edge \(D_{i}O_{i}\) is included to form \(\mathcal{S}^{\prime}{}^{*}\). \(\mathcal{S}^{\prime}{}^{*}\) is an SMT for the terminal set of \(X^{\prime}(\mathbb{F},\epsilon)\)._
Proof.: Consider \(\mathcal{S}^{*}\). Due to Lemma 4 of [6], Steiner points of \(\mathcal{S}^{*}\) can only be connected to points in the triangular and square gadgets. Lemma 5 of [6] states that if there are two terminals \(x,y\in X(\mathbb{F})\) and the distance between \(x\) and \(y\) does not exceed \(\frac{1}{10}\), then \((x,y)\) is an edge of \(\mathcal{S}^{*}\). So in \(\mathcal{S}^{*}\), for each \(i\in\{0,1,2\}\) all the points on \(B_{i}O_{i}\) are joined together along \(B_{i}O_{i}\). Lemma 5 of [6] also holds true on modifying the terminators to \(\Omega^{\prime}_{i},\ i\in\{0,1,2\}\). Now, we join all the points on \(D_{i}O_{i}\) along \(D_{i}O_{i}\). This gives us the SMT \(\mathcal{S}^{\prime}{}^{*}\) for the terminal set of \(X^{\prime}(\mathbb{F},\epsilon)\).
Now we focus on the structure of the SMT of \(X^{\prime}(\mathcal{F},\epsilon)\). The SMT is basically the union of the SMT \(\mathcal{S}^{\prime}{}^{*}\) of the points in the bounding rectangle \(PDQC\) as stated in Lemma 47 and the SMT of the set of points \(\mathrm{Cone}(D,r,n^{\prime})\).
\(\mathrm{CH}(X^{\prime}(\mathbb{F},\epsilon)\setminus\mathrm{Circ}(D,r,n^{ \prime}))\) is enclosed by the bounding rectangle \(PDQC\) and \(D\) must lie on \(\mathrm{CH}(X^{\prime}(\mathbb{F},\epsilon)\setminus\mathrm{Circ}(D,r,n^{ \prime}))\). We label the vertices of \(\mathrm{CH}(X^{\prime}(\mathbb{F},\epsilon)\setminus\mathrm{Circ}(D,r,n^{ \prime}))\) as \(D,P_{1},P_{2},\ldots,P_{k}\) in the counter-clockwise order. Let \(\mathrm{CH}(X^{\prime}(\mathbb{F},\epsilon))\) be the convex hull of all the points. By Lemma 45, all the points in \(\mathrm{Circ}(D,r,n^{\prime})\) lie on \(\mathrm{CH}(X^{\prime}(\mathbb{F},\epsilon))\). Let \(EP_{i}\) and \(FP_{j}\) be edges in \(\mathrm{CH}(X^{\prime}(\mathbb{F},\epsilon))\), such that \(P_{i},P_{j}\notin\mathrm{Circ}(D,r,n^{\prime})\).
The SMT of \(X^{\prime}(\mathbb{F},\epsilon)\) clearly lies inside its convex hull, \(\mathrm{CH}(X^{\prime}(\mathbb{F},\epsilon))\). We show that the Steiner hull can be further restricted to the bounding rectangle \(PDQC\) and the convex polygon formed by the points in \(\mathrm{Cone}(D,r,n^{\prime})\). For this we use Theorem 1.5 in [9], as stated below.
[9] Let \(H\) be a Steiner hull of \(N\). By sequentially removing wedges \(\mathrm{abc}\) from the remaining region, where \(\mathrm{a}\), \(\mathrm{b}\), \(\mathrm{c}\) are terminals but \(\triangle\mathrm{abc}\) contains no other terminal, \(\mathrm{a}\) and \(\mathrm{c}\) are on the boundary and \(\angle\mathrm{abc}\geq 120^{\circ}\), a Steiner hull \(H^{\prime}\) invariant to the sequence of removal is obtained.
The region comprising of the bounding rectangle \(PDQC\) according to Algorithm \(\mathcal{A}\) and the convex polygon formed by the set of points \(\mathrm{Cone}(D,r,n^{\prime})\) is a Steiner hull of \(X^{\prime}(\mathbb{F},\epsilon)\).
Proof.: Firstly, let us consider the wedge \(EP_{i+1}P_{i}\). All the points are terminals, \(E\) and \(P_{i}\) are boundary points, and \(\triangle EP_{i+1}P_{i}\) contains no other terminal. Now, \(\angle EP_{i+1}P_{i}\) is greater than the exterior angle of \(\angle EP_{i+1}D\), which in turn is greater than \(\angle EDP_{i+1}\). \(\angle EDP_{i+1}\geq\angle\mathrm{EDP}=120^{\circ}\), by the construction. Therefore, \(\angle EP_{i+1}P_{i}\geq 120^{\circ}\). By applying Proposition 48, we can remove the wedge \(EP_{i+1}P_{i}\) from the convex hull \(\mathrm{CH}(X^{\prime}(\mathbb{F},\epsilon))\) to get a smaller Steiner hull. This can be repeated for the wedges \(EP_{i+2}P_{i+1},EP_{i+3}P_{i+2},\ldots,EDP_{k}\). The same argument can also be used to get rid of the wedges \(FP_{j-1}P_{j},FP_{j-2}P_{j-1},\ldots,FDP_{1}\). So, we get the final Steiner hull \(H^{\prime}\) to be the union of the bounding rectangle \(PDQC\) and the convex polygon formed by the points in \(\mathrm{Cone}(D,r,n^{\prime})\).
Given the nature of the above Steiner hull, we show that we can treat \(X(\mathbb{F})\) and \(\mathrm{Cone}(D,r,n^{\prime})\) separately.
There is an SMT of \(X^{\prime}(\mathbb{F},\epsilon)\) that is the union of an SMT of \(X(\mathbb{F})\) and an SMT of the points in \(\mathrm{Cone}(D,r,n^{\prime})\), with \(D\) being common to both of them.
Proof.: According to Lemma 49, there is an SMT of \(X^{\prime}(\mathbb{F},\epsilon)\) that lies completely inside the the bounding quadrilateral \(PDQC\) and the convex polygon formed by \(\mathrm{Cone}(D,r,n^{\prime})\). These
two regions have \(D\) as the only common point. Therefore, \(D\) is an articulation point in the tree and connects these two regions. So, we have this SMT of \(X^{\prime}(\mathbb{F},\epsilon)\) as the union of an SMT of \(X(\mathbb{F})\) and an SMT of the points in \(\operatorname{Cone}(D,r,n^{\prime})\).
We can identify a structure for an SMT of the points in \(\operatorname{Cone}(D,r,n^{\prime})\) using [16].
There is an SMT of the points in \(\operatorname{Cone}(D,r,n^{\prime})\) that is as shown in Figure 17. In the SMT, \(D\) is connected to the two middle points in \(\operatorname{Circ}(D,r,n^{\prime})\) via a Steiner point \(S^{t}\). The other points in \(\operatorname{Circ}(D,r,n^{\prime})\) are connected along the circumference.
Proof.: The number of points in \(\operatorname{Circ}(D,r,n^{\prime})\) is \(\mathcal{O}(t^{\gamma\alpha})\). We can take the constant factor to be large enough so that \(|\operatorname{Circ}(D,r,n^{\prime})|>=12\). If we complete the regular polygon on \(\mathcal{C}\) having \(\operatorname{Circ}(D,r,n^{\prime})\) as a subset of its vertices, then it contains more than \(12\) vertices and along with the centre \(D\) has a SMT with structure given in [16].
Let the Steiner tree for \(\operatorname{Cone}(D,r,n^{\prime})\) as shown in Figure 17 be denoted by \(\mathcal{T}_{1}\). If this is not minimal, then there exists another Steiner tree \(\mathcal{T}_{2}\) such that \(|\mathcal{T}_{2}|<|\mathcal{T}_{1}|\). Then we can replace \(\mathcal{T}_{1}\) by \(\mathcal{T}_{2}\) in the SMT of the regular polygon and its centre to get a shorter Steiner tree. This contradicts the minimality of the structure given in [16]. Therefore, the SMT of \(\operatorname{Cone}(D,r,n^{\prime})\) follows the structure in Figure 17.
Finally, we prove the NP-hardness of Euclidean Steiner Minimal Tree on \(f(n)\)-Almost Convex Sets of \(n\) terminals, when \(f(n)=\Omega(n^{\epsilon})\) for some \(\epsilon\in(0,1]\).
Let \(\mathcal{S}^{s}_{\mathbb{F},\epsilon}\) denote an SMT of \(X^{\prime}(\mathbb{F},\epsilon)\) and \(|\mathcal{S}^{s}_{\mathbb{F},\epsilon}|\) denote its length. If \(\mathbb{F}\) has an exact cover, then \(|\mathcal{S}^{s}_{\mathbb{F},\epsilon}|\leq f(n,t,\hat{C})+|\mathcal{T}_{1}|\), otherwise \(|\mathcal{S}^{s}_{\mathbb{F},\epsilon}|\geq f(n,t,\hat{C})+\frac{1}{200nt}+| \mathcal{T}_{1}|\), where \(\hat{C}\) is the number of crossovers, i.e. hexagonal gadgets, and \(f\) is a positive real-valued function of \(n,t,\hat{C}\) as stated in Proposition 4.
Proof.: From Lemma 4.2, we have \(|\mathcal{S}^{s}_{\mathbb{F},\epsilon}|=|\mathcal{S}^{\ast}|+|\mathcal{T}_{1}|\). From Lemma 4.2, we can compute the length of \(\mathcal{T}_{1}\) as a function of \(t\), \(\alpha\), and \(\gamma\). Finally, using Proposition 4.2 we get the required reduction.
Since it is not known if the ESMT problem is in NP, Garey et al. [6] show the NP-completeness of a related problem called the Discrete Euclidean Steiner Minimal Tree (DESMT) problem, which is in NP. We define the DESMT problem as given in [6]. The DESMT problem takes as input a set \(\mathcal{X}\) of integer-coordinate points in the plane and a positive integer \(L\), and asks if there exists a set \(\mathcal{Y}\supseteq\mathcal{X}\) of integer-coordinate points such that some spanning tree \(\mathcal{T}\) for \(\mathcal{Y}\) satisfies \(|\mathcal{T}|_{d}\leq L\), where \(|\mathcal{T}|_{d}=\Sigma_{e\in E(\mathcal{T})}\lceil\overline{e}\rceil\), i.e. we round up the length of each edge to the least integer not less than it.
In order to show that DESMT is NP-hard, the same reduction as that of the ESMT problem can be used, followed by scaling and rounding the coordinates of the points. Theorem 4 of [6] proves that the DESMT problem is NP-Complete. Moreover, since it is Strongly NP-Complete, the DESMT problem does not admit any FPTAS. Finally in Theorem 5 of [6], Garey et al. show that as a consequence, the ESMT problem does not have any FPTAS as well.
Now we show that the DESMT problem is NP-hard even on \(f(n)\)-Almost Convex Sets of \(n\) terminals, when \(f(n)=\Omega(n^{\epsilon})\) and where \(\epsilon\in(0,1]\).
In Section 7 of [6], the reduced instance \(X(\mathbb{F})\) of ESMT is converted into an instance \(X_{d}(\mathbb{F})\) of DESMT. The conversion is as follows:
\(X_{d}(\mathbb{F})=\{(\lceil 12M\cdot 200nt\cdot x_{1}\rceil,\lceil 12M\cdot 200 nt\cdot x_{2}\rceil):x=(x_{1},x_{2})\in X(\mathbb{F})\}\), where \(M=|X(\mathbb{F})|\).
We apply a similar conversion to the reduced ESMT instance \(X^{\prime}(\mathbb{F},\epsilon)\), to convert it into a DESMT instance of an \(\Omega(n^{\epsilon})\)-Almost Convex Set. The conversion goes as follows:
\(X^{\prime}_{d}(\mathbb{F},\epsilon)=\{(\lceil 12N\cdot 200nt\cdot x_{1}\rceil, \lceil 12N\cdot 200nt\cdot x_{2}\rceil):x=(x_{1},x_{2})\in X^{\prime}( \mathbb{F},\epsilon)\}\), where \(N=|X^{\prime}(\mathbb{F},\epsilon)|\).
The next two lemmas establish the validity of \(X^{\prime}_{d}(\mathbb{F},\epsilon)\) as an instance of DESMT and the upper bounds on the size of the constructed instance. Note that the reduction from X3C followed by the conversion can be done in polynomial time.
The instance \(X^{\prime}_{d}(\mathbb{F},\epsilon)\) constructed above is a valid DESMT instance.
Proof.: All the points in \(X^{\prime}_{d}(\mathbb{F},\epsilon)\) have integer coordinates according to the conversion stated above. So, it is a DESMT instance.
The reduced DESMT instance \(X^{\prime}_{d}(\mathbb{F},\epsilon)\) has \(N\) distinct points, where \(N=|X^{\prime}(\mathbb{F},\epsilon)|\).
Proof.: The minimum distance between any two points in \(X^{\prime}(\mathbb{F},\epsilon)\) is that between two consecutive points of \(\operatorname{Circ}(D,r,n^{\prime})\), which is \(\mathcal{O}(\frac{1}{t^{7}})\). Recall Lemma 46 which establishes that \(N=\mathcal{O}(t^{\gamma\alpha})\). So, the minimum distance between any two points in \(X^{\prime}_{d}(\mathbb{F},\epsilon)\) is \(\mathcal{O}(N\cdot nt\cdot\frac{1}{t^{7}})=\mathcal{O}(nt^{\alpha+1})\). Because of the substantial distance obtained between points after scaling, the rounding will not cause any distinct points of \(X^{\prime}_{d}(\mathbb{F},\epsilon)\) to coincide. Therefore, the number of points remains unchanged, i.e. \(|X^{\prime}_{d}(\mathbb{F},\epsilon)|=|X^{\prime}(\mathbb{F},\epsilon)|=N\).
Now we present the following lemma for the constructed DESMT instance \(X^{\prime}_{d}(\mathbb{F},\epsilon)\) analogous to Lemma 46 for the ESMT instance \(X^{\prime}(\mathbb{F},\epsilon)\).
The reduced DESMT instance \(X^{\prime}_{d}(\mathbb{F},\epsilon)\) constructed is an \(\Omega(N^{\epsilon})\)-Almost Convex Set, where \(N=|X^{\prime}_{d}(\mathbb{F},\epsilon)|\).
Proof.: From Lemma 46, we know that the reduced ESMT instance \(X^{\prime}(\mathbb{F},\epsilon)\) has \(\Omega(N^{\epsilon})\) points inside its convex hull \(\operatorname{CH}(X^{\prime}(\mathbb{F},\epsilon))\), and \(N=|X^{\prime}(\mathbb{F},\epsilon)|\). The number of points after conversion remains the same by Lemma 54. We need to show that after conversion, except for the points of \(\mathcal{O}(t)\) gadgets, no other points inside the convex hull \(\operatorname{CH}(X^{\prime}(\mathbb{F},\epsilon))\) lie on the
new convex hull \(\operatorname{CH}(X^{\prime}_{d}(\mathbb{F},\epsilon))\). The number of points in each of the \(\mathcal{O}(t)\) anomalous gadgets are constant in number, and hence not too many points from the interior of \(\operatorname{CH}(X^{\prime}(\mathbb{F},\epsilon))\) can lie on \(\operatorname{CH}(X^{\prime}_{d}(\mathbb{F},\epsilon))\).
After conversion, all the points on a horizontal connecting row have the same \(y\)-coordinate, as they initially had the same \(y\)-coordinate and therefore undergo the same transformation. Thus, all the points on a horizontal connecting row still lie on a horizontal line segment in \(X^{\prime}_{d}(\mathbb{F},\epsilon)\). Similarly, all the points on a vertical connecting row still lie on a vertical line segment in \(X^{\prime}_{d}(\mathbb{F},\epsilon)\). This implies that none of the points on the connecting rows can be a part of \(\operatorname{CH}(X^{\prime}_{d}(\mathbb{F},\epsilon))\) as there can be no line passing through them that also contains all terminal points on one side of it.
The same thing holds for the square and hexagonal gadgets (crossovers) as well, except the hexagonal gadgets placed at the beginning or end of any row. This is because all the points which are a part of these square and hexagon gadgets are surrounded by connecting row points all four sides, above, below, left and right. So again, only the terminators and the hexagonal gadgets appearing at the beginning or end of any row contribute to \(\operatorname{CH}(X^{\prime}_{d}(\mathbb{F},\epsilon))\).
Now, since we had adjusted the number of points in the long rows of the terminators and hexagonal gadgets such that their lengths and breadths are some constants, the number of points in each of the terminators and hexagonal gadgets can be bounded by some constant as the minimum distance between any two consecutive points on the long rows or standard rows is at least \(\frac{1}{11}\). Therefore, each of these gadgets contribute some constantly many points to \(\operatorname{CH}(X^{\prime}_{d}(\mathbb{F},\epsilon))\).
As we have seen in Lemma 4.2, the number of terminators is \(6t+2\) and the number of hexagonal gadgets corresponding to the beginning or end of any row is at most \(6n\). Therefore, the number of points contributed by the terminators and the hexagonal gadgets placed at the beginning or the end of any row, to \(\operatorname{CH}(X^{\prime}_{d}(\mathbb{F},\epsilon)\) is \(\mathcal{O}(t+n)=\mathcal{O}(t)\), as \(n\leq t\). Even if all the points in \(\operatorname{Circ}(D,r,n^{\prime})\) lie on the new convex hull \(\operatorname{CH}(X^{\prime}_{d}(\mathbb{F},\epsilon)\), we have \(\Omega(t^{\gamma})=\Omega(N^{\epsilon})\) points inside it. Thus we are done.
We get the following theorem from Lemmas 53-55.
**Theorem 56**.: _The instance \(X^{\prime}_{d}(\mathbb{F},\epsilon)\) constructed is a valid DESMT instance on an \(\Omega(N^{\epsilon})\)-Almost Convex Set, where \(|X^{\prime}_{d}(\mathbb{F},\epsilon)|=|X^{\prime}(\mathbb{F},\epsilon)|=N\)._
Following Theorems 3 and 4 in [6], we get that the DESMT problem is NP-Complete for \(\Omega(N^{\epsilon})\)-Almost Convex Sets, where \(N\) is the total number of terminals. Since we get the reduced instance \(X^{\prime}_{d}(\mathbb{F},\epsilon)\) from the X3C instance \((n,\mathbb{F})\), the DESMT problem is strongly NP-complete for \(\Omega(N^{\epsilon})\)-Almost Convex Sets, and does not admit any FPTAS.
Using Theorem 5 of [6], we get that if the ESMT problem has an FPTAS, then the X3C problem can be solved in polynomial time. The Theorem also applies for our case of \(\Omega(N^{\epsilon})\)-Almost Convex Sets. Therefore, we get the following theorem,
**Theorem 57**.: _There does not exist any FPTAS for the ESMT problem on \(f(n)\)-Almost Convex Sets of \(n\) terminals, where \(f(n)=\Omega(n^{\epsilon})\) and \(\epsilon\in(0,1]\), unless \(P=\text{NP}\)._
## 6 Conclusion
In this paper, we first study ESMT on vertices of 2-CPR \(n\)-gons and design a polynomial time algorithm. It remains open to design a polynomial time algorithm for ESMT on \(k\)-CPR \(n\)-gons, or show NP-hardness for the problem. Next, we study the problem on \(f(n)\)-Almost Convex Sets of \(n\) terminals. For this NP-hard problem, we obtain an algorithm that runs in
\(2^{\mathcal{O}(f(n)\log n)}\) time. We also design an FPTAS when \(f(n)=\mathcal{O}(\log n)\). On the other hand, we show that there cannot be an FPTAS if \(f(n)=\Omega(n^{\epsilon})\) for any \(\epsilon\in(0,1]\), unless \(\mathrm{P}=\mathrm{NP}\). The question of existence of FPTASes when \(f(n)\) is a polylogarithmic function remains open.
|
2310.00843 | Prov2vec: Learning Provenance Graph Representation for Unsupervised APT
Detection | Modern cyber attackers use advanced zero-day exploits, highly targeted spear
phishing, and other social engineering techniques to gain access and also use
evasion techniques to maintain a prolonged presence within the victim network
while working gradually towards the objective. To minimize the damage, it is
necessary to detect these Advanced Persistent Threats as early in the campaign
as possible. This paper proposes, Prov2Vec, a system for the continuous
monitoring of enterprise host's behavior to detect attackers' activities. It
leverages the data provenance graph built using system event logs to get
complete visibility into the execution state of an enterprise host and the
causal relationship between system entities. It proposes a novel provenance
graph kernel to obtain the canonical representation of the system behavior,
which is compared against its historical behaviors and that of other hosts to
detect the deviation from the normality. These representations are used in
several machine learning models to evaluate their ability to capture the
underlying behavior of an endpoint host. We have empirically demonstrated that
the provenance graph kernel produces a much more compact representation
compared to existing methods while improving prediction ability. | Bibek Bhattarai, H. Howie Huang | 2023-10-02T01:38:13Z | http://arxiv.org/abs/2310.00843v1 | # Prov2vec: Learning Provenance Graph Representation for Unsupervised APT Detection
###### Abstract.
Modern cyber attackers use advanced zero-day exploits, highly targeted spear phishing, and other social engineering techniques to gain access and also use evasion techniques to maintain a prolonged presence within the victim network while working gradually towards the objective. To minimize the damage, it is necessary to detect these Advanced Persistent Threat as early in the campaign as possible. This paper proposes, Prov2vec, a system for the continuous monitoring of enterprise host's behavior to detect attackers' activities. It leverages the data provenance graph built using system event logs to get complete visibility into the execution state of an enterprise host and the causal relationship between system entities. It proposes a novel provenance graph kernel to obtain the canonical representation of the system behavior, which is compared against its historical behaviors and that of other hosts to detect the deviation from the normality. These representations are used in several machine learning models to evaluate their ability to capture the underlying behavior of an endpoint host. We have empirically demonstrated that the provenance graph kernel produces a much more compact representation compared to existing methods while improving prediction ability.
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
+
Footnote †: journal: Computer Vision and Pattern Recognition
a system in a new platform requires a substantial manual effort and expertise. While incorporating novelty detection on a node or edge level (Bahdan et al., 2015; Chen et al., 2016; Chen et al., 2017) can potentially detect previously unseen attacks, it is important to note that new benign activities also emerge constantly, necessitating a broader perspective on activities for decision-making.
To tackle these challenges, we introduce Prov2vec, an innovative system that leverages a novel **provenance graph kernel** to derive a canonical form for a given graph snapshot, capturing the aggregated host behavior at a specific time point in a fixed-size vector representation. Prov2vec operates by mining label-aware backward walks, with a maximum length specified by the user as **h**, for each node in provenance graph. These walks encompassing the execution history of nodes over varying hop lengths from \(0\leq i\leq h\) are then compressed into a label that succinctly describes the nodes causal history. A node label histogram is constructed by tallying the frequencies of distinct labels across all nodes in the graph for each hop length from 0 to \(h\). These histograms are stored in memory using a fixed-size probabilistic data structure called histsketch (Zhu et al., 2017), which are utilized by downstream machine learning tasks to model the behavior of the hosts and detect when they behave abnormally.
This work makes several key contributions:
* We develop an end-to-end system for unsupervised APT detection. Leveraging the provenance graphs built from logs gathered during normal operations, our system creates comprehensive host behavior profiles. Any provenance graphs deviating from those generated by benign activities are identified as anomalies. We utilize these anomalous graph snapshots, along with their associated users and hosts, to traverse the authentication graph and uncover all compromised entities.
* We propose a novel graph kernel that enhances the generalization of similar provenance graph structures using compact node label histograms. Our approach achieves superior or comparable accuracy in downstream machine learning tasks while maintaining the histogram size an order magnitude smaller than Wesfeller-Lehman subtree (WLSubtree) kernel (Wang et al., 2017) and temporally ordered WL subtree kernel from Unicorn (Wang et al., 2017).
* graph classification, graph clustering, and graph anomaly detection
- on provenance graphs generated from Windows and Linux hosts.
The rest of the paper is organized into five major sections. Section 2 introduces the threat model for our system. Section 3 discusses the detail of Prov2vec system design, where we discuss the novel provenance graph kernel for obtaining the compact node label histogram. We discuss the performance of Prov2vec to model the enterprise host's behavior and compare it against state-of-the-art graph kernels in Section 4. The sections following discuss the assumptions made and shortcomings of the Prov2vec system, summarizes the related works, and discuss their relation with Prov2vec respectively.
## 2. Threat Model
We focus on a typical APT life cycle, where adversaries gain unauthorized access to the enterprise hosts and aim to remain stealth for an extended period. To achieve their objectives, attackers carry out various post-exploitation activities, including internal reconnaissance, privilege escalation, lateral movement, and data exfiltration (Wang et al., 2017). Our goal is to detect compromised hosts based on a given snapshot of the provenance graph at a specific time \(t\), using Prov2vec. We assume that Prov2vec has sufficient historical data to establish a behavior profile of enterprise hosts during normal operations. We also assume that the provenance graph obtained during an attack exhibits distinct differences from the graphs observed during prior normal operations.
Pro2vec does not make assumptions about the specific actions performed by an attacker, apart from the fact that their intent and/or actions leave indicators in the audit logs and, consequently on the provenance graph. To accurately capture this information, Prov2vec assumes the correctness of log collection frameworks. The remainder of this paper assumes the validity of log collecting frameworks and log data used in our experiments, focusing on Prov2vec's ability to model system behavior based on them. For modeling the system behavior, this work assumes that provenance graphs with similar structures indicate comparable operational behavior. Therefore, the detection of abnormal behavior entails the computation of (dis)similarities among the provenance graph snapshots.
## 3. Prov2vec Design
Figure 1 shows the high level overview of Prov2vec system.
Given the stream of log events generated by auditing tools (Han et al., 2015; Chen et al., 2016; Chen et al., 2017), Prov2vec updates the provenance graph continuously with new events. Periodically, it takes snapshots of said provenance graph \(G_{t}=(V_{t},E_{t})\).
The novel provenance graph kernel is used to convert the graph snapshots into node label histograms. These histograms can have different sizes depending on the number of distinct provenance labels in a given graph snapshot while aggregating over the specified neighborhood size.
To compare the histograms with one another, we convert them into vectors of the same size. In the static setting, this can be done by building a vector of size equal to node label vocabulary built using the histograms of all graphs in question. In the streaming setting, the vocabulary size is constantly increasing. To enable an easy comparison of streaming histograms, we utilize a probabilistic data structure called histosketch (Zhu et al., 2017) that uses the consistent weighted hashing (Zhu et al., 2017) to sample the histograms into a fixed size vectors while preserving the similarity between
Figure 1. The system diagram of Prov2vec.
them.
Finally, the series of feature vectors representing provenance graph snapshots is fed to machine learning models to learn the behavior of an enterprise host. They can be designed for one of many tasks such as graph classification, outlier detection, and graph clustering. During deployment, the first three steps are performed and the resultant feature vector is tested against the model learned to detect whether the behavior at any instance is anomalous.
The anomalies generated from these models serve as the leads for analysts, providing indications of potentially malicious activity. These anomalies prompt further investigation to gain insights into the underlying causes and potential countermeasures. To extract subgraphs that capture the sequence of actions performed by the attacker, alert generation and correlation can be conducted using systems like Rapsheet (Rapsheet, 2012) or SteinerLog (Beng et al., 2015). Prov2vec plays a crucial role in identifying suspicious endpoint hosts, enabling alert correlation systems to focus their fine-grained analysis on those specific hosts. The remainder of this chapter delves into the various steps involved in constructing an end-to-end system for detecting compromised enterprise hosts using Prov2vec, providing a comprehensive understanding of each component.
### Provenance Graph Creation
The system logs are parsed into a triplet of (_sub ject_, _action_, _object_) and inserted into a provenance graph. The direction of edges signifies the flow of data or information. For instance, an edge corresponding to a process writing on a file will point from the process to the file, whereas a process reading a file will have the opposite direction. Figure 2 shows two snapshots of a provenance graph at time \(t=0\) and later at \(t=1\). The red edges and nodes on the second snapshot represent the part inserted after the first snapshot.
To reduce the graph size and avoid dependency explosion during the forensic analysis, we utilize causality preserving duplicate elimination (Zhou et al., 2017) and node versioning. When inserting an edge \((src,event,dst)\), if there already is another edge with the same triplet in the provenance graph, and there are not any outgoing edges from the latest version of \(dst\), i.e., \(dst_{t}\), we simply update the time information of edge and avoid inserting the edge again. However, if the latest version of the node \(dst\), i.e., \(dst_{t}\) already has outgoing edges, the insertion of the edge changes the provenance of all those nodes. In that case, we simply create a new version of that node \(dst_{t+1}\), and insert the edge \((src,event,dst_{t+1})\) instead. In addition, an edge needs to be inserted between \(dst_{t}\) and \(dst_{t+1}\) to indicate that the latter is the newer version of the former node. In our experiment data, an average of 1.2 node versions are created for subjects and objects but we were able to reduce the number of edges in the graph by a factor of 3.38\(\times\).
Node versioning and redundant edge elimination allow for efficient incremental computation of node-label histograms. By creating different versions of nodes in the provenance graph as subjects or objects change, we can focus the computation only on newly inserted nodes. This approach minimizes redundant processing and improves computational efficiency, ensuring the node-label histograms are efficiently updated as the provenance graph evolves.
### Provenance Graph Kernel
To capture the heterogeneity of the provenance graph, we perform label-aware backward walks from each node in the given snapshot. These walks traverse the graph up to a user-defined length **h**. By accumulating the provenance labels of all nodes, we construct a provenance label histogram for the snapshot. This approach allows us to capture the diverse characteristics of the graph and generate a comprehensive representation of the node labels within the specified walk length.
Definition 1 ().: _Label-aware backward walk: Given a node \(v\in V_{t}\), a backward walk of length \(i\) starting at \(v\) is defined as \(\llbracket(e_{0}),l(e_{1}),\)\(\langle(e_{2}),\)\(...\)\(l(e_{i-1}),\)\(l(u)\rrbracket\), where \((e_{i-1},\)\(e_{i-2},\)\(...,\)\(e_{0})\) is the sequence of edges representing the information flow from \(u\) to \(v\), and \(l(e_{*})\) and \(l(u)\) represent the type of events and objects on the walk respectively._
_Backward walk set \(W_{i}(e)\) of a given node \(v\) is the set of all possible backward walks of length \(i\) from \(v\)._
Each backward walk of length \(i\) describes how node \(v\) is impacted by the set of nodes \(\{u\}\) with sequence of \(i\) consecutive activities. For \(i=0\), the walk corresponds to the node itself, i.e. (\(l(v)\)). For instance, in Figure 2(a), length 2 backward walks for registry1 are (\(EDIT\), \(CREATE\), \(PROCESS\)) and (\(EDIT\), \(READ\), \(FILE\)). Similarly, the walks of length 1 and 0 for registry1 are (\(PROCESS,EDIT\)) and (\(REGISTY\)) respectively.
Given the set of backward walks \(W_{i}(v)\) consisting of every length \(i\) backward walks from node \(v\), we group together labels at equal distances from \(v\) in these walks. Formally, \(\tau_{i}^{j}(v)=\{l(\epsilon_{i-j})|\forall w\in W_{i}(v)\}\) for \(1\leq j\leq i\), and \(\tau_{i}^{0}=\{l(u)|\forall w\in W_{i}(v)\}\), where each \(w\) consists of a sequence of labels \(\{l(e_{0}),l(e_{1}),l(e_{2}),...l(e_{i-1}),\)\(l(u)\}\). The labels \(\tau_{i}^{j}\) for \(0\leq j\leq i\) are then stacked together to form **i-provenance label**, i.e., \(\psi_{l}(v)=(\tau_{i}^{j},\tau_{i-1}^{j},\)\(\tau_{i}^{0})\). If no backward walk of length \(i\) exists, then the empty set \(\{i\}\) is used to denote both \(W_{i}(v)\) and i-provenance label \(\psi_{l}(v)\). The process is repeated for each depth \(i\) for \(0\leq i\leq h\). Let's look at the 0-, 1-, and 2-provenance labels of node \(registry1\) in Figure 2(a),
* For i = 0, \(\psi_{0}(registry1)=(\{REGISTRY\})\), where \(\tau_{0}^{0}=\{REGISTRY\}\),
* For i = 1, \(\tau_{0}^{1}(registry1)=\{PROCESS\}\), \(\tau_{1}^{1}(registry1)=\{EDIT\}\), and \(\psi_{1}(registry1)=(\{EDIT\},\{PROCESS\})\)
Figure 2. The sample provenance graph captured. The graph on the right(b) has some new nodes and edges added since the last snapshot at left (a) denoted by red color.
For each provenance graph snapshot \(G_{t}\), a histogram is constructed containing the frequency of different provenance labels for all nodes in the graph. The histogram keys are generated based on the unique \(\psi_{i}(v)\) values for all \(v\in V_{t}\) and \(0\leq i\leq h\), where \(h\) is the maximum walk length. The histogram size of the provenance graph snapshots obtained in Prov2vec is significantly smaller compared to the WL subtree kernel (Krishnan et al., 2017) and temporally sorted subtree kernel (Krishnan et al., 2017).
In contrast to the multi-set approach used in the WL subtree kernel (Krishnan et al., 2017) and the temporally sorted multi-set approach in Unicorm (Krishnan et al., 2017), the provenance kernel in Prov2vec utilizes a set to aggregate labels from the neighborhood. This distinction is important because the multi-set approach has been deemed to provide better discrimination power necessary in many domains. However for provenance graph, since we use entity and event types as labels, these can generate spurious labels and weaken the generalization.
For instance, take three graphs in Figure 3 all of which represents a very similar set of actions, i.e., a process \(p1\) reads from file(s), loads a module, and edits a registry item. In Prov2vec, after mining length 1 backward walks, the same provenance label \([(LOAD,READ),\) (\(FILE,MODULE\)) is generated for \(p1\) in each of the graphs G1, G2, and G3. However, the WL-subtree kernel maps \(p1\) in G3 to a different label \([(LOAD,\,MODULE),\) (\(READ,\,FILE)]\) compared to \(p1\) in G1 and G2, i.e., \([(LOAD,\,MODULE),\) (\(READ,\,FILE)]\). Similarly, the Unicorn's kernel also considers the temporal order of \(m1\) and \(f1\), resulting in a different label for \(p1\) in each graph. The ability of the provenance graph kernel to map similar behaviors to identical labels helps in better generalization of underlying behavior, reducing false positives in downstream tasks. This means that Prov2vec can capture similarities between different instances of \(p1\) across the graphs, while the other kernels may treat them as distinct. By providing consistent labels for similar behavior, the provenance graph kernel enhances the accuracy and effectiveness of subsequent analysis tasks.
### Incremental Provenance Graph Kernel
Algorithm 1 presents a streaming approach for updating the provenance label histogram in real-time. It takes the newly inserted edges and iterates through them to obtain the provenance labels for newly inserted nodes and updated labels of impacted old nodes. First, it initializes the placeholders (lines 1 - 6) for provenance labels to hold \(\psi_{i}(v)\) for all new nodes \(v\in V_{new}\) and \(0\leq i\leq h\). In order to get \(\psi_{i}(v)\), we need placeholder for \(r_{i}^{j}(v)\) for \(0\leq j\leq i\). Once the initialization is done, we iterate through all inserted edges for \(h\) times in order to obtain the provenance labels corresponding to the backward walks of length \(0\) through \(h\) (lines 7-17). Once the provenance labels are obtained, we update the label histogram to reflect the newly formed provenance labels(lines 14 - 17).
In the graph snapshot of Figure 2(b), three new edges were inserted in the earlier snapshot, which creates three new nodes in the graph. Once the placeholders for \(\psi_{i}\) and corresponding \(r_{i}^{j}\) for each of these nodes are initialized, it obtains the provenance label of new nodes _registry2, process5.exe_, and \(IP2\), using the labels of their in-neighbors, i.e., \((process2.exe)\), \((process3.exe)\), and \((process2.exe)\), respectively. The new labels are then updated in the histogram.
The runtime complexity of algorithm 1 is \(\mathcal{O}(h^{2}|\Delta E|)\) for a given batch of edge insertions \(\Delta E\). For the initial snapshot, the runtime complexity is \(\mathcal{O}(h^{2}|E_{0}|)\), where \(E_{O}\) represents the number of edges in the initial snapshot. The initialization phase (lines 1-6) can be completed in \(\mathcal{O}(h^{2}|V_{new}|)\), where \(V_{new}\) is the set of newly inserted nodes in the given snapshot, and the entire vertex set for the initial snapshot. After the initialization, the computation of provenance labels occurs in \(h\times|\Delta E|\times h\) operations, as the process needs to update i-provenance labels for each of the inserted edge for \(0\leq i\leq h\). While the complexity is higher than \(\mathcal{O}(h|\Delta E|)\) of WL subtree kernel (Krishnan et al., 2017) with h-hop neighborhood, it is important to note that the value of \(h\) is typically very low (e.g., \(\leq 4\)). As a result, the overhead from the quadratic scaling is generally negligible in practice.
### Featurization of Histograms
Most machine learning algorithms require a fixed-size input vector. The node label histograms from different snapshots have different number of bins, i.e., distinct node labels. We need to convert these variable size histograms to a fixed sized vectors. Let us assume histograms \(H_{0},H_{1},...,H_{k}\) are generated from graph snapshots \(G_{0},G_{1},...,G_{k}\). A label vocabulary \(\Sigma\) is the set of all the distinct labels computed for all the nodes in all the graph snapshots, i.e., \(\Sigma=\cup_{i=0}^{k}L_{i}\), where \(L_{i}\) is the bins (labels) from \(H_{i}\).
In the streaming setting, where the label vocabulary is continuously expanding, we utilize a histsketch data structure to convert the variable-sized histogram \(H_{i}\) into a fixed-size vector \(S_{i}\) of size \(K\). Histsketch employs consistent weighted hashing to transform the histogram into a compact sketch. By applying this technique, we can represent each snapshot with a fixed-size vector, regardless of the growing label vocabulary. To assess the similarity between two vectors \(V_{i}\in\mathbf{R}^{D}\) and \(V_{j}\in\mathbf{R}^{D}\), we can compute the distance between them using normalized min-max, which serves as a popular distance measure for non-negative vectors. Further details on histsoketch can be found in Appendix A.
\[D_{NMM}(V_{i},V_{j})=\frac{\sum_{l\in\Sigma}min(V_{i}[l],V_{j}[l])}{\sum_{l\in \Sigma}max(V_{i}[l],V_{j}[l])} \tag{1}\]
Figure 3. Three toy graphs representing a similar set of actions of a process reading from a file, loading module and editing a registry. Provenance graph kernel maps all of them to an identical histogram while existing kernels make distinction based on temporal order or repeated events.
```
Data: Provenance graph snapshot \(G_{t}\), current histogram \(hist\), inserted edges \(\Delta E\), new nodes \(V_{new}\), max walk length \(h\) Result: An updated provenance label histogram \(hist\) \(\triangleright\) Initialize the labels
1for\(v\in V_{new}\)do
2for\(0\leq i\leq h\)do
3\(\psi_{i}(v)\leftarrow()\);
4for\(0\leq j\leq i\)do
5\(\tau_{i}^{j}(v)\leftarrow\{\}\);
6
7\(\tau_{0}^{0}\leftarrow\{l(v)\}\), \(\psi_{0}\leftarrow(\tau_{0}^{0})\);
8
9 Iterate over inserted edges to infer other provenance labels
10for\(1\leq i\leq h\)do
11for\(e=(u,v)\in\Delta E\)do
12if\(\psi_{i-1}(u)\) is emptythen
13 skip the edge;
14\(\tau_{i}^{j}(v).\)insert(\(l(e)\));
15for\(0\leq j\leq i-1\)do
16\(\tau_{i}^{j}(v)\gets\tau_{i}^{j}(v)\cup\tau_{i-1}^{j}(u)\);
17\(\triangleright\) Update the histogram, if there was a old label, we need to remove it
18if\(\psi_{i}(v)\) is not emptythen
19\(hist[\psi_{i}(v)]\leftarrow-\);
20\(\psi_{i}(v)=(\tau_{i}^{i}(v),\tau_{i}^{i-1},...,\tau_{i}^{0})\);
21\(hist[\psi_{i}(v)]\) + ;
```
**Algorithm 1**Incremental algorithm for computing provenance label histogram
## 4. Evaluation
We utilized the x-stream edge-centric graph computing framework (Suttle et al., 2017) to implement the graph kernels. This framework supports both in-memory and out-of-core graphs, enabling scalable computing on shared memory machines. In our implementation, node labels are stored on the vertices, and in each iteration of the graph kernel, the labels are scattered via edges and aggregated on the affected nodes to compute the set of newly formed labels from the streamed edges. This approach allows for efficient computation and maintenance of histograms and sketches in memory, while storing the provenance graph itself on disk. Other components of the Prov2vec system, such as downstream task modeling and data parsing, were implemented using Python.
**Datasets:** We evaluated Prov2vec in 3 different datasets:
**1. StreamSpot** dataset generated by (Suttle et al., 2017) contains information flow graphs derived from one attack and five benign scenarios. Each of the benign scenarios involves a normal task: watching Youtube, downloading files, browsing cnn.com, checking Gmail, and playing video games. The attack graphs are captured while a drive-by-download is triggered by visiting a malicious URL that exploits a flash vulnerability and gains root access to the visiting host. Each task is run 100 times on a **Linux machine** collecting a total of 600 graphs, where each graph encompasses all the system calls on the machine from boot up to shut down. In total, there are 5 different subject/object types and 29 different event types.
**2. SupplyChain attack scenarios** dataset (Kumar et al., 2018) contains a whole system provenance including background activity captured by CamFlow (v0.5.0) (Suttle et al., 2017) while simulating two APT supply chain attacks SC-1 and SC-2 on a continuous integration (CI) platform. They follow a typical cyber kill chain with 7 non-exclusive phases, i.e., reconnaissance, weaponization, delivery, exploitation, installation, command and control(C&C), and actions on objective (Suttle et al., 2017). In SC-1 GNU wget version 1.17 is exploited (CVE-2016-4971) using remote file upload when the victim requests a malicious URL to a compromised server. In SC-2, they exploited a vulnerability (CVE-2014-6271) from GNU Bash version 4.3, which allows remote attackers to execute arbitrary code via crafted trailing strings after function definitions in Bash scripts. Each scenario generates 125 graphs from the benign activity and 25 graphs from the attacker's activities.
**3. Operational Transparent Cyber (OpTC)** data (Kumar et al., 2018) is collected over nine days at National Cyber Range in a simulated network with one thousand hosts, with half of the client machines turned off during data collection. Each host was running Windows 10 on VMware and was scripted to mimic daily user activities by performing common tasks such as creating, editing, and deleting word, powerpoint, excel, and text files; sending, receiving, and downloading files via emails; and browsing the internet. Three red-team APT exercises were performed, each on a separate day, where randomly chosen machines were targeted, compromised, and used to laterally move on to the other network clients. This dataset contains more than 17 billion events, from 500 hosts and 627 different users. Among these log events, there are 11 object types and 32 different event types. Most popular objects are FLOW (71.7%), FILE (12.4%), PROCESS (8.6%), MODULE (3.9%), THREAD (3.0%), and REGISTRY (0.3%). The rest of the objects constitute less than 0.1% of overall events. Only 0.3 million, approximately 0.0016% of total events, are malicious (Bartos et al., 2018).
**Graph Kernels:** Along with provenance graph kernel Prov2vec, we implemented two other graph kernels from existing works. (1) Weisfeiler-Lehman subtree kernel (**WLSubtree**) (Kumar et al., 2018) is implemented to include both edge labels and node labels in their aggregation. Using the edge and node label of each incoming neighbor of the given node \(v\), a sorted multi-set of labels is built which is concatenated with the label of \(v\). (2) The temporally ordered Weisfeiler-Lehman Subtree (**unicorn**) kernel (Kumar et al., 2018) is implemented.
**Downstream Tasks:** We utilize the representation obtained from provenance graph kernel in three distinct downstream tasks:
* **Graph classification** classifies the provenance graphs based on the underlying action being performed on the system. We use XGBoost classifier (Zhou et al., 2018) for graph classification.
* **Novelty detection** using One-class support vector machine (Zhou et al., 2018). It is useful for detecting anomalous behavior in homogeneous system.
* **Anomaly detection** using K-Medoids Clustering. It uses partitioning around medoids (PAM) algorithm to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster (Zhou et al., 2018). It is useful
for detecting anomalous behavior in heterogeneous system, i.e., a system with multiple benign behavior profiles.
The average performance from 5-fold cross validation is reported in all of the prediction task reporting. The five fold split is only performed in benign graphs for the task of anomaly/novelty detection, i.e., four fifth of benign data are used to train the model.
### Graph Classification
We obtained the static histograms on StreamSpot datasets, i.e., for each task and each run, one graph is built, and one histogram is constructed. We convert the histograms to sparse label frequency vectors, i.e., the feature vectors used here have sizes equal to the number of distinct node labels among all graphs, i.e., vocabulary size. We evaluate the ability of Prov2vec to distinguish between different activities based on the provenance label histogram they generated. We use h = 3, i.e., the 3-hop neighborhood labels were collected for all of the different kernels. We use supervised learning by training XGB Classifier with a varying number of graphs and use the remaining graphs to test the classification performance. As depicted in Figure 4, all three kernel-based classifiers are able to reach peak classification performance in as little as around 20 graphs per task. This depicts the ability of the provenance kernel to identify similar tasks via a comparison of their provenance labels with a reasonable amount of data.
### Static Novelty Detection
Using unsupervised learning, we predict the graphs that correspond to the attacks. We utilize 80% of all benign tasks (400 graphs in StreamSpot and 100 graphs in SC-1 and SC-2) as normal behavior profiles and use them to train One-class SVM. The remaining 20% of the benign activity graphs and all the graphs generated from the attack scenarios are used to test the anomaly detector, i.e., 200 graphs in StreamSpot and 50 graphs each in SC-1 and SC-2 respectively. Table 1 shows the performance for all three graph kernels and Figure 5 shows the area under ROC curve for three kernel functions on the three datasets. Despite having a significantly smaller histogram size (Figure 8), the Prov2vec outperforms both WLSubtree and time-ordered WL Subtree kernel from Unicorn (Li et al., 2017). The lower dimension of features helps the runtime of training and testing, while the better generalization of provenance using the concise histogram helps us to minimize the false positives, thereby improving the prediction ability of the anomaly detector.
### Real-time Anomaly Detection
The OpTC data provides a much better representation of real-world enterprise networks. The host logs for 500 different windows 10 hosts are collected over 9 days. During the first 6 days, only normal activities are performed on each host such as browsing the internet, playing video games, using Gmail, etc. Those 6 days are divided into 4 different boot-up to shut down sessions, i.e., (1) 17-18th, (2) 18-19th, (3) 19th, and (4) 20th - 23rd September 2019. We built different graphs for each host during each of these sessions, where the node label histogram is maintained incrementally and a snapshot is taken periodically. The series of histogram snapshots were then converted into fixed-sized sketch vectors of length 2048. All the sketches are then clustered using the k-medoid algorithm where an optimal number of clusters is determined by maximizing the silhouette coefficient (Yang et al., 2017). The trained k-medoid is then used for compromise detection during the evaluation period.
The APT attack exercises were performed during the last 3 days, where one attack campaign is carried out each day. During the evaluation period, we create a provenance graph on each host every day and incrementally run graph kernels to compute node label histograms. The snapshots of histograms are taken periodically and are converted to sketch vectors. The resultant sketch vector is then tested against the k-medoids model trained during benign activity duration. If the sketch does not fit on any of the underlying clusters in the trained model, the snapshot is considered an anomaly. If a host in given evaluation day has at least one anomalous snapshot, we raise an alert indicating that the host has been compromised.
Table 2 shows the performance for detecting compromised hosts on each day of the attack. We used a time period of one hour between snapshots, neighborhood size of _h_=3 for graph kernels, and sketch the size of _2048_. The precision represents the fraction of
\begin{table}
\begin{tabular}{|c|l|l|l|l|l|} \hline \multirow{2}{*}{Dataset} & Kernel & P & R & A & F1 & \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } \\ \cline{3-5} \cline{5-6} & Prov2vec & **1.000** & **0.985** & **0.9852** & **0.061** \\ \hline \multirow{3}{*}{StreamSpot} & OurDisfigure & 0.75 & 0.9 & 0.54 & 0.8609 & 1.21 \\ \cline{2-6} & Union & 0.753 & 1.0 & 0.82 & 0.8475 & 3.054 \\ \hline \multirow{3}{*}{SC-1} & Pro2vec & **0.7742** & **1.0** & **0.8571** & **0.8272** & **1.445** \\ \cline{2-6} & OurDisfigure & **0.8677** & 1.0 & 0.7758 & 0.8136 & 2.521 \\ \hline \multirow{3}{*}{SC-2} & Unizon & 0.7599 & 1.0 & 0.7959 & 0.8276 & 8.016 \\ \cline{2-6} & Pro2vec & **0.7533** & **1.0** & **0.82** & **0.8475** & **1.251** \\ \cline{1-1} \cline{2-6} & WDisfigure & 0.7145 & 1.0 & 0.8 & 0.8333 & 10.687 \\ \cline{1-1} \cline{2-6} & Unizon & 0.6579 & 1.0 & 0.74 & 0.7937 & 14.539 \\ \hline \end{tabular}
\end{table}
Table 1. The performance of one-class svm based anomaly detection on three different graph kernels(used h = 3 on each kernel). P, R, A, and F1 represents precision, recall, accuracy, and f1-score respectively.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Attack & Kernel & P & R & A & F1 \\ \hline \multirow{2}{*}{Day1- Powershell} & Pro2vec & **1.000** & **0.1765** & **0.9720** & **0.3060** \\ \cline{2-6} & UniSubtree & 0.4000 & 0.1705 & 0.9860 & 0.7272 \\ \cline{2-6} & Unicorn & 0.4000 & 0.1176 & 0.9560 & 0.1318 \\ \hline \multirow{2}{*}{Day2-Deathwater} & Pro2vec & **1.000** & **0.8333** & **0.9860** & **0.9600** \\ \cline{2-6} & WDisfigure & 0.4647 & 0.2229 & 0.9840 & 0.3333 \\ \hline \multirow{2}{*}{Day3-Malicious Update} & Unizon & 0.3313 & 0.2222 & 0.9750 & 0.2647 \\ \hline \multirow{2}{*}{Day4-Malicious Update} & Pro2vec & **1.0000** & **1.0000** & **1.0000** & **1.0000** \\ \cline{2-6} & Unizon & 0.2857 & 1.0000 & 0.9900 & 0.4464 \\ \hline \end{tabular}
\end{table}
Table 2. The anomaly detection results on 3 attack campaigns using k-medoids algorithm for h = 3 and sketch size = 2048. P, R, A, and F1 represents precision, recall, accuracy, and f1-score respectively.
Figure 4. The classification of graphs into 6 tasks(youtube, download, cnn, gmail, vgame, and attack) with varying amount of training data.
detected hosts that were actually compromised, while recall represents the fraction of compromised hosts that are detected. First, the precision of Prov2vec kernel is much better than both _WLSubtree_ and _unicorn_ kernels. This is most likely down to the better generalization and a much more succinct histogram for Prov2vec kernel compared to the other two techniques, which helps to provide a much better generalization of provenance for a given node. Notice that the recall is noticeably low for all of the kernels during day1 and day2. This is due to the fact that during these campaigns, there is hardly any activity on some of the compromised hosts where an attacker simply logs in after obtaining the credential from the domain controller. Below we discuss each of these attack campaigns in detail.
The attack campaign on day 1 uses PowerShell empire [16], where it manually connects to _Sysclient201_ as the user _zleazer_ and downloads malicious Powershell Empire stager. It then uses privilege escalation methods to obtain elevated agents, Mimikatz to collect credentials, registry edits to establish persistence, and discovery techniques to gather system and network information. It then pivots to _Sysclient402_ using WMI invoke as an elevated agent, where it performs ping sweep of local network and pivots to _Sysclient660_. Finally, it obtains domain controller information by using Powsrell commands, pivots to _DC1_ (domain-controller 1), where it obtains the user hashes using lsa, and pivots to 14 different hosts. The detection process flags _Sysclient201_ and _Sysclient600_ as compromised with all three different kernels, while Unicorn kernel missed _Sysclient402_. The remaining 14 hosts are missed as they do not have enough log data produced during the attacker's presence, and we could not flag the domain controller since there is no log collected for it.
The attack campaign of day2 was carried out using Deathstar, which starts with a phishing email containing malicious Powershell stagers to two users _bantonio_ and _rsantill_. On _Sysclient501_, _bantonio_ opens the malicious attachment. Once checked in the attacker runs a series of commands to list domain controllers, SID, and admins. It uses several UAC bypass techniques available in Powershell Empire such as _eventvwr_, _fodhelper_, _wmi invoke_, and _windir value modification_ in order to escalate the privilege. It then starts reverse shell to the attacker, which downloads a netcat application with a different alias, compresses the content of _Documents_ folder into a file named _export.zip_ and copies it to news.com hosted at _132.197.158.98_. The attacker pivots to _Sysclient974_ and explores files in the Documents folder. Similarly, it pivots to _Sysclient005_, where it exfiltrates the data from the Downloads folder. The hosts _Sysclient501_, _Sysclient974_, and _Sysclient005_ are 3 out of 9 compromised hosts that are detected by all three kernels.
On the day3, two hosts installed _notepad.exe_ susceptible to malicious upgrade, which when updated reaches out to the attacker's server hosted at _53.192.68.50_ and downloads a reverse tcp meter payload that connects back to the attacker. Once connected, it runs discovery techniques to gather information on the local system, applications, domain controllers, and network shares. It then migrates to _lssas_ process, which uses _Mimikatz_ to collect clear-text passwords and hashes. Afterward, persistence is maintained by installing run keys and user 'admin' is added to administrators and the RDP group. A similar approach was taken on both hosts _Sysclient351_ and _Sysclient051_, where they leave large enough footprints for an anomaly detector to trigger the alert.
Afterward, we utilize a user-host interaction graph built using the user-session logs to flag potentially compromised hosts and users in order to quickly extract the impacted agents. The user-session logs in the OpTC data contains information such as user logins, logouts, and remote desktop protocol accesses and built a coarse-grained graph. When we detect a compromised host using the real-time anomaly detection on provenance graph snapshots, we extract the metadata from such anomalies, mainly the user, host, and the timestamp of the first anomaly. Following those agents
Figure 5. ROC curve of one-class SVM based novelty detection for three different graph kernels on different datasets. The area under ROC curve for Prov2vec kernel is consistently better than _WLSubtree_ and _Unicorn_ kernels.
Figure 6. The movement of compromised user across network during attack campaign of day 1.
and time information, we perform a temporal traversal on the user-host graph in order to obtain the potentially compromised hosts. Figure 6 and 7 shows the graphs containing the impacted hosts and users for the attack campaign of day1 and day2 respectively. With this temporal traversal, we were able to detect all the compromised hosts on day 1 except domain controller 1(DC1) as we did not have user-session logs for DC1. In addition it produced one false positive _system10203_ which was not mentioned in ground truth. On day2, following this traversal obtained a bit large number of false positives as _bantonio_ logs into hundreds of hosts following the detection of an anomaly on _Sysclient501_. However, the user with elevated privilege, i.e., _Administrator_ connects to all 9 hosts mentioned in the ground truth, which can be traced from the user-session logs. With this temporal traversal, we can detect the compromised hosts that were missed by anomaly detection as long as the anomaly detection finds at least one of the compromised hosts.
### Effect of Sketch Size
We evaluate the impact of using a fixed-size sketch vector in the performance of downstream tasks compared to the use of a sparse label histogram of size equal to the number of distinct labels among all graphs. We varied the size of the sketch from 32 to 2048, doubling each time to represent the node label histogram obtained by running all three kernels for \(h=3\). The histogram sketch obtained is thus used as the feature representation for the given graph. We trained the k-medoids clustering algorithm using 80% of the graphs generated by benign activities. The remaining 100 benign graphs and 100 graphs generated during the attack are used for testing. During testing, each graph is tested against every cluster formed during training and flagged as an anomaly if it does not fit in any of the clusters. A graph is considered to fit in a cluster if its distance from the given clusters medoid is within \(d\) standard deviation of the mean distance of all training samples in that cluster. In our experiments, we used \(d=2\), i.e., if a sample is farther than \(mean+2std\) away from all the medoids, it is considered an anomaly. The performance for varying sizes of sketches is shown in Table 3 for anomaly detection on StreamSpot data.
The results in Table 3 show that sketch size much smaller than the node label vocabulary size can match the performance for all kernels. The performance for Prov2vec kernel saturates after a sketch size of 128. Similarly the performance for _WLSubtree_ and _unicorn_ kernels saturates at sketch size of 512 and 1024 respectively. The peak performance of **WLSubtree** and _unicorn_ kernels match that of their sparse histogram vector counterpart from Table 1. However, the precision of Prov2vec kernel is slightly amiss from its static counterpart. Nevertheless, sketching constantly changing and different-sized histograms with fixed-size feature sketches preserves the similarity between them and provide a viable option for comparing continuously changing provenance graphs.
### Effect of Neighborhood Size
We compared the resource consumption for using different kernels to compute the node label histograms in different datasets. We varied the value of h, i.e., the size of the neighborhood, and recorded the histogram size as well as the runtime for different graph kernels. As illustrated in Figure 8(a)-(f), the histogram for the 0-hop neighborhood is identical for all kernels, i.e., histograms built on node types. As the value of h increases, the difference between the sizes of histogram for _unicorn_ and _WLSubtree_ kernels compared to Prov2vec kernel get larger. The comparison of histogram size growth over time for three kernels are shown in Figure 9. The number of labels and rate of arrival of unseen labels both are much smaller in provenance graph kernel. Despite this succinct representation, the performance on downstream task for Prov2vec kernel is consistently better or comparable to the other two kernels as illustrated in the earlier subsection.
The downside is increased runtime for provenance graph kernel as represented in Figure 8(g)-(i). Although the runtime for Prov2vec kernel has quadratic growth with \(h\), i.e., \(h^{2}\), compared to linear growth for _WLSubtree_ and _unicorn_ kernels, the optimal value of h is usually very small, thereby alleviating the impact of quadratic scaling.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \hline \multicolumn{1}{|c|}{K} & \multicolumn{3}{c|}{Pro2vec Kernel} & \multicolumn{3}{c|}{WLSubtree Kernel} & \multicolumn{3}{c|}{Unicorn Kernel} \\ \hline P & R & A & F1 & P & B & A & F1 & P & R & A & F1 \\ \hline
32. & 0.81 & 1 & 0.88 & 0.89 & 0.82 & 1 & 0.89 & 0.9 & 0.84 & 1 & 0.91 & 0.91 \\ \hline
64. & 0.83 & 1 & 0.9 & 0.8 & 1 & 0.88 & 0.89 & 0.83 & 1 & 0.9 & 0.9 \\ \hline
128. & **0.9** & **0.9** & **0.9** & 0.83 & 1 & 0.9 & 0.91 & 0.76 & 0.85 & 0.87 \\ \hline
256 & 0.9 & 1 & 0.94 & 0.95 & 0.83 & 1 & 0.93 & 0.94 & 0.85 & 1 & 0.91 & 0.92 \\ \hline
512 & 0.89 & 1 & 0.84 & 0.94 & 0.99 & **0.9** & **0.9** & **0.9** & **0.9** & 0.92 \\ \hline
1024 & & & & & 0.97 & 1 & 0.94 & 0.91 & **0.9** & **0.9** & **0.94** & **0.94** \\ \hline
2048 & & & & & 0.89 & 1 & 0.94 & 0.94 & 0.89 & 1 & 0.94 & 0.94 \\ \hline \hline \end{tabular}
\end{table}
Table 3. The evaluation of effect of different sized listsketches on the anomaly detection performance on StreamSpot data. P, R, A, and F1 represents precision, recall, accuracy, and f1-score respectively. K is sketch vector size.
Figure 7. The movement of compromised user across network during attack campaign of day 2.
Furthermore, we evaluated the impact of neighborhood size (h) based on the performance of corresponding histograms in downstream machine learning tasks. We used two SupplyChain datasets to evaluate the impact of neighborhood size on anomaly detection. We converted the histograms of corresponding snapshots to sketch vectors of size _2048_. The performance for anomaly detection is shown in Table 4 for two attack scenarios SC-1 (wget) and SC2(shellshock). As expected, the performance for each kernel improves as we increase the neighborhood size, reach the peak for the value of h = 3 or 4, and start to decline afterward.
## 5. Discussions and Limitations
Prov2vec makes certain assumptions and has limitations that should be considered.
First, it operates under the **closed-world assumption**, assuming that all benign behaviors have been observed during training (Zhu et al., 2017). However, in real enterprise networks, it is challenging to cover all possible benign cases. This may result in false alarms for previously unseen normal behaviors. To address this, system administrators can periodically update the model with new benign data. The incremental nature of Prov2vec makes it easy for the model to update.
Second, Prov2vec assumes an **integrity of training data** during a modeling period. It assumes that the newly observed normal behavior used for model updates is not corrupted by poisoning attacks (Zhu et al., 2017) or graph backdoors (Zhu et al., 2017). The robustness of Prov2vec against such attacks is an area for future study.
The **datasets used in the experiments are synthetic**, which limits the representation of real-world APT attacks. While efforts have been made to make the datasets realistic, they lack some characteristics of APT attacks in the wild. Testing Prov2vec against actual enterprise systems or more realistic APT scenarios is a priority for future research.
**Granularity of data provenance:** Some attacks do not produce the attack pattern in the data provenance graphs. For example, malicious code in a file and thread-based attacks have the text information on the corresponding files and threads that are too fine granular to be recorded in the provenance graph. Like all provenance-based detection methods, Prov2vec will fail to detect those attacks. Incorporating more host-based data into the threat detection process or improving the information capture process for finer-grained provenance graph generation can be the research directions to further investigate this limitation.
Figure 8. The comparison of resource consumption for different kernels. The plots (a)-(c) shows the average size of histogram per graph, plots (d)-(f) shows the vocabulary size for different kernels, and plots (g)-(i) compares the runtime of different kernels for increasing neighborhood size.
Figure 9. The histogram size trend with each hourly snapshot on host 201 during 16-17Sep on OpTC data.
The explainability of anomalies is a challenge in black-box machine learning systems. Prov2vec may struggle to provide detailed explanations for the detected anomalies. However, methods such as LIME and EDR systems can be used to explain individual predictions and understand the series of activities leading to an anomaly.
The provenance graph kernel **only supports discrete labels**, which limits its ability to capture continuous attributes. Including such attributes may require the use of deep learning techniques or graph kernels that support continuous attributes. Future work will explore whether these techniques can improve the performance of downstream prediction tasks. Overall, while Prov2vec has shown promising results, addressing these limitations will be crucial for its broader applicability and effectiveness in detecting sophisticated attacks.
## 6. Related Works
**Provenance graph** has been popular tool for threat hunting research in last few years. Several works have been proposed to improve the provenance data collection (Bahata et al., 2017; Wang et al., 2018; Wang et al., 2019), redundancy elimination (Krishnan et al., 2017; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), intrusion detection using provenance graphs (Bahata et al., 2017; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). We refer interested readers to the comprehensive survey on threat detection techniques using provenance graph (Wang et al., 2019).
**Provenance query systems:** Traditional query systems are not optimized for provenance analysis. Several solutions (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) have been proposed to provide threat investigation specific abilities such as streaming queries, causality tracking, graph pattern matching, and anomaly analysis. These systems are implemented on top of mature stream processors or databases and take the provenance graphs specific data model and query engine.
**Provenance data reduction:** is very important for storage and computational efficiency. Causality preserving reduction (Wang et al., 2019) and subsequent dependence preserving reduction (Wang et al., 2019) merge the events if they do not alter the causality or forward and backward reachability respectively. LogGC (Wang et al., 2019) proposes a provenance garbage collection, that finds the isolated "temporary" nodes and removes them. Since garbage collection and causality/dependency preserving reduction can remove correlation between alerts or alert themselves, we modified these reduction systems to preserve alerts.
**Threat detection with provenance graphs:** Sleuth (Krishnan et al., 2017) uses policy based rules to trigger alerts and uses **tag propagation** technique to store and transmit the system execution history. The **abnormal behavior detection** systems (Wang et al., 2019; Wang et al., 2019) learn host behavior from historical data or parallel systems and tries to find abnormal interaction between system entities. The **graph pattern matching and alignment** based works such as Holmes (Holmes, 2018), Poirot (Pairot, 2018), Rapsheet (Rapsheet, 2018), and SteinerLog (Bahata et al., 2017) uses indicator of attacks(IOAs) to generate suspicious events and chain them together using graph exploration techniques. They use those chain of alerts to detect the attacks as well as to reconstruct the individual steps taken by an attacker. However, a substantial amount of manual effort and domain expertise is required to come up with the relevant IOAs for matching. Eg., Poirot requires one to write a different query for each of the attack campaigns and find their alignment on a provenance graph. Holmes (Holmes, 2018), Rapsheet (Rapsheet, 2018) and SteinerLog (Bahata et al., 2017) uses more fine-grained behavioral patterns representing different TTPs relevant to their system and follows the causal dependency in provenance graph to construct the attack campaigns. Prov2vec follows the **graph embedding based systems** such as Unicorn (Krishnan et al., 2017) and Log2Vec (Wang et al., 2019) closely, where it uses graph representation computation to embed the log entries and perform anomaly detection. However, with more compact histogram and consequently better generalization, we are able to outperform Unicorn.
Graph kernels are widely used for learning node and graph representations in machine learning tasks. These techniques iteratively accumulate and compress information from a node's neighborhood to derive a new node label. Various methods, such as random walks (Krishnan et al., 2017; Wang et al., 2019; Wang et al., 2019), subtrees (Wang et al., 2019; Wang et al., 2019), cyclic patterns (Han et al., 2019), shortest paths (Han et al., 2019), and graphlets (Holmes, 2018), are employed to capture node neighborhoods. Recently, Graph Neural Networks (GNNs) (Krishnan et al., 2017; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) have gained popularity for representation learning. GNNs utilize recursive aggregation to compute a node's representation vector by incorporating information from its neighborhood, with each iteration encompassing a larger one-hop neighborhood. Node representations are then aggregated to obtain the feature vector for the entire graph.
**Sequence-based learning** techniques, which involve converting log sequences into key vectors representing system events, have gained popularity in operational anomaly detection (Krishnan et al., 2017; Wang et al., 2019; Wang et al., 2019). Models based on recurrent neural networks (RNNs) or Transformers are then trained with these key sequences (Bahata et al., 2017; Wang et al., 2019; Wang et al., 2019). During deployment, these models predict anomalous behavior by forecasting the next event based on the observed sequence. However, their effectiveness is limited as they mainly examine short system call sequences and struggle to capture long-term behavior, leaving them vulnerable to evasion techniques. To detect stealthy and slow Advanced Persistent Threat (APT) attacks, which require a broader context, graph-based techniques leveraging the causal relationships among events in provenance graphs offer more promising solutions.
## 7. Conclusion
We proposed a fully unsupervised technique in Prov2vec, which was able to successfully learn the system host behaviors from their provenance graphs and identify the potentially malicious behaviors that differ from the normality. The provenance graph kernel proposed, while incurs a slight overhead in histogram computation compared to state-of-the-art graph kernels achieves an order magnitude smaller node label histogram sizes while improving the performance of downstream machine learning tasks at the same time. The result from Prov2vec can be used as the first level of filtering for fine-grained alert correlation systems, where the anomalous hosts are further inspected to understand the context around underlying behavior.
|
2307.12888 | An objective evaluation of Hearing Aids and DNN-based speech enhancement
in complex acoustic scenes | We investigate the objective performance of five high-end commercially
available Hearing Aid (HA) devices compared to DNN-based speech enhancement
algorithms in complex acoustic environments. To this end, we measure the HRTFs
of a single HA device to synthesize a binaural dataset for training two
state-of-the-art causal and non-causal DNN enhancement models. We then generate
an evaluation set of realistic speech-in-noise situations using an Ambisonics
loudspeaker setup and record with a KU100 dummy head wearing each of the HA
devices, both with and without the conventional HA algorithms, applying the DNN
enhancers to the latter. We find that the DNN-based enhancement outperforms the
HA algorithms in terms of noise suppression and objective intelligibility
metrics. | Enric Gusó, Joanna Luberadzka, Martí Baig, Umut Sayin Saraç, Xavier Serra | 2023-07-24T15:32:38Z | http://arxiv.org/abs/2307.12888v1 | An Objective Evaluation of Hearing Aids and DNN-Based Binaural Speech Enhancement in Complex Acoustic Scenes
###### Abstract
We investigate the objective performance of five high-end commercially available Hearing Aid (HA) devices compared to DNN-based speech enhancement algorithms in complex acoustic environments. To this end, we measure the HRTFs of a single HA device to synthesize a binaural dataset for training two state-of-the-art causal and non-causal DNN enhancement models. We then generate an evaluation set of realistic speech-in-noise situations using an Ambisonics loudspeaker setup and record with a KU100 dummy head wearing each of the HA devices, both with and without the conventional HA algorithms, applying the DNN enhancers to the latter. We find that the DNN-based enhancement outperforms the HA algorithms in terms of noise suppression and objective intelligibility metrics.
Eric Guso,\({}^{1,2}\) Joanna Luberadzka,\({}^{2}\) Marti Baig,\({}^{3}\) Umut Sayin,\({}^{2}\) Xavier Serra\({}^{1}\)\({}^{1}\) Universitat Pompeu Fabra, Music Technology Group, Barcelona
[email protected], [email protected]
\({}^{2}\) Eurecat, Centre Tecnologic de Catalunya, Tecnologies Multimedia, Barcelona
[email protected], [email protected]
\({}^{3}\) Microsoft, Amplifon Group, Barcelona, [email protected] hearing aids, speech enhancement, denoising, dereverberation
## 1 Introduction
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
+
Footnote †: The research leading to these results has received funding from the European union’s Horizon Europe programme under grant agreement No 101017884 - GuestXR project.
## 2 HA Measurement Setup
Our measurement setup (depicted in Figure 1) was designed to capture the signals processed by various HA devices within complex acoustic scenes. We recorded with five high-end RIC HAs available in the market at the time of writing: GN ONE 961-DRWC, GN ONE 561-DRWC, Phonak Audeo P90-R, Phonak Audeo P70-R, and Signia Pure C\(\&\)G 3x. In total, we have tested fifteen combinations of HA and receiver, recording with low, mid and high power receivers for each device.
**Scenes generation --** To ensure that the sound processed by the tested hearing devices closely resembled real-life scenarios, we used an Ambisonics-based spatial sound reproduction system.
All HA devices were exposed to three different acoustic scenes of speech in noise. To create these scenes we used three noise recordings from the ARTE database [17] representing common sound environments: _party_, _restaurant_ and _office_. Recordings come from real environments captured with a 62-channel microphone array, and are available as 31-channel mixed-order Ambisonics signals which we zero-padded up to 10th order. The target speech consisted of nine randomly selected sentences spoken by a female speaker from the Sharvard database [18]. To simulate room acoustics, we used an adaptation of the Multichannel Acoustic Signal Processing Library1 (MASP): a shoebox room impulse response simulator based on the Image Source Method that allows for Spherical Harmonics expansion (SH, i.e. Ambisonics). We used 10th-order Ambisonics, which provide sufficient spatial resolution and generated two sets of sound fields at the left ear, right ear, and _head_ positions: one set where we only simulated sound propagation (the direct sound field w/o reflections) and the other one being the actual reverberation. We used 17.5cm of ear distance for computing the left and right ear coordinates from the \(head\) origin to match the KU100 ear distance. We simulated three rooms with dimensions set to 15x10x3.5m, 28x17x4.2m, and 5x2x2.5m for the _party_, _restaurant_, and _office_ environments respectively. All targets where placed at 1m from the _head_ with two different angles: 0 degrees or 30 degrees to the right (relative to the head horizontal orientation \(head_{\theta}\)). We adjusted the RT60 parameter using informal listening by comparing with the ARTE recordings. We chose RT60s that were \(60\%\) of the ones reported in ARTE to account for furniture and people absorption.
Footnote 1: [https://github.com/andresprezlorpez/masp](https://github.com/andresprezlorpez/masp)
**Decoding --** Despite in [19] Thiemann et al. published HRTFs of the microphones of a BTE HA placed at a head and torso simulator, no recordings were made with KU100 dummy which was available for our evaluation. Besides, in this study, we had no access to the microphone signals due to evaluating commercially available HA. Hence, we decided to measure our own set of HRTFs in a setup as close as possible to the intended HA recording setup (i.e. in the same room, with a HA coupled to the dummy head ear canal, w/o signal enhancement features and only providing a linear gain). We used a pair of _audifon lew!_\(R\) HA devices and the HRTF sets were measured using the sweep method with a single Genelec 8020 loudspeaker following a 50-point Lebedev grid. Impulse responses were cropped before the arrival of the first wall reflection and low frequencies were extended by LFE algorithm [20]. This set of HRTFs was then used to build the 10-th order Ambisonics to binaural decoder following the Bilateral Magnitude Least Squares method (BiMagLS) [21], applying high order tapering with a cutoff frequency of 6239Hz --which is the theoretical cutoff frequency for correct representation in 10-th order Ambisonics. To obtain the clean anechoic reference signal needed for the objective evaluation we took the left and right ear anechoic sound fields simulated in MASP and applied the BiMagLS decoder. We weighted the Ambisonics signals so that SNR was +5dB when decoded to binaural. Finally, we added the weighted speech and noise sound fields simulated at the \(head\) center position, and decoded the resulting Ambisonics mixture into the loudspeaker signals using a fifth order in-phase decoder optimized with IDHOA [22], particularly tailored to our specific loudspeaker setup. We also normalized all sets of speaker signals to have the same energy (sum of squares) for ease of calibration.
**Recordings --** The KU100 dummy head was positioned at the center of a three-dimensional irregular loudspeaker array comprising 25 Genelec 8040s loudspeakers. We calibrated the system so all scenes were at 70dB SPL. For each recording, we placed the hearing aids behind the ears of the KU100 dummy head and inserted the receiver into the ear canal. To minimize the influence of direct sound, we occluded the entrance to the ear canal with adhesive putty material in addition to the HA's power dome.
We recorded each hearing device in two modes: _bypass_ and _enabled_. In _bypass_ mode all the HA algorithms were deactivated except for the feedback canceller and a linear amplification of approximately 20dB. Hearing aid models chosen for this study are commercially available HA. For such devices it is not straightforward to record directly from the HA microphone. Instead, we used signals recorded in _bypass_ mode at the KU100 as an approximation of the HA microphone signals. The _bypass_ recordings were used as the input to the offline DNN-based speech enhancement methods. In contrast to the _bypass_ recordings, in the _enabled_ mode the HA applied the signal enhancement algorithms present in the default factory settings. In all tested devices, these settings included at least some form of adaptive beamforming and single-channel noise reduction. No hearing correction was applied.
In a preliminary round of recordings we employed the phase-inversion procedure [23] commonly used to estimate the SNR at the output of a linear hearing aid. However, we noticed that for some of the HA and for all DNN-based algorithms the linearity assumption of the method could not be met, making the SNR estimates obtained with this method unreliable. Therefore, we decided to rely on intrusive, reference-based metrics instead.
Figure 1: Hearing aid measurement setup. Scenes generation: complex acoustic scenes are generated by combining existing databases (Sharvard, ARTE) with room acoustic simulation. Recordings: audio material is played back in an Ambisonics-based reproduction system and the signals processed by the HA are captured with the microphones of a dummy head. HA are recorded with and without signal-enhancing features. Evaluation: HA-enhanced recordings are compared with DNN-processed recordings using a range of objective metrics.
**Evaluation --** Objective evaluation compared the conventional HA enhancement algorithms with the DNNs. We evaluated four sets of recordings: _bypass_, _enabled_, _bypass_ post-processed with DNN and _bypass_ post-processed with DNN-C. The first set represents recordings without signal enhancement and the remaining three sets represent the different signal enhancement strategies. For each set we computed four objective metrics: Hearing-Aid Speech Quality Index (HASQI) [24], Hearing-Aid Speech Perception Index (HASPI) [25], Modified Binaural Short-Time Objective Intelligibility (MBSTOI) [26] and Scale-invariant signal-to-distortion ratio (SISDR) as in [5]. Given the clean encode target binaural speech signal \(y\), the DNN or HA estimate \(\tilde{y}\) and a baseline recording with the HA in _bypass_\(\hat{y}\) (all \(\tau\) samples long) SISDR is described in Equation 1. \(y\) was used as common reference for all metrics. Reference and estimate pairs were time-aligned using the cross-correlation method. We took the best ear for the non-binaural measures (SISDR, HASPI, HASQI) and normalized the signals as in [12]. We used a flat normal-hearing audiogram as an input to HASPI and HASQI metrics. We define the signal enhancement benefit as the difference in objective metrics between the non-enhanced and enhanced signals. For example, \(\Delta\text{SISDR}=\text{SISDR}(\tilde{y}_{t},y_{t})-\text{SISDR}(\hat{y}_{t}, y_{t})\). The rest of metrics expressing the signal enhancement benefit are denoted as \(\Delta\)HASQI, \(\Delta\)HASPI, \(\Delta\)MBSTOI and computed in the same fashion.
\[\text{SISDR}(\tilde{y}_{t},y_{t})=\frac{10}{\mathcal{T}}\sum_{t}\log_{10} \left(\frac{\left|\frac{\tilde{y}_{t}^{T}y_{t}}{\left|y_{t}\right|^{2}}y_{t} \right|^{2}}{\left|\frac{\tilde{y}_{t}^{T}y_{t}}{\left|y_{t}\right|^{2}}y_{t}- \tilde{y}_{t}\right|^{2}}\right) \tag{1}\]
## 3 DNN Dataset and Training Setup
**Task --** We approach supervised binaural speech enhancement with DNNs, which requires a large number of noisy and reverberant mixtures, as well as the corresponding targets (clean speech) also in binaural to preserve spatial cues. Both inputs and targets have to resemble the ones that would be recorded with the two frontal omnidirectional microphones in a Behind The Ear (BTE) HA. To accomplish this, we simulate reverberation in the Ambisonics spatial audio domain and then decode to binaural signals by using a decoder specifically-tailored for HA.
**Datasets --** As in [4, 5], we relied on speech recordings from audiobooks, in this case using the Spanish subset from Multilingual LibriSpeech (MLSS) dataset [27] for the clean signals. We used a sampling rate of 16kHz and took four seconds chunks, selecting the chunk that presented more energy in order to avoid silence, obtaining \(22\cdot 10^{5}\) utterances for the training set, 2408 for validation and 2385 for testing, which add up to 251 hours of clean speech. Regarding noise, we used the WHAM! [28] binaural dataset which contains babble speech, cafeteria noise and background music. We kept the original data splits from both datasets, preserving gender balance and avoiding contamination between sets. We augmented WHAM! training set to match the length of MLSS by following three main strategies: _i)_ we flipped the phase, _ii)_ we swapped left and right channels as a rough approximation of a 180\({}^{\text{o}}\) rotation in the horizontal plane and _iii)_ we randomly time-stretched from 90% to 110% of the noise duration. For validation and testing we randomly picked from WHAM! validation and test splits respectively. Details on the data splits is shown in Table 1.
**Room simulation --** MLSS contains close-mic studio recordings, so we have considered them to be a fair approximation of anechoic signals. The random room configuration details are shown in Table 2. We used MASP and our Ambisonics to binaural HA decoder as in Section 2. In an attempt to make our models agnostic to our particular HA and dummy combination, the response of the RIC coupling was compensated by taking the direction-independent frequency response between KU100 and KU100 wearing the HA HRTFs and approximating it with an IIR filter that was applied to all utterances in the dataset.
**Training Setup --** Regarding the DNN topology, we did not make any modifications to _SuDo-RM-RF_ on top of configuring its encoder for receiving stereo audio. All network topology details can be found in [5]. We trained two different models: DNN (which corresponds to _SuDoRM-RF++GC_ in their paper), a non-causal improved version that serves us as upper baseline, and DNN-C (the causal, HA oriented version that corresponds to _C-SuDoRM-RF++_ in their paper). All hyperparameters are shared between DNN and DNN-C except for the batch size --which had to be reduced from 12 for DNN-C to 2 for DNN to fit into VRAM. We used 256 input and 512 output channels on 16 successive blocks with five upsampling/downsampling layers each, an encoder and decoder kernel size of 21 generating embeddings with a length of 512, four attention heads with 256 depth and 0.1 dropout (applied only during training) and an Adam optimizer. Learning rate was \(10^{-3}\) and was divided by 3 every 8 epochs. We trained the whole dataset for 25
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{set} & \multicolumn{2}{c|}{MLSS} & \multicolumn{2}{c|}{WHAM!} & \multicolumn{3}{c|}{noise augmentations} \\ & \# & hours & \# & hours & \(\Phi inv\) & L\(\approx\)R & stretch \\ \hline \multirow{4}{*}{_tr_} & \multirow{4}{*}{221k} & \multirow{4}{*}{245.6} & 52k & 57.8 & ✗ & ✗ & ✗ \\ \cline{3-6} & & & 52k & 57.8 & ✗ & ✗ & ✗ \\ \cline{3-6} & & & 52k & 57.8 & ✗ & ✓ & ✗ \\ \cline{3-6} & & & 52k & 57.8 & ✓ & ✓ & ✗ \\ \cline{3-6} & & & 5.8k & 7.2 & ✗ & ✗ & ✓ \\ \cline{3-6} & & & 6.5k & 7.2 & ✓ & ✓ & ✓ \\ \hline _cv_ & 2.4k & 2.67 & 2.4k & 2.67 & ✗ & ✗ & ✗ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Training data splits and augmentations, where _tr_ stands for the training set, _cv_ for validation, _tt_ for the test set, # for the number of utterances, \(\Phi inv\) for changing the sign of the noise signal, L\(\approx\)R for permuting the noise channels, and _stretch_ for applying time stretching with a random factor.
\begin{table}
\begin{tabular}{l|l} \(r_{x}=\mathcal{U}(3,30)\) & \(head_{x}=\mathcal{U}(0.35r_{x},0.65r_{x})\) \\ \(r_{y}=r_{x}\cdot\mathcal{U}(0.5,1)\) & \(head_{y}=\mathcal{U}(0.35r_{y},0.65r_{y})\) \\ \(r_{z}=\mathcal{U}(2.5,5)\) & \(head_{z}=\mathcal{U}(1,2)\) \\ \(\end{tabular} \\ \(||head-target||=\mathcal{U}(0.5,3)\) & \(head_{\theta}=\mathcal{U}(-45,45)\) \\ \(\anglehead,target=\mathcal{U}(-45,45)\) & \(head_{y}=\mathcal{U}(-10,10)\) \\ \(\text{SNR}=\mathcal{U}(0,6)\) & \(\text{RT}60=\mathcal{U}(0.1,0.5)\frac{\text{SNR}+0.3}{5.3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Random room configuration, sampled from uniform distribution \(\mathcal{U}\). Room \(r\) and \(head\) dimensions and distances are in meters, SNR in dBs, RT60 in seconds and \(head\) azimuth \(\theta\), elevation \(\psi\) and angle with target in degrees. \(r_{y}\) depends on \(r_{x}\) to avoid corridor-like spaces, and RT60 depends on the SNR because the more noisy a situation, the less reverberant it should be due to the crowd’s absorption.
epochs. New mixtures and targets were generated every epoch by permuting the reverberant speech and the noise in every batch, and were normalized to zero mean and unit variance. We used SISDR as loss function and also as evaluation metric and obtained a test set performance of 11.7dB.
## 4 Results
Figure 3 displays SISDR, HASPI, HASQI, and MBSTOI metrics obtained for each hearing device averaged across receiver types. In the _bypass_ condition metrics range from -12.63dB to -8.01dB for SISDR, 0.57 to 0.73 for HASPI, 0.14 to 0.16 for HASQI, and 0.39 to 0.49 for MBSTOI. Enabling signal enhancement features in hearing aids can have a different effect on their performance depending on the specific device. For example, device 4 demonstrates improvement of 0.73dB on SISDR, 0.14 on HASPI 0.02 on HASQI and 0.01 on MBSTOI while device 2 presents -0.54dB SISDR, -0.17 HASPI, -0.07 HASQI and -0.05 MBSTOI. Apart from these slight deviations, the overall impact of HA features is typically insignificant. In contrast, significant change can be observed in recordings processed with the DNNs. For the non-causal DNN, values reach -6.89dB to -1.21dB for SISDR, 0.60 to 0.77 HASPI, 0.14 to 0.21 for HASQI and 0.57 to 0.69 for MBSTOI.
Figure 2 depicts the signal enhancement benefit. Violin plots summarize the values obtained from the 15 hearing devices averaged along all scenes. For hearing aid features the benefit is on average 0.014 dB for SISDR, -0.02 for HASQI, -0.01 for HASPI, and -0.01 for MBSTOI. A much larger improvement could be observed for DNN-based enhancement. For a causal model, the mean benefit was 4.96 dB for SISDR, -0.01 for HASQI, 0.02 for HASPI, and 0.15 for MBSTOI. The largest benefit was achieved by the non-causal DNN model, with mean values reaching 6.09 dB for SISDR, 0.02 for HASQI, 0.07 for HASPI, and 0.17 for MBSTOI.
## 5 Discussion
In this paper we have presented a setup which allows to quantify the performance gap between the current hearing aid signal enhancement strategies and DNN-based approaches. On the one hand, HA processing seems to struggle in these complex situations, even performing worse than in _bypass_ for some devices, perhaps because beamformers cannot estimate the direction of the target speech due to the noise and competing talkers being non-stationary. On the other hand, this does not seem to affect DNNs, which improve SISDR and MBSTOI consistently.
Interestingly, differences between devices in _bypass_ are still observed after being processed by the DNNs, suggesting that DNN models are sensitive to the quality of the inputs. It would be interesting to study the concatenation of DNN and traditional strategies in future work.
We acknowledge that using the binaural decoding of the anechoic signal as reference can seem counter-intuitive because HA are not designed to provide anechoic estimates yet. However, this reference was the one that matched better with our informal listening. Another limitation of the present study is that we are comparing real time HA processing with DNNs applied as a post-processing, because we found it difficult to obtain signal insert points in consumer HA devices and also because no real time implementations of these models are publicly available yet. However, the slight performance difference between DNN and DNN-C and both being far better than the HA in terms of SISDR and MBSTOI should additionally encourage this research direction.
## 6 Conclusions
We have shown that HA enhancement algorithms struggle in ecologically valid complex situations reproduced at the studio, even to the point of damaging performance compared to not applying any algorithm at all. We have also shown that DNN-based approaches have the potential of outperforming them in terms of denoising and intelligibility at the expense of quality, encouraging future work on optimization and pruning this kind of algorithms.
Figure 3: Objective metrics across devices, anonymized for avoiding any drawal of inappropriate interdevice conclusions.
Figure 2: Signal enhancement benefit for hearing aids (HA), non-causal DNN model (DNN) and causal DNN model (DNN-C). |
2305.15746 | Assessing the Spatial Structure of the Association between Attendance at
Preschool and Childrens Developmental Vulnerabilities in Queensland Australia | The research explores the influence of preschool attendance (one year before
full-time school) on the development of children during their first year of
school. Using data collected by the Australian Early Development Census, the
findings show that areas with high proportions of preschool attendance tended
to have lower proportions of children with at least one developmental
vulnerability. Developmental vulnerablities include not being able to cope with
the school day (tired, hungry, low energy), unable to get along with others or
aggressive behaviour, trouble with reading/writing or numbers. These findings,
of course, vary by region. Using Data Analysis and Machine Learning, the
researchers were able to identify three distinct clusters within Queensland,
each characterised by different socio-demographic variables influencing the
relationship between preschool attendance and developmental vulnerability.
These analyses contribute to understanding regions with high vulnerability and
the potential need for tailored policies or investments | wala Draidi Areed, Aiden Price, Kathryn Arnett, Helen Thompson, Reid Malseed, Kerrie Mengersen | 2023-05-25T05:52:05Z | http://arxiv.org/abs/2305.15746v1 | Assessing the Spatial Structure of the Association between Attendance at Preschool and Children's Developmental Vulnerabilities in Queensland, Australia
## Abstract
Demographic and educational factors are essential, influential factors of early childhood development. This study aimed to investigate spatial patterns in the association between attendance at preschool and children's developmental vulnerabilities in one or more domain(s) in their first year of full-time school at a small area level in Queensland, Australia. This was achieved by applying geographically weighted regression (GWR) followed by \(K\)-means clustering of the regression coefficients. Three distinct geographical clusters were found in Queensland using the GWR coefficients. The first cluster covered more than half of the state of Queensland, including the Greater Brisbane region, and displays a strong negative association between developmental vulnerabilities and attendance at preschool. That is, areas with high proportions of preschool attendance tended to have lower proportions of children with at least one developmental vulnerability in the first year of full-time school. Clusters two and three were characterized by stronger negative associations between developmental vulnerabilities, English as the mother language, and geographic remoteness, respectively. This research provides evidence of the need for collaboration between health and education sectors in specific regions of Queensland to update current service provision policies and to ensure holistic and appropriate care is available to support children with developmental vulnerabilities.
## Introduction
The first five years of a child's life, commonly referred to as early childhood [1] have a significant long-term impact on later development, even into adulthood [2]. As a result, early childhood health and development assessments are of great interest to communities and government agencies to facilitate targeted early intervention strategies which can allow children to reach their maximum developmental potential [3]. Many countries, including Australia, are increasingly using national progress indicators of early childhood development to aid in these assessments.
In Australia, these progress indicators are collected as part of the population-based Australian Early Development Census (AEDC), conducted every three years since 2009 [4]. Using the 2009 census results as a benchmark, scores ranging between 0 and 10 are calculated for each child for each of five development domains and children are
classified as developmentally vulnerable (less than 10th percentile), at-risk (between the 10th and 25th percentile), or on track (above the 25th percentile) in each domain.
The five developmental domains are physical health and well-being, social competence, emotional maturity, language and cognitive skills (school based), and communication skills and general knowledge; see Fig 1[5]. The scores on these developmental domains are publicly available on the AEDC data explorer at the small area level; see section Case Study Data for details.
While the focus of the AEDC is on the specific milestones in childhood development within each of the domains [6], there is potential to gain additional insight into each of the early childhood development domain by assessing the spatial relationship of these data with socio-demographic factors and educational factors. An important educational factor is attendance at preschool, defined as structured, play-based education provided to children prior to school entry by a qualified early childhood teacher [6]. Preschool provides young children with rich learning environments that can enhance their cognitive, physical, social, and emotional development [7]. Attending preschool may also improve the chance of effective school transitions, with long-term implications for future academic and occupational success. As a result, policymakers are becoming more interested in the potential of preschool to improve developmental readiness for school. In line with this viewpoint, recent national reform programs in Australia have aimed to encourage preschool attendance by providing universal access to a preschool program in the year before the start of school [6]. Despite these efforts, however, not all children attend preschool. In 2021, 85% of all 4-year-old and 22% of all 5-year-old children were enrolled in preschool programs in Australia. [8].
Understanding geographic variation in preschool and developmental vulnerability in the first year of full-time school is important for communities, health managers and policymakers. This paper uses data available at a small area level (statistical area level 2, SA2) to investigate spatial patterns and clusters for the proportion of vulnerable
Fig 1: Early childhood development domains defined by AEDC.
children within the AEDC domains [9]. The analysis concentrates on the state of Queensland, Australia, and on the association between developmental vulnerabilities and attendance at preschool, taking into account socio-demographic factors (country of birth, English is the primary language, remoteness, The Index of Relative Socio-economic Disadvantage); see section Case Study Data. In addition, The investigation of whether geographical and educational factors have similar effects across a study region has been conducted. In this study, the results for vulnerability on one or more domain(s) as a summary of the developmentally vulnerable provided by AEDC have shown; however, the analyses are carried out for all the AEDC developmental domains and are reported in supplementary material S3 Appendix.
Various statistical approaches can be used to discover clusters in the aggregated data [10, 11, 12]. These approaches include the spatial scan statistic [13, 14], the Geographical Analysis Machine [15], Bayesian varying coefficients models for areal data [16], and penalized local polynomial models [17]. In addition, some studies have employed varying coefficient regression models based on spatial cluster frameworks. For example, Lawson [18] proposed an approach that provides the grouping of regression coefficients directly when the number of groups is known a priori. Lee [19] proposed a spatial cluster detection method for regression coefficients, which directly identifies an unknown number of spatial clusters in the regression coefficients via hypothesis testing and the construction of spatially varying coefficient regression based on detected spatial clusters. More recently, Lagona [20] proposed to estimate space-varying effects on the regression coefficients by exploiting a multivariate hidden Markov field and using an expectation-maximization algorithm and composite likelihood methods.
A method that considers non-stationary variables and models the local relationships between these predictors and an outcome of interest is the geographically weighted regression algorithm (GWR) suggested by Brunson, Fotheringham, and Charlton [21]. This is a local spatial technique that addresses both spatial heterogeneity and spatial dependence (i.e. spatial dependence and spatial autocorrelation), which generates locally weighted regression coefficients that vary geographically [21, 22, 23]. The output of the GWR is a set of regression coefficients for each location.
In order to identify spatial patterns in the data, a combination of the GWR coefficients with a \(K\)-means cluster algorithm [24] employed for the local regression coefficients to identify groups of SA2 areas that are similar with respect to the relationships between attendance at preschool and developmental vulnerabilities in the first year of full-time school.
GWR has been used in a number of real-world applications that show spatial heterogeneity in estimated covariate effects [25, 26, 27, 28, 29, 30, 31]. This combination of GWR and \(K\)-means has been deployed in many fields including ecology [32, 33], economics [34, 35], health [36, 37], environment [38, 39, 40, 41], social media [42], education [43], and transportation [44]. To the best of our knowledge, no studies have been published that consider this type of spatial clustering for children's AEDC domains. Although a previous study investigated the association between early life risk factors and children's developmental vulnerabilities at age 5 using latent class analysis, spatial factors were not taken into account [45]. The results presented in this paper provide an opportunity to shed more light on the development of children in Queensland.
## Materials and methods
### Case Study Area
The state of Queensland, Australia, is divided geographically into nine large regions (Fig2) and 528 non-overlapping statistical area level 2 (SA2) regions (according to the
Australian Statistical Geography Standard (ASGS) 2016 boundaries of the Australian Bureau of Statistics(ABS)). SA2 regions are medium-sized general-purpose regions that reflect a socially and geographically region integrated community [46]. SA2 is the smallest geographic area at which ABS non-census and intercensal data are publicly released.
Brisbane is Queensland's state capital city with a population of over 2.58 million [47], which ranks as the 3rd most populated city in Australia. The Greater Brisbane region is located in south-eastern Queensland and includes 236 SA2 areas.
### Case Study Data
As described previously, the development domains data for this study were obtained from the Australian Early Development Census. The AEDC collects comprehensive statistics on children in Australia every three years. Teachers conduct a census of their students in their first year of full-time school. The data are used to establish scores for each of five domains: physical health and well-being (Physical), social competence (Social), emotional maturity (Emotional), language and cognitive skills (Language), and communication skills and general knowledge (Communication), Fig 1. Each child is given a score between zero and ten for each domain, based on the cut-offs established as a baseline in 2009. When monthly age differences are considered, children who fall below the 10th percentile in a domain are labelled "developmentally vulnerable". AEDC also derives two additional domain indicators: vulnerable on one or more domains (Vuln 1) and vulnerable on two or more domains (Vuln 2).
A range of factors are associated with the child's early development. These can be broadly classed as factors related to the child, the mother, the family and the built environment. Factors related to the child include: Indigenous status, low birth weight, number of siblings, and country of birth [48]. Maternal risk variables include teenage mother at birth of the child (less than 20 years), smoking in pregnancy, and alcohol use in pregnancy [49]. Other family risk factors include: non-English speaking parents, single parents, moved house in last 12 months, main carer and parent education [50]. Finally, built environment factors include: home yard area, distance to the nearest park, distance to nearest family support service, distance to nearest playgroup venue, distance to nearest kindergarten, residential density [51, 49, 52].
Covariates considered for inclusion in this study were selected based on the existing literature and AEDC website [53] regarding their role as potential confounders or important contextual variables at area level, in the associations between geographic and educational factors on children's development [54, 55, 56]. The final list of covariates used in this study comprised attendance at preschool (Preschool), English as the mother language (English), Australia as the country of birth (Australia), the Socio-Economic Indexes For Areas (IRSD), and remoteness ( major city, inner-regional, outer-regional, remote, and very remote). There are 294 SA2 regions classified as major cities, 113 SA2 areas classified as inner regional, 96 SA2 areas classified as outer regional, 11 SA2 areas classified as remote and 14 SA2 areas classified as very remote.
The IRSD index (The Index of Relative Socio-economic Disadvantage) is scored on a scale of one to five. A low score indicates that the region as a whole is at a disadvantage; this includes many low-income families, many people without qualifications, or low-skill occupations.
This study used the latest publicly available data from the 2018-2019 census from AEDC. The participation rate in Queensland was with 98.1% of eligible children represented in the dataset in 2018. Between 3% and 6% of the data were missing variables in the dataset. Spatial neighbourhood averages were used to impute missing continuous data. The highest frequency neighbourhood category was used for categorical data. Due to the lack of contiguous neighbours in two cases, missing values
Fig 2: The structure of the main regions in Queensland, some notable regions include: 1) Southeast Queensland (SEQ) is home to more than 70% of the state’s population. It contains two statistical regions, Greater Brisbane and Moreton. 2) Darling Downs in the state’s inland south-east, which includes the city of Toowoomba 3) South West Queensland in the state’s inland south-west. 4) Central West in the state’s inland central-west. 5) Wide Bay-Burnett is located north-east of the Darling Downs and north of the Sunshine Coast 6)Central Queensland, which includes Fitzroy and Mackay. 7) North Queensland on the state’s northern coastline, which includes the city of Townsville. 8) North West in the state’s inland north-west of Queensland and includes the city of Mount Isa. 9) Far North in the state’s extreme northern coastline and also includes the city of Cairns.
for two islands could not be filled. As a result, the scope of this study's investigation was reduced to the remaining 526 SA2 regions.
### Spatial autocorrelation analysis
Moran's I was used to investigate global spatial autocorrelation for each type of vulnerability domain [57]. This paper defines neighbours as spatial units (SA2s) that share an edge or vertex. This classification of neighbours is known as the queen criterion [57]. Moran's I spatial autocorrelation algorithm takes values between -1 and 1. Coefficients between 0 and 1 show positive spatial autocorrelation; negative coefficients between 0 and -1 imply different neighbouring values, and coefficients approaching 0 indicate weak or no spatial autocorrelation [58]. For more details, see S1 Appendix.
### Geographically Weighted Regression
The geographically weighted regression (GWR) model [59], which is an extension of ordinary least squares (OLS), was adopted for this study. A weighted spatial matrix, which depicts local geographic interactions, is produced using spatial kernel functions. The weight (W) is a matrix of weights specific to location \(i\) such that observations nearer to \(i\) are given greater weight than observations further away [21].
Given a response vector \(\underline{y}=\{y_{1},y_{2},...,y_{n}\}\) and a \(n\times p\) matrix of covariates \(X\), the GWR model is written as:
\[Y_{i}=\beta_{0}(u_{i},v_{i})+\sum_{k=1}^{p}\beta_{k}(u_{i},v_{i})X_{ik}+ \epsilon_{i}\qquad i=1,2,3,...,n, \tag{1}\]
where \(Y_{i}\) is the dependant attributes at each location \(i\) and \(y\)= \((y_{1},y_{2},...,y_{n})^{\top}\), \(X_{ik}\) is the \(k\)-th covariate at location \(i\), \((u_{i},v_{i})\) denotes the coordinates of point \(i\) in space (longitude and latitude), \(\beta_{0}(u_{i},v_{i}),...,\beta_{k}(u_{i},v_{i})\) are model parameters, and \(\epsilon_{i}\) is the random error at location \(i\) with mean zero and variance \(\sigma^{2}I\), where \(I\) is the identity matrix [21]. GWR thus allows the coefficients to vary spatially, with the estimated values of the coefficient values given by:
\[\hat{\beta}(i)=[X^{T}W(i)X]^{-1}X^{T}W(i)Y, \tag{2}\]
where \(W(i)\) denotes the spatial weight matrix for location \(i\). In this paper, different kernel functions between observations and local regressions were used to find \(W\). The fixed Gaussian kernel was adopted, given by:
\[w_{ij}=exp(-(d_{ij}^{s})^{2}/b^{2}), \tag{3}\]
where the \(b_{i}\) represents the bandwidth which is the radius of a point to the extent to which the point influences that form a circle, \(d_{ij}\) is the euclidean distance [60]. An adaptive bi-square was also considered with the expression [61],
\[w_{ij}=\begin{cases}[1-(d_{ij}^{s}/b_{i})^{2}]^{2}&\text{if}\quad d_{ij}^{s}< b_{i},\\ 0&\text{otherwise},\end{cases} \tag{4}\]
Cross-validation was used to search for the optimal value of \(b_{i}\) bandwidth with the optimal solution returning the smallest model residuals based on a given model specification [62]. Finally, a fixed kernel approach with the same bandwidth at all observation locations was used to explore the effects on the results. The performance of the models was evaluated using quasi-global \(R^{2}\), and goodness of fit \(R^{2}\)[63, 64].
### Clustering of GWR coefficients
The inferential capability of GWR was extended by clustering together locations with similar sets of parameter values. This synthesises the often vast amount of output created by the GWR model and aids the interpretation of multiple parameter estimate maps.
In this study, \(K\)-means clustering was employed [65], and the results were spatially visualised to investigate the spatial clusters based on the GWR coefficients. In the \(K\)-means clustering approach, the similarity between a pair of objects was defined by Euclidean distance, and the objects were partitioned into \(K\) clusters, such that the within-cluster sum of squares is minimised [37]. In addition, the silhouette score was used to evaluate the clustering results [66]. A silhouette score close to 1 suggests that the objects are close to the centroid of their respective clusters. In contrast, a score close to 0 indicates that the objects are outliers. The algorithm proceeds as follows. 1). Define the number of clusters \(K\). 2) Randomly select \(K\) data points within each cluster as the cluster centroids. 3) Assign data points to the closest cluster centroid. 4) Recompute the cluster centroids. 5) Repeat steps 3) and 4) until either the centroids do not change, or the maximum number of iterations is reached. Two sets of cluster analyses were performed using the GWR coefficients. The first set of analyses encompassed all 526 SA2 regions in Queensland (excluding two islands as described previously). Each domain was analysed separately using GWR and \(K\)-means. The results for Vuln 1 are reported below.
Since most of the population of QLD residents lives in the southeast corner, a separate cluster analysis was performed to understand the spatial patterns in the 236 SA2 areas in the Greater Brisbane region (see S2 Appendix). In addition, a comparison between the five domains with the cluster of the highest proportion of attendance preschool coefficient has been discussed.
### Computation
The analyses were carried out using R statistical software version R-4.1.3 packages (spgwr) [67] and (tmap) [68] for the GWR, and mclust [69], and factoextra [70] packages for the \(K\)-means algorithm. Spatial distributions of the clusters were visualised with maps using tmap [68], and ggplot2 [71] packages in R, and the ArcGIS Pro 2.9.1 software [72].
## Results
### Spatial characteristics of children's vulnerability on one or more domain(s)
The global Moran's I for Vuln 1 was 0.36 (p-value =1.0E-04), indicating a highly significant positive spatial autocorrelation between SA2 areas. Spatial differences in Vuln 1 were observed (Fig 3), Vuln 1 is declining from the north to the south. In general, small values of Vuln 1 (less than 0.30) were located in the southeastern and central parts of Queensland. In contrast, high Vuln 1 values (greater than 0.45) were mainly in the northwest and far north of Queensland.
The GWR analysis of Vuln 1 produced a quasi-global GWR \(R^{2}\) of 0.42 and local \(R^{2}\) values between 0.37 and 0.43 (S1 Appendix).
Notably, the SA2 areas with relatively high local \(R^{2}\) were mainly concentrated in the northwest and far north of Queensland, while low local \(R^{2}\) values were found in the southwest and some parts of the Darling Downs regions. The Moran's I values for the
Fig 3: Spatial distribution of the proportion of developmental vulnerable on one or more domain(s) at the SA2 level in Queensland, 2018-2019.
standardized residuals of the GWR models was 0.17 (p-value \(<\)0.001), indicating substantial spatial variation remaining even after the regression analysis.
The GWR regression coefficients are summarised in Table 1. The number of SA2 areas with statistically significant GWR coefficients and corresponding maps of these areas are provided in the S3 Appendix. Fig 4 shows the spatial distribution of the GWR coefficients. GWR coefficients corresponding to the proportion of attendance at preschool were largely negative in southeast Queensland, indicating a substantively negative relationship between the proportion of vulnerability on one or more domain(s) and the proportion of attendance at preschool at the SA2 level; the proportion of preschool increases implies the proportion of vulnerable decreases. As noted in section Case Study Area, this is the most populous region in the state and contains the capital city, Brisbane. In addition, the GWR coefficients were largely negative for the proportion of children with English as the mother language in the northwest and the far north regions. In contrast, positive GWR coefficients were found for low socioeconomic status (IRSD) in the far north, north-west and central west of Queensland. Finally, compared to the Baseline inner cities for the remoteness factor, a negative relationship with Vuln 1 for the region classified as inner regions in the south-north of Queensland has been found, and a positive relationship for remote regions in the northwest and far north of Queensland has been noticed.
### Cluster analysis of GWR coefficients
The \(K\)-means cluster analysis of the GWR coefficients for Vuln 1 identified three clusters based on the silhouette coefficients (S3 Appendix). Fig 5 shows the distribution of these clusters on the Queensland map. Boxplots of GWR coefficients for areas within each cluster are shown in Fig 6 and Fig 7. In general, cluster 1 was located in the south-east of Queensland and covered more than half of the state with 342 SA2 areas, including the Greater Brisbane region. Cluster 2 was mainly in the central west, north-west, northern and far north Queensland, with 104 SA2 areas. Cluster 3 primarily included the southwest and central Queensland regions with 80 SA2 areas. Cluster 1 had the largest negative GWR coefficients for the proportion of children attending a preschool, indicating that in these areas when the proportion of attendance at preschool increases, the proportions of Vuln 1 decrease. Cluster 2 had the largest negative GWR coefficients for English as the mother language, indicating when the proportion of children with English as the mother language increases, the proportion of Vuln 1 decreases in these regions. Finally, Cluster 3 displayed the greatest GWR coefficients with inner cities as a baseline, indicating that the proportion of Vuln 1 increases as
\begin{table}
\begin{tabular}{c c c c} \hline Explanatory variables & Mean GWR coefficient & Range GWR coefficient & Global p-value \\ \hline Preschool & -0.010 & [-0.013, -0.005] & 0.040 \\ English & -0.041 & [-0.074, -0.028] & 0.007 \\ Australia & 0.002 & [-0.046 0.035] & 0.695 \\ IRSD (Quintle 1) & 0.301 & [ 0.276, 0.345] & \(<\) 2e-16 \\ IRSD (Quintle 2) & 0.275 & [0.254, 0.310] & \(<\) 2e-16 \\ IRSD (Quintle 3) & 0.249 & [0.224, 0.291] & \(<\) 5.00e-15 \\ IRSD (Quintle 4) & 0.230 & [0.204, 0.276] & 5.45e-13 \\ IRSD (Quintle 5) & 0.202 & [0.175, 0.251] & 6.86e-11 \\ Remoteness (Inner regional) & -0.019 & [-0.031, -0.004] & 0.406 \\ Remoteness (Outer regional) & -0.012 & [-0.035, 0.011] & 0.428 \\ Remoteness (Remote) & 0.008 & [-0.024, 0.021] & 0.616 \\ Remoteness (Very remote) & 0.049 & [0.008, 0.063] & 0.0004 \\ Quasi-global \(R^{2}\) & 0.42 & & \\ \hline \end{tabular}
\end{table}
Table 1: Summary statistics for the GWR model coefficients with global p-values from an ordinary least square regression model (OLS), with major cities as a baseline for remoteness factor.
remeteness increases.
In general, all three clusters displayed a negative relationship with preschool attendance. In addition, these clusters show a very high positive association with "very remote" (compared with inner cities). In addition, these clusters show a relative linear trend that when the disadvantages decrease, Vuln 1 increases.
**Summary of the proportion of children attending at preschool for areas in the first cluster for each type of AEDC domain**
Separate GWR and cluster analyses were also performed for each AEDC domain. The results are in the S1 Appendix. We focused on the GWR coefficient corresponding to the proportion of attendance at preschool. In all AEDC domains, the first cluster was found to have the largest negative relationship with the proportion of attendance at preschool. Fig 8 shows the GWR coefficients for the proportion of attendance at preschool for the five AEDC domains. The figure reveals that preschool has a dominant effect (more negative relationship) for the social competence (Social) and communication skills (Communication) domains, followed by the physical health and well-being (Physical) domain, which shows a focus should be made particularly on improving language and emotional development domains.
## Discussion
Educational and geographical characteristics have a significant impact on children's development. However, little is known regarding the spatial heterogeneity of the association between these variables and developmental vulnerability in one and more
Fig 4: Spatial distribution of GWR coefficients in Queensland.
domain(s). This study is one of the first to investigate spatial variation in this association in a large, diverse region but spatially complex, Queensland, Australia, and to capture spatial clusters of these relationships. While preschool attendance is strongly encouraged in Queensland, the overall participation rate is the lowest among all Australian states and varies geographically. The analysis found a significant connection between the proportion of children who attended preschool before they started
Fig 5: Spatial distribution of clusters of GWR coefficients at SA2 level in Queensland. Insight into the SEQ corner is given in the S2 Appendix.
Fig 6: A comparison of the GWR coefficients for Vuln 1 and risk factors (Australia, English, Preschool, and remoteness, including inner regional, outer regional, remote, and very remote levels) across the three clusters.
mainstream school, and the proportion of children measured as developmentally vulnerable in each of the five AEDC domains, as well as in one or more domains (Vuln 1). The study found three distinct clusters inside Queensland. All three clusters were characterised by a negative mean association between attendance at preschool and Vuln 1, but this relationship varied in consistency within clusters and was affected by different sets of geographic and socio-demographic variables.
These results suggest the need for collaboration between health and education partners in an effort to increase preschool access or otherwise improve attendance. Further, health providers may need to consider additional interventions or methodologies for remote areas.
These results are consistent with other analyses of the AEDC data. For example, a study found an association between geographic jurisdictions, considering socioeconomic and demographic characteristics and the probability of being developmentally
Fig 8: A box plot representation of the coefficients obtained through Geographically Weighted Regression (GWR) for the proportion of attendance at preschool within five domains of the Australian Early Development Census (AEDC) in the first cluster.
Fig 7: A comparison of the GWR coefficients for Vuln 1 and its associated risk factor, the Index of Relative Socio-economic Disadvantage (IRSD), which has five levels, with level 1 representing the most disadvantaged regions across the three clusters.
vulnerable on one or more domain(s) by gender, using nested fixed-effects logistic regression models [48]. Another study investigated the relationship between demographic characteristics and the language development of children using descriptive statistics, t-tests, one-way analysis of variance (ANOVA) and Tukey multiple comparison tests [73]. Their study found that the parents' family income and educational background were positively associated with the children's language development. Additionally, another study investigated patterns of universal health and education service use from birth through kindergarten (age four years) and estimated associations between cumulative risk and service use patterns and between service use patterns and children's developmental vulnerability in the preparatory year (age five years) [45]. The latent class analysis used in this study identified three service use patterns. Membership of low and high-service user groups was associated with higher cumulative risk and increased odds of developmental vulnerability relative to the regular service user group. These analyses add to recent literature by providing insight into the regions with high vulnerabilities where different policies or investments may be required.
There are various limitations to this study. First, the study focuses on 2018 Census data from Queensland. The data for the 2021 census were not available at the time of this study, but it will be interesting to compare results based on this new data set to determine whether these trends are consistent over time and to assess any notable differences. Second, in this study, the data were restricted to the SA2 level, which limited the understanding of spatial patterns and the relationships between vulnerability and covariates at an individual scale. Third, the study only examined educational, socio-demographic and geographical variables. Other variables could be included in future investigations if available at a suitable aggregation level. This study excluded Indigenous status from the GWR model because of local multicollinearity; in particular, it found a high association between Indigenous status and remoteness level. Since this study focused on geographic variation, it chose to adopt the latter variable. Similarly, there was a high association between socioeconomic factors (IRSD) and Indigenous status (S4 Appendix). This is supported by other literature [74, 75]. The complex spatial relationships between Indigenous status, preschool attendance and developmental vulnerability among children should be considered in separate future work.
To identify subgroups among GWR coefficients, various unsupervised clustering methods were used, which showed adequate coverage of common characteristics across algorithms. Unsupervised clustering algorithms are useful for finding subgroups within data without prior knowledge of group labels or classifications [76]. Different algorithms emphasize different features in their clustering solutions, so analysing data from multiple algorithms is beneficial, especially when there is little prior knowledge of expected subgroups. Comparing results from several methods leads to a more confident identification of well-separated subgroups and reduces sensitivity to method choice. A comparison of three common algorithms (\(K\)-means, Partition around Medoid, and Hierarchical Clustering) was conducted, and the cluster accuracy can be found in S5 Appendix. The results show that the three clustering algorithms have a high degree of consistency with accuracy rates above 0.98. This high accuracy suggests strong agreement among the results obtained from each of the clustering algorithms and confirms the robustness of the subgroups identified using different methodologies.
## Conclusion
Using spatially weighted regression analysis can help to identify the influence of a set of variables on an outcome of interest at each specific geographic location. This study employed spatially weighted regression analysis and \(K\)-means clustering to investigate at the SA2 level spatial heterogeneity and clustering of the association between the
proportion of children attending preschool, taking into account socio-demographic and geographic factors with the proportion of children with developmental vulnerabilities in their first year of full-time school in Queensland, Australia. Three distinct clusters were found with different socio-demographic characteristics. Importantly, the largest cluster revealed a strong negative association between the proportion of attendance at preschool and the developmentally vulnerable children in their first year of full-time school in Queensland. In these clusters, region-specific interventions may be considered to promote preschool attendance, taking into account socio-demographic issues, in order to minimise development vulnerability among children.
## Supporting information
S1 Appendix. Moran's I and local \(R^{2}\).
S2 Appendix. Clusters inside Greater Brisbane and Summary of GWR coefficients.
S3 Appendix. Additional analysis. A: Silhouette score. B: GWR coefficients for each type of AEDC domain
S4 Appendix. Relation between Indigenous and other socio-demographic variables.
S5 Appendix. Cluster Accuracy.
## Acknowledgments
We would like to express our gratitude to the team at Children Health Queensland and the Center for Data Science for their invaluable assistance and support in this project.
|
2310.01585 | Two-fold degeneracy of a class of rational Painlevé V solutions | We present a construction of a class of rational solutions of the Painlev\'e
V equation that exhibit a two-fold degeneracy, meaning that there exist two
distinct solutions that share identical parameters.
The fundamental object of our study is the orbit of translation operators of
$A^{(1)}_{3}$ affine Weyl group acting on the underlying seed solution that
only allows action of some symmetry operations. By linking points on this orbit
to rational solutions, we establish conditions for such degeneracy to occur
after involving in the construction additional B\"acklund transformations that
are inexpressible as translation operators. This approach enables us to derive
explicit expressions for these degenerate solutions. An advantage of this
formalism is that it easily allows generalization to higher Painlev\'e systems
associated with dressing chains of even period $N>4$. | H. Aratyn, J. F. Gomes, G. V. Lobo, A. H. Zimerman | 2023-10-02T19:26:48Z | http://arxiv.org/abs/2310.01585v4 | # Two-fold degeneracy of a class of rational Painleve V solutions
###### Abstract
We present a construction of a class of rational solutions of the Painleve V equation that exhibit a two-fold degeneracy, meaning that there exist two distinct solutions that share identical parameters.
The fundamental object of our study is the orbit of translation operators of \(A_{3}^{(1)}\) affine Weyl group acting on the underlying seed solution that only allows action of some symmetry operations. By linking points on this orbit to rational solutions, we establish conditions for such degeneracy to occur after involving in the construction additional Backlund transformations that are inexpressible as translation operators. This approach enables us to derive explicit expressions for these degenerate solutions. An advantage of this formalism is that it easily allows generalization to higher Painleve systems associated with dressing chains of even period \(N>4\).
## 1 Introduction
Painleve equations are second order nonlinear differential equations with solutions without any movable critical singularities in the complex plane, a property referred to as Painleve property (see e.g. [4]). These solutions are generally not solvable in terms of elementary functions however for special values of the underlying parameters the Painleve equations possess rational and hypergeometric-type of solutions.
Although the discovery of Painleve equations has its origin in, mathematically motivated, search for equations satisfying the Painleve property, these equations and their solutions found many practical applications and play an important role in several branches of mathematical physics, algebraic geometry, applied mathematics,fluid dynamics and statistical mechanics. A list of the areas where the Painleve equations found their applications includes correlation functions of the Ising model, random matrix theory, plasma physics, asymptotics of nonlinear partial differential equations, quantum cohomology, conformal field theory, general relativity, nonlinear and fiber optics, Bose-Einstein condensation [4, 9]. Special solutions, such as rational solutions, turned out to play key role in these applications and various methods were applied in their study.
This project is dedicated to the study of rational solutions of Painleve V equation by presenting an approach to deal with degeneracy of these solutions. Painleve V equation
is invariant under extended affine Weyl group \(A_{3}^{(1)}\) of Backlund transformations [6]. A central object of our study is a commutative subgroup of translation operators of \(A_{3}^{(1)}\) and an orbit formed by their actions on two different types of seed solutions, one being invariant under an internal automorphism \(\pi\) of \(A_{3}^{(1)}\).
In a recent paper [1], we have shown how by acting with translation operators on a seed solution, which is invariant under automorphism \(\pi\), one obtains Umemura polynomials for Painleve V equation and their relevant recurrence relations [7]. For the other remaining seed solution we have shown that only actions by selected translation operators are allowed while the remaining translation operators produce divergencies.
The presence of degeneracy in that latter class of solutions was recently pointed out in [3], which also presented an explicit construction of Umemura and special function solutions in terms of the generalized Laguerre polynomials.
Here, as in reference [2], we link the origin of degeneracy of rational solutions to existence of divergencies resulting from actions of various translation operators and Backlund transformations on the underlying seed solution and use it to explicitly construct the two-fold degenerated solutions of the Painleve V Hamilton equations (2.2) and resulting degeneracy of the Painleve V equation (2.3) and to find the underlying consistency relations that dictate values of the parameters of degenerated solutions.
In section 2, we present the Hamiltonian approach to Painleve V equation and discuss the construction of rational solutions by actions of translation operators. We describe solutions formed out by actions with \(T_{2}^{-n_{2}}\) and \(T_{4}^{n_{4}}\) translation operators, with \(n_{i},i=2,4\) being positive integers, on the seed solution :
\[|q=z,\,p=0\rangle_{\alpha_{2}}\,, \tag{1.1}\]
that describes a solution of Hamilton equations (2.2) with values of \(q,p\) being \(q=z,\,p=0\) and an arbitrary parameter \(\mathsf{a}\) equal to \(\alpha_{1}\) and with zero parameters \(\alpha_{2}\) and \(\alpha_{3}\). We find the recurrence relation that allows finding explicitly the solutions derived from (1.1) and obtain a close expression for their parameters in terms of \(\mathsf{a}\) and integers \(n_{i},i=2,4\).
In section 3, we explain a reason for existence of degenerated solutions due to infinities associated with actions of some Backlund transformations on the seed solution (1.1) and use this observation to find the class of parameters that are being shared by a pair of different solutions. We will show that degeneracy occurs for some rational solutions derived from (1.1) for the parameter \(\mathsf{a}\) that happens to be an even integer. We propose an explicit construction of such solutions for a Backlund transformation \(M\) such that infinity is generated if we are to set two sides of inequality
\[M\mathbb{T}(n_{2},n_{4};\mathsf{a})\neq\mathbb{T}(m_{2},m_{4};\mathsf{b})\,, \;n_{i},m_{i}\in\mathbb{Z}_{+},\;i=2,4\,, \tag{1.2}\]
to be equal. This potential divergence is the cause of degeneracy. In relation (1.2), the notation is such that \(\mathbb{T}(n_{2},n_{4};\mathsf{a})=T_{2}^{-n_{2}}T_{4}^{n_{4}}|q=z,\,p=0 \rangle_{\alpha_{2}}\) is a solution linked to the orbit of the seed solution (1.1) under actions of \(T_{2}\) and \(T_{4}\) operators. To be responsible for degeneracy the Backlund transformation \(M\) must be such that it satisfies two conditions. First that it will cause the divergence, as described in equation (3.1), and secondly that the equation
\[M\left(\alpha_{n;\mathsf{a}}\right)=\alpha_{m;\mathsf{b}}\,, \tag{1.3}\]
will have a solution for some values of the parameters \(n_{i},m_{i},i=2,4\) and \(\mathsf{a},\mathsf{b}\) ensuring that both sides of inequality (1.2) will share the same parameter. These two conditions are shown to be satisfied for \(M\) being one of the Backlund transformations \(M_{12}=s_{1}s_{2}\), \(M_{34}=s_{3}s_{4}\), \(M_{1}=\pi s_{1}\), \(M_{4}=\pi^{-1}s_{4}\) and we call the corresponding set of degenerated solutions an \(M_{i}\)-sequence. One of the main points of this paper is that all these four sequences are equivalent. Specifically, the sequences \(M_{1},M_{12}\) and \(M_{4}\) are mapped into each other by Backlund transformations, while \(M_{3,4}\) happens to be equivalent to \(M_{1}\) after
a simple re-definitions of underlying parameters as discussed in subsections 3.1 - 3.3. The equivalence of these sequences is a new result not contained in reference [2].
The final section 4, offers conclusions and discussion of the results. We find that the condition for a solution constructed in section 2 to be equal to one of the degenerated solutions is that the underlying parameter \(\mathsf{a}\) of the seed solution is an even integer. We also remark that the fact that the discussion of degeneracy of Painleve systems is here placed firmly in the setting of extended affine Weyl group \(A_{N-1}^{(1)},N=4\) lends itself naturally to being generalized to Painleve systems associated with higher dressing chains of even period \(N>4\), where more richer degeneracy structure is expected to appear.
## 2 Background
We will mainly be working with the Hamiltonian approach to Painleve V equation with the Hamiltonian:
\[H=-q\left(q-z\right)p\left(p-z\right)+\left(1-\alpha_{1}-\alpha_{3}\right)pq+ \alpha_{1}zp-\alpha_{2}zq\,, \tag{2.1}\]
where \(\alpha_{i},i=1,2,3\) are three constant parameters and \(q,p\) are two canonical variables that satisfy Hamilton equations: \(zq_{z}=dH/dp\), \(zp_{z}=-dH/dq\):
\[\begin{split} zq_{z}&=-q(q-z)(2p-z)+(1-\alpha_{1}- \alpha_{3})q+\alpha_{1}z\,,\\ zp_{z}&=p(p-z)(2q-z)-(1-\alpha_{1}-\alpha_{3})p+ \alpha_{2}z\,,\end{split} \tag{2.2}\]
from which one derives Painleve V equation
\[y_{xx}=-\frac{y_{x}}{x}+\left(\frac{1}{2y}+\frac{1}{y-1}\right)y_{x}^{2}+ \frac{(y-1)^{2}}{x^{2}}\left(\alpha y+\frac{\beta}{y}\right)+\frac{\gamma}{x} y+\delta\frac{y(y+1)}{y-1}\,, \tag{2.3}\]
by eliminating one of the canonical variables and defining \(y=(q/z)(q/z-1)^{-1}\), as well as redefining the variable \(z\to x\) with \(x=\epsilon z^{2}/2\). The coefficients \(\alpha,\beta,\gamma\) of the Painleve V equation are given by:
\[\alpha=\frac{1}{8}\alpha_{3}^{2},\ \ \beta=-\frac{1}{8}\alpha_{1}^{2},\ \ \gamma=\frac{\alpha_{2}-\alpha_{4}}{2\epsilon},\ \ \delta=-\frac{1}{2}\frac{1}{\epsilon^{2}}\,, \tag{2.4}\]
in terms of components \(\alpha_{i}=(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})\) with \(\alpha_{4}=2-\sum_{i=1}^{3}\alpha_{i}\).
For \(\delta\) to take a conventional value of \(-\frac{1}{2}\) we need \(\epsilon^{2}=1\).
The Hamilton equations are directly connected to symmetric Painleve V equations:
\[z\frac{df_{i}}{dz}=f_{i}f_{i+2}\left(f_{i+1}-f_{i-1}\right)+\left(1-\alpha_{i +2}\right)f_{i}+\alpha_{i}f_{i}\,,\ \ f_{i+4}=f_{i},\ \ i=1,2,3,4\,,\]
via relations \(f_{1}=q\), \(f_{2}=p\), \(f_{3}=z-q\), \(f_{4}=z-p\). Since our formalism will be shown to describe degeneracy of Painleve V Hamilton equations (2.2) it will also automatically provide such description for the symmetric Painleve V equations as well as equation (2.3).
The Hamilton equations are invariant under Backlund transformations, \(\pi,s_{i},i=1,\ldots,4\) that satisfy the \(A_{3}^{(1)}\) extended affine Weyl group relations:
\[s_{i}^{2}=1, s_{i}s_{j}=s_{j}s_{i}\ (j\neq i,i\pm 1), s_{i}s_{j}s_{i}=s_{j}s_{i}s_{j}\ (j=i\pm 1),\] \[\pi^{4}=1, \pi s_{j}=s_{j+1}\pi. \tag{2.5}\]
See [1, 2, 6] for an explicit form of these transformations on canonical variables \(p\) and \(q\). Imposing the periodicity condition \(\alpha_{i+4}=\alpha_{i}\) we can compactly describe the action of the Backlund transformations on the constant parameters \(\alpha_{i}\) from equations (2.2) as :
\[s_{i}(\alpha_{i})=-\alpha_{i},\quad s_{i}(\alpha_{i\pm 1})=\alpha_{i}+\alpha_{i \pm 1},\quad s_{i}(\alpha_{i+2})=\alpha_{i+2},\quad i=1,2,3,4\,. \tag{2.6}\]
Furthermore the automorphism \(\pi\) acts according to
\[\pi(\alpha_{i})=\alpha_{i-1}\,. \tag{2.7}\]
Within the \(A_{3}^{(1)}\) extended affine Weyl group one defines an abelian subgroup of translation operators defined as \(T_{i}=r_{i+3}r_{i+2}r_{i+1}r_{i},i=1,2,3,4\), where \(r_{i}=r_{4+i}=s_{i}\) for \(i=1,2,3\) and \(r_{4}=\pi\). The translation operators commute among themselves, \(T_{i}T_{j}=T_{j}T_{i}\), and as follows from relations (2.6) and (2.7) generate the following translations when acting on the \(\alpha_{i}\) parameters:
\[T_{i}(\alpha_{i})=\alpha_{i}+2,\;T_{i}(\alpha_{i-1})=\alpha_{i-1}-2,\;T_{i}( \alpha_{j})=\alpha_{j},\;j=i+1,j=i+2\,.\]
The translation operators satisfy the following commutation relations
\[s_{i}T_{i}s_{i}=T_{i+1},\;\;s_{i}T_{j}s_{i}=T_{j},\,j\neq i,i+1,\;\;\pi\;T_{i}= T_{i+1}\,\pi\,, \tag{2.8}\]
with the Backlund transformations \(s_{i},\,i=1,2,3,4\) and an automorphism \(\pi\).
The reference [1] described construction of rational solutions of Painleve V equation out of actions of translation operators on seed solutions that first appeared in [8]. Crucial for this construction is that rational solutions fall into two classes depending on which of the two types of seed solutions they have been derived from by actions of translation operators. These two classes of seed solutions are:
1. \(q=z/2\), \(p=z/2\), with the parameter \(\alpha=(\mathtt{a},1-\mathtt{a},\mathtt{a},1-\mathtt{a})\,,\)
2. \(q=z,\,p=0\), with the parameter \(\alpha_{\mathtt{a}}=(\mathtt{a},0,0,2-\mathtt{a})\) denoted here by \(|q=z,\,p=0\rangle_{\alpha_{\mathtt{a}}}\).
They both solve the Hamilton equations (2.2) for an arbitrary variable \(\mathtt{a}\). As shown in [1], the first class of seed solutions gives rise to Umemura polynomials and the second to special functions. It was also shown there that the solutions constructed with this procedure satisfy all sufficient and necessary conditions for the parameters of rational solutions of Painleve V equation first derived in [5]. The action of the Backlund transformation \(s_{i}\) on the seed solution (1.1) is :
\[\big{|}q=z,\,p=0\big{\rangle}_{\alpha_{\mathtt{a}}}\stackrel{{ s_{i}}}{{\longrightarrow}}\big{|}s_{i}(q=z),\,s_{i}(p=0)\big{\rangle}_{s_{i}( \alpha_{\mathtt{a}})}\,,\]
and similarly for all the other Backlund transformations.
Acting repeatedly with \(\pi\) automorphism on the seed solution (1.1) produces three other variants of such solution. They all serve as seed solutions in analogous way to the solution (1.1). Here we will limit our discussion only to the seed solution (1.1) and solutions generated from it as the other solutions and the corresponding structure of degeneracy follow from the same formalism under appropriate actions of \(\pi\).
The Backlund transformations \(s_{2},s_{3}\) generate infinity when applied on the solution (1.1) and accordingly only actions by some powers of \(T_{1},T_{2},T_{4}\) are well defined on a seed solution \(|q=z,\,p=0\rangle_{\alpha_{\mathtt{a}}}\). The allowed operations are as follows [1]:
\[T_{1}^{n_{1}}T_{2}^{-n_{2}}T_{4}^{n_{4}}|q=z,\,p=0\rangle_{\alpha_{\mathtt{a}} },\;n_{1}\in\mathbb{Z},\;n_{2},n_{4}\in\mathbb{Z}_{+}\,.\]
This operation is to be understood as producing new solutions \(q\) and \(p\) of the Hamilton equations equal to \(T_{1}^{n_{1}}T_{2}^{-n_{2}}T_{4}^{n_{4}}(q=z)\) and \(T_{1}^{n_{1}}T_{2}^{-n_{2}}T_{4}^{n_{4}}(p=0)\) and with a new parameter:
\[T_{1}^{n_{1}}T_{2}^{-n_{2}}T_{4}^{n_{4}}(\alpha_{\mathtt{a}})=(\mathtt{a}+2n_ {1}+2n_{2},\,-2n_{2},\,-2n_{4},\,2-\mathtt{a}+2n_{4}-2n_{1})\,. \tag{2.9}\]
Evidently, the action of \(T_{1}^{n_{1}}\) only amounts to shifting a parameter \(\mathtt{a}\) and as shown in [1] leaves the configuration \(q=z,p=0\) unchanged. Thus :
\[T_{1}^{n_{1}}|q=z,\,p=0\rangle_{\alpha_{\mathtt{a}}}=|q=z,\,p=0\rangle_{ \alpha_{\mathtt{a}+2n_{1}}}. \tag{2.10}\]
We can therefore, largely, ignore \(T_{1}\) and restrict our discussion to the solutions of Painleve V equation of the form :
\[\begin{split}\mathbb{T}(n_{2},n_{4};\mathsf{a})&=T_{2}^ {-n_{2}}T_{4}^{n_{4}}|q=z,\,p=0\rangle_{\alpha_{\mathsf{a}}},\;n_{2},n_{4}\in \mathbb{Z}_{+}\,,\\ \alpha_{n;\mathsf{a}}&=T_{2}^{-n_{2}}T_{4}^{n_{4}}( \alpha_{\mathsf{a}})=(\mathsf{a}+2n_{2},\,-2n_{2},\,-2n_{4},\,2-\mathsf{a}+2n _{4})\,,\end{split} \tag{2.11}\]
where we listed both the solution generated by translation operators and its corresponding parameter \(\alpha_{n,\mathsf{a}}\). \(\mathbb{Z}_{+}\) contains positive integers and zero.
To describe solutions \(\mathbb{T}(n_{2},n_{4};\mathsf{a})\) we will first set \(n_{4}=0\) and recall expressions for an action by \(T_{2}^{-n}\)[1]:
\[\begin{split} T_{2}^{-1}&:\big{|}q=z,\,p=0\rangle_ {\alpha_{\mathsf{a}}}\to|q=z,p=\frac{2z}{\mathsf{a}-z^{2}}\big{\rangle}_{(2+ \mathsf{a},-2,0,2-\mathsf{a})}\\ T_{2}^{-n}&:\big{|}q=z,\,p=0\rangle_{\alpha_{\mathsf{ a}}}\to|q_{n}=z,p_{n}=\frac{2nzR_{n-1}(\mathsf{a};z)}{R_{n}(\mathsf{a};z)} \big{\rangle}_{(\mathsf{a}+2n,-2n,0,2-\mathsf{a})}\,,\end{split} \tag{2.12}\]
where \(R_{n}(\mathsf{a};z)\) is found to satisfy the recurrence relation:
\[R_{n+1}(\mathsf{a};z)=2nz^{2}R_{n-1}(\mathsf{a};z)+(-z^{2}+2n+\mathsf{a})R_{n }(\mathsf{a};z),\quad n=1,2,\ldots, \tag{2.13}\]
with \(R_{0}(\mathsf{a};z)=1\).
The result for \(T_{2}^{-n_{2}}|q=z,\,p=0\rangle_{\alpha_{\mathsf{a}}}\) is obtained by inserting \(n=n_{2}\) into equation (2.12).
The further action with \(T_{4}^{n_{4}}\) utilizes expression
\[\begin{split} T_{4}(q)&=z-p-(\alpha_{1}+\alpha_{4} )/(q+\alpha_{4}/(z-p))\,,\\ T_{4}(p)&=q+\alpha_{4}/(z-p)-(\alpha_{1}+\alpha_{2} +\alpha_{4})/(p+(\alpha_{1}+\alpha_{4})/(q+\alpha_{4}/(z-p)))\,,\end{split} \tag{2.14}\]
describing action of the translation operator \(T_{4}\) on a solution \(q,p\) of the Hamilton equations (2.2) with \(\alpha_{i}=(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})\). The recurrence relations obtained from expression (2.14) are:
\[\begin{split} q^{(k)}&=T_{4}^{k}(q_{0})=z-p^{(k-1)} -\frac{2(k+n_{2})}{v_{k}}=z-u_{k}\,.\\ p^{(k)}&=T_{4}^{k}(p_{0})=v_{k}-\frac{2k}{u_{k}}\,.\\ \alpha^{(k)}&=(\mathsf{a}+2n_{2},\,-2n_{2},\,-2k,\,2- \mathsf{a}+2k),\ \ k=1,2,\ldots,n_{4}\,,\end{split} \tag{2.15}\]
where
\[v_{k}=q^{(k-1)}+\frac{2k-\mathsf{a}}{z-p^{(k-1)}},\ \ u_{k}=p^{(k-1)}+\frac{2(k+n_{2}) }{v_{k}},\]
and \(q_{0}=z\) and \(p_{0}=\frac{2nzR_{n_{2}-1}(\mathsf{a};z)}{R_{n_{2}}(\mathsf{a};z)}\). Setting \(k=n_{4}\) into \(\alpha^{(k)}\) we recover \(\alpha_{n;\mathsf{a}}\) from expression (2.11).
In the next section we will derive the parameters of degenerated solutions (see e.g. (3.3)) and compare with the above value of the parameter \(\alpha_{n;\mathsf{a}}\) on the orbit of \(T_{2}^{-n_{2}}T_{4}^{n_{4}}\). In section 4 we will find that for any \(\mathsf{a}\) that is an even integer the parameter \(\alpha_{n;\mathsf{a}}\) can be cast in a form of a parameter of degenerated pair of solutions.
## 3 Degeneracy
The above construction of solutions in section 2 did not take into account existence of any other Backlund transformations than translation operators. The Backlund transformations that are not expressible in terms of translation operators will play a role in what follows. Our construction associates the (two-fold) degeneracy to inequality (1.2) with
two sides that are two different (finite) solutions of Painleve V Hamilton equations that share a common Painleve V parameter (1.3).
In relations (1.2) and (1.3) the symbol \(M\) denotes a Backlund transformation, which is not expressible in terms of translation operators only and such that \(T_{2}^{m_{2}}T_{4}^{-m_{4}}M\mathbb{T}(n_{2},n_{4};\mathsf{a})\) is ill-defined as we will see below. For that reason the two solutions listed in (1.2) can not be equal. We will refer to degenerated solutions of relations (1.2) and (1.3) as \(M\)-sequence.
To determine general conditions for degeneracy let us equate for the moment expressions on the left and the right sides of the inequality (1.2) with each other and multiply both sides with \(T_{2}^{m_{2}}T_{4}^{-m_{4}}\) to get:
\[|q=z,\,p=0\rangle_{\alpha_{\mathsf{b}}}=T_{2}^{m_{2}}T_{4}^{-m_{4}}MT_{2}^{-n_ {2}}T_{4}^{n_{4}}\,|q=z,\,p=0\rangle_{\alpha_{\mathsf{a}}}=M\,T_{3}^{c_{3}}\,T _{2}^{c_{2}}\,T_{4}^{c_{4}}|q=z,\,p=0\rangle_{\alpha_{\mathsf{a}}}\]
obtained after commuting \(T_{2}^{m_{2}}T_{4}^{-m_{4}}\) around \(M\) and ignoring potential presence of \(T_{1}\) on the right hand side since it only amounts to shifting of \(\mathsf{a}\). The conditions for degeneracy in this setting are
\[c_{3}\neq 0,\,\,\,\text{or}\,\,\,c_{2}>0\,,\,\,\,\text{or}\,\,\,c_{4}<0\,, \tag{3.1}\]
since they correspond to presence of operators that will cause divergence when acting on \(|q=z,\,p=0\rangle_{\alpha_{\mathsf{a}}}\). We next explore several candidates for \(M\) to see if they satisfy the conditions (1.3) and (3.1).
We can easily discard \(M=s_{2},M=s_{3}\) as they do not satisfy the condition (1.3), as it would require \(m_{2}=-n_{2}\) for \(s_{2}\) and \(m_{4}=-n_{4}\) for \(s_{3}\). Further, one finds that \(M=s_{1},M=s_{4}\) do not produce infinities and accordingly fail to satisfy the conditions of relation (3.1).
Moving on to the quadratic expressions of the type \(s_{i}s_{j}\) we find that when \(j\neq i+1\) (e.g. \(s_{1}s_{3}\) or \(s_{2}s_{4}\)) then both expressions do not satisfy the condition (1.3). The remaining cases are of the type \(s_{i}s_{i+1}\) since \(s_{i}s_{i-1}\) can be moved from the left to the right hand side of relation (1.2) to become \(s_{i}s_{i+1}\). Inspection of \(s_{1}s_{2},s_{2}s_{3},s_{3}s_{4},s_{4}s_{1}\) shows that only
1. \(M_{12}=s_{1}s_{2}\),
2. \(M_{34}=s_{3}s_{4}\),
satisfy the condition (1.3) and the condition (3.1) for some values of \(m_{i}\), \(i=2,4\). These conditions are also satisfied by
1. \(M_{1}=\pi s_{1}\),
2. \(M_{4}=\pi^{-1}s_{4}\),
that are effectively equivalent to the cases of \(M=\pi,\pi^{-1}\)[2]. It is also easy to see that \(M_{1}\) and \(M_{4}\) are not invertible in the context of relation (1.2) since \(M_{i}^{-1},i=1,4\) acting on \(\mathbb{T}(m_{2},m_{4};\mathsf{b})\) will cause a divergence. Thus if an equality between two solutions shown in (1.2) held for \(M_{1}\) or \(M_{4}\) then an attempt to invert \(M_{1}\) or \(M_{4}\) would have produced an infinity.
It suffices to consider operators \(M\) that consist of a single \(s_{i}\) multiplied by \(\pi\) or a product of two \(s_{i}\)'s due to the following identities :
\[\begin{split} s_{i}s_{i+1}&=\pi s_{i+2}T_{i+2}^{-1} =\pi T_{i+3}^{-1}s_{i+2},\hskip 14.226378pti=1,2,3,4\,,\\ s_{i+1}s_{i}&=\pi^{-1}s_{i-1}T_{i}&= \pi^{-1}T_{i-1}s_{i-1},\hskip 14.226378pti=1,2,3,4\,.\end{split} \tag{3.2}\]
for products of neighboring \(s_{i}\) that reduce them to one single \(s_{i}\) multiplied by a shift operator and an automorphism \(\pi\). Accordingly, in principle, the higher products of \(s_{i}\) can be reduced to the lower number of \(s_{i}\) transformations [2].
We will now examine if there exists equivalence between the four cases with degeneracy represented by \(M_{1},M_{4},M_{12},M_{34}\). We choose as a starting point the relation (1.3) with \(M=M_{1}=\pi s_{1}\) and accordingly with the parameter :
\[\pi s_{1}\left(\alpha_{n;\mathsf{a}}\right)=\alpha_{m;\mathsf{b}}=2(1+n_{2}+n_ {4},-m_{2},m_{2}-n_{2},-n_{4})\,, \tag{3.3}\]
shared between the two solutions appearing in the inequality:
\[\pi s_{1}\mathbb{T}\left(n_{2},n_{4};\mathtt{a}\right)\neq\mathbb{T}\left(m_{2}, m_{4};\mathtt{b}\right)\,. \tag{3.4}\]
Expression (3.3) holds when the following consistency conditions are satisfied :
\[m_{4} =n_{2}-m_{2}\geq 0,\;n_{2}\geq m_{2}\geq 0,\,n_{2},m_{2},n_{4}\in \mathbb{Z}_{+}\,, \tag{3.5}\] \[\mathtt{a} =2(m_{2}-n_{2})=-2m_{4},\;\mathtt{b}=2+2n_{4}+2m_{4}=2+2n_{4}- \mathtt{a}\,. \tag{3.6}\]
**Example 3.1**.: We consider the case of
\[n_{2}=n_{4}=2,\;m_{2}=1\;\to m_{4}=n_{2}-m_{2}=1,\,\alpha_{i}=2(5,-1,-1,-2)\,, \tag{3.7}\]
where we used relation (3.3) to calculate \(\alpha_{i}\) and the consistency condition (3.5). For the corresponding coefficients of the Painleve V equation we find from relation (2.4) for \(\epsilon=1\):
\[\alpha=\frac{1}{2},\;\;\beta=-\frac{25}{2},\;\;\gamma=1\,. \tag{3.8}\]
According to rules of the \(M_{1}\)-sequence we have two degenerated solutions corresponding to the parameters given in equation (3.7):
\[\begin{split}\pi s_{1}\mathbb{T}(n_{2}=2,n_{4}=2;\mathtt{a}=-2) &=\pi s_{1}T_{2}^{-2}T_{4}^{2}\big{|}q=z,\,p=0\rangle_{\alpha_{ \mathtt{a}=-2}}\,,\\ \mathbb{T}(m_{2}=1,m_{4}=1;\mathtt{b}=8)&=T_{2}^{- 1}T_{4}^{1}\big{|}q=z,\,p=0\rangle_{\alpha_{\mathtt{b}=8}}\,.\end{split} \tag{3.9}\]
with \(\mathtt{a}\) and \(\mathtt{b}\) determined from relation (3.6).
We first calculate \(\mathbb{T}(m_{2}=1,m_{4}=1;\mathtt{b}=8)\) from expression (3.9) using the first of relations (2.12) with the parameter \(\mathtt{b}\) followed by action with \(T_{4}\) according to (2.14) to get
\[\begin{split} q&=z\frac{(-\mathtt{b}+z^{2}+2)(z^{4 }-2z^{2}\mathtt{b}+\mathtt{b}^{2}+2\mathtt{b})}{(-\mathtt{b}+z^{2})(-2z^{2} \mathtt{b}+z^{4}+4z^{2}-2\mathtt{b}+\mathtt{b}^{2})}\\ p&=-2z\frac{(-2z^{2}\mathtt{b}+z^{4}+4z^{2}-2 \mathtt{b}+\mathtt{b}^{2})}{(-\mathtt{b}+z^{2}+2)(-2z^{2}\mathtt{b}+z^{4}-2 \mathtt{b}+\mathtt{b}^{2})}\end{split} \tag{3.10}\]
which for \(\mathtt{b}=8\) yields
\[q=\frac{z(z^{2}-6)(z^{4}-16z^{2}+80)}{(z^{2}-8)(z^{4}-12z^{2}+48)},\;\;p=\frac {-2z(z^{4}-12z^{2}+48)}{(z^{2}-6)(z-2)(z+2)(z^{2}-12)}\,, \tag{3.11}\]
with \(\alpha_{i}=(10,-2,-2,-4)=2(5,-1,-1,2)\). To obtain a solution \(y(x)\) of the Painleve V equation we transform \(q\to y=(q/z)(q/z-1)^{-1}\) and substitute \(z\) by \(x=z^{2}/2\) with the result:
\[y(x)=-\frac{(x-3)(x^{2}-8x+20)}{(x-2)(x-6)}\,, \tag{3.12}\]
which agrees with the expression of the Painleve V solution \(w_{1,1}(x;1)\) obtained in Example 4.11 of [3].
Next we calculate \(\pi s_{1}\mathbb{T}(n_{2}=2,n_{4}=2;\mathtt{a}=-2)\) from relation (3.9) acting first with \(T_{2}^{-2}\) on \(q=z,\;p=0\) that according to equation (2.12) for \(n=2\) yields:
\[T_{2}^{-2}:q=z,p=0\to q=z,p=\frac{4z(\mathtt{a}-z^{2})}{z^{4}-2 \mathtt{a}z^{2}+\mathtt{a}(\mathtt{a}+2)},\,(4+\mathtt{a},-4,0,2-\mathtt{a})\,, \tag{3.13}\]
Applying \(T_{4}^{2}\), using expression (2.14), on the configuration in equation (3.13) we get a complicated solution to Painleve equation for \(\alpha_{i}=(4+\mathtt{a},-4,-4,6-\mathtt{a})\). Inserting \(\mathtt{a}=-2\) simplifies \(\alpha_{i}\) to \((2,-4,-4,8)\) and the expressions for \(q,p\) simplify to:
\[\begin{split} q&=z\,\frac{(z^{4}+12\,z^{2}+48) \,(z^{8}+16\,z^{6}+96\,z^{4}+192\,z^{2}+192)}{(z^{8}+24\,z^{6}+216\,z^{4}+768\, z^{2}+1152)\,(8\,z^{2}+24+z^{4})}\,,\\ p&=-4\,\frac{(z^{6}+6\,z^{4}+24\,z^{2}+48)\,(z^{8}+2 4\,z^{6}+216\,z^{4}+768\,z^{2}+1152)}{z\,(z^{6}+12\,z^{4}+72\,z^{2}+192)\,(z^{8} +16\,z^{6}+96\,z^{4}+192\,z^{2}+192)}\,,\end{split} \tag{3.14}\]
Applying then \(\pi s_{1}\) that transforms : \((2,-4,-4,8)\to(10,-2,-2,-4)\) we are being taken from solution (3.14) to:
\[\begin{split} q&=z\,\frac{(8\,z^{2}+24+z^{4})\,(z^{ 6}+18\,z^{4}+144\,z^{2}+480)}{(z^{4}+12\,z^{2}+48)\,(z^{6}+12\,z^{4}+72\,z^{2}+19 2)}\,,\\ p&=z\,\frac{(z^{4}+12\,z^{2}+48)\,(z^{8}+16\,z^{6}+9 6\,z^{4}+192\,z^{2}+192)}{(z^{8}+24\,z^{6}+216\,z^{4}+768\,z^{2}+1152)\,(8\,z^{ 2}+24+z^{4})}\,,\end{split} \tag{3.15}\]
which, as it was the case with expressions (3.11), solves the Painleve V Hamilton equation with \(\alpha_{i}=(10,-2,-2,-4)\).
The corresponding solution \(y(x)=(q/z)(q/z-1)^{-1}\) of the Painleve V equation for coefficients (3.8) reads
\[y=\frac{(x^{2}+4x+6)(x^{3}+9x^{2}+36x+60)}{x^{4}+12x^{3}+54x^{2}+9 6x+72}\,, \tag{3.16}\]
that agrees with expression for \(\hat{w}_{1,2}(x;-1)\) of Example 4.11 of reference [3].
**Example 3.2**.: Next we consider the case of
\[n_{2}=3,\,n_{4}=1,\;m_{2}=2\ \to m_{4}=n_{2}-m_{2}=1,\,\alpha_{i}=2(5,-2,-1,-1)\,, \tag{3.17}\]
For the corresponding coefficients of the Painleve V equation we find from relation (2.4) for \(\epsilon=1\):
\[\alpha=\frac{1}{2},\ \ \beta=-\frac{25}{2},\ \ \gamma=-1\,. \tag{3.18}\]
We notice that the above coefficients differ from the ones in equation (3.8) of Example 3.1 only by the sign of \(\gamma\), which will be of importance below.
Again, according to rules of the \(M_{1}\)-sequence we have two degenerated solutions corresponding to the parameters given in equation (3.17):
\[\begin{split}\pi s_{1}\mathbb{T}(n_{2}=3,n_{4}=1;\mathsf{a}=-2)& =\pi s_{1}T_{2}^{-3}T_{4}^{1}\big{|}q=z,\,p=0)_{\alpha_{\mathsf{a} =-2}}\,,\\ \mathbb{T}(m_{2}=2,m_{4}=1;\mathsf{b}=6)&=T_{2}^{- 2}T_{4}^{1}\big{|}q=z,\,p=0)_{\alpha_{\mathsf{b}=6}}\,.\end{split} \tag{3.19}\]
with \(\mathsf{a}=-2m_{4}=-2,\mathsf{b}=2+2n_{4}+2m_{4}=6\).
We first use expression (2.12) that gives for \(n=3\) :
\[T_{2}^{-3}:q=z,p=0\to q=z,p=\frac{6zR_{2}(z;\mathsf{a})}{R_{3}(z;\mathsf{a})}, \,\alpha_{i}=(6+\mathsf{a},-6,0,2-\mathsf{a})\,, \tag{3.20}\]
where \(R_{2}(z;\mathsf{a})=2z^{2}+(2+\mathsf{a}-z^{2})(\mathsf{a}-z^{2})\) and
\[R_{3}(z;\mathsf{a})=4z^{2}R_{1}(z;\mathsf{a})+(4+\mathsf{a}-z^{2})R_{2}(z; \mathsf{a})\,,\]
as follows from the recurrence relation (2.13). Using the transformation rule (2.14) and applying \(\pi s_{1}\) and setting \(\mathsf{a}=-2\) so that \(\alpha_{i}=(10,-4,-2,-2)\) we obtain for the first of equations (3.19)
\[\pi s_{1}\mathbb{T}(n_{2}=3,n_{4}=1;\mathsf{a}=-2)=(q =\frac{z(z^{6}+22z^{4}+176z^{2}+480)}{(z^{2}+8)(z^{4}+48+12z^{2})},\] \[p =\frac{z(z^{2}+8)(z^{4}+12z^{2}+24)}{(6+z^{2})(z^{2}+4)(z^{2}+12)}\,,\]
which gives for \(y=(q/z)/(q/z-1)\):
\[y=\frac{(x+3)(x^{2}+8x+20)}{(x+6)(x+2)} \tag{3.21}\]
Note that going from Example 3.1 to Example 3.2 (\(\alpha_{i}=2(5,-1,-1,-2)\to\alpha_{i}=2(5,-2,-1,-1)\)) only amounts to flipping sign of \(\gamma\): \(\gamma\to-\gamma\) in the Painleve V equation. However the transformation \(\gamma\to-\gamma\) amounts to \(x\to-x\). Thus we go from the solution (3.12) of Painleve V equation to the solution (3.21) only by flipping the sign of \(x\) as it is easily verified by inspection.
Using (3.20) and the transformation rule (2.14) we get
\[\mathbb{T}(m_{2}=2,m_{4}=1;\mathsf{b}=6) =(q=\frac{z(z^{4}-8z^{2}+24)(z^{6}-18z^{4}+144z^{2}-480)}{(z^{4}-1 2z^{2}+48)(72z^{2}-12z^{4}+z^{6}-192)},\] \[p =\frac{-4z(z^{4}-12z^{2}+24)(72z^{2}-12z^{4}+z^{6}-192)}{(z^{4}-8z ^{2}+24)(-24z^{6}+216z^{4}-768z^{2}+1152+z^{8})}\,,\]
which results in \(y=(q/z)/(q/z-1)\) equal to
\[y=-\frac{(x^{2}-4x+6)(x^{3}-9x^{2}+36x-60)}{(-12x^{3}+x^{4}+54x^{2}-96x+72)}\,, \tag{3.22}\]
which also follows from equation (3.16) by flipping the sign of \(x\).
We will now discuss other choices for the transformation \(M\) and compare them to results obtained by acting with Backlund transformations \(\pi,s_{3},s_{4}\) on \(\alpha_{n;\mathsf{a}}\) from equation (3.3). We will find for \(\pi,s_{4}\) that the resulting parameters will agree with those obtained from relations (1.3) with \(M_{4}=\pi^{-1}s_{4},M_{12}=s_{1}s_{2}\), respectively, each with two degenerated solutions entering inequality (1.2). The case of \(M_{34}=s_{3}s_{4}\) will be shown to be equivalent to \(M_{1}\) although it differs from the sequence obtained by acting with \(s_{3}\).
To trace more easily the effect of these transformations we rename the integers \(n_{i}\to x_{i}\), \(m_{i}\to y_{i}\) for \(i=1,2\) to obtain from expression (3.3), \(2(1+n_{2}+n_{4},-m_{2},m_{2}-n_{2},-n_{4})\), an expression
\[\pi s_{1}\left(\alpha_{n;\mathsf{a}}\right)=\alpha_{m;\mathsf{b}}=2(1+x_{2}+x_ {4},-y_{2},y_{2}-x_{2},-x_{4})\,, \tag{3.23}\]
with the consistency condition \(x_{2}\geq y_{2}\).
Applying \(\pi^{-1},s_{3},s_{4}\) on the above relation we get the following expressions for the Backlund transforms \(\alpha_{i}\) parameters:
\[\pi^{-1} :2(-y_{2},y_{2}-x_{2},-x_{4},1+x_{2}+x_{4})\,, \tag{3.24}\] \[s_{3} :2(1+x_{2}+x_{4},-x_{2},x_{2}-y_{2},y_{2}-x_{2}-x_{4})\,,\] (3.25) \[s_{4} :2(1+x_{2},-y_{2},y_{2}-x_{2}-x_{4},x_{4})\,. \tag{3.26}\]
Next, we review these expressions in the order they appeared above in equations (3.24)-(3.26) and associate a new Backlund transformations \(M_{i}\) to each of the three cases. We will be interested in whether the consistency conditions that will hold for each of the \(M_{i}\) sequences will be fully derivable from the consistency condition (3.5) by action of the Backlund transformations \(\pi^{-1},s_{3},s_{4}\) used in the above relations. If the consistency relations are mapped into each other together with the parameters then we will conclude that the two sequences are fully equivalent and the mapping did not generate a new degeneracy.
### Case of expression (3.24) with \(M_{4}=\pi^{-1}s_{4}\)
Perform the following change of variables on variables of equation (3.24):
\[y_{2}\to n_{2},\;x_{4}\to m_{4},\;x_{2}\to n_{2}+n_{4}-m_{4} \tag{3.27}\]
with the condition \(x_{2}\geq y_{2}\) transforming into \(n_{2}+n_{4}-m_{4}>n_{2}\) or \(n_{4}\geq m_{4}\). The condition \(y_{4}=x_{2}-y_{2}\) of \(M_{1}\)-sequence is set to consistently transform to \(m_{2}=n_{4}-m_{4}\). This way we obtain :
\[\alpha=2(-n_{2},m_{4}-n_{4},-m_{4},1+n_{2}+n_{4}),\;\;\;\;\;n_{4}\geq m_{4} \geq 0,\;\;n_{2},m_{4}\in\mathbb{Z}_{+}\,, \tag{3.28}\]
which is associated with \(M_{4}=\pi^{-1}s_{4}\) and
\[\pi^{-1}s_{4}T_{2}^{-n_{2}}T_{4}^{n_{4}}|q=z,p=0\rangle_{\alpha_{ \mathfrak{a}}}\neq T_{2}^{-m_{2}}T_{4}^{m_{4}}(q=z,p=0)_{\mathfrak{b}}\,, \tag{3.29}\]
with
\[\mathfrak{a}=2(1+n_{4}-m_{4})=2+2m_{2},\ \mathfrak{b}=2(-m_{2}-n_{2})=2(1-n_{2} )-\mathfrak{a},\ \ m_{2}=n_{4}-m_{4}\,.\]
We see that the model described by \(M_{1}=\pi s_{1}\) with its condition \(n_{2}\geq m_{2}\) is being mapped into a model described by \(M_{4}=\pi^{-1}s_{4}\) with \(n_{4}\geq m_{4}\) with only difference that negative \(\mathfrak{a}\)/positive \(\mathfrak{b}\) transforms into positive \(\mathfrak{a}\)/negative \(\mathfrak{b}\). Thus with consistency conditions being mapped into each other the two sequences are fully equivalent. This will be illustrated in the following example.
**Example 3.3**.: Let us choose
\[m_{4}=0,\ n_{4}=1,\ n_{2}=1,\ \to\,\mathfrak{a}=4,\ \mathfrak{b}=-4,\ m_{2}=n_{ 4}-m_{4}=1\,.\]
The corresponding solutions are :
\[\pi^{-1}s_{4}T_{4}^{1}T_{2}^{-1}\big{|}q=z,\,p=0\big{\rangle}_{ \alpha_{\mathfrak{a}\mapsto\mathfrak{a}}}\neq T_{2}^{-1}T_{4}^{0}\big{|}q=z,\, p=0\big{\rangle}_{\alpha_{\mathfrak{b}\mapsto-4}} \tag{3.30}\]
with \(\alpha_{i}=(-2,-2,0,6)\) holding for both sides.
We find for the left hand side of inequality (3.30):
\[q=-\frac{2z(-4z^{2}+z^{4}+8)}{(-2+z^{2})(-8z^{2}+z^{4}+8)},\quad p =\frac{2z(-8z^{2}+z^{4}+8)}{(z^{2}-4)(-4z^{2}+z^{4}+8)}\,,\]
while on the right hand side of (3.30) we find:
\[q=z,\ p=\frac{2z}{-4-z^{2}}\,,\]
and indeed both solutions satisfy the Painleve V Hamilton equations (2.2) with \(\alpha_{i}=2(-1,-1,0,3)\).
Corresponding to the above parameters we find by inverting relations (3.27) that \(x_{2}=2>y_{2}=1\) and \(x_{4}=0\). Further, since the condition \(m_{2}=n_{4}-m_{4}\) transforms into \(y_{4}=x_{2}-y_{2}\) we get \(y_{4}=1\) for the \(M_{1}=\pi s_{1}\) sequence It follows that the corresponding parameter found from expression (1.3) is \(\alpha_{i}=2(3,-1,-1,0)\). Next we find that the corresponding solutions of (1.2) for \(M_{1}=\pi s_{1}\) sequence are
\[\pi s_{1}\mathbb{T}\left(x_{2}=2,x_{4}=0;\mathfrak{a}=-2\right) =\pi s_{1}T_{2}^{-2}|q=z,p=0\rangle_{\alpha_{\mathfrak{a}\mapsto -2}}\] \[=|(q=\frac{z^{6}+6z^{4}}{z(z^{4}+4z^{2})},\quad p=z\rangle_{(6,-2,-2,0)}\,,\]
versus
\[\mathbb{T}\left(y_{2}=1,y_{4}=1;\mathfrak{b}=4\right)=T_{2}^{-1} T_{4}|q=z,p=0\rangle_{\alpha_{\mathfrak{b}\mapsto\mathfrak{a}}}\] \[=|q=\frac{z\left(z^{2}-2\right)\left(z^{4}-8z^{2}+24\right)}{(z^ {2}-4)\left(z^{4}-4z^{2}+8\right)},p=-\frac{2z\left(z^{4}-4z^{2}+8\right)}{(z ^{2}-2)\left(z^{4}-8z^{2}+8\right)}\rangle_{(6,-2,-2,0)}\,,\]
with both solutions of the Painleve V equations (2.2) sharing the same parameters
\[\alpha_{i}=(6,-2,-2,0). \tag{3.31}\]
Thus, as announced, we have been able to map two solutions of \(M_{1}\) and \(M_{4}\) sequences into each other.
### Case of expression (3.25), \(s_{3}(M_{1})\) versus \(M_{34}=s_{3}s_{4}\)
Here we consider \(s_{3}(\alpha_{i})\) given in the equation (3.25) and we will show that although it agrees with the parameters \(\alpha_{i}\) given in formula (1.3) when derived from expression (1.2) with \(M_{34}=s_{3}s_{4}\) the consistency conditions will not match. To study \(M_{34}=s_{3}s_{4}\) we consider the inequality
\[s_{3}s_{4}T_{2}^{-n_{2}}T_{4}^{n_{4}}\left|q=z,p=0\right>_{\alpha_{\sf a}}\neq T _{2}^{-m_{2}}T_{4}^{m_{4}}\left|q=z,p=0\right>_{\alpha_{\sf b}}.\]
For parameters of solutions on both sides of this inequality to be equal we need to have
\[\begin{split}& s_{3}s_{4}T_{2}^{-n_{2}}T_{4}^{n_{4}}({\sf a},0,0,2-{ \sf a})=s_{3}s_{4}({\sf a}+2n_{2},-2n_{2},-2n_{4},2-{\sf a}+2n_{4})\\ &=(2+2n_{2}+2n_{4},2-{\sf a}-2n_{2};{\sf a}-2,-2n_{4})\\ &=T_{2}^{-m_{2}}T_{4}^{m_{4}}(b,0,0,2-b)=({\sf b}+2m_{2},-2m_{2}, -2m_{4},2-{\sf b}+2m_{4})\,.\end{split} \tag{3.32}\]
Solving for \({\sf a}\) and \({\sf b}\) yields
\[{\sf a}=2-2m_{4}=2+2m_{2}-2n_{2},\ \ {\sf b}=2+2m_{4}+2n_{4}=4+2n_{2}-{\sf a}>0\,, \tag{3.33}\]
with the consistency relation
\[m_{4}=n_{2}-m_{2}\,. \tag{3.34}\]
required for the above equations to hold.
We notice that this consistency relation ensures that \({\sf b}\) is always positive.
Inserting the values of \({\sf a}\) and \({\sf b}\) back into the relation (3.32) we obtain:
\[\alpha=2(1+n_{2}+n_{4},-m_{2},m_{2}-n_{2},-n_{4})\,, \tag{3.35}\]
in full agreement with equation (3.25) reproduced below:
\[s_{3}(\alpha_{i})=2(1+x_{2}+x_{4},-x_{2},x_{2}-y_{2},y_{2}-x_{2}-x_{4})\,,\]
when we identify \(x_{2}=m_{2}\), \(y_{2}=n_{2}\), \(x_{4}=n_{2}+n_{4}-m_{2}\). Note however that since \(n_{2}-m_{2}\geq 0\) it follows that (3.34) reads in terms of these variables as: \(y_{2}-x_{2}\geq 0\), which is just opposite to the original condition \(x_{2}-y_{2}\geq 0\) of the \(M_{1}\)-sequence seen below (3.25). Thus this time the consistency relations did not get mapped into each other.
Does this result mean that the \(M_{34}\)-sequence is independent of the \(M_{1}\)-sequence because \(s_{3}\) failed to connect those two cases? It turns out that \(M_{34}\)-sequence is fully equivalent to \(M_{1}\)-sequence because of relation \(s_{3}s_{4}=\pi s_{1}T_{1}^{-1}\), which is a special case of relations (3.2). It follows from this relation that
\[\begin{split} s_{3}s_{4}T_{2}^{-n_{2}}T_{4}^{n_{4}}\left|q=z,p=0 \right>_{\alpha_{\sf a=2-2m_{4}}}&=\pi s_{1}T_{1}^{-1}T_{2}^{-n_ {2}}T_{4}^{n_{4}}\left|q=z,p=0\right>_{\alpha_{\sf a=2-2m_{4}}}\\ &=\pi s_{1}T_{2}^{-n_{2}}T_{4}^{n_{4}}\left|q=z,p=0\right>_{ \alpha_{\sf a=-2m_{4}}},\end{split} \tag{3.36}\]
where we inserted the value of \({\sf a}\) from relation (3.33) and used relation (2.10). The above expression is equal to the one given in equation (3.4) then one takes into account the value of the parameter \({\sf a}\) given in (3.6). Thus the \(M_{34}\)-sequence is fully equivalent to the \(M_{1}\)-sequence.
It is still warranted to consider the sequence generated by action of \(s_{3}\) on the \(M_{1}\)-sequence. The following observation is crucial. Consider \(\alpha_{i}=(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})\) entering expressions for the parameters \(\alpha=\alpha_{3}^{2}/8\), \(\beta=-\alpha_{1}^{2}/8\) and \(\gamma=(\alpha_{2}-\alpha_{4})/2\) of the Painleve V equation (2.3). The Backlund transformation \(s_{3}\) transforms \(\alpha_{i}\) into \((\alpha_{1},\alpha_{2}+\alpha_{3},-\alpha_{3},\alpha_{4}+\alpha_{3})\) maintaining the parameters \(\alpha,\beta,\gamma\) of the Painleve V equation (2.3) clearly invariant. Note that the remaining Backlund transformations \(s_{1},s_{2},s_{4}\) will all change the parameters \(\alpha,\beta,\gamma\). However the \(s_{3}\) transforms \(q,p\) as follows
\[s_{3}:q\to q,\ p\to p-\frac{\alpha_{3}}{z-q},\]
and accordingly will leave the solution \(y\) of the Painleve V equation (2.3) invariant. To illustrate these considerations we will act with \(s_{3}\) on configurations given in example 3.1.
**Example 3.4**.: As an example we consider acting with \(s_{3}\) on (3.15), which transforms parameters as follows: \(2(5,-1,-1,-2)\to 2(5,-2,1,-3)\) Accordingly, we deal with the case of
\[n_{2}=1,\,n_{4}=3,\ m_{2}=2\ \to m_{4}=n_{2}-m_{2}=-1,\,\alpha_{i}=(5,-2,1,-3). \tag{3.37}\]
We note that now \(m_{4}=n_{2}-m_{2}\) is negative, however the corresponding coefficients of the Painleve V equation, for \(\epsilon=1\), are the ones in (3.8) as seen in Example 3.1. Acting with \(s_{3}\) on solution (3.11) we get
\[q=\frac{z(z^{2}-6)(z^{4}-16z^{2}+80)}{(z^{2}-8)(z^{4}-12z^{2}+48)},\ \ p=\frac{(z^{4}-12z^{2}+48)}{z(z^{2}-6)}\,. \tag{3.38}\]
while acting with \(s_{3}\) on (3.15), we get
\[\begin{split}& q=z\,\frac{(8\,z^{2}+24+z^{4})\,(z^{6}+18\,z^{4}+14 \,z^{2}+480)}{(z^{4}+12\,z^{2}+48)\,(z^{6}+12\,z^{4}+72\,z^{2}+192)}\,,\\ & p=-4\,\frac{(z^{4}+12\,z^{2}+48)}{(z\,(8\,z^{2}+24+z^{4})}\,. \end{split} \tag{3.39}\]
Solutions (3.38) and (3.39) satisfy the Painleve V Hamilton equations (2.2) with \(\alpha_{i}=(10,-4,2,-6)\) that differ from solutions in Example 3.1, which satisfy the the Painleve V Hamilton equations with the \(\alpha_{i}=2(5,-1,-1,-2)\). However they give raise to the identical solutions \(y(x)\) as obtained in Example 3.1 for the Painleve V equation (2.3) with the coefficients (3.8).
### Case of expression (3.26) with \(M_{12}=s_{1}s_{2}\)
In this case we consider \(s_{4}(\alpha)\) from equation (3.26) and compare with an expression for the \(\alpha\) that we obtain from (1.3) for \(M=M_{12}\):
\[\begin{split}\alpha&=T_{2}^{-m_{2}}T_{4}^{m_{4}}( \mathsf{b},0,0,2-\mathsf{b})=T_{2}^{-m_{2}}T_{4}^{m_{4}}\ (\mathsf{b},0,0,2-\mathsf{b})\\ &=(\mathsf{b}+2m_{2},-2m_{2},-2m_{4},2-\mathsf{b}+2m_{4})\\ &=s_{1}s_{2}T_{2}^{-n_{2}}T_{4}^{n_{4}}(\mathsf{a},0,0,2- \mathsf{a})=(-\mathsf{a},\mathsf{a}+2n_{2},-2n_{2}-2m_{2},2+2n_{4})\.\end{split} \tag{3.40}\]
The consistency requires this time that:
\[m_{4}=n_{2}+n_{4}\,, \tag{3.41}\]
which leads to the following expressions:
\[\mathsf{a}=-2n_{2}-2m_{2},\ \mathsf{b}=2n_{2}=-2m_{2}-\mathsf{a}\,.\]
Plugging these values back into equation (3.40) we obtain an expression for \(\alpha\):
\[\alpha=2(n_{2}+m_{2},-m_{2},-n_{2}-n_{4},1+n_{4})\ \ \ \ m_{2},n_{4}\in\mathbb{Z}_{+}, \tag{3.42}\]
that also follows from inequality (1.2) with \(M_{12}=s_{1}s_{2}\):
\[s_{1}s_{2}T_{2}^{-n_{2}}T_{4}^{n_{4}}\Big{|}q=z,p=0\Big{\rangle}_{\alpha_{2}} \neq T_{2}^{-m_{2}}T_{4}^{m_{4}}\Big{|}q=z,p=0\Big{\rangle}_{\alpha_{\mathsf{b }}},\ m_{4}=n_{2}+n_{4}\,. \tag{3.43}\]
Expression (3.40) agrees with the result of (3.26) for :
\[m_{2}=y_{2},\ n_{4}=x_{4}-1,\ n_{2}=x_{2}-y_{2}+1\]
Thus the coefficients \(x_{2},x_{4},y_{2}\) need to satisfy inequalities \(x_{4}\geq 1,x_{2}\geq y_{2}\), which are consistent with conditions (3.6). Note that \(x_{2}+1>y_{2}\) always holds since \(x_{2}\geq y_{2}\) and accordingly \(n_{2}>0\).
We see that both sequences will map into each other when \(x_{4}\) variable of the \(M_{1}\) sequence takes values \(x_{4}=1,2,\ldots\) and correspondingly the \(n_{2}\) variable of the \(M_{12}\) sequence takes values \(n_{2}=1,2,\ldots\).
Discussion
We have examined the cases of two-fold degeneracy of the Painleve V rational solutions connected with the Backlund transformations \(M_{1}=\pi s_{1},M_{4}=\pi^{-1}s_{4},M_{34}=s_{3}s_{4},M_{12}=s_{1}s_{2}\) that enter the basic inequality (1.2) that relates the two degenerated solutions with the equal parameter (1.3) and showed that all four sequences of degenerated solutions are fully equivalent by employing Backlund transformations \(\pi^{-1}\) and \(s_{4}\) to show equivalence of \(M_{1}\)-sequence with those of \(M_{4}=\pi^{-1}s_{4},M_{12}=s_{1}s_{2}\) and relation \(s_{3}s_{4}=\pi s_{1}T_{1}^{-1}\) for equivalence between \(M_{1}=\pi s_{1}\) and \(M_{34}=s_{3}s_{4}\).
In number of Examples 3.1, 3.2 and 3.4 we have considered solutions with the Painleve V coefficients:
\[\alpha=\frac{1}{2},\ \ \beta=-\frac{25}{2},\ \ \gamma=\pm 1\,. \tag{4.1}\]
Let us now summarize the results of these considerations in the setting of \(M_{1}\)-sequence.
Recalling the expression (2.4) for the Painleve V equation coefficients with \(\epsilon=1\) and inserting the relevant components of \(\alpha_{i}\) (3.3) into these expressions we find that in order to match them with the expression (4.1) we need to solve the following three equations
\[(1+n_{2}+n_{4})^{2}=25,\ (m_{2}-n_{2})^{2}=1,\ (n_{4}-m_{2})^{2}=1\,, \tag{4.2}\]
for the three variables \(n_{2},n_{4},m_{2}\) that all need to be positive integers.
Equations (4.2) have 8 solutions in total but only half of them with positive integers \(n_{2},n_{4},m_{2}\in\mathbb{Z}_{+}\). We list these 4 relevant solutions below:
* \(n_{2}=n_{4}=2,\ \ m_{2}=1\ \ \to\ m_{4}=n_{2}-m_{2}=1,\ \gamma=1,\ \alpha_{i}=2(5,-1,-1,-2)\).
* \(n_{2}=3,\,n_{4}=1,\ \ m_{2}=2\ \ \to m_{4}=n_{2}-m_{2}=1,\ \gamma=-1,\ \alpha_{i}=2(5,-2,-1,-1)\).
* \(n_{2}=1,\,n_{4}=3,\ \ m_{2}=2\ \ \to m_{4}=n_{2}-m_{2}=-1,\ \gamma=1,\ \alpha_{i}=2(5,-2,1,-3)\).
* \(n_{2}=2,\,n_{4}=2,\ \ m_{2}=3\ \ \to m_{4}=n_{2}-m_{2}=-1,\ \gamma=-1,\ \alpha_{i}=2(5,-3,1,-2)\).
Items \(A)\) and \(B)\) have been discussed in Examples 3.1 and 3.2, where we noticed that they satisfy the condition \(n_{2}\geq m_{2}\) (or \(m_{4}\geq 0\)) and are therefore a part of the \(M_{1}\)-sequence.
We have seen that on the level of Painleve V equation (2.3) the transformation of solutions obtained inside the \(M_{1}\)-sequence with the parameters listed in case A) to solutions of case B) was fully accomplished by flipping \(\gamma\to-\gamma\) or equivalently flipping \(x\to-x\). On the level of the Hamilton Painleve V equations the corresponding \(q,p\) solutions solve the equations (2.2) with different \(\alpha_{i}\) given above in A) and B). Recall that in [1] we have introduced \(x\) as \(x=z^{2}/(2\epsilon)\) with \(\epsilon^{2}=1\). Thus here we are exercising the freedom of changing a sign of \(\epsilon\) that changes a sign of \(\gamma\) (see again [1]).
The cases C) and D) are mapped from A) and B) by action of \(s_{3}\):
\[C)=s_{3}(A)),\ \ \ D)=s_{3}(B)),\]
as can be verified by inspecting the parameters \(\alpha_{i}\). We have seen the case C) being discussed in Example 3.4. Each of these two cases exhibits therefore the two-fold degeneracy of the Hamilton Painleve V equations with solutions that are an \(s_{3}\) image of the corresponding solutions of \(M_{1}\)-sequence with parameters of case A) and B). Since \(s_{3}\) keeps both the coefficients and the solution of the Painleve V equation (2.3) invariant, we conclude that the Painleve V solutions associated to cases C) and D) are fully equal to those already found in cases A) and B).
In all examples we have seen a and b are even integers and having (to some degree) an opposite sign. For the \(M_{1}\)-sequence \(\mathtt{a}\leq 0\) and \(\mathtt{b}\geq 2\) and such that \(\mathtt{a}/2+\mathtt{b}/2=1,2,\ldots\). For the \(M_{12}\) sequence \(\mathtt{a}\leq 0\) and \(\mathtt{b}\geq 0\) and \(\mathtt{a}/2+\mathtt{b}/2=0,-1,-2,\ldots\). For the \(M_{4}\) sequence \(\mathtt{a}\geq 0\) and \(\mathtt{b}\leq 0\) such that \(\mathtt{a}/2+\mathtt{b}/2=0,-1,-2,\ldots\). For the \(M_{34}\)-sequence it holds that \(\mathtt{a}\leq 2\) and \(\mathtt{b}\geq 2\) and \(\mathtt{a}/2+\mathtt{b}/2=2,3,\ldots\) as expected since the \(M_{34}\)-sequence is equivalent to the \(M_{1}\)-sequence only with a shifted by 2.
As we have noted in section 2 the value of the parameter \(\mathsf{a}\) can be shifted by an even integer \(2n\) through the action of \(T_{1}^{n}\). For degenerated solutions one can use this freedom to set, for example, the parameter \(\mathsf{a}\) to zero since it is an even integer. However the same operation will raise or lower the value of the connected parameter \(\mathsf{b}\) and therefore maintain invariant the value of their sum.
**Example 4.1**.: As an example consider \(\mathsf{a}\) and \(\mathsf{b}\) such that \(\mathsf{a}=-2n\) and \(\mathsf{b}=2n+2k\) for \(n\in\mathbb{Z}\) and \(k=1,2,3,\ldots\). Comparing with the paragraph above we see that this case fits into the \(M_{1}\)-sequence of degenerated solutions. Comparing with the expressions (3.5) and (3.6) we find that \(n_{4}=k-1\) and \(m_{4}=n\). We conclude that for any fixed integers \(n\geq 0\) and \(k>0\) we find a pair of solutions belonging to \(M_{1}\)-sequence:
\[\pi s_{1}\mathbb{T}(n_{2},k-1;\mathsf{a}=-2n)\quad\text{and}\quad\mathbb{T}(n_ {2}-n,n;\mathsf{b}=2n+2k)\,,\]
that satisfy the Painleve V equations with the same parameters
\[\alpha_{i}=2(1+n_{2}+n_{4},-m_{2},m_{2}-n_{2},-n_{4})=2(n_{2}+k,n-n_{2},-n,1-k )\,, \tag{4.3}\]
valid for any integer \(n_{2}\) such that \(n_{2}\geq n\).
Comparing \(\alpha(m;\mathsf{b})=(\mathsf{b}+2m_{2},\,-2m_{2},\,-2m_{4},\,2-\mathsf{b}+2m _{4})\) from expression (2.11). we recognize that it agrees with expression for the parameter (4.3) for \(\mathsf{b}=2(k+n)\) and \(m_{2}=n_{2}-n\geq 0\), \(n=m_{4}\).
In summary, we have developed an explicit construction that applies to the two-fold degeneracy of Painleve V Hamilton equations and determines the two degenerated solutions and the parameters of Painleve V equations that they share. We also found a condition for a solution \(\mathbb{T}(m;\mathsf{b})\) on the orbit of \(T_{2}^{-m_{2}}T_{4}^{m_{4}}\) to agree with one of the two degenerated solutions and the condition is that the parameter \(\mathsf{b}\) is an even integer ( a positive integer for the \(M_{1}\)-sequence and a negative for the \(M_{4}\)-sequence).
Recall that the Painleve V Hamilton system is closely related to the dressing chain of even, \(N=4\), periodicity, see [1] and references therein. Our discussion based on translation operators indicates that degeneracy will exist for all dressing chains of even periodicity because of existence of exclusion rules for translation operators permitted to act on special types of seed solutions. Especially, it will occur for \(N=6\) periodic dressing chain discussed in [1]. A natural problem to investigate is whether a degree of degeneracy (how many solutions will share the parameter \(\alpha_{i}\)) will change in case of higher dressing chains of even period \(N>4\).
### Acknowledgments
This study was financed in part by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001 (G.V.L.) and by CNPq and FAPESP (J.F.G. and A.H.Z.).
|
2304.12300 | Large-capacity and Flexible Video Steganography via Invertible Neural
Network | Video steganography is the art of unobtrusively concealing secret data in a
cover video and then recovering the secret data through a decoding protocol at
the receiver end. Although several attempts have been made, most of them are
limited to low-capacity and fixed steganography. To rectify these weaknesses,
we propose a Large-capacity and Flexible Video Steganography Network (LF-VSN)
in this paper. For large-capacity, we present a reversible pipeline to perform
multiple videos hiding and recovering through a single invertible neural
network (INN). Our method can hide/recover 7 secret videos in/from 1 cover
video with promising performance. For flexibility, we propose a
key-controllable scheme, enabling different receivers to recover particular
secret videos from the same cover video through specific keys. Moreover, we
further improve the flexibility by proposing a scalable strategy in multiple
videos hiding, which can hide variable numbers of secret videos in a cover
video with a single model and a single training session. Extensive experiments
demonstrate that with the significant improvement of the video steganography
performance, our proposed LF-VSN has high security, large hiding capacity, and
flexibility. The source code is available at https://github.com/MC-E/LF-VSN. | Chong Mou, Youmin Xu, Jiechong Song, Chen Zhao, Bernard Ghanem, Jian Zhang | 2023-04-24T17:51:35Z | http://arxiv.org/abs/2304.12300v1 | # Large-capacity and Flexible Video Steganography via Invertible Neural Network
###### Abstract
Video steganography is the art of unobtrusively concealing secret data in a cover video and then recovering the secret data through a decoding protocol at the receiver end. Although several attempts have been made, most of them are limited to low-capacity and fixed steganography. To rectify these weaknesses, we propose a **L**arge-capacity and **F**lexible **V**ideo **Steganography **N**etwork (LF-VSN) in this paper. For large-capacity, we present a reversible pipeline to perform multiple videos hiding and recovering through a single invertible neural network (INN). Our method can **hide/recover 7 secret videos infrom 1 cover video** with promising performance. For flexibility, we propose a key-controllable scheme, enabling different receivers to recover particular secret videos from the same cover video through specific keys. Moreover, we further improve the flexibility by proposing a scalable strategy in multiple videos hiding, which can hide variable numbers of secret videos in a cover video with a single model and a single training session. Extensive experiments demonstrate that with the significant improvement of the video steganography performance, our proposed LF-VSN has high security, large hiding capacity, and flexibility. The source code is available at [https://github.com/MC-E/LF-VSN](https://github.com/MC-E/LF-VSN).
## 1 Introduction
Steganography [10] is the technology of hiding some secret data into an inconspicuous cover medium to generate a stego output, which only allows the authorized receiver to recover the secret information. Unauthorized people can only access the content of the plain cover medium, and hard to detect the existence of secret data. In the current digital world, image and video are commonly used covers, widely applied in digital communication [27], copyright protection [36], information certification [31], e-commerce [26], and many other practical fields [12, 10].
Traditional video steganography methods usually hide messages in the spatial domain or transform domain by manual design. Video steganography in the spatial domain means embedding is done directly to the pixel values of video frames. Least significant bits (LSB) [8, 45] is the most well-known spatial-domain method, replacing the \(n\) least significant bits of the cover image with the most significant \(n\) bits of the secret data. Many researchers have used LSB replacement [6] and LSB matching [34] for video steganography. The transform-domain hiding [17, 5, 39] is done by modifying certain frequency coefficients of the transformed frames. For instance, [44] proposed a video steganography technique by manipulating the quantized coefficients of DCT (Discrete Cosine Transformation). [9] proposed to compare the DWT (Discrete Wavelet Transformation) coefficients of the secret image and the cover video for hiding. However, these traditional methods have low hiding capacity and invisibility, easily being cracked by steganalysis methods [15, 28, 33].
Recently, some deep-learning methods were proposed to improve the hiding capacity and performance. Early works are presented in image steganography. Baluja [3, 4] proposed the first deep-learning method to hide a full-size image into another image. Recently, [21, 32] proposed designing the steganography model as an invertible neural network (INN) [13, 14] to perform image hiding and recovering with a single model. For video steganography, Khare et al. [22] first utilized back propagation neural networks to improve the performance of the LSB-based scheme. [43] is the first deep-learning method to hide a video into another video. Unfortunately, it simply aims to hide the residual across adjacent frames in a frame-by-frame manner, and it requires several separate steps to complete the video hiding and re
covering. [35] utilize 3D-CNN to explore the temporal correlation in video hiding. However, it utilizes two separated 3D UNet to perform hiding and recovering, and it has high model complexity (\(367.2\) million parameters). While video steganography has achieved impressive success in terms of hiding capacity to hide a full-size video, the more challenging multiple videos hiding has hardly been studied. Also, the steganography pipeline is rigid.
In this paper, we study the large-capacity and flexible video steganography, as shown in Fig. 1. Concretely, we propose a reversible video steganography pipeline, achieving large capacity to hide/recover multiple secret videos in/from a cover video. At the same time, our model complexity is also attractive by combining several weight-sharing designs. The flexibility of our method is twofold. First, we propose a key-controllable scheme, enabling different receivers to recover particular secret videos with specific keys. Second, we propose a scalable strategy, which can hide variable numbers of secret videos into a cover video with a single model and a single training session. To summarize, this work has the following contributions:
* We propose a large-capacity video steganography method, which can hide/recover multiple (**up to 7**) secret videos in/from a cover video. Our hiding and recovering are fully reversible via a single INN.
* We propose a key-controllable scheme with which different receivers can recover particular secret videos from the same cover video via specific keys.
* We propose a scalable embedding module, utilizing a single model and a single training session to satisfy different requirements for the number of secret videos hidden in a cover video.
* Extensive experiments demonstrate that our proposed method achieves state-of-the-art performance with large hiding capacity and flexibility.
## 2 Related Work
### Video Steganography
Steganography can date back to the 15th century, whose goal is to encode a secret message in some transport mediums and covertly communicate with a potential receiver who knows the decoding protocol to recover the secret message. Since the human visual system is less sensitive to small changes in digital media, especially digital videos. Video steganography is becoming an important research area in various data-hiding technologies [10].
Traditional video steganography methods usually performed hiding and recovering in the spatial domain, _e.g._, Pixel Value Differencing (PVD) [20, 40] and Least Significant Bits (LSB) [6, 34, 41]. PVD embeds the secret data in the difference value of two adjacent pixels. In [40], a PVD-based video steganography system is proposed to embed the secret data in a compressed domain of the cover medium. [20] utilized enhanced pixel-value differencing (EPVD) to improve the video steganography performance. LSB methods work by replacing the \(n\) least significant bits of the cover data with the most significant \(n\) bits of the secret information. [41] utilized LSB replacement technique to hide secret text in grayscale video frames. To enhance the security in LSB-based methods, [2] shuffled the secret data and embedded the index of correct order into the cover video. In addition to spatial-domain methods, some transformed domain methods [9, 44] were proposed to perform hiding by modifying certain frequency coefficients of the transformed cover video. For instance, [44] proposed a video steganography technique by manipulating the quantized coefficients
Figure 1: Illustration of our large-capacity and flexible video steganography network (LF-VSN). Our LF-VSN reversibly solves multiple videos hiding and recovering with a single model and the same parameters. It has large-capacity, key-controllable and scalable advantages.
of DCT transformation. [9] proposed to compare the DWT transformation coefficients of the secret image and the cover video for hiding. Nevertheless, the above traditional methods have low hiding capacity and invisibility, easily producing artificial markings and being cracked by steganalysis methods [15, 28, 33].
Motivated by the success of deep learning, some deep-learning methods were proposed. [16] introduced GAN to steganography, showing that the adversarial training scheme can improve hiding security. [49] improve the hiding quality by utilizing two independent adversarial networks to critique the video quality and optimize for robustness. [25] studied the lossless steganography below 3 bits per pixel (bpp) hiding. [38] embedded the secret data in the wavelet transform coefficients of the video frames. The above methods focus more on the robustness of low-capacity hiding. One of the important applications of low-capacity steganography is watermarking [1, 42, 52], in which the secret bit string represents the sign of the owner. Some deep-learning methods were proposed for large-capacity hiding. [3, 4] first explored hiding a full-size image into another image. [21, 32] proposed a cheaper pipeline by implementing image hiding and recovering with a single invertible neural network (INN) [13, 14]. Compared with image hiding, video hiding is a more challenging task, requiring a larger hiding capacity. [43] first studied to hide/recover a video in/from another video. However, this method simply hides the residual across adjacent frames in a frame-by-frame manner. [35] explores the temporal correlation by 3D CNN in video steganography. However, it utilizes two separate 3D UNet to perform hiding and recovering and has high model complexity (\(367.2M\) model parameters). These previous works demonstrate that deep networks have great potential in video hiding, inspiring us to study the more challenging task of multiple and flexible video hiding.
### Invertible Neural Network
Since the concept of invertible neural network (INN) was proposed in [13, 14], INN has attracted more and more attention due to its pure invertible pipeline. Pioneering research on INN can be seen in image generation tasks. For instance, Glow [24] utilized INN to construct an invertible mapping between the latent variable \(\mathbf{z}\) and nature images \(\mathbf{x}\). Specifically, the generative process \(\mathbf{x}=f_{\theta}(\mathbf{z})\) given a latent variable can be specified by an INN architecture \(f_{\theta}\). The direct access to the inverse mapping \(\mathbf{z}=f_{\theta}^{-1}(\mathbf{x})\) makes inference much cheaper. Up to now, INN has been studied in several vision tasks (_e.g._, image rescaling [19, 46], image restoration [29, 30], image coloring [51], and video temporal action localization [50]) and presents promising performance.
The architecture of INN needs to be carefully designed to guarantee the invertibility. Commonly, INN is composed of several invertible blocks, _e.g._, the coupling layer [13]. Given the input \(\mathbf{h}\), the coupling layer first splits \(\mathbf{h}\) into two parts (\(\mathbf{h}_{1}\) and \(\mathbf{h}_{2}\)) along the channel axis. Then they undergo the affine transformations with the affine parameters generated by each other:
\[\hat{\mathbf{h}}_{1} =\mathbf{h}_{1}\cdot\psi_{1}(\mathbf{h}_{2})+\phi_{1}(\mathbf{h}_{ 2}) \tag{1}\] \[\hat{\mathbf{h}}_{2} =\mathbf{h}_{2}\cdot\psi_{2}(\hat{\mathbf{h}}_{1})+\phi_{2}(\hat{ \mathbf{h}}_{1}),\]
where \(\psi(\cdot)\) and \(\phi(\cdot)\) are arbitrary functions. \(\hat{\mathbf{h}}_{1}\) and \(\hat{\mathbf{h}}_{2}\) are the outputs of the coupling layer. Correspondingly, the inverse process is defined as:
\[\mathbf{h}_{1}=\frac{\hat{\mathbf{h}}_{1}-\phi_{1}(\mathbf{h}_{2})}{\psi_{1}( \mathbf{h}_{2})};\ \ \ \ \ \mathbf{h}_{2}=\frac{\hat{\mathbf{h}}_{2}-\phi_{2}(\hat{\mathbf{h}_{1}})}{\psi_{ 2}(\hat{\mathbf{h}}_{1})}. \tag{2}\]
In this paper, we employ the reversible forward and backward processes of INN to perform multiple videos hiding and recovering, respectively. We further improve INN to explore flexible video steganography.
## 3 Methodology
### Overview
An overview of our LF-VSN is presented in Fig. 2. Specifically, given \(N_{s}\) secret videos \(\mathbf{x}_{se}=\{\mathbf{x}_{se}(n)\}_{n=1}^{N_{s}}\) and a cover video \(\mathbf{x}_{co}\), the forward hiding is operated group-by-group through a sliding window, traversing each video from head to tail. After hiding, a stego video \(\mathbf{x}_{st}\) is produced, ostensibly indistinguishable from \(\mathbf{x}_{co}\) to ensure that \(\mathbf{x}_{se}\) is undetectable. In the backward recovering, a channel-wise broadcasting operation (\(\mathbb{R}^{3\times W\times H}\xrightarrow{copy}\mathbb{R}^{3L\times W\times H}\)) copies each stego frame in the channel dimension to form the reversed input. During recovering, multiple secret videos are recovered frame-by-frame in parallel. It is worth noting that the forward hiding and backward recovering share the same model architecture and parameters.
### Steganography Input and Output Design
At the beginning of each hiding step, a fusion module is applied to fuse frames in each group to take advantage of the inner temporal correlation. Considering that it is easy to produce texture artifacts and color distortion when hiding in the spatial dimension [15, 21], we perform the fusion by a frequency concatenation. Specifically, given the \(j\)-th cover group \(\mathbf{X}_{co\otimes j}\in\mathbb{R}^{L\times 3\times W\times H}\) and secret groups \(\{\mathbf{X}_{se\otimes j}(n)\in\mathbb{R}^{L\times 3\times W\times H}\}_{n=1}^{N_{s}}\) (each contains \(L\) frames), we adopt Haar discrete wavelet transform (DWT) to split each frame into four frequency bands (_i.e._, LL, HL, LH, HH). In each frame group, we concatenate the part in the same frequency band from different frames in the channel dimension and then concatenate these four bands in order of frequency magnitude, producing the final secret input \(\{\mathbf{X}_{se\otimes j}(n)\in\mathbb{R}^{12L_{s}\times\frac{W}{2}\times \frac{W}{2}}\}_{n=1}^{N_{s}}\) and cover input \(\mathbf{X}_{co\otimes j}\in\mathbb{R}^{12L_{c}\times\frac{W}{2}\times\frac{W} {2}}\). The output of the forward
hiding comprises a stego group \(\mathbf{X}_{st\otimes j}\) and several redundancy groups \(\{\mathbf{X}_{rc\otimes j}(n)\}_{n=1}^{N_{x}}\). \(\mathbf{X}_{st\otimes j}\) is converted from the frequency domain to the spatial domain by a frequency separation, _i.e_., the inverse of the frequency concatenation. \(\mathbf{X}_{rc\oplus j}(n)\) represents the redundancy of the \(\mathbf{X}_{se\oplus j}(n)\) that does not need to be hidden and will be discarded. In our LF-VSN, we utilize the adjacent frames to cooperate with hiding the central frame. Thus, we only output the central stego frame in each hiding step. The backward recovering is similarly operated in the frequency domain and converted to the spatial domain at the output.
### Invertible Block
As shown in Fig. 2, our hiding and recovering have reverse information flow constructed by several invertible blocks (IBs). The architecture of IB is presented in Fig. 3. Concretely, in the \(k\)-th IB, there are two branches to process the input cover group \(\mathbf{X}_{co\otimes j}^{k}\) and secret groups \(\{\mathbf{X}_{se\otimes j}^{k}(n)\}_{n=1}^{N_{s}}\), respectively. Several interaction pathways between these two branches construct the invertible projection. We use an additive transformation to project the cover branch and employ an enhanced affine transformation to project the secret branch. The transformation parameters are generated from each other. Here we utilize weight-sharing modules (\(\eta_{k}^{1}(\cdot)\) and \(\phi_{k}^{1}(\cdot)\)) to extract features from all secret groups, producing a feature set \(\{\mathbf{F}_{se}^{k}(n)\}_{n=1}^{N_{s}}=\{\phi_{k}(\eta_{k}(\mathbf{X}_{se \oplus j}^{k}(n)))\}_{n=1}^{N_{s}}\). \(\eta_{k}^{i}(\cdot)\) and \(\phi_{k}^{i}(\cdot)\) (\(i=1,2,3\)) refer to a \(3\times 3\) convolution layer and a five-layer dense block [18], respectively. Then, we concatenate \(\mathbf{F}_{se}^{k}\) in the channel dimension and pass through an aggregation module \(\xi_{k}(\cdot)\) to generate the transformation parameters of the cover branch. Note that \(\xi_{k}(\cdot)\) is optional in different cases. In our fixed hiding, \(\xi_{k}(\cdot)\) is a \(3\times 3\) convolution layer, and it is a scalable embedding module in our scalable hiding. The transformation parameters of the secret branch are generated from \(\mathbf{X}_{co\otimes j}^{k}\) and shared among different secret groups. Thus, in the \(k\)-th invertible block, the bijection of the forward propagation in Eq. (1) is reformulated as:
\[\begin{split}&\mathbf{X}_{co\otimes j}^{k+1}=\mathbf{X}_{co \otimes j}^{k}+\xi_{k}(||\phi_{k}^{1}(\eta_{k}^{1}(\mathbf{X}_{se\oplus j}^{k} (n)))||_{n=1}^{N_{s}})\\ &\{\mathbf{X}_{se\oplus j}^{k+1}(n)\}_{n=1}^{N_{s}}=\\ &\mathbf{X}_{se\oplus j}^{k}(n)\cdot\exp(\phi_{k}^{2}(\eta_{k}^{ 2}(\mathbf{X}_{co\otimes j}^{k+1})))+\phi_{k}^{3}(\eta_{k}^{3}(\mathbf{X}_{co \otimes j}^{k+1})),\end{split} \tag{3}\]
where \(||\cdot||\) refers to the channel-wise concatenation. \(\exp(\cdot)\) is the Exponential function. Accordingly, the backward propagation is defined as:
\[\begin{split}&\{\mathbf{X}_{se\oplus j}^{k}(n)\}_{n=1}^{N_{s}}=\\ &(\mathbf{X}_{se\oplus j}^{k+1}(n)-\phi_{k}^{3}(\eta_{k}^{3}( \mathbf{X}_{co\otimes j}^{k+1})))\cdot\exp(-\phi_{k}^{2}(\eta_{k}^{2}(\mathbf{ X}_{co\otimes j}^{k+1})))\\ &\mathbf{X}_{co\otimes j}^{k}=\mathbf{X}_{co\otimes j}^{k+1}-\xi_ {k}(||\phi_{k}^{1}(\eta_{k}^{1}(\mathbf{X}_{se\oplus j}^{k}(n)))||_{n=1}^{N_{ s}}).\end{split} \tag{4}\]
Figure 3: Illustration of the architecture of our invertible block. The dashed line refers to weight sharing.
Figure 2: Network architecture of our LF-VSN. It is composed of several invertible blocks. In the forward hiding process, multiple secret videos are hidden in a cover video to generate a stego video, together with redundancy. In the backward recovering process, the stego video and predicted redundancy are fed to the reverse data flow of the same network with the same parameters to recover secret videos.
### Redundancy Prediction Module (RPM) & Key-controllable Design
As illustrated previously, we retain the stego part and discard the redundancy information in the forward hiding. Therefore, we need to prepare a suitable redundancy in the backward process to utilize the reversibility of INN to reconstruct the forward input (, secret and cover). In different tasks, most INN-based methods [21, 24, 32, 46] constrain the generated redundancy information to obey the Gaussian distribution and utilize random Gaussian sampling to approximate this part in the backward process. Nevertheless, such random sampling lacks data specificity and adaptivity. In our LF-VSN, we predict the redundancy information from the stego group through a redundancy prediction module (RPM), as shown in Fig. 4(a). It is composed of several residual blocks (RB) without the Batch Normalization layer.
In this paper, we present a novel extension of RPM to construct key-controllable video steganography, with which we can hide multiple secret videos in a cover video and recover a secret video conditioned on a specific key. The architecture is shown in Fig. 4(b). Given the index \(n_{key}\) of a secret video \(\mathbf{X}_{se}(n_{key})\), a specific key is generated by a key encoder, which is composed of several fully connected (FC) layers. The key is then fed into a FC layer at the end of each RB in RPM to generate a condition vector with \(2C_{rpm}\) channels, which is divided into two modulation vectors \(\alpha,\beta\in\mathbb{R}^{C_{rpm}\times 1\times 1}\) in the channel dimension. \(C_{rpm}\) is the feature channel of each RB in RPM. Then we modulate the output feature \(\mathbf{F}_{rpm}\) of each RB as \(\mathbf{F}_{rpm}\cdot\alpha+\beta\). In the training process, we constrain the recovered output the same as the \(n_{key}\)-th secret video (, \(\mathbf{X}_{se}(n_{key})\)). More details can be found in Sec. 3.6.
### Scalable Embedding Module
The scalable design is used to handle the case where there are different requirements for the number of secret videos hidden in a cover video. It is succinctly designed on the feature aggregation part \(\xi_{k}(\cdot)\) in each IB, as shown in Fig. 3. The illustration of our scalable embedding module is presented in Fig. 5. It can be regarded as a special convolution layer, whose dimension of the convolution kernel can be changed according to the input. All convolution kernels \(\widetilde{\mathbf{M}}\) with different dimensions are parameter-shared from the same base kernel \(\mathbf{M}\). Technically, given the input feature \(\mathbf{F}_{in}\in\mathbb{R}^{C_{in}\times W\times H}\), we truncate a convolution kernel \(\widetilde{\mathbf{M}}\in\mathbb{R}^{C_{in}\times C_{out}\times k\times k}\) from \(\mathbf{M}\in\mathbb{R}^{C\times C_{out}\times k\times k}\) to match the input dimension and then perform convolution: \(\mathbf{F}_{out}=\widetilde{\mathbf{M}}*\mathbf{F}_{in}\). In this way, the training of \(\mathbf{M}\) is completed through the training of all sub-kernels \(\widetilde{\mathbf{M}}\).
### Loss Function
In our LF-VSN, the loss function is used to constrain two parts,, forward hiding and backward recovering. The forward hiding is to hide multiple secret videos in the cover video. The generated stego video \(\mathbf{X}_{st}\) should be undetectable to secret videos and as similar as possible to the cover video. Therefore, we constrain \(\mathbf{X}_{st}\) to be the same as the cover video \(\mathbf{X}_{co}\):
\[\mathcal{L}_{f}=||\mathbf{X}_{st\otimes j}[I_{c}]-\mathbf{X}_{co\otimes j}[I_ {c}]||_{2}^{2}, \tag{5}\]
where \(||\cdot||_{2}^{2}\) donates the \(\ell_{2}\) norm. \(I_{c}\) is the index of the central frame in each group. In the backward recovering, there are two patterns: with and without key controlling. In both patterns, we aim to recover the secret information from the cover video. The difference stands between recovering a specific secret video and all secret videos. In the pattern without key controlling, the loss function is defined as:
\[\begin{split}\mathcal{L}_{b}=\sum_{n=1}^{N_{s}}||\hat{\mathbf{X }}_{se\otimes j}(n)[I_{c}]-\mathbf{X}_{se\otimes j}(n)[I_{c}]||_{2}^{2}+\\ ||\hat{\mathbf{X}}_{co\otimes j}[I_{c}]-\mathbf{X}_{co\otimes j}[I _{c}]||_{2}^{2},\end{split} \tag{6}\]
Figure 4: The architecture of our redundancy prediction module (RPM). It has two model settings: (a) RPM without (w/o) key controlling; (b) RPM with (w) key controlling.
Figure 5: Illustration of our scalable embedding module. It takes the input feature map with scalable channels \(C_{in}\in[1,C]\) and produces output features with fixed channels \(C_{out}\).
where \(\hat{\mathbf{X}}_{se}\) and \(\hat{\mathbf{X}}_{co}\) represent the recovered secret and cover videos. In the pattern with key controlling, the loss function is defined to guarantee that the key generated from the video index \(n_{key}\) can only recover the \(n_{key}\)-th secret video. Thus, the loss function is reformulated as:
\[\mathcal{L}_{b}=||\frac{1}{N_{s}}\sum_{n=1}^{N_{s}}\hat{\mathbf{X}}_{ se\oplus j}(n)[I_{c}]-\mathbf{X}_{se\oplus j}(n_{key})[I_{c}]||_{2}^{2}+ \tag{7}\] \[||\hat{\mathbf{X}}_{co\oplus j}[I_{c}]-\mathbf{X}_{co\oplus j}[I_ {c}]||_{2}^{2}.\]
We optimize our LF-VSN by minimizing the forward loss function \(\mathcal{L}_{f}\) and backward loss function \(\mathcal{L}_{b}\) as:
\[\mathcal{L}=\mathcal{L}_{f}+\lambda\mathcal{L}_{b}, \tag{8}\]
where \(\lambda\) is a hyper-parameter to make a trade-off between forward hiding and backward recovering. We set \(\lambda=4\) to balance these two loss portions.
## 4 Experiment
### Implementation Details
In this work, we adopt the training set of Vimeo-90K [48] to train our LF-VSN. Each sequence has a fixed spatial resolution of \(448\times 256\). During training, we randomly crop training videos to \(144\times 144\) with random horizontal and vertical flipping to make a data augmentation. We use Adam optimizer [23], with \(\beta_{1}=0.9\), \(\beta_{2}=0.5\). We set the batch size as \(16\). The weight decay factor is set as \(1\times 10^{-12}\). We use an initial learning rate of \(1\times 10^{-4}\), which will decrease by half for every \(30K\) iterations. The number of total iterations is set as \(250K\). The training process can be completed on one NVIDIA Tesla V100 GPU within 3 days. For testing, we select \(200\) sequences from the testing set of Vimeo-90K, denoted as Vimeo-T200 in this paper.
### Comparison Against Other Methods
Here we compare our LF-VSN with other methods on single video steganography and challenging multiple videos steganography. The evaluation includes the stego quality in forward hiding and the secret quality in backward recovering. For single video steganography, we compare our LF-VSN with some well-known methods [4, 43] and recent proposed methods [11, 21, 32, 47]. Note that PIH [11] highlighted the need to quantize the stego image from the floating-point format of \(32\times 3\) to \(8\times 3\) bits per pixel. But PIH just added the quantization to the compared methods without retraining. Here we retrain HiNet [21] with
\begin{table}
\begin{tabular}{c c c c c c c c} \hline & Weng [43] & Baluja [4] & ISN [32] & HiNet [21] & RIIS [47] & PIH [11] & LF-VSN (Ours) \\ \hline \hline Stego & 29.43/0.862 & 34.14/0.860 & 42.08/0.965 & 42.09/0.962 & 43.50/0.951 & - & **45.17**/**0.980** \\ \hline Secret & 32.08/0.899 & 35.21/0.931 & 42.11/0.984 & 44.44/0.991 & 44.08/0.964 & 36.48/0.939 & **48.39**/**0.996** \\ \hline Params. & 42.57M & 2.65M & 3.00M & 4.05M & 8.15M & **0.67**M & 7.40M \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison (PSNR/SSIM) on Vimeo-T200. The best and second-best results are **highlighted** and **underlined**. Our LF-VSN achieves the best performance in stego and secret quality with acceptable complexity.
Figure 6: Visual comparison between our LF-VSN, ISN [32], and PIH [11] in 4 videos Steganography. We present the secret reconstruction results of video \(2\) and video \(4\). Our LF-VSN produces better result with intact color and details.
Figure 7: Visualization of our LF-VSN in \(7\) videos steganography, showing promising performance in such an extreme case.
quantization to make a more fair comparison. Thus, its performance may be slightly higher than that reported in PHH. ISN [32], RIIS [47] and PIH were originally designed with quantization, which can be directly compared. Tab. 1 presents that our method achieves the best performance on stego and secret while maintaining acceptable complexity.
For multiple videos steganography, ISN [32] and PIH [11] studied how to hide multiple secret images in a cover image, which can be competitive counterparts of our LF-VSN. ISN can hide up to \(5\) secret images into 1 cover image, and PIH can hide \(4\) secret images. The comparison in Tab. 3 shows the better performance of our LF-VSN. Even in the \(7\) videos hiding, our method still has promising stego and secret quality (\(>35dB\)). We present the visual comparison of different methods in \(4\) videos steganography in Fig. 6. Obviously, ISN has color distortion, and PIH has a loss of details. By contrast, our LF-VSN can recover high-fidelity results. We also present the secret and stego quality of our LF-VSN in \(7\) videos hiding in Fig. 7. These videos are from DAVIS [37] dataset. One can see that our LF-VSN has promising performance in such an extreme case.
### Key-controllable Video Steganography
Hiding multiple secret videos in a cover video is challenging; doing so for different receivers is even more difficult. In this paper, we present a key-controllable scheme in multiple videos steganography. It enables different receivers to recover particular secret videos through specific keys. The comparison in Tab. 3 presents that our controllable scheme still has a large hiding capacity (up to \(6\) videos) with attractive performance (\(>30dB\)). The visualization of recovering quality is presented in the second row of Fig. 8, showing the high-quality and key-controllable results of our LF-VSN in multiple videos steganography.
We also study the security of our controlling scheme, _i.e._, the key is sensitive and model-specific. Here we take two sets of parameters, producing from the 250K and 240K iterations in the same training process. We use the key produced by one model (*) to recover the secret video hidden by another. The result in the third row of Fig. 8 presents that the wrong key has no controlling and recovering ability. Thus, our key-controllable scheme not only has the controlling function but also enhances data security.
### Scalable Video Steganography
In this paper, we present a scalable scheme in multiple videos steganography. It can hide a variable number of secret videos into a cover video with a single model. We evaluate the performance of our scalable design and compare it with the fixed version in Tab. 9. Obviously, our method has an attractive performance (\(>31dB\)) in hiding a variable number (up to 7) of secret videos into a cover video by a single model. The performance degradation compared to fixed version is acceptable. With this design, a single model can satisfy multiple steganography demands.
### Steganographic Analysis
The data security is one of the most important concerns in steganography. In this section, we evaluate the anti
\begin{table}
\begin{tabular}{c|c|c c c c c} \hline \hline & Videos & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \multirow{3}{*}{\begin{tabular}{c} \end{tabular} } & Stego & 37.60 & 36.41 & 32.56 & 31.46 & - & - \\ & Secret & 41.47 & 38.76 & 33.42 & 33.39 & - & - \\ \hline \multirow{3}{*}{\begin{tabular}{c} \end{tabular} } & Stego & - & - & - & - & - & - \\ & Secret & 35.95 & 34.96 & 34.20 & - & - & - \\ \hline \multirow{3}{*}{
\begin{tabular}{c} \end{tabular} } & Stego & **40.97** & **38.55** & **37.55** & **36.57** & **35.68** & **35.01** \\ & Secret & **44.24** & **42.27** & **40.21** & **38.88** & **36.94** & **35.71** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Multiple videos steganography comparison (PSNR) of our LF-VSN, ISN [32], and PIH [11] on Vimeo-T200 test set. Our LF-VSN can hide/recover 7 videos with promising performance.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Num. videos & 2 & 4 & 6 \\ \hline Stego (NC/C) & 40.97/38.67 & 37.55/34.41 & 35.68/30.48 \\ \hline Secret (NC/C) & 44.24/41.04 & 40.21/37.15 & 36.94/31.95 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison between controllable (C) and non-controllable (NC) video steganography of our LF-VSN.
Figure 8: Visualization of our key-controllable scheme in \(6\) videos steganography. In the second and third rows, we use the correct and wrong (*) keys of \(2\), \(4\), \(6\) to recover secret videos, respectively.
Figure 9: Performance comparison between our scalable and fixed design in multiple videos steganography.
steganalysis ability of different methods, which stands for the possibility of detecting stego frames from nature frames by steganalysis tools. We utilize the StegExpose [7] to attack different steganography methods. The detection set is built by mixing stego and cover with equal proportions. We vary the detection thresholds in a wide range in StegExpose and draw the receiver operating characteristic (ROC) curve in Fig. 10. Note that the ideal case represents that the detector has a \(50\%\) probability of detecting stego from an equally mixed cover and stego, the same as random sampling. Therefore, the closer the curve is to the ideal case, the higher the security is. Obviously, the stego frames generated by our LF-VSN are harder to be detected than other methods. Even in the multiple videos (_e.g.,_ 2 and 4 videos) hiding, our method can still achieve attractive performance, demonstrating the higher data security of our LF-VSN.
### Ablation Study
In this subsection, we present the ablation study in Tab. 4 to investigate the effect of different components in our LF-VSN. The experiments are conducted on Vimeo-T200.
**Sliding window size.** In this paper, we utilize the temporal correlation within each frame group to improve the video steganography performance. To demonstrate the effectiveness, we evaluate the performance of our LF-VSN with the window size \(L=\{1,3,5\}\) in 2, 4, and 6 videos steganography. The results in Tab. 4 present that the temporal correlation has obvious performance gains to the multiple videos steganography. Considering the model complexity, we set the sliding window size as \(3\) in our LF-VSN.
**Number of invertible blocks (IB).** As mentioned above, our LF-VSN is composed of several IBs. To investigate the effectiveness of IB, we evaluate the performance of our LF-VSN with the number of IB being 12, 16, and 20. The results in Tab. 4 present that the performance increases with the number of IB. To make a trade-off between performance and complexity, we utilize \(16\) IBs in our LF-VSN.
**Frequency concatenation (FreqCat).** In our LF-VSN, we use the DWT transform to merge each input group in the frequency domain. To demonstrate the effectiveness, we replace this operation with direct channel-wise concatenation. Tab. 4 presents that there are \(1.7dB\) and \(1.91dB\) gains of FreqCat on stego and secret quality in 3 videos steganography. The possible reason is that DWT transform can separate the low-frequency and high-frequency sub-bands, making it more effective for information fusion and hiding.
**Redundancy prediction module (RPM).** In our LF-VSN, we employ RPM to predict the redundancy in the backward process instead of randomly sampling. To demonstrate the effectiveness of RPM, we replace this module with a random Gaussian sampling. The result in Tab. 4 shows that RPM can be used not only to design key-controllable steganography, but also to improve performance.
## 5 Conclusion
In this paper, we propose a large-capacity and flexible video steganography network (LF-VSN). The novelty of our method is twofold. First, our LF-VSN has a large hiding capacity, with which we can hide \(7\)**secret videos** into a cover video and then recover them well (\(>35dB\)). Second, we explore the flexibility in multiple videos steganography by proposing a key-controllable scheme and a scalable design. Specifically, our key-controllable scheme can enable different receivers to recover particular secret videos through specific keys. Also, the key controlling is sensitive and model-specific, which can enhance data security. Our scalable design further improves the flexibility to hide a variable number of secret videos into a cover video with a single model. Extensive experiments demonstrate that our proposed LF-VSN has state-of-the-art performance with high security, large hiding capacity, and flexibility.
\begin{table}
\begin{tabular}{c||c c c|c c c|c c c||c c c||c} \hline Num. videos & \multicolumn{3}{c|}{2} & \multicolumn{3}{c|}{4} & \multicolumn{3}{c||}{6} & \multicolumn{3}{c||}{3} & \multicolumn{1}{c||}{3} \\ \hline Window size & 1 & 3 & 5 & 1 & 3 & 5 & 1 & 3 & 5 & 3 & 3 (ours) & 3 & 3 & 3 \\ Num. IB & 16 & 16 & 16 & 16 & 16 & 16 & 16 & 16 & 12 & 16 & 20 & 16 & 16 \\ FreqCat & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & \(\times\) \\ RPM & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & \(\times\) & ✓ \\ \hline \hline Stego & 39.64 & 40.97 & 41.08 & 36.41 & 37.55 & 37.86 & 34.47 & 35.46 & 35.96 & 38.03 & 38.55 & 38.91 & 38.28 & 36.85 \\ Secret & 42.97 & 44.24 & 44.43 & 37.67 & 40.21 & 40.42 & 35.11 & 36.83 & 39.97 & 41.99 & 42.27 & 42.40 & 41.69 & 40.36 \\ \hline \end{tabular}
\end{table}
Table 4: The ablation study of different components in our LF-VSN. It includes the sliding window size, number of invertible blocks (IB), frequency concatenation (FreqCat), and redundancy prediction module (RPM).
Figure 10: Statistics-based steganalysis by StegExpose [7]. The closer the detection accuracy is to \(50\%\), the higher the security is. |
2306.05005 | Non-dense orbit sets carry full metric mean dimension | Let $(X,d)$ be a compact metric space, $f:X\rightarrow X$ be a continuous
transformation with the specification property. we consider non-dense orbit set
$E(z_0)$ and show that for any non-transitive point $z_0\in X$, this set
$E(z_0)$ is empty or carries full Bowen upper and lower metric mean dimension. | Jiao Yang, Ercai Chen, Xiaoyao Zhou | 2023-06-08T07:48:11Z | http://arxiv.org/abs/2306.05005v2 | # Non-dense orbit sets carry full metric mean dimension
# Non-dense orbit sets carry full metric mean dimension
Jiao Yang\({}^{1}\), Ercai Chen\({}^{1}\), Xiaoyao Zhou*\({}^{1}\)
*corresponding author
**Abstract.** Let \((X,d)\) be a compact metric space, \(f:X\to X\) be a continuous transformation with the specification property. we consider non-dense orbit set \(E(z_{0})\) and show that for any non-transitive point \(z_{0}\in X\), this set \(E(z_{0})\) is empty or carries full Bowen upper and lower metric mean dimension.
## 1. Introduction
A number \(\lambda\) is called badly approximable if \(|\lambda-\frac{p}{q}|>\frac{c}{q^{2}}\) for some \(c>0\) and all rational numbers \(\frac{p}{q}\). In 1931, Jarnik [10] proved that the set of badly approximable numbers is of full Hausdorff dimension. In 1997, Abercrombie and Nair [1] proved the non-dense set for the expanding rational map of the Riemann sphere acting on its Julia set \(J\) has full Hausdorff dimension. In fact, authors [5] generalized the result of [1] by some more general systems.
Let \((X,d,f)\) be a topological dynamical system, where \((X,d)\) is a compact metric space and \(f:X\to X\) is a continuous map. For any \(x\in X\), let \(O_{f}(x)\) denote the orbit of \(x\), i.e., \(O_{f}(x):=\{x,f(x),\cdots,f^{n}(x),\cdots\}\). For any \(z_{0}\in X\), we define
\[E(z_{0})=\{x\in X:z_{0}\notin\overline{\{f^{n}(x):n\geq 0\}}\},\]
where \(x\in E(z_{0})\) indicates \(z_{0}\) is badly approximated by the orbit of \(x\). When \(f\) is Guass map, \(E(0)\) is just the set of the badly approximable numbers. By the definition, any point in \(E(z_{0})\) has a non-dense forward orbit in \(X\). Recently, Zhao, Yang and Zhou [13] showed that \(E(z_{0})\) can have full topological pressure.
It should be noted that on a compact smooth manifold with dimension greater than one, homeomorphisms with infinite topological entropy are \(C^{0}\) generic [12]. Recently, Bobok and Troubetzkoy [2] showed that in the space of continuous non-invertible maps of the unit interval preserving the Lebesgue measure, which is equipped with the uniform metric, the functions satisfying specification property and infinite topological entropy form a dense \(G_{\delta}\) set. Thus, a more subtle question arises naturally: Given a system having both the specification property and infinite topological entropy, does \(E(z_{0})\) carry more information besides infinite Bowen topological entropy?
Mean topological dimension introduced by Gromov [3] is a new topological invariant in topological dynamical systems. Later, Lindenstrauss and Weiss [7] introduced the metric mean dimension, which is a metric version of the mean dimension. Metric mean
dimension, similarly to the topological entropy, measures the complexity of systems with infinite entropy. Mean dimension has applications to topological dynamics [7, 6]. Similar as the topological entropy, the metric mean dimension has a strong connection with ergodic theory, and lots of variational principles have been established, see [4, 8].
Let us go to back to our question. Recently, we state our result precisely
Main result. In this paper, we consider a dynamical system \((X,d,f)\) satisfy specification, i.e., for any \(\epsilon>0\), there exists an integer \(m=m(\epsilon)\) such that for arbitrary finite intervals \(\{I_{j}=[a_{j},b_{j}]\}_{j=i}^{k}\) with \(a_{j+1}-b_{j}\geq m\) for \(j=1,2,\cdots,k-1\) and any \(x_{1},\cdots,x_{k}\in X\), there exists a point \(x\in X\) such that
\[d(f^{p+a_{j}}(x),f^{p}(x_{j}))<\epsilon\;\;\text{for all}\;\;p=0,1,\cdots,b_{j}- a_{j}\;\;\text{and every}\;\;j=1,2,\cdots,k.\]
For convenience, we call \(\{[b_{j},a_{j+1}],j=1,\cdots,k-1\}\) the gaps.
**Theorem 1.1**.: _Suppose that \((X,d,f)\) be a dynamical system with specification property. For any non-transitive point \(z_{0}\in X\), i.e. \(\overline{O_{f}(z_{0})}\neq X\), then either \(E(z_{0})=\emptyset\) or_
\[\begin{split}\overline{\operatorname{mdim}}_{M}^{B}(E(z_{0}),f,d )&=\overline{\operatorname{mdim}}_{M}(X,f,d)\\ \underline{\operatorname{mdim}}_{M}^{B}(E(z_{0}),f,d)& =\underline{\operatorname{mdim}}_{M}(X,f,d).\end{split} \tag{1.1}\]
## 2. Basic notions and definitions
Let \(n\in\mathbb{N}\). For \(x,y\), we define the \(n\)th Bowen metric \(d_{n}\) on \(X\) as
\[d_{n}(x,y)=\max\{d(f^{i}(x),f^{i}(y)):i=0,\cdots,n-1\}.\]
For each \(\epsilon>0\), the Bowen ball of radius \(\epsilon\) and order \(n\) in the metric \(d_{n}\) around \(x\) is given by
\[B_{n}(x,\epsilon)=\{y\in X:d_{n}(x,y)<\epsilon\}.\]
Now given \(Z\subseteq X,\epsilon>0\) and \(N\in\mathbb{N}\). For each \(\lambda\in\mathbb{R}\), let
\[m(Z,\lambda,N,\epsilon)=\inf_{\varGamma}\left\{\sum_{i\in I}\exp\left(- \lambda n_{i}\right)\right\},\]
where the infimum is taken over all finite or countable collection \(\varGamma=\{B_{n_{i}}(x_{i},\epsilon)\}_{i\in I}\) such that \(Z\subseteq\cup_{i\in I}B_{n_{i}}(x_{i},\epsilon)\) and \(\min\{n_{i}:i\in I\}\geq N\). Note that \(m(Z,\lambda,N,\epsilon)\) does not decrease as N increases, and therefore the following limit exists
\[m(Z,\lambda,\epsilon)=\lim_{N\to\infty}m(Z,\lambda,\varphi,N,\epsilon).\]
The function is non-increasing in \(\lambda\) and takes value \(\infty\) and \(0\) at all but at most one value of \(\lambda\). Denoting the critical value of \(\lambda\) by
\[h^{B}_{top}(Z,f,\epsilon) =\inf\{\lambda\in\mathbb{R}:m(Z,\lambda,\epsilon)=0\}\] \[=\sup\{\lambda\in\mathbb{R}:m(Z,\lambda,\epsilon)=\infty\}.\]
This implies that \(m(Z,\lambda,\epsilon)=\infty\), when \(\lambda<h^{B}_{top}(Z,f,\epsilon)\), and \(m(Z,\lambda,\epsilon)=0\) when \(s>h^{B}_{top}(Z,f,\epsilon)\). Note that \(m(Z,h^{B}_{top}(Z,f,\epsilon),\epsilon)\) could be \(\infty,0\) or some positive finite
number. The Bowen topological entropy is defined by \(h^{B}_{top}(Z,f)=\lim_{\epsilon\to 0}h^{B}_{top}(Z,f,\epsilon)\) (see [11]). The Bowen upper and lower metric mean dimension of \(f\) on \(Z\) with respect to \(d\) are respectively defined by
\[\overline{\operatorname{mdim}}^{B}_{M}(Z,f,d) =\limsup_{\epsilon\to 0}\frac{h^{B}_{top}(Z,f,\epsilon)}{| \log\epsilon|},\] \[\underline{\operatorname{mdim}}^{B}_{M}(Z,f,d) =\liminf_{\epsilon\to 0}\frac{h^{B}_{top}(Z,f,\epsilon)}{| \log\epsilon|}.\]
The classical metric mean dimension is defined as follows. Given \(n\in\mathbb{N}\) and \(\epsilon>0\). A set \(E\subset X\) is called an \((n,\epsilon)\) separated set for \(X\) if for every \(x\neq y\in E\), we have \(d_{n}(x,y)>\epsilon\). Define \(s(f,X,n,\epsilon)\) to be the largest cardinality of an \((n,\epsilon)\) separated set of \(X\). Notice that \(s_{sep}(f,X,n,\epsilon)\) is finite by compactness.The upper and lower metric mean dimension of \(f\) with respect to \(d\) respectively are given by
\[\overline{\operatorname{mdim}}_{M}(X,f,d) =\limsup_{\epsilon\to 0}\frac{\limsup_{n\to\infty}\frac{1}{n}\log s (f,X,n,\epsilon)}{|\log\epsilon|},\] \[\underline{\operatorname{mdim}}_{M}(X,f,d) =\liminf_{\epsilon\to 0}\frac{\limsup_{n\to\infty}\frac{1}{n}\log s (f,X,n,\epsilon)}{|\log\epsilon|}.\]
It is clear that the metric mean dimension vanishes if topological entropy is finite.
**Remark 2.1**.: _If \(Z_{1}\subset Z_{2}\subset X\), then_
\[\overline{\operatorname{mdim}}^{B}_{M}(Z_{1},f,d)\leq\overline{\operatorname{ mdim}}^{B}_{M}(Z_{2},f,d),\ \ \underline{\operatorname{mdim}}^{B}_{M}(Z_{1},f,d)\leq\underline{ \operatorname{mdim}}^{B}_{M}(Z_{2},f,d). \tag{2.1}\]
The following proposition complete proof is shown in [9] appendix.
**Proposition 2.1**.: _For any \(f\)-invariant and compact nonempty subset \(Z\subset X\), one has_
\[\overline{\operatorname{mdim}}^{B}_{M}(Z,f,d)=\overline{\operatorname{mdim}} _{M}(Z,f,d),\ \ \underline{\operatorname{mdim}}^{B}_{M}(Z,f,d)=\underline{ \operatorname{mdim}}_{M}(Z,f,d).\]
## 3. Proof of Theorem 1.1
In this section, let's turn to prove our main result.
### Proof for the case of the upper metric mean dimension
we assume \(E(z_{0})=\emptyset\) and show (1.1). We first consider the case of the upper metric mean dimension. Note that \(E(z_{0})\subset X\), and therefore
\[\overline{\operatorname{mdim}}^{B}_{M}(E(z_{0}),f,d)\leq\overline{ \operatorname{mdim}}^{B}_{M}(X,f,d).\]
Proposition 2.1 implies that
\[\overline{\operatorname{mdim}}^{B}_{M}(E(z_{0}),f,d)\leq\overline{ \operatorname{mdim}}_{M}(X,f,d).\]
For any constant \(C<\overline{\operatorname{mdim}}_{M}(X,f,d)\), we only need to show that
\[\overline{\operatorname{mdim}}^{B}_{M}(E(z_{0}),f,d)\geq C. \tag{3.1}\]
Firstly, since \(z_{0}\) is non-transitive point, we can choose \(y\in X\) and \(\epsilon_{0}>0\) such that
\[d(y,\overline{O_{f}(z_{0})})\geq 2\epsilon_{0}. \tag{3.2}\]
Fix \(\gamma>0\) we can choose a \(\epsilon<\epsilon_{0}\) and a sequence \(\{n_{k}\}_{k\geq 1}\subset\mathbb{N}\) such that there exists maximal \((n_{k},7\epsilon)\)-separated set \(\mathcal{S}_{k}\) of \(X\) which is always a \((n_{k},7\epsilon)\)-spanning set with
\[\#\mathcal{S}_{k}\geq\exp\left(n_{k}(C-\gamma)\right)|\log 7\epsilon|, \tag{3.3}\]
\[\frac{h_{top}^{B}(E(z_{0}),f,\epsilon)}{|\log\epsilon|}\leq\overline{\text{ mdim}}_{M}^{B}(E(z_{0}),f,d)+\gamma \tag{3.4}\]
and
\[(\overline{\text{mdim}}_{M}^{B}(E(z_{0}),f,d)+\gamma)\cdot\frac{|\log\epsilon |}{|\log 7\epsilon|}\leq\overline{\text{mdim}}_{M}^{B}(E(z_{0}),f,d)+2\gamma \tag{3.5}\]
by the definitions
\[\overline{\text{mdim}}_{M}(X,f,d)=\limsup_{\epsilon\to 0}\frac{\limsup_{n\to\infty} \frac{1}{n}\log s(f,X,n,\epsilon)}{|\log\epsilon|},\]
and
\[\overline{\text{mdim}}_{M}^{B}(Z,f,d)=\limsup_{\epsilon\to 0}\frac{h_{top}^{B}(Z,f, \epsilon)}{|\log\epsilon|}.\]
#### 3.1.1. **Construction of the Moran-like fractal \(\mathcal{F}\).**
Choose \(M>0\) such that
\[\frac{2m(\epsilon)C}{M+2m(\epsilon)}<\gamma. \tag{3.6}\]
Without lose of generality, we can assume that \(M<n_{1}\). Let \(c_{k}=\lceil\frac{n_{k}}{M}\rceil\) then we break the \(n_{k}\) orbit of \(x\in\mathcal{S}_{k}\) as follows
\[\{x,f(x),\cdots,f^{M-1}(x)\}\cup\{f^{M}(x),\cdots,f^{2M-1}(x)\}\cup\cdots\] \[\cup\{f^{(c_{k}-1)M}(x),f^{(c_{k}-1)M+1}(x),\cdots,f^{n_{k}}(x)\}.\]
Now we define the point pair \((x,n)\in X\times\mathbb{N}\) by \((x,n):=\{x,f(x),\cdots,f^{n-1}(x)\}\).
We insert \(y\) into every gap, in fact, we translate the point pair \((x,n_{k})\) to
\[(x,M),(y,1),(f^{M}(x),M),(y,1),\cdots,(y,1),(f^{(c_{k}-1)M}(x),n_{k}-(c_{k}-1) M).\]
Denote \(m:=m(\epsilon)\). By the specification property, there exists \(y^{\prime}\in X\) such that
\[d_{M}(y^{\prime},x)<\epsilon,d(f^{M+m}y^{\prime},y)<\epsilon, \cdots,d_{M}(f^{(j-1)(M+2m+1)}y^{\prime},f^{(j-1)M}x)<\epsilon\] \[d(f^{(j-1)(M+2m+1)+M+m}y^{\prime},y)<\epsilon,\cdots,d_{n_{k}-(c _{k}-1)M}(f^{(c_{k}-1)(M+2m+1)+M+m}y^{\prime},f^{(c_{k}-1)M}x)<\epsilon\]
i.e., the following set is a non-empty,
\[B(x,n_{k},\epsilon;y)= \bigcap_{j=1}^{c_{k}-1}\left\{f^{-(j-1)(M+2m+1)}B_{M}(f^{(j-1)M}x,\epsilon)\cap f^{-(j-1)(M+2m+1)-M-m}B(y,\epsilon)\right\}\] \[\cap f^{-(c_{k}-1)(M+2m+1)-M-m}B_{n_{k}-(c_{k}-1)M}(f^{(c_{k}-1)M }x)\neq\emptyset.\]
From the above setting, we define \(\hat{n}_{k}:=n_{k}+(c_{k}-1)(2m+1)\) which denotes the length of the orbits in the set \(B(x,n_{k},\epsilon;y)\).
Next, we can choose a sequence and \(N_{k}\) increasing to \(\infty\) with \(N_{0}=0\). We enumerate the points in the sets \(\mathcal{S}_{k}\) provided by (3.3) and write them as follows
\[\mathcal{S}_{k}=\{x_{i}:i=1,2,\cdots,\#\mathcal{S}_{k}\}.\]
We enumerate the points in the set \(\mathcal{S}_{k}\) and consider the set \(\mathcal{S}_{k}\). Let \(\overline{x}_{k}=(x_{1}^{k},\cdots,x_{N_{k}}^{k})\in\mathcal{S}_{k}^{N_{k}}\) where \(\mathcal{S}_{k}^{N_{k}}=\mathcal{S}_{k}\times\cdots\times\mathcal{S}_{k}\). Now we set \(t_{1}:=\hat{n}_{1}N_{1}+(N_{1}-1)m\) and if \(t_{k}\) have been defined, we define \(t_{k+1}:=t_{k}+N_{k+1}(\hat{n}_{k+1}+m)\). By the specification property, we have
\[B(\overline{x}_{1},\cdots,\overline{x}_{k};y)=\bigcap_{i=1}^{k}\bigcap_{j=1} ^{N_{i}}f^{-t_{i-1}-m-(j-1)(\hat{n}_{i}+m)}B(x_{j}^{i},n_{i},\epsilon;y)\neq \emptyset.\]
We define \(\mathcal{F}_{k}\) by
\[\mathcal{F}_{k}=\bigcap\{\overline{B(\overline{x}_{1},\cdots,\overline{x}_{k} ;y)}:(\overline{x}_{1},\cdots,\overline{x}_{k})\in\prod_{i=1}^{k}\mathcal{S}_{ i}^{N_{i}}\}.\]
Obviously, \(\mathcal{F}_{k}\) is a closed subset of \(X\) and \(\mathcal{F}_{k+1}\subset\mathcal{F}_{k}\). Define
\[\mathcal{F}=\bigcap_{k=1}^{\infty}\mathcal{F}_{k}\]
The above construction implies for each \(p\in\mathcal{F}\) shadows the points in \(\mathcal{S}_{i}\) for some \(i\) with the gap segments \(m(\epsilon)\) by the specification property. For any \(n>0\) we denotes \(n_{rel}\) by the segment of times which shadow the separated points in \(\mathcal{S}_{i}\) for some \(i\geq 1\). The following lemma shows that \(\mathcal{F}\subset E(z_{0})\).
**Lemma 3.1**.: _For any \(x\in\mathcal{F}\), then \(x\in E(z_{0})\), i.e. \(\mathcal{F}\subset E(z_{0})\)._
Proof.: Since \(B_{2M+m(\epsilon)}(z_{0},\epsilon)\) is open set which contains \(z_{0}\), we only need to show that \(O_{f}(x)\cap B_{2M+m(\epsilon)}(z_{0},\epsilon)=\varnothing\). Then \(O_{f}(x)\subset X\setminus B_{2M+m(\epsilon)}(z_{0},\epsilon)\). Furthermore, \(\overline{O_{f}(x)}\subset X\setminus B_{2M+m(\epsilon)}(z_{0},\epsilon)\), which implies that \(z_{0}\notin\overline{O_{f}(x)}\).
Now we assume that \(O_{f}(x)\cap B_{2M+m(\epsilon)}(z_{0},\epsilon)\neq\varnothing\) and we can choose \(f^{j}(x)\in B_{2M+m(\epsilon)}(z_{0},\epsilon)\). By the construction of \(\mathcal{F}\), for any \(k\) with \(t_{k}\gg j\), there exists some \(\vec{x}_{1},\cdots,\vec{x}_{k}\) such that \(x\in B(\vec{x}_{1},\cdots,\vec{x}_{k};y)\). Hence, we can choose \(q<2M+m(\epsilon)\) such that \(d(f^{j+q}x,y)<\epsilon\) and \(d(f^{j+q}x,f^{q}z_{0})<\epsilon\). Then we have
\[d(y,f^{q}z_{0})\leq d(f^{j+q}x,y)+d(f^{j+q}x,f^{q}z_{0})\leq\epsilon+\epsilon <2\epsilon_{0},\]
which contracts with (3.2) i.e.,
\[d(y,\overline{O_{f}(z_{0})})\geq 2\epsilon_{0}.\]
#### 3.1.2. Construction of a special sequence of measures \(\mu_{k}\)
For each \((\overline{x}_{1},\cdots,\overline{x}_{k})\in\prod\limits_{i=1}^{k}\mathcal{S}_{i }^{N_{i}}\), we choose \(z(\overline{x}_{1},\cdots,\overline{x}_{k};y)\in B(\overline{x}_{1},\cdots, \overline{x}_{k};y)\). Let \(L_{k}\) be the set of all points constructed in this way. The following simple lemma shows that
\[\#L_{k}=\prod\limits_{i=1}^{k}(\#\mathcal{S}_{i})^{N_{i}}. \tag{3.7}\]
**Lemma 3.2**.: _Let \(\overline{x}\) and \(\overline{y}\) be distinct elements of \(\prod\limits_{i=1}^{k}\mathcal{S}_{i}^{N_{i}}\). Then \(z_{1}=z(\overline{x})\) and \(z_{2}=z(\overline{y})\) are \((t_{k},5\epsilon)\) separated points._
Proof.: Assume that \(\overline{x}=(\overline{x}_{1},\overline{x}_{2},\cdots,\overline{x}_{k})\) and \(\overline{y}=(\overline{y}_{1},\overline{y}_{2},\cdots,\overline{y}_{k})\) and \(\overline{x}_{i}\neq\overline{y}_{i}\) with \(\overline{x}_{s}=\overline{y}_{s}\) for each \(s<i,\ 1\leq i\leq k\). Let \(\overline{x}_{i}=(x_{1}^{i},\cdots,x_{N_{i}}^{i})\) and \(\overline{y}_{i}=(y_{1}^{i},\cdots,y_{N_{i}}^{i})\). Without lose of generality, we assume that \(x_{q}^{i}\neq y_{q}^{i}\) and for each \(u<q\), \(x_{u}^{i}=y_{u}^{i}\).
Then we have for each \(0\leq j\leq c_{i}-1,\ 0\leq s\leq M-1\)
\[d(f^{j(M+2m+1)+s}f^{t_{i-1}+(q-1)(m+\hat{n}_{i})+m}z(\overline{x}),f^{jM+s}x_{ q}^{i})<\epsilon\]
and
\[d(f^{j(M+2m+1)+s}f^{t_{i-1}+(q-1)(m+\hat{n}_{i})+m}z(\overline{y}),f^{jM+s}y_{ q}^{i})<\epsilon\]
Since \(x_{q}^{i}\neq y_{q}^{i}\in\mathcal{S}_{i}\) are \((n_{i},7\epsilon)\)-separated points, there exists \(0\leq\hat{j}\leq c_{i}-1,\ 0\leq\hat{s}\leq M-1\) such that
\[d(f^{\hat{j}M+\hat{s}}x_{q}^{i}),f^{\hat{j}M+\hat{s}}y_{q}^{i})\geq 7\epsilon.\]
Hence
\[d_{t_{k}}(z_{1},z_{2})\geq d_{t_{i}}(z_{1},z_{2})\] \[\geq d(f^{\hat{j}(M+2m+1)+\hat{s}}f^{t_{i-1}+(q-1)(m+\hat{n}_{i})+m} z(\overline{x}),f^{\hat{j}(M+2m+1)+\hat{s}}f^{t_{i-1}+(q-1)(m+\hat{n}_{i})+m}z( \overline{y})\] \[\geq d(f^{\hat{j}M+\hat{s}}x_{q}^{i},f^{\hat{j}M+\hat{s}}y_{q}^{i})-d (f^{\hat{j}(M+2m+1)+\hat{s}}f^{t_{i-1}+(q-1)(m+\hat{n}_{i})+m}z(\overline{x}),f^{\hat{j}M+\hat{s}}x_{q}^{i})\] \[- d(f^{\hat{j}(M+2m+1)+\hat{s}}f^{t_{i-1}+(q-1)(m+\hat{n}_{i})+m} z(\overline{y}),f^{\hat{j}M+\hat{s}}y_{q}^{i})\] \[\geq 7\epsilon-\epsilon-\epsilon=5\epsilon.\]
So we have done.
We now define the measures on \(\mathcal{F}\) which yield the required estimates for the similar entropy distribution principle. For each \(k\), an atomic measure centered on \(L_{k}\). Precisely, if \(z=z(\overline{x}_{1},\cdots,\overline{x}_{k})\), we define probability measure
\[\mu_{k}:=\frac{1}{\#L_{k}}\sum_{z\in L_{k}}\delta_{z}.\]
In order to prove the main results of this paper, we present some lemmas.
**Lemma 3.3**.: _The sequence of measures \(\{\mu_{k}\}_{k\in\mathbb{N}}\) converges to a measure in \(\mathcal{M}(X)\) with respect to the weak\({}^{*}\)-topology \(\mu\). Furthermore, the limiting measure \(\mu\) satisfies \(\mu(\mathcal{F})=1\)._
Proof.: The similar proof as [11, Lemma 5.4] can be applied to show \(\mu_{k}\) converges in the weak\({}^{*}\)-topology.
Suppose \(\mu\) is a limit measure of the sequence of probability measures \(\mu_{k}\). Then \(\mu=\lim\limits_{k}\mu_{s_{k}}\) for some \(s_{k}\to\infty\). For some fixed \(s\) and all \(p\geq 0\), we have \(\mu_{s+p}(\mathcal{F}_{s})=1\) since \(\mu_{s+p}(\mathcal{F}_{s+p})=1\) and \(\mathcal{F}_{s+p}\subset\mathcal{F}_{s}\). Therefore,
\[\mu(\mathcal{F}_{s})\geq\limsup\limits_{k\to\infty}\mu_{s_{k}}(\mathcal{F}_{s })=1.\]
It follows that \(\mu(\mathcal{F})=\lim\limits_{s\to\infty}\mu(\mathcal{F}_{s})=1\).
Next we set \(b_{n}\) denote the mistake segment which at most \(n\) i.e.,
\[b_{n}:=n-n_{rel}.\]
Let \(\mathcal{B}=B_{n}(q,\epsilon)\) be an arbitrary ball which intersects \(\mathcal{F}\). Let \(k\) be the unique number which satisfies \(t_{k}\leq n<t_{k+1}.\) Let \(j\in\{0,1,\cdots,N_{k+1}-1\}\) be the unique number so
\[t_{k}+j(\hat{n}_{k+1}+m(\epsilon))\leq n<t_{k}+(j+1)(\hat{n}_{k+1}+m(\epsilon))\]
Let \(\Delta_{j}^{k+1}:=j(\hat{n}_{k+1}+m(\epsilon))\), we have
\[t_{k}+\Delta_{j}^{k+1}\leq n<t_{k}+\Delta_{j+1}^{k+1}\]
We assume that \(j\geq 1\) and the simpler case \(j=0\) is similar.
**Lemma 3.4**.: _For any \(p\geq 1\), suppose \(\mu_{k+p}(\mathcal{B})>0\), we have_
\[\mu_{k+p}(\mathcal{B})\leq\frac{1}{\#L_{k}\cdot(\#\mathcal{S}_{k+1})^{j}}\]
_where \(b_{n}\) denote the length of mistake segment._
Proof.: (1) Case \(p=1\). Suppose \(\mu_{k+1}(\mathcal{B})>0\), then \(L_{k+1}\cap\mathcal{B}\neq\emptyset\). Let \(z=z(\overline{x},\overline{x}_{k+1})\in L_{k+1}\cap\mathcal{B}\), where \(\overline{x}=(\overline{x}_{1},\cdots,\overline{x}_{k})\in\mathcal{S}_{1}^{N_{ 1}}\times\cdots\times\mathcal{S}_{k}^{N_{k}}\) and \(\overline{x}_{k+1}=(x_{1}^{k+1},\cdots,x_{N_{i}}^{k+1})\in\mathcal{S}_{k+1}^ {N_{k+1}}\). Let
\[\mathcal{A}_{\overline{x};x_{1}^{k+1},\cdots,x_{j}^{k+1}}=\left\{z(\overline {x},(y_{1}^{k+1},\cdots,y_{N_{k+1}}^{k+1}))\in L_{k+1}:y_{1}^{k+1}=x_{1}^{k+1 },\cdots,y_{j}^{k+1}=x_{j}^{k+1}\right\}\]
Suppose that \(z^{\prime}=z(\overline{y},\overline{y}_{k+1})\in L_{k+1}\cap\mathcal{B}\). Since \(d_{n}(z,z^{\prime})<2\epsilon\), by Lemma 3.2, we have \(\overline{x}=\overline{y}\) and \(y_{l}^{k+1}=x_{l}^{k+1}\) for \(l\in\{1,\cdots,j\}\). Thus we have \[\mu_{k+1}(\mathcal{B}) =\frac{1}{\#L_{k+1}}\sum_{z\in L_{k+1}}\delta_{z}(\mathcal{B})\] \[\leq\sum_{z\in\mathcal{A}_{\pi,x_{1}^{k+1},\cdots,x_{j}^{k+1}}} \frac{1}{\#L_{k+1}}\delta_{z}(\mathcal{B})\] \[\leq\frac{\#\mathcal{S}_{k+1}^{N_{k+1}-j}}{\#L_{k+1}}=\frac{1}{ \#L_{k}\cdot(\#\mathcal{S}_{k+1})^{j}}.\]
2. Case \(p>1\). Similarly, we have \[\mu_{k+p}(\mathcal{B}) \leq\frac{\#\mathcal{S}_{k+1}^{N_{k+1}-j}\cdot\#\mathcal{S}_{k+2 }^{N_{k+2}}\cdots\#\mathcal{S}_{k+p}^{N_{k+p}}}{\#L_{k+p}}\] \[=\frac{1}{\#L_{k}\cdot(\#\mathcal{S}_{k+1})^{j}}.\]
**Lemma 3.5**.: _There exists \(N\in\mathbb{N}\) such that for any \(n\geq N\),_
\[\mu(\mathcal{B})\leq\exp\left\{-n(C-2\gamma)|\log 7\epsilon|\right\}\]
Proof.: By (3.3), we have
\[\#L_{k}\cdot(\#\mathcal{S}_{k+1})^{j} =\#\mathcal{S}_{1}^{N_{1}}\cdots\#\mathcal{S}_{k}^{N_{k}}\cdot \#\mathcal{S}_{k+1}^{j}\] \[\geq\exp\left\{\left(\sum_{i=1}^{k}N_{i}n_{i}+j\right)(C-\gamma) |\log 7\epsilon|\right\}\] \[=\exp\left\{(n-b_{n})(C-\gamma)|\log 7\epsilon|\right\}.\]
By Lemma 3.4, we get
\[\mu_{k+p}(\mathcal{B}) \leq\frac{1}{\#L_{k}\cdot(\#\mathcal{S}_{k+1})^{j}}\leq\exp\left\{ -n(C-\gamma)|\log 7\epsilon|+b_{n}(C-\gamma)|\log 7\epsilon|\right\}\] \[\leq\exp\left\{-n(C-\gamma)|\log 7\epsilon|+b_{n}C|\log 7\epsilon|\right\}\] \[\leq\exp\left\{-n(C-2\gamma)|\log 7\epsilon|\right\}\]
The above inequality follows from (3.6) that \(\frac{Cb_{n}}{n}<\gamma\). So
\[\mu(\mathcal{B})\leq\liminf_{p\to\infty}\mu_{k+p}(\mathcal{B})\leq\exp\left\{ -n(C-2\gamma)|\log 7\epsilon|\right\}.\]
Hence the desired result follows.
#### 3.1.3. Apply pressure distribution principle type argument
Now we are able to finish the proof of Theorem 1.1 by using the pressure distribution principle type argument.
Let \(N\) be the number defined in Lemma 3.5. Let \(\varGamma=\{B_{n_{i}}(x_{i},\epsilon)\}_{i\in I}\) be any finite cover of \(\mathcal{F}\) with \(n_{i}\geq N\) for all \(i\in I\). Without loss of generality, we may assume that \(B_{n_{i}}(x_{i},\epsilon)\cap\mathcal{F}\neq\emptyset\) for every \(i\in I\). Applying Lemma 3.5 on each \(B_{n_{i}}(x_{i},\epsilon)\), one has
\[\sum_{i\in I}\exp\left\{-n_{i}(C-2\gamma)|\log 7\epsilon|\right\}\geq\sum_{i \in I}\mu(B_{n_{i}}(x_{i},\epsilon))\geq\mu(\mathcal{F})=1\]
As \(\varGamma\) is arbitrary, one has
\[m(\mathcal{F},(C-2\gamma)|\log 7\epsilon|,N,\epsilon)\geq 1>0\]
Therefore, by the fact that \(m(\mathcal{F},(C-4\gamma)|\log 7\epsilon|,N,\epsilon)\) does not decrease as \(N\) increases,
\[m(\mathcal{F},(C-2\gamma)|\log 7\epsilon|,\epsilon)\geq 1>0\]
which implies that
\[h^{B}_{top}(\mathcal{F},f,\epsilon)\geq(C-2\gamma)|\log 7\epsilon|.\]
So, by Lemma 3.1, (3.4) and (3.5), we have
\[C-2\gamma \leq\frac{h^{B}_{top}(\mathcal{F},f,\epsilon)}{|\log 7\epsilon|} \leq\frac{h^{B}_{top}(E(z_{0}),f,\epsilon)}{|\log 7\epsilon|}=\frac{h^{B}_{ top}(E(z_{0}),f,\epsilon)}{|\log\epsilon|}\cdot\frac{|\log\epsilon|}{|\log 7 \epsilon|}\] \[\leq(\overline{\operatorname{mdim}}^{B}_{M}(E(z_{0}),f,d)+\gamma )\cdot\frac{|\log\epsilon|}{|\log 7\epsilon|}\leq\overline{\operatorname{mdim}}^{B}_{M}(E(z_{0}),f,d)+2\gamma.\]
Thus, \(\overline{\operatorname{mdim}}^{B}_{M}(E(z_{0}),f,d)\geq C-4\gamma\). As \(\gamma>0\) and \(C\) are arbitrary, we obtain
\[\overline{\operatorname{mdim}}^{B}_{M}(E(z_{0}),f,d)\geq\overline{ \operatorname{mdim}}_{M}(X,f,d).\]
### Proof for the case of the lower metric mean dimension
In this subsection, we briefly prove the following equation
\[\underline{\operatorname{mdim}}^{B}_{M}(E(z_{0}),f,d)=\underline{ \operatorname{mdim}}_{M}(X,f,d)\]
under the assumptions \(E(z_{0})\neq\emptyset\).
Proposition 2.1 and (2.1) imply \(\underline{\operatorname{mdim}}^{B}_{M}(E(z_{0}),f,d)\leq\underline{ \operatorname{mdim}}_{M}(X,f,d)\). In the following, we prove \(\underline{\operatorname{mdim}}^{B}_{M}(E(z_{0}),f,d)\geq\underline{ \operatorname{mdim}}_{M}(X,f,d)\).
We fix any constant \(C^{\prime}<\underline{\operatorname{mdim}}_{M}(X,f,d)\). Next, we only need to show that
\[\underline{\operatorname{mdim}}^{B}_{M}(E(z_{0}),f,d)\geq C^{\prime}. \tag{3.8}\]
Fix \(\gamma>0\) we can choose a \(\epsilon^{\prime}<\epsilon_{0}\) and a sequence \(\{n_{k}\}_{k\geq 1}\subset\mathbb{N}\) such that there exists maximal \((n_{k},7\epsilon^{\prime})\)-separated set \(\mathcal{S}^{\prime}_{k}\) of \(X\) which is always a \((n_{k},7\epsilon^{\prime})\)-spanning set such that
\[\#\mathcal{S}^{\prime}_{k}\geq\exp\left(n_{k}(C-\gamma)\right)|\log 7 \epsilon^{\prime}|, \tag{3.9}\]
\[\frac{h^{B}_{top}(E(z_{0}),f,\epsilon^{\prime})}{|\log\epsilon^{\prime}|}\leq \underline{\mathrm{mdim}}^{B}_{M}(E(z_{0}),f,d)+\gamma \tag{3.10}\]
and
\[\underline{(\mathrm{mdim}}^{B}_{M}(E(z_{0}),f,d)+\gamma)\cdot\frac{|\log \epsilon^{\prime}|}{|\log 7\epsilon^{\prime}|}\leq\underline{\mathrm{mdim}}^{B}_{M} (E(z_{0}),f,d)+2\gamma. \tag{3.11}\]
We can use the parallel proof in the subsection 3.1.1 and subsection 3.1.2 to show that there exist a Moran-like fractal \(\mathcal{F}^{\prime}\) and a measure \(\mu^{\prime}\) concentrated on \(\mathcal{F}^{\prime}\) satisfying the following property.
**Lemma 3.6**.: _There exists \(N^{\prime}\in\mathbb{N}\) such that for any \(n\geq N^{\prime}\), if \(B_{n}(z,\epsilon^{\prime})\cap\mathcal{F}^{\prime}\neq\emptyset\), then_
\[\mu^{\prime}(B_{n}(z,\epsilon^{\prime}))\leq\exp\left\{-n(C^{\prime}-2\gamma) |\log 7\epsilon^{\prime}|\right\}.\]
Let \(N^{\prime}\) be the number defined in Lemma 3.6. Let \(\varGamma=\{B_{n_{i}}(x_{i},\epsilon^{\prime})\}_{i\in I}\) be any finite cover of \(\mathcal{F}^{\prime}\) with \(n_{i}\geq N^{\prime}\) for all \(i\in I\). Without loss of generality, we may assume that \(B_{n_{i}}(x_{i},\epsilon^{\prime})\cap\mathcal{F}^{\prime}\neq\emptyset\) for every \(i\in I\). Applying Lemma 3.6 on each \(B_{n_{i}}(x_{i},\epsilon^{\prime})\), one has
\[\sum_{i\in I}\exp\left\{-n_{i}(C^{\prime}-2\gamma)|\log 7\epsilon^{\prime}| \right\}\geq\sum_{i\in I}\mu(B_{n_{i}}(x_{i},\epsilon^{\prime}))\geq\mu( \mathcal{F}^{\prime})=1\]
As \(\varGamma\) is arbitrary, one has
\[m(\mathcal{F}^{\prime},(C^{\prime}-2\gamma)|\log 7\epsilon^{\prime}|,N, \epsilon^{\prime})\geq 1>0\]
Therefore, by the fact that \(m(\mathcal{F}^{\prime},(C^{\prime}-2\gamma)|\log 7\epsilon^{\prime}|,N,\epsilon^{ \prime})\) does not decrease as \(N\) increases,
\[m(\mathcal{F}^{\prime},(C^{\prime}-2\gamma)|\log 7\epsilon^{\prime}|, \epsilon^{\prime})\geq 1>0\]
which implies that
\[h^{B}_{top}(\mathcal{F}^{\prime},f,\epsilon^{\prime})\geq(C^{ \prime}-2\gamma)|\log 7\epsilon^{\prime}|.\]
So, by Lemma 3.1, (3.10) and (3.11), we have
\[C^{\prime}-2\gamma \leq\frac{h^{B}_{top}(\mathcal{F}^{\prime},f,\epsilon^{\prime} )}{|\log 7\epsilon^{\prime}|}\leq\frac{h^{B}_{top}(E(z_{0}),f,\epsilon^{ \prime})}{|\log 7\epsilon^{\prime}|}=\frac{h^{B}_{top}(E(z_{0}),f,\epsilon^{ \prime})}{|\log\epsilon^{\prime}|}\cdot\frac{|\log\epsilon^{\prime}|}{|\log 7 \epsilon^{\prime}|}\] \[\leq(\underline{\mathrm{mdim}}^{B}_{M}(E(z_{0}),f,d)+\gamma) \cdot\frac{|\log\epsilon^{\prime}|}{|\log 7\epsilon^{\prime}|}\leq\underline{ \mathrm{mdim}}^{B}_{M}(E(z_{0}),f,d)+2\gamma.\]
Thus, \(\underline{\mathrm{mdim}}^{B}_{M}(E(z_{0}),f,d)\geq C^{\prime}-4\gamma\). As \(\gamma>0\) and \(C^{\prime}\) are arbitrary, we obtain
\[\underline{\mathrm{mdim}}^{B}_{M}(E(z_{0}),f,d)\geq\underline{\mathrm{mdim}} _{M}(X,f,d).\]
The proof of Theorem 1.1 is complete.
**Acknowledgements.** The work was supported by the National Natural Science Foundation of China (Nos.1207122 and 11971236), China Postdoctoral Science Foundation (No.2016M591873), and China Postdoctoral Science Special Foundation (No.2017T100384). The work was also funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions. We would like to express our gratitude to Tianyuan
Mathematical Center in Southwest China(No.11826102), Sichuan University and Southwest Jiaotong University for their support and hospitality.
|
2307.12650 | Active Flow Control for Bluff Body Drag Reduction Using Reinforcement
Learning with Partial Measurements | Active flow control for drag reduction with reinforcement learning (RL) is
performed in the wake of a 2D square bluff body at laminar regimes with vortex
shedding. Controllers parameterised by neural networks are trained to drive two
blowing and suction jets that manipulate the unsteady flow. RL with full
observability (sensors in the wake) successfully discovers a control policy
which reduces the drag by suppressing the vortex shedding in the wake. However,
a non-negligible performance degradation (~50% less drag reduction) is observed
when the controller is trained with partial measurements (sensors on the body).
To mitigate this effect, we propose an energy-efficient, dynamic, maximum
entropy RL control scheme. First, an energy-efficiency-based reward function is
proposed to optimise the energy consumption of the controller while maximising
drag reduction. Second, the controller is trained with an augmented state
consisting of both current and past measurements and actions, which can be
formulated as a nonlinear autoregressive exogenous model, to alleviate the
partial observability problem. Third, maximum entropy RL algorithms (Soft Actor
Critic and Truncated Quantile Critics) which promote exploration and
exploitation in a sample efficient way are used and discover near-optimal
policies in the challenging case of partial measurements. Stabilisation of the
vortex shedding is achieved in the near wake using only surface pressure
measurements on the rear of the body, resulting in similar drag reduction as in
the case with wake sensors. The proposed approach opens new avenues for dynamic
flow control using partial measurements for realistic configurations. | Chengwei Xia, Junjie Zhang, Eric C. Kerrigan, Georgios Rigas | 2023-07-24T09:43:53Z | http://arxiv.org/abs/2307.12650v2 | Active Flow Control for Bluff Body Drag Reduction Using Reinforcement Learning with Partial Measurements
###### Abstract
Active flow control for drag reduction with reinforcement learning (RL) is performed in the wake of a 2D square bluff body at laminar regimes with vortex shedding. Controllers parameterized by neural networks are trained to drive two blowing and suction jets. RL with full observability (sensors in the wake) successfully discovers a control policy which reduces the drag by suppressing the vortex shedding in the wake. However, a non-negligible performance degradation (\(\sim 50\%\) less drag reduction) is observed when the controller is trained with partial measurements (sensors on the body). To mitigate this effect, we propose a dynamic, energy-efficient, maximum entropy RL control scheme. First, an energy-efficiency-based reward function is proposed to optimize the energy consumption of the controller while maximising drag reduction. Second, the controller is trained with an augmented state consisting of both current and past observations and actions, which can be formulated as a nonlinear autoregressive exogenous model, to alleviate the partial observability problem. Third, maximum entropy RL algorithms which promote exploration and exploitation in a sample efficient way are used and discover near-optimal policies in the challenging case of partial measurements. Complete stabilisation of the vortex shedding is achieved in the near wake using only surface pressure measurements on the rear of the body, resulting in similar drag reduction as in the case with wake sensors. The proposed approach opens new avenues for dynamic flow control using partial measurements for realistic configurations.
## 1 Introduction
Up to 50% of total road vehicle energy consumption is due to aerodynamic drag (Sudin _et al._, 2014). In order to improve vehicle aerodynamics, flow control approaches have been applied targeting the wake pressure drag, which is the dominant source of drag. Passive flow control has been applied (Choi _et al._, 2014) through geometry/surface modifications, e.g., boat tails (Lanser _et al._, 1991) and vortex generators (Lin, 2002). However, passive control designs do not adapt to environmental changes (disturbances, operating regimes), leading to sub-optimal performance under variable operating conditions. Active open-loop techniques, where pre-determined signals drive actuators, are typically energy inefficient since they target mean flow modifications. Actuators typically employed are synthetic jets (Glezer & Amitay, 2002), movable flaps (Beaudoin _et al._, 2006; Brackston _et al._, 2016) and plasma actuators (Corke _et al._, 2010), among others. Since the flow behind vehicles is unsteady and subject to environmental disturbances and uncertainty,
active feedback control is required to achieve optimal performance. However, two major challenges arise in feedback control design, which we aim to tackle in this study: (i) the flow dynamics are governed by the infinite-dimensional, nonlinear and non-local Navier-Stokes equations (Brunton & Noack, 2015) and (ii) are partially observable in realistic applications due to sensor limitations.
### Model-based active flow control
Model-based feedback control design requires a tractable model for the dynamics of the flow, usually obtained by data-driven or operator-driven techniques. Such methods have been applied successfully to control benchmark two-dimensional (2D) bluff body wakes, obtaining improved aerodynamic performance, e.g. vortex shedding suppression and drag reduction. For example, Gerhard _et al._ (2003) controlled the circular cylinder wake at low Reynolds numbers based on a low-dimensional model obtained from the Galerkin projection of Karhunen-Loeve modes on the governing Navier-Stokes equations. Protas (2004) applied Linear Quadratic Gaussian control to stabilize vortex shedding based on a Foppl point vortex model. Illingworth (2016) applied the Eigensystem Realization Algorithm as a system identification technique to obtain a reduced-order model of the flow and used robust control methods to obtain feedback control laws. Jin _et al._ (2020) employed resolvent analysis to obtain a low-order input-output model from the Navier-Stokes equations based on which feedback control was applied to suppress vortex shedding.
Model-based flow control has also been applied at high Reynolds numbers to control dominant coherent structures (persisting spatio-temporal symmetry breaking modes) which contribute to drag, including unsteady vortex shedding (Pastoor _et al._, 2008; Dahan _et al._, 2012; Dalla Longa _et al._, 2017; Brackston _et al._, 2018) and steady spatial symmetry breaking modes (Li _et al._, 2016; Brackston _et al._, 2016). For inhomogeneous flows in all three spatial dimensions, low-order models typically fail to capture the intractable and complex turbulent dynamics, leading inevitably to suboptimal control performance when used in control synthesis.
### Model-free active flow control by reinforcement learning
Model-free data-driven control methods bypass the above limitations by using input/output data from the dynamical system (environment) to learn the optimal control law (policy) directly without exploiting information from a mathematical model of the underlying process (Hou & Xu, 2009).
Model-free reinforcement learning (RL) has been successfully used for controlling complex systems, for which obtaining accurate and tractable models can be challenging. RL learns an optimal policy (controller) based on observed states that generate control actions which maximize a reward by exploring and exploiting state-action pairs. The system dynamics governing the evolution of the states for a specific action (environment) are assumed to be a Markov Decision Process (MDP). The policy is parameterized by artificial neural networks as a universal function approximator that can be optimized to an arbitrary control function with any order of complexity. RL can also be interpreted as parameterized dynamic programming with the feature of universal function approximation (Bertsekas, 2019), showing its ability to conduct optimization with input-output data from complex systems.
RL can effectively control complex systems in various types of tasks, such as robotics (Kober _et al._, 2013), and autonomous driving (Kiran _et al._, 2021). In the context of fluid dynamics, (Bucci _et al._, 2019; Zeng & Graham, 2021) applied RL to control the chaotic
Kuramoto-Sivashinsky system. In the context of flow control for drag reduction, Rabault et al. (2019); Rabault and Kuhnle (2019) used RL control for the first time in 2D bluff body simulations at a laminar regime. The RL algorithm discovered an optimal policy that, using pressure sensors in the wake and near the body, drives blowing and suction actuators on the circular cylinder to decrease the mean drag and wake unsteadiness. Paris et al. (2021) applied the "S-PPO-CMA" RL algorithm to control the wake behind a 2D cylinder and optimise the sensor locations in the near wake. Li and Zhang (2022) augmented and guided RL with global linear stability and sensitivity analyses in order to control the confined cylinder wake. They showed that, if the sensors cover the wavemaker region, the RL is robust and successfully stabilises the vortex shedding. Paris et al. (2023) proposed an RL methodology to optimize actuator placement in a laminar 2D flow around an airfoil, addressing the trade-off between performance and the number of actuators. Xu and Zhang (2023) used RL to suppress instabilities both in the Kuramoto-Sivashinsky system and 2D boundary layers, showing the effectiveness and robustness of RL control. Pino et al. (2023) compared RL and genetic programming algorithms to global optimization techniques for various cases, including the viscous Burger's equation and vortex shedding behind a 2D cylinder. Further information about RL and its applications in fluid mechanics can be found in the reviews of Garnier et al. (2021) and Vignon et al. (2023).
### Maximum entropy reinforcement learning
In RL algorithms, two major branches have been developed:"on-policy" learning and "off-policy" learning. RL algorithms can also be classified into value-based, policy-based, and actor-critic methods (Sutton and Barto, 2018). The actor-critic architecture combines advantages from both value-based and policy-based methods, so the state-of-the-art algorithms mainly use actor-critic architecture.
The state-of-the-art on-policy algorithms include Trust Region Policy Optimization (TRPO, Schulman et al. (2015)), Asynchronous Advantage Actor-Critic (A3C, Mnih et al. (2016)), and Proximal Policy Optimization (PPO, Schulman et al. (2017)). On-policy algorithms require fewer computational resources than off-policy algorithms, but they are demanding in terms of available data (interactions with the environment). They use the same policy to obtain experience in the environment and update with policy gradient, which introduces a high self-relevant experience that may restrict convergence to a local minimum and limit exploration. As the amount of data needed for training grows with the complexity of applications, on-policy algorithms usually require a long training time for collecting data and converging.
By contrast, off-policy algorithms usually have both behaviour and target policies to facilitate exploration while retaining exploitation. The behaviour policy usually employs stochastic behaviour to interact with an environment and collect experience, which is used to update the target policy. There are many off-policy algorithms emerging in the past decade, such as Deterministic Policy Gradient (DPG, Silver et al. (2014)), Deep Deterministic Policy Gradient (DDPG, Lillicrap et al. (2015)), Actor-Critic with Experience Replay (ACER, Wang et al. (2016)), Twin Delayed Deep Deterministic Policy Gradient (TD3, Fujimoto et al. (2018)), Soft Actor-Critic (SAC, Haarnoja et al. (2018\(a\),_b_)) and Truncated Quantile Critics (TQC, Kuznetsov et al. (2020)). Due to the behaviour-target framework, off-policy algorithms are able to exploit past information from a replay buffer to further increase sample efficiency. This "experience replay" suits a value-function-based method (Mnih et al., 2015), such as Q-learning (Watkins and Dayan, 1992), instead of calculating the policy gradient directly. Therefore, most of the off-policy algorithms implement an actor-critic architecture with a Q-learning basis, e.g. SAC.
One of the challenges of off-policy algorithms is the brittleness in terms of convergence. Sutton _et al._ (2008, 2009) solved the instability issue of off-policy learning with linear approximations. They used a Bellman-error-based cost function together with stochastic gradient descent (SGD) to ensure the convergence of learning. Maei _et al._ (2009) further extended this method to nonlinear function approximation using a modified temporal difference algorithm. However, some algorithms nowadays still experience the problem of brittleness when using improper hyperparameters. Adapting these algorithms for control in various environments is sometimes challenging, as the learning stability is sensitive to their hyperparameters, such as DDPG (Duan _et al._, 2016; Henderson _et al._, 2018).
To increase sample efficiency and learning stability, state-of-the-art off-policy algorithms were developed with a maximum entropy framework (Ziebart _et al._, 2008; Haarnoja _et al._, 2017), known as "maximum entropy reinforcement learning". Maximum entropy RL solves an optimization problem by maximizing the cumulative reward augmented with an entropy term. With the entropy term in the cost function, these algorithms have wider exploration in the action space during the learning, and the policy can approximate near-optimal behaviours, increasing the robustness of the RL controller. More details about two particular maximum entropy RL algorithms (SAC and TQC) can be found in SS2.2.
### Partial measurements and POMDP
In most RL flow control applications, RL controllers have been assumed to have information from the entire flowfield or an optimal sensor layout without any limitations on the sensor locations. This is denoted as "full measurement" (FM) in this study as the measurements contain full-state information. In practical applications, measurements are typically obtained on the surface of the body (e.g. pressure taps), and only partial-state information is available. This is denoted as "partial measurement" (PM), comparatively. PM can lead to control performance degradation compared to FM because the sensors are restricted from observing enough information from the entire flow.
In the language of RL, control with PM can be described as a Partially Observable Markov Decision Process (POMDP)(Cassandra, 1998) instead of an MDP. In POMDP problems, the best stationary policy can be arbitrarily worse than the optimal policy in the underlying MDP (Singh _et al._, 1994). In order to improve the performance of RL with POMDP, additional steps are required to reduce the POMDP problem to an MDP problem. This can be done trivially by using an augmented state, known as "sufficient statistic" (Bertsekas, 2012), i.e. augmenting the state vector with past measurements and actions (Bucci _et al._, 2019; Wang _et al._, 2023), or Recurrent Neural Networks (RNN), such as Long-Short Term Memory (Verma _et al._, 2018).
### Contribution of the present work
The present work uses RL to discover control strategies of partially observable fluid flow environments. Fluid flow systems typically exhibit more complex sampling in higher dimensional observation space compared to other physical systems, necessitating a robust exploration strategy and rapid convergence in the optimization process. To address these challenges, we employ off-policy-maximum entropy RL algorithms (SAC and TQC) that efficiently identify optimal policies in the large action space inherent to fluid flow systems, especially for cases with partial measurements and observability.
We aim to achieve two objectives related to RL flow control for bluff body drag reduction problems. First, we aim to improve the RL control performance in a PM environment by reducing a POMDP problem to an MDP problem. More details about this method are introduced in SS2.4. Second, we present investigations on different reward
functions and key hyperparameters to develop an approach that can be adapted to a broader range of flow control applications. We demonstrate the proposed framework and its capability to discover optimal feedback control strategies in the benchmark laminar flow of a square 2D bluff body with fixed separation at the trailing edge, using sensors only on the base of the body.
The article is structured as follows. In Section SS2, the RL framework is presented, which consists of the SAC and TQC optimization algorithms interacting with the flow simulation environment. A hyperparameter-free reward function is proposed to optimise the energy efficiency of the dynamically controlled system. Exploiting past action-state information converts the POMDP problem in a PM environment to an MDP, enabling the discovery of near-optimal policies. Results will be presented and discussed in Section SS3. The convergence study of RL is first introduced. The degradation of RL control performance in PM environments (POMDP) is presented, and the improvement is addressed by exploiting a sequence of past action-measuremnt information. At the end of this section, we compare the results from TQC with SAC, addressing the advantages of using TQC as an improved version of SAC. In Section SS4, we provide conclusions for the current research and discuss future research directions.
## 2 Methodology
We demonstrate the RL drag reduction framework on the flow past a 2D square bluff body at laminar regimes characterized by two-dimensional vortex shedding. Control is applied by two jet actuators at the rear edge of the body before the fixed separation and partial- or full-state observations are obtained from pressure sensors on the rear base or near wake region, respectively. The RL agent handles the optimization, control and interaction with the flow simulation environment, as shown in figure 1. \(a_{t}\), \(o_{t}\) and \(r_{t}\) are used to denote actions, observations and rewards at time step \(t\).
Details of the flow environment are provided in SS2.1. The SAC and TQC RL algorithms
Figure 1: Reinforcement learning framework. The RL agent, flow environment and the interaction between them are demonstrated. The partial measurement (PM) case is shown, where sensors are located on the base of the square bluff body. Two jets located upstream the rear separation points are trained to control the unsteady wake dynamics (vortex shedding).
used in this work are introduced in SS2.2. The reward functions based on optimal energy efficiency are presented in SS2.3. The method to convert a POMDP to an MDP by designing a dynamic controller for achieving nearly-optimal RL control performance is discussed in SS2.4.
### Flow environment
The environment is a 2D Direct Numerical Simulation (DNS) of the flow past a square bluff body of height \(B\). The velocity profile at the inflow of the computational domain is uniform with freestream velocity \(U_{\infty}\). All quantities are non-dimensionalized with the bluff body height \(B\) and the freestream velocity \(U_{\infty}\). The Reynolds number, defined as \(Re=U_{\infty}B/\nu\), is \(100\). The computational domain is rectangular with boundaries at \((-20.5,26.5)\) in the streamwise \(x\) direction and \((-12.5,12.5)\) in the transverse \(y\) direction. The centre of the square bluff body is at \((x,y)=(0,0)\).
The DNS flow environment is simulated using FEniCS and the Dolfin library (Logg _et al._, 2012), based on the implementation of Rabault _et al._ (2019); Rabault & Kuhnle (2019). The incompressible unsteady Navier-Stokes equations are solved using a finite element method and the incremental pressure correction scheme. The DNS time step is \(dt=0.004\).
Two blowing and suction jet actuators are placed on the top and bottom surfaces of the bluff body before separation. The velocity profile \(\mathbf{U_{j}}\) of the two jets (\(j=1,2\); \(1\) for the top jet and \(2\) for the bottom jet) is defined as
\[\mathbf{U_{j}}=\left(0,\quad\frac{3Q_{j}}{2w}\left[1-\left(\frac{2x-B-w}{w}\right) ^{2}\right]\right), \tag{1}\]
where \(Q\) is the mass flow rate of the jets, and \(w\) is the width of the jet actuator. In this study, \(w=0.1\). A zero mass flow rate condition of the two jets enforces momentum conservation as
\[Q_{1}+Q_{2}=0. \tag{2}\]
The mass flow rate of the jets is also constrained as \(|Q_{j}|\leq 0.1\) to avoid excessive actuation.
In PM environments, \(N=64\) vertically equispaced pressure sensors are placed on the base of the bluff body, the coordinates of which are given by
\[\mathbf{P_{base,k}}=\left(\frac{B}{2},\frac{-B}{2}+k\frac{B}{N+1}\right), \tag{3}\]
where \(k=1,2....,N\). In FM environments, \(64\) pressure sensors are placed in the wake region with a refined bias close to the body. The locations of sensors in the wake are defined with sets \(\mathbf{x_{s}}=[0.25,0.5,1.0,1.5,2.0,3.0,4.0,5.0]\) and \(\mathbf{y_{s}}=[-1.5,-1.0,-0.5,-0.25,0.25,0.5,1.0,1.5]\), following the formula
\[\mathbf{P_{wake,i,j}}=\left(\frac{B}{2}+x_{s,i},y_{s,j}\right), \tag{4}\]
where \(i=1,2....,8\) and \(j=1,2....,8\).
The bluff body drag coefficient \(C_{D}\) is defined as
\[C_{D}=\frac{F_{D}}{\frac{1}{2}\rho_{\infty}{U_{\infty}}^{2}B}, \tag{5}\]
and the lift coefficient \(C_{L}\) as
\[C_{L}=\frac{F_{L}}{\frac{1}{2}\rho_{\infty}{U_{\infty}}^{2}B}, \tag{6}\]
where \(F_{D}\) and \(F_{L}\) are the drag and lift forces, defined as the surface integral of the pressure and viscous forces on the bluff body with respect to the \(x\) and \(y\) coordinates, respectively.
### Maximum entropy reinforcement learning of MDPs
RL can be defined as policy search in a Markov Decision Process (MDP), with a tuple \(\left(\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R}\right)\) where \(\mathcal{S}\) is a set of states, and \(\mathcal{A}\) is a set of actions. \(\mathcal{P}\left(s_{t+1}\mid s_{t},a_{t}\right)\) is a state transition function that contains the probability from current states \(s_{t}\) and actions \(a_{t}\) to the next state \(s_{t+1}\). \(\mathcal{R}(s,a)\) is a reward function (cost function) to be maximised. The RL agent collects data as states \(s_{t}\in\mathcal{S}\) from the environment, and a policy \(\pi(s_{t})\) executes actions \(a_{t}\in\mathcal{A}\) to drive the environment to the next state \(s_{t+1}\).
A state is considered to have the Markov property if the state at time \(t\) retains all the necessary information to determine the future dynamics at \(t+1\), without any information from the past (Sutton & Barto, 2018). This property can be presented as
\[\mathcal{P}\left\{r_{t+1},s_{t+1}\mid s_{t},a_{t}\right\}\equiv\mathcal{P} \left\{r_{t+1},s_{t+1}\mid s_{0},a_{0},r_{1},\ldots,s_{t-1},a_{t-1},r_{t},s_{ t},a_{t}\right\}. \tag{7}\]
In the present flow control application, the control task can be regarded as an MDP if observations \(o_{t}\) contain full-state information, i.e. \(o_{t}=s_{t}\), and satisfy (7).
SAC and TQC are two maximum entropy RL algorithms used in the present work. TQC is used by default since it is an improved version of SAC. The maximum entropy RL maximizes
\[J\left(\pi\right)=\sum_{t=0}^{T}\mathbb{E}\left[r_{t}\left(s_{t},a_{t}\right) +\alpha\mathcal{H}\left(\pi\left(\cdot\mid s_{t}\right)\right)\right], \tag{8}\]
where \(r_{t}\) is the reward (reward functions given in SS2.3). The entropy term or "information entropy" is by definition \(\mathcal{H}\left(\pi\right)=\mathbb{E}\left[-\log\pi\right]\) and \(\alpha\) is the entropy coefficient, which controls the stochasticity (exploration) of the optimal policy. For \(\alpha=0\), the standard reward optimisation in conventional reinforcement learning is recovered.
SAC was developed based on Soft Policy Iteration (SPI) (Haarnoja _et al._, 2018_b_). SPI uses a soft Q-function to evaluate the value of a policy and optimizes the policy based on its value. The soft Q-function is calculated by applying a Bellman backup operator \(\mathcal{T}^{\pi}\) as
\[\mathcal{T}^{\pi}Q\left(s_{t},a_{t}\right)\triangleq r_{t}\left(s_{t},a_{t} \right)+\gamma\mathbb{E}_{s_{t+1}\sim\mathcal{P}}\left[V\left(s_{t+1}\right) \right], \tag{9}\]
where \(\gamma\) is a discount factor (here \(\gamma=0.99\)), and \(V\left(s_{t+1}\right)\) satisfies
\[V\left(s_{t}\right)=\mathbb{E}_{a_{t}\sim\pi}\left[Q\left(s_{t},a_{t}\right)- \log\pi\left(a_{t}\mid s_{t}\right)\right]. \tag{10}\]
The target soft Q-function can be obtained by repeating \(Q=\mathcal{T}^{\pi}Q\), and the proof of convergence can be referred to as Soft Policy Evaluation (Lemma 1) in Haarnoja _et al._ (2018_b_). With a soft Q-function rendering values for the policy, the policy optimization is given as Soft Policy Improvement (Lemma 2 in Haarnoja _et al._ (2018_b_)).
In SAC, a stochastic soft Q-function \(Q_{\theta}\left(s_{t},a_{t}\right)\) and a policy \(\pi_{\phi}\left(a_{t}\mid s_{t}\right)\) are parameterized by artificial neural networks \(\theta\) (critic) and \(\phi\) (actor) respectively. During training, \(Q_{\theta}\left(s_{t},a_{t}\right)\) and \(\pi_{\phi}\left(a_{t}\mid s_{t}\right)\) are optimized with stochastic gradients \(\nabla_{\theta}J_{Q}(\theta)\) and \(\nabla_{\phi}J_{\pi}(\phi)\) designed corresponding to Soft Policy Evaluation and Soft Policy Improvement respectively (see equation (6) and (10) in Haarnoja _et al._ (2018_b_)). With these gradients,
SAC updates the critic and actor networks by
\[\theta\leftarrow\theta-\lambda_{Q}\nabla_{\theta}J_{Q}\left(\theta\right), \tag{11}\]
\[\phi\leftarrow\phi-\lambda_{\pi}\nabla_{\phi}J_{\pi}(\phi), \tag{12}\]
where \(\lambda_{Q}\) and \(\lambda_{\pi}\) are the learning rates of Q-function and policy, respectively. Typically, two Q-functions are trained independently, and then the minimum of the Q-functions is brought into the calculation of stochastic gradient and policy gradient. This method is also used in our work to increase the stability and speed of training. SAC also supports automatic adjustment of temperature \(\alpha\) by optimization,
\[\alpha^{*}=\arg\min_{\alpha}\mathbb{E}_{a_{t}\sim\pi^{*}}\left[-\alpha\log\pi^ {*}\left(a_{t}\mid s_{t};\alpha\right)-\alpha\overline{\mathcal{H}}\right]. \tag{13}\]
This adjustment transforms a hyperparameter-tuning challenge into a trivial optimization problem (Haarnoja _et al._, 2018_b_).
TQC (Kuznetsov _et al._, 2020) can be regarded as an improved version of SAC as it alleviates the overestimation bias of the Q-function on the basic algorithm of SAC. TQC adapts the idea of distributional reinforcement learning with quantile regression, i.e. QR-DQN (Dabney _et al._, 2018), to format the return function \(R(s,a):=\sum_{t=0}^{\infty}\gamma^{t}r_{t}\left(s_{t},a_{t}\right)\) into a distributional representation with Dirac delta functions as
\[R_{\psi}(s,a):=\frac{1}{M}\sum_{m=1}^{M}\delta\left(z_{\psi}^{m}(s,a)\right), \tag{14}\]
where \(R(s,a)\) is parameterized by \(\psi\), and \(R_{\psi}(s,a)\) is converted into a summation of \(M\) "atoms" as \(z_{\psi}^{m}(s,a)\). Here only one approximation of \(R(s,a)\) is used for demonstration. Then, only \(k\) smallest atoms of \(z_{\psi}^{m}(s,a)\) are preserved as a truncation to obtain truncated atoms
\[y_{i}(s,a):=r(s,a)+\gamma\left[z_{\psi}^{i}\left(s^{\prime},a^{\prime}\right)- \alpha\log\pi_{\phi}\left(a^{\prime}\mid s^{\prime}\right)\right],\quad i\in[ 1..k], \tag{15}\]
where \(s^{\prime}\sim\mathcal{P}(\cdot\mid s,a)\) and \(a^{\prime}\sim\pi\left(\cdot\mid s^{\prime}\right)\). The truncated atoms form a target distribution as
\[Y(s,a):=\frac{1}{k}\sum_{i=1}^{k}\delta\left(y_{i}(s,a)\right), \tag{16}\]
and the algorithm minimizes the 1-Wasserstein distance between the original distribution \(R_{\psi}(s,a)\) and the target distribution \(Y(s,a)\) to obtain a truncated quantile critic. Further details, such as the design of loss functions and the pseudocode of TQC can be found in Kuznetsov _et al._ (2020).
In this work, the RL interaction runs on a longer time step \(t_{a}=0.5\) compared to the numerical time step \(dt\). RL-related data \(o_{t}\), \(a_{t}\) and \(r_{t}\) are sampled every \(t_{a}\) time interval. With a different numerical step and an RL step, control actuation \(c_{n_{s}}\) for every numerical step should be distinguished from action \(a_{t}\) in RL. There are \(\frac{t_{a}}{dt}=125\) numerical steps between two RL steps, and control actuation is applied based on a first-order-hold function as
\[c_{n_{s}}=a_{t-1}+(a_{t}-a_{t-1})\frac{n_{s}dt}{t_{a}}, \tag{17}\]
where \(n_{s}\) denotes the number of numerical steps after the previous action \(a_{t-1}\). Equation (17) smooths the control actuation with linear interpolation to avoid numerical instability. Unless specified, the neural network configuration is set as 3 layers of 512 neurons for both actor and critic. The entropy coefficient in (8) is initialised to 0.01 and automatically tuned based on (13) during training.
### Reward design for optimal energy efficiency
We propose a hyperparameter-free reward function based on net power saving to discover energy-efficient flow control policies, calculated as the difference between the power saved from drag reduction \(\Delta P_{D}\) and the power consumed from actuation \(P_{act}\). Then, the power reward ("PowerR") at the RL control frequency is
\[r_{t}=\underbrace{\Delta P_{D}}_{\text{power saved}}-\underbrace{P_{act}}_{ \text{power spent}}. \tag{18}\]
The power saved from drag reduction is given by
\[\Delta P_{D}=P_{D0}-P_{Dt}=\left(\left\langle F_{D0}\right\rangle_{T}-\left \langle F_{Dt}\right\rangle_{a}\right)U_{\infty}, \tag{19}\]
where \(P_{D0}\) is the time-averaged baseline drag power without control, and \(\left\langle F_{D0}\right\rangle_{T}\) is the time-averaged baseline drag over a sufficiently long period. \(P_{Dt}\) denotes the time-averaged drag power calculated from the time-averaged drag \(\left\langle F_{Dt}\right\rangle_{a}\) during one RL step \(t_{a}\). Specifically, \(\left\langle\ \right\rangle_{a}\) quantities are calculated at each RL step using 125 DNS samples. The jet power consumption of actuation \(P_{act}\)(Barros _et al._, 2016) is defined as
\[P_{act}=\sum_{j=1}^{2}\left|\rho_{\infty}\langle U_{j}\rangle_{a}^{3}S_{j} \right|=\sum_{j=1}^{2}\left|\frac{\left\langle a_{t}\right\rangle_{a}^{3}}{ \rho_{\infty}^{2}S_{j}^{2}}\right|, \tag{20}\]
where \(\langle U_{j}\rangle_{a}\) is the average jet velocity, and \(S_{j}\) denotes the area of one jet.
The reward function given by (18) quantifies the control efficiency of a controller directly. Thus, it guarantees the learning of a control strategy which simultaneously maximises the drag reduction and minimises the required control actuation. Additionally, this energy-based reward function avoids the effort of hyperparameter tuning.
All the cases in this work use the power-based reward function defined in (18) unless otherwise specified. For comparison, a reward function based on drag and lift coefficient ("ForceR") is also implemented, as suggested by (Rabault _et al._, 2019) with a pre-tuned hyperparameter \(\epsilon=0.2\), as
\[r_{t}^{a}=C_{D0}-\left\langle C_{Dt}\right\rangle_{a}-\epsilon\left|\left\langle C _{Lt}\right\rangle_{a}\right|, \tag{21}\]
where \(C_{D0}\) and \(\left\langle C_{Dt}\right\rangle_{a}\) are calculated from a constant baseline drag and RL-step-averaged drag and lift. The RL-step-averaged lift \(\left|\left\langle C_{Lt}\right\rangle_{a}\right|\) is used to penalize the amplitude of actuation on both sides of the body, avoiding excessive lift force (i.e. the lateral deflection of the wake reduces the drag but increases the side force), and indirectly penalising control actuation and the discovery of unrealistic control strategies. \(\epsilon\) is a hyperparameter designed to balance the penalty on drag and lift force.
The instantaneous versions of these two reward functions are also investigated for practical implementation purposes (both experimentally and numerically) because they can significantly reduce memory used during computation and also support a lower sampling rate. These instantaneous reward functions are computed only from observations at each RL step. In comparison, the reward functions above take into account the time history between two RL steps, while the instantaneous version of the power reward ("PowerInSR") is defined as
\[r_{t,ins}=\Delta P_{D,ins}-P_{act,ins}, \tag{22}\]
where \(\Delta P_{D,ins}\) is given by
\[\Delta P_{D,ins}=\left(\left\langle F_{D0}\right\rangle_{T}-F_{Dt}\right)U_{ \infty}, \tag{23}\]
and \(P_{act,ins}\) is defined as
\[P_{act,ins}=\sum_{j=1}^{2}\left|\rho_{\infty}\overline{U_{j}}^{3}S_{j}\right|= \sum_{j=1}^{2}\left|\frac{a_{t}^{3}}{\rho_{\infty}^{2}S_{j}^{2}}\right|. \tag{24}\]
Notice that the definition of reward in (22) - (24) is similar to (18) - (24), and the only difference is that the average operator \(\langle\ \rangle_{a}\) is removed. Similarly, the instantaneous version of the force-based reward function ("ForceInsR") is defined as
\[r_{t,ins}^{a}=C_{D0}-C_{Dt}-\epsilon\left|C_{Lt}\right|. \tag{25}\]
In SS3.5, we present results on the study of different reward functions and compare the RL performance.
### POMDP and dynamic controllers
In practical applications, the Markov property (7) is often not valid due to noise, broken sensors, partial state information and delays. This means the observations available to the RL agent do not provide full or true state information, i.e. \(o_{t}\neq s_{t}\), while in MDP \(o_{t}=s_{t}\). Then, RL can be generalized as a Partially Observable Markov Decision Process (POMDP) (Cassandra, 1998). A POMDP can be defined as a tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\mathcal{Y},\mathcal{O})\), where \(\mathcal{Y}\) is a finite set of observations \(o_{t}\) and \(\mathcal{O}\) is an observation function that relates observations to underlying states.
With only PM available in the flow environments (sensors on the base of the body instead in the wake), the spatial information is missing along the streamwise direction. Takens' embedding theorem (Takens, 1981) states that the underlying dynamics of a high-dimensional dynamical system can be reconstructed from low-dimensional measurements with their time history. Therefore, past measurements can be incorporated into a sufficient statistic. Furthermore, convective delays may be introduced in the state observation, since the sensors are not located in the wavemaker region of the flow. According to Altman & Nain (1992), past actions are also required in the states of an delayed problem to reduce it into an undelayed problem. This is because a typical delayed-MDP (DMDP) implicitly subverts the Markov Property, as the past measurements and actions only encapsulate partial information.
Therefore, combining the ideas of augmenting past measurements and past actions, we form a sufficient statistic (Bertsekas, 2012), for reducing the POMDP problem to an MDP, defined as
\[I_{k}=[p_{0},...,p_{k},a_{0},...,a_{k-1}], \tag{26}\]
which consists of the time history of pressure measurements \(p_{0},...,p_{k}\) and control actions \(a_{0},...,a_{k-1}\) at time steps \(0,...,k\). This enlarged state at time \(k\) contains all the information known to the controller at time \(k\).
However, the size of the sufficient statistic in (26) grows over time, leading to a nonstationary closed-loop system with control and introducing a challenge in RL that the number of inputs to the networks varies over time. This problem can be solved by reducing (26) to a finite-history approximation (White III & Scherer, 1994). The controller using this finite-history approximation of the sufficient statistic is usually known as a "finite-state" controller, and the error of this approximation converges as the size of the finite history increases (Yu & Bertsekas, 2008). The trade-off is that the dimension of the input increases based on the history length required. The nonlinear policy, which is
parameterised by a neural network controller, has the algebraic description
\[a_{t}\sim\pi_{\phi}\left(a_{t}\mid\underbrace{a_{t-1},a_{t-2},\ldots,a_{t-N_{fs}- 1}}_{\text{past actions}},p_{t},\underbrace{p_{t-1},p_{t-2},\ldots,p_{t-N_{fs}}}_ {\text{past measurements}}\right), \tag{27}\]
where \(p_{t}\) represents pressure measurements at time step \(t\), and \(N_{fs}\) denotes the size of the finite history. The above expression is equivalent to a nonlinear autoregressive exogenous model (NARX).
A "frame stack" technique is used to feed the "finite history sufficient statistic" to the RL agent as input to both the actor and critic neural networks. Frame stack constructs the observation \(o_{t}\) from the latest actions and measurements at step \(t\) as a "frame" \(o_{t}=(a_{t-1},p_{t})\), and piles up the finite history of \(N_{fs}\) frames together into a stack. The number of stacked frames is equivalent to the size of the finite history \(N_{fs}\).
The neural network controller trained as a NARX model benefits from past information to approximate the next optimal control action since the policy has been parameterised as a nonlinear transfer function. Thus, a controller parameterised as a NARX model is denoted as a "dynamic" controller because the time history in the NARX model contains dynamic information of the system. Correspondingly, a controller fed with only the latest actions and measurements is denoted as a "static" controller.
Figure 2 demonstrates three cases with both FM and PM environments which will be investigated. In the FM environment, sensors are located in the wake as \(\mathbf{P_{base}}\) given by (3). In the PM environment, sensors are placed only on the back surface of the body as \(\mathbf{P_{wake}}\) given by (4). The static controller is employed in the FM environment, and both static and dynamic controllers are applied in the PM environment. Results will be shown with \(N_{fs}=27\) and in SS3.3 a parametric study of the effect of the finite history length is presented.
Figure 2: Demonstration of an full-measurement (FM) environment with a static controller (“FM-Static”); a partial-measurement (PM) environment with a static controller (“PM-Static”); and a PM environment with a dynamic controller formulated as a NARX model (case “FM-Static”). The dashed curve represents the bottom blowing/suction jet, and the red dots demonstrate schematically the location of the sensors.
## 3 Results of RL active flow control
### Convergence of learning
We performed RL with the maximum entropy TQC algorithm to discover control policies for the three cases shown in figure 2, which maximise the net-power-saving reward function given by (18). During the learning stage, each episode (1 DNS simulation) corresponds to 200 non-dimensional time units. To accelerate learning, 65 environments run in parallel.
Figure 3 shows the learning curves of the three cases. Table 1 shows the number of episodes needed for convergence and relevant parameters for each case. It can be observed from the curve of episode reward that the RL agent is updated after every 65 episodes, i.e. 1 iteration, where the episode reward is defined as
\[R_{ep}=\sum_{k=1}^{N_{k}}r_{k}, \tag{10}\]
where \(k\) denotes the \(k^{th}\) RL step in one episode and \(N_{k}\) is the total number of samples in one episode. The root mean square (RMS) value of the drag coefficient, \(C_{D}^{RMS}\), at the asymptotic regime of control is also shown to demonstrate convergence, defined as \(C_{D}^{RMS}=\sqrt{(\mathcal{D}(\langle C_{D}\rangle_{env}))^{2}}\), where the operator \(\mathcal{D}\) detrends the signal with a \(9^{th}\)-order polynomial and removes the transient part, and \(\langle\ \rangle_{env}\) denotes the average value of parallel environments in a single iteration.
In figure 3, it can be noticed that in the FM environment, RL converges around 325 episodes (5 iterations) to an optimal policy using a static controller. As will be shown in SS3.2, this policy is (globally) optimal since the vortex shedding is fully attenuated
Figure 3: Episode rewards (solid lines) and RMS of drag coefficient (dashed lines) against episode number during the maximum entropy reinforcement learning phase with TQC.
and the jets converge to zero mass flow actuation, thus recovering the unstable base flow and the minimum drag state. However, with the same static controller in a PM environment (POMDP), the RL agent fails to discover the optimal solution, requiring around 1235 episodes for convergence but only obtaining a relatively low episode reward. Introducing a dynamic controller in the PM environment, the RL agent convergences to a near-optimal solution in 735 episodes. The dynamic controller trained by RL achieves a higher episode reward (34.35) than the static controller in the PM case (21.87), which is close the optimal one in the FM case (37.72). The learning curves illustrate that using a finite horizon of past actions-measurements to train a dynamic controller facilitates learning in terms of the speed of convergence and the accumulated reward.
### Drag reduction with dynamic RL controllers
The trained controllers for the cases shown in figure 2 are evaluated to obtain the results shown in figure 4. The evaluation of control starts at \(t=0\) with the same initial condition, i.e. steady vortex shedding and average drag coefficient \(\langle C_{D}\rangle_{T}\approx 1.45\) (baseline case). Consistent with the learning curves, the discrepancy of control performance in the three cases can be observed both from the drag coefficient \(C_{D}\) after control and the actuation \(Q_{1}\).
* **FM-Static:** With a static controller trained in an full-measurement environment, a drag reduction of 102% is obtained with respect to the base flow (steady unstable fixed point; maximum drag reduction). This indicates that an RL controller informed with full-state information can entirely stabilize the vortex shedding and cancel the unsteady part of the pressure drag.
* **PM-Static:** A static/memoryless controller in a partial-measurement environment leads to a performance degradation and a drag reduction of 47.57% in the asymptotic control stage, i.e. after \(t=80\), compared to the performance of "FM-Static". This performance loss can also be observed from the control actuation curve, as \(Q_{1}\) oscillates with a relatively large fluctuation in "PM-Static" while it stays about zero in the "FM-Static" case. The discrepancy between FM and PM environments using a static controller reveals the challenge of designing a controller with a POMDP environment. The RL agent cannot fully identify the dominant dynamics with only partial measurements on the base surface of the bluff body, resulting in a sub-optimal control behaviour.
* **PM-Dynamic:** With a dynamic controller (NARX model specified in SS2.4) in a partial-measurement environment, the vortex shedding is stabilised and the dynamic controller achieves 100.78% of the maximum drag reduction after time \(t=60\). Although there are minor fluctuations in the actuation \(Q_{1}\), the energy spent in the synthetic jets is relatively low compared to the "PM-Static" case. Thus, a dynamic controller in PM environments can achieve near-optimal drag reduction, even if the RL agent only collects
\begin{table}
\begin{tabular}{l c c c c c c} Environment & Algorithm & \(N_{c}\) & \(R_{c}\) & (Layers, Neurons) & \(N_{fs}\) & Number of Inputs \\ FM-Static & TQC & 325 & 37.72 & (3,512) & 0 & \(64p_{t}+2a_{t-1}\) \\ PM-Static & TQC & 1235 & 21.87 & (3,512) & 0 & \(64p_{t}+2a_{t-1}\) \\ PM-Dynamic & TQC & 715 & 34.35 & (3,512) & 27 & \(N_{fs}(64p_{t}+2a_{t-1})\) \\ \end{tabular}
\end{table}
Table 1: Number of episodes \(N_{c}\) required for RL convergence in different environments. The episode reward \(R_{c}\) at the convergence point, the configuration of NN and the dimension of inputs are presented for each case. \(N_{fs}\) is the finite-horizon length of past actions-measurements.
information from pressure sensors on the body rear surface. The improvement in control indicates that the POMDP due to the PM condition of the sensors can be reduced to an approximate MDP by training a dynamic controller with a finite horizon of past actions-measurements. Furthermore, high frequency action oscillations, which can be amplified with static controllers, are attenuated in the case of dynamic control. These encouraging and unexpected results support the effectiveness and robustness of model-free RL control in practical flow control applications, in which sensors can only be placed on a solid surface/wall.
In figure 5, snapshots of the velocity magnitude field are presented for "Baseline" without control, "PM-Static", "PM-Dynamic" and "FM-Static" control cases. Snapshots are captured at \(t=100\) in the asymptotic regime of control. A vortex-shedding structure of different strengths can be observed in the wake of all three controlled cases. In "PM-Static", the recirculation area is lengthened compared to the baseline flow, corresponding to base pressure recovery and pressure drag reduction. A longer recirculation area can be noticed in "PM-Dynamic", due to the enhanced attenuation of vortex shedding and pressure drag reduction. The dynamic controller in the PM case renders a 326.22% increase of recirculation area with respect to the baseline flow, while only a 116.78% increase is achieved by a static controller. The "FM-Static" case has the longest recirculation area, and the vortex shedding is almost fully stabilized, which is consistent with the most drag reduction as figure 4.
Figure 4: Top figure: Drag coefficient \(C_{D}\) without control (“Baseline”) and with active flow control by RL in both FM and PM cases. In PM cases, control results with a dynamic and static controller are presented. Dashed lines show average values of \(C_{D}\) from the asymptotic regime (i.e. after \(t=80\)) in both cases. The dot-dashed line represents the base flow \(C_{D}\). Bottom figure: The mass flow rate \(Q_{1}\) of one of the blowing and suction jets is presented for both FM and PM cases with static and dynamic controllers.
Figure 6 presents first- and second-order base pressure statistics for the baseline case without control and PM cases with control. In figure 6(a), the time-averaged value of base pressure, \(\overline{p}\) demonstrates the base pressure recovery after control is applied. Due to flow separation and recirculation, the time-averaged base pressure is higher at the middle of the bluff body base, which is retained with control. The base pressure increase is directly linked to pressure drag reduction, which quantifies the control performance of both static and dynamic controllers. Up to 49.56% of pressure increase at the centre of the base is obtained in the "PM-Dynamic" case while only 21.15% can be achieved by a static controller. In figure 6(b), the base pressure RMS is shown. For the baseline flow, strong vortex-induced fluctuations of the base pressure can be noticed around the top and bottom base of the bluff body. In the "PM-Static" case, the RL controller suppresses partially the vortex shedding, leading to a sub-optimal reduction of the pressure fluctuation. The sensors close to the top and bottom corners are also affected by the synthetic jets, which change the RMS trend for the two top and bottom measurements. In the
Figure 5: Contours of velocity magnitude in the asymptotic regime of control. (a) “Baseline” (no control); (b) “PM-Static”; (c) “PM-Dynamic”; (d) “FM-Static”.
"PM-Dynamic" case, the pressure fluctuations are nearly zero for all the measurements on the base, highlighting the success of vortex shedding suppression by a dynamic RL controller in a PM environment.
### Horizon of the finite-history sufficient statistic
A parametric study on the horizon of the finite history in NARX (equation (4)), i.e. the number of frames stacked \(N_{fs}\), is presented in this section. Since the NARX model uses an finite horizon of past actions-measurements in (26), the horizon of the finite history affects the convergence of the approximation, as discussed in Yu & Bertsekas (2008). And this approximation affects the optimization during the learning of RL because it determines whether the RL agent can observe sufficient information to converge to an optimal policy.
Since vortex shedding is the dominant instability to be controlled, the choice of \(N_{fs}\) should intuitively link to the timescale of the vortex shedding period. The "frames" of observations are obtained every RL step (0.5 time units), while the vortex shedding period is \(t_{vs}\approx 6.85\) time units. Thus, \(N_{fs}\) is rounded to integer values for different numbers of vortex shedding periods, as shown in table 2.
The results of time-averaged drag coefficients \(\langle C_{D}\rangle\) after control and the average episode rewards \(\langle R_{ep}\rangle\) in the final stage of training are presented in figure 7. As
\begin{table}
\begin{tabular}{c c c} Number of & Time units & History length \\ VS periods & & (\(N_{fs}\)) \\ \hline
0.5 & 3.43 & 7 \\
1 & 6.85 & 14 \\
2 & 13.70 & 27 \\
3 & 20.55 & 41 \\
4 & 27.40 & 55 \\
5 & 34.25 & 68 \\ \end{tabular}
\end{table}
Table 2: Correspondence between number of vortex shedding (VS) periods and frame stack (history) length in samples \(N_{fs}\). The RL control step size is \(t_{a}=0.5\) and \(N_{fs}\) is rounded to an integer.
Figure 6: Mean (a) and RMS (b) base pressure for controlled and uncontrolled cases. 64 measurements are numbered in the order from top to bottom of the bluff body base.
increases from 0 to the optimal values of 27, the performance of RL control improves, resulting in a lower \(\langle C_{D}\rangle\) and a higher \(\langle R_{ep}\rangle\). \(N_{fs}=2\) is specially examined because the latent dimension of vortex shedding limit cycle is 2. However, the control performance with \(N_{fs}=2\) is marginally improved to the one with \(N_{fs}=0\), i.e. a static controller. This result indicates that the horizon consistent with the vortex shedding dimension is not long enough for the finite horizon of past actions-measurements. The optimal history length to achieve stabilisation of the vortex shedding with sensors only on the base of the body is 27 samples, which are equivalent to 13.5 convective time units or \(\sim 2\) vortex shedding periods.
With \(N_{fs}=41\) and \(N_{fs}=55\), the drag reduction and episode rewards drop slightly compared to \(N_{fs}=27\). The decline in performance is non-negligible as \(N_{fs}\) increases further to 68. This decline shows that excessive inputs to the neural networks, as table 1, may impede training because more parameters need to be tuned or larger neural networks need to be trained.
### Observation sequence with past actions
Past actions (exogenous terms in NARX) facilitate reducing a POMDP to an MDP problem, as discussed in SS2.4. In the near-optimal control of a PM environment using a dynamic controller with inputs \(\left({{o_{t}},{o_{t-1}},...,{o_{t-N_{fs}}}}\right)\), a sequence of observations \({o_{t}}=\{{p_{t}},{a_{t-1}}\}\) at step \(t\) is constructed to include pressure measurements and actions. In the FM environment, due to the introduction of one-step delayed action due to the first-order-hold interpolation given by (17), the inclusion of the past action along with the
Figure 7: Average drag coefficient \(\langle C_{D}\rangle\) and average episode reward \(\langle R_{ep}\rangle\) in PM cases against number history length (numbers of stacked frames) \(N_{fs}\). \(\langle C_{D}\rangle\) and is obtained from the asymptotic regime of control. \(\langle R_{ep}\rangle\) is calculated from 2 episodes after convergence of RL.
current pressure measurement, meaning \(o_{t}=\{p_{t},a_{t-1}\}\), is required even when the sensors are placed in the wake and cover the wavemaker region.
Figure 8 presents the control performance for the same environment with and without past actions included. In the FM case, there is no apparent difference between RL control with \(o_{t}=\{p_{t},a_{t-1}\}\) or \(o_{t}=\{p_{t}\}\), which indicates that the inclusion of the past action is negligible to the performance. This is the case when the RL sampling frequency is sufficiently faster than the timescale of the vortex shedding dynamics. In PM cases, if exogenous action terms are not included in the observations but only the finite history of pressure measurements is used, the RL control fails to converge to a near-optimal policy, with only 67.96% drag reduction. With past actions included, the drag reduction of the same environment increases up to 100.78%.
The above results show that in PM environments the sufficient statistic cannot be constructed only from the finite history of measurements. This can be explained as the missing state information needs to be replaced by both state-related measurements and control actions. With past actions, an OD-MDP is also reduced to an undelayed augmented MDP, thus improving the control performance.
### Reward study
In SS3.2, a power-based reward function given by (18) has been implemented and stabilising controllers can be learned by RL, as shown. In this section, RL control results with other forms of reward functions (introduced in SS2.3) are provided and discussed.
The control performance of RL control with the different reward functions is evaluated based on the drag coefficient \(C_{D}\) shown in figure 9. Static controllers are trained in FM
Figure 8: Curves of drag coefficients after control being applied in both FM and PM environments. Results from FM cases are presented as references, while a performance difference can be observed in the PM cases with and without past actions included.
environments, and dynamic controllers are trained in PM environments. In FM cases, control performance is not sensitive to the choice of reward function (power or force based). In PM cases, the discrepancies between RL-step time-averaged and instantaneous rewards can be observed in the asymptotic regime of control. The controllers with both rewards (power or force) achieve nearly-optimal control performance, but there is some unsteadiness in the cases using instantaneous rewards due to slow statistical convergence of the rewards and limited correlation to the partial observations.
All four types of reward functions studied in this work achieve nearly-optimal drag reduction around 100%. However, the energy-based reward ("PowerR") offers an intuitive reward design, attributable to its physical properties and the dimensionally consistent addition of the constituent terms of the reward function. Further enhancing its practicality, since the power of the actuator can be directly measured, it avoids the necessity for hyperparameter tuning, as in the force-based reward. Additionally, the results show similar performance with both time-averaged between RL steps and instantaneous rewards, avoiding the necessity for faster sampling for the calculation of the rewards. This choice of reward function can be extended to various RL flow control problems and can be beneficial to experimental studies.
Figure 9: Evaluation of RL-trained controllers with various reward functions. Drag coefficient \(C_{D}\) curves are presented for each case. Dotted lines denote the cases with FM environments, while solid lines denote PM environments. Dashed lines are time-averaged values after \(t=80\) when the control is in a steady phase. The dash-dotted line represents \(C_{D}\) in the base flow which has no vortex shedding. Control starts at \(t=0\) with the same initial conditions for every case.
### TQC vs SAC
Control results with TQC and SAC are presented in figure 10 in terms of \(C_{D}\). TQC shows a more robust control performance. In the case of FM, SAC might demonstrate a slightly more stable transient behaviour attributed to the fact that the quantile regression process in TQC introduced complexity to the optimization process. Both controllers achieved an identical level of drag reduction in the FM case.
However, in the context of the PM cases, it is observed that TQC outperforms SAC in drag reduction with both static and dynamic controllers. For static control, TQC achieved an average drag reduction of 58.5%, compared to the 48.7% reduction achieved by SAC. The performance under dynamic control conditions is more compelling, where TQC fully reduced the drag, achieving 100.78% of drag reduction, reverting it to a near-base-flow scenario. In contrast, SAC managed to achieve an average drag reduction of 96.6%.
The fundamental mechanism for updating Q-functions in RL involves selecting the maximum expected Q-functions among possible future actions. This process, however, can potentially lead to overestimation of certain Q-functions (Hasselt, 2010). In POMDP, this overestimation bias might be exacerbated due to the inherent uncertainty arising from the partial-state information. Therefore, the Q-learning-based algorithm, when applied to POMDPs, might be more prone to choosing these overestimated values, thereby affecting the overall learning and decision-making process.
As mentioned in SS2.2, the core benefit of TQC under these conditions can be attributed to its advanced handling of the overestimation bias of rewards. By constructing a more
Figure 10: Comparison of control performance in terms of \(C_{D}\) between SAC and TQC. Control starts at \(t=0\). Solid curves show the cases using TQC and “Baseline” while dotted curves show SAC. Dashed curves present the time-averaged value of TQC and “Baseline” after \(t=80\). The dash-dotted curve corresponds to the baseflow \(C_{D}\).
accurate representation of possible returns, TQC provides a more accurate Q-function approximation than SAC. This process of modulating the probability distribution of the Q-function assists TQC in managing the uncertainties inherent in environments with only partial-state information. In this case, TQC can adapt more robustly to changes and uncertainties, leading to betetr performance in both static and dynamic control tasks.
## 4 Conclusions
In this study, maximum entropy RL with TQC is performed in an active flow control application with partial measurements to learn a feedback controller for bluff body drag reduction. Neural network controllers have been trained by the RL algorithm to discover a control strategy to stabilize the vortex shedding behind a 2D square bluff body at \(Re=100\). By comparing the control performances in FM environments to PM environments, we showed a non-negligible degradation of RL control performance if the controller is not trained with full-state information. To solve this issue, we proposed a method to train a dynamic neural network controller with an approximation of a finite-history sufficient statistic and formulate the dynamic controller as a NARX model. The dynamic controller was able to improve the drag reduction performance in PM environments and achieve near-optimal performance (\(100\%\) with respect to the baseflow drag) compared to a static controller (\(48\%\)). We found that the optimal horizon of the finite history in NARX is approximately two vortex shedding periods, when the sensors are located only on the base of the body. The importance of including exogenous action terms in the observations of RL is discussed, by pointing out the degradation of \(32.04\%\) on drag reduction if only past measurements are used in the PM environment. Finally, we proposed an optimal power consumption design for the reward function based on the drag power savings and the power of the actuator. This power-based reward function offers an intuitive understanding of the system's performance whereas electromechanical losses can be also added directly, once a specific actuator is chosen. Moreover, its inherent feature of being hyperparameter-free contributes to a straightforward reward function design process in the context of flow control problems.
It was shown that model-free RL was able to discover an optimal control strategy without any prior knowledge of the system dynamics using partial realistic measurements, exploiting only input-output data from the simulation environment. Therefore, this particular study on RL-based active flow control in 2D laminar flow simulations can be seen as a promising direction for controlling the complex dynamics of 3D turbulent flows by replacing the simulation environment with the experimental setup.
Data availability: the RL code and data will become available at github.com/orgs/RigasLab.
|
2303.11876 | An implicit function theorem for the stream calculus | In the context of the stream calculus, we present an Implicit Function
Theorem (IFT) for polynomial systems, and discuss its relations with the
classical IFT from calculus. In particular, we demonstrate the advantages of
the stream IFT from a computational point of view, and provide a few example
applications where its use turns out to be valuable. | Michele Boreale, Luisa Collodi, Daniele Gorla | 2023-03-21T14:19:29Z | http://arxiv.org/abs/2303.11876v4 | # An implicit function theorem for the stream calculus
###### Abstract
In the context of the stream calculus, we present an Implicit Function Theorem (IFT) for polynomial systems, and discuss its relations with the classical IFT from calculus. In particular, we demonstrate the advantages of the stream IFT from a computational point of view, and provide a few example applications where its use turns out to be valuable.
## 1 Introduction
In theoretical computer science, the last two decades have seen an increasing interest in the concept of _stream_ and in the related proof techniques, collectively designated as the _stream calculus_[18, 19, 21]. A _stream_\(\sigma=(r_{0},r_{1},...)\) is an infinite sequence of elements (coefficients) \(r_{i}\) drawn from a set endowed with some algebraic structure, such as a field. Therefore, as a concrete mathematical object, a stream is just the same as a _formal power series_ considered in Combinatorics and other field of mathematics. The use of a different terminology here is motivated by the fact that, with streams, the basic computational device is that of _stream derivative_, as opposed to the classical derivative from calculus considered in formal power series. The stream derivative \(\sigma^{\prime}\) is obtained by simply removing the first element \(r_{0}\) from \(\sigma\); stream derivative enjoys a nice relation with the operation of _convolution_\(\times\) (one of the possible notions of product for streams) as expressed by the so-called _fundamental theorem_ of the stream calculus:
\[\sigma=\sigma(0)+x\times\sigma^{\prime}\]
where \(x\) represents the stream \((0,1,0,0,...)\).
A powerful and elegant proof technique for streams is _coinduction_[23], whose step-by-step flavour naturally agrees with the above mentioned features of streams, in particular stream derivative. Moreover, an important specification and computational device is represented by _stream differential equations_ (SDEs, [11]), the analog of _ordinary_ differential
equations (ODEs) of formal power series and functions. It is this toolkit of mechanisms and proof techniques that one collectively designates as the stream calculus [18, 19, 21]. One point of strength of the stream calculus is that it provides simple, direct and unified reasoning techniques, that can be applied to a variety of systems that involve the treatment of sequences. A distinguished feature of proofs conducted within the stream calculus is that issues related to convergence (of sequences, functions etc.) basically never enter the picture. As an example, the stream calculus has been proved valuable in providing a coinductive account of analytic functions and Laplace transform [16], in solving difference and differential equations [19, 21, 4, 5], as well as in formalizing several versions of signal flow graphs [2, 3, 20, 22]. In Section 2, we provide a quick overview of the basic definitions and features of the stream calculus.
The main goal and contribution of the present paper is to add yet another tool to the stream calculus: an Implicit Function Theorem (IFT) for systems of stream polynomial equations. Indeed, while SDEs represent a powerful computational device, depending on the problem at hand streams may be more naturally expressed in an algebraic fashion, that is as the (unique) solution of systems of polynomial equations. In analogy with the classical IFT from calculus [17, 12, 15], our main result provides sufficient syntactic conditions under which a system of polynomial equations has a unique stream solution. Moreover, the theorem also provides an equivalent system of SDEs, that is useful to actually compute the stream solution.
In the classical IFT [17, Th.9.28], one considers a system of equations in the variables \((x,\textbf{y})\), say \(\textbf{F}(x,\textbf{y})=0\). For simplicity, here we assume \(x\) is a scalar, while **y** can be a vector. The theorem provides sufficient conditions under which a given solution (point) \((x_{0},\textbf{y}_{0})\) of the equation extends to a unique family of solutions \((x,\textbf{y})\) such that **y** is a function of \(x\): say \(\textbf{y}=f(x)\), with \(f(x_{0})=\textbf{y}_{0}\). Otherwise said, \(\textbf{F}(x,\textbf{y})=0\) implicitly defines a function \(f\) s.t. \(\textbf{F}(x,f(x))\) is identically \(0\), hence the name of the theorem. The required sufficient condition is that the jacobian matrix (the matrix of partial derivatives) of **F** with respect to **y** be nonsingular when evaluated at \((x_{0},\textbf{y}_{0})\). The theorem also gives a system of ODEs whose solution is the function \(f(x)\). Although the ODE system will be in general impossible to solve analytically, it can be used to compute a truncated Taylor series of \(f(x)\) to the desired degree of approximation.
In Section 3, in the setting of the stream calculus and of polynomial equations, we obtain a version of the IFT whose form resembles closely the classical one (Theorem 2). The major difference is that the stream version relies, of course, on stream derivatives, and on a corresponding notion of stream jacobian. In particular, the system of ODEs that defines the solution is here replaced by a system of SDEs. A crucial step toward proving the result is devising a stream version of the chain rule from calculus, whereby one can express the derivative of a function \(\textbf{F}(x,y_{1}(x),...,y_{n}(x))\) w.r.t. \(x\) in terms of the partial derivatives of **F** w.r.t. \(y_{i}\) and the ordinary derivative of the \(y_{i}\) w.r.t. \(x\).
In Section 4, beyond the formal similarity, we discuss the precise mathematical relation of the stream IFT with the classical IFT (Theorem 3). We show that the to theorems can be applied precisely under the same assumptions on the classical jacobian of \(\textbf{F}(x,\textbf{y})\); moreover, the sequence of Taylor coefficients of the function defined by the classical IFT coincides with the solution identified by the stream IFT. Therefore, one has two alternative methods to compute the (stream) solution. Despite this close relationship, the stream version of the theorem is conceptually and computationally very different from the classical
one; the computational aspects will be further discussed below. In Section 4, we also discuss the relation of the stream IFT with algebraic series as considered in enumerative combinatorics [9, 25].
As an extended example of application of the stream IFT, in Section 5 we apply the result to the problem of enumerating _three-colored trees_[9, Sect.4,Example 14], a typical class of combinatorial objects that are most naturally described by algebraic equations.
In Section 6 we discuss the computational aspects of the stream IFT. We first outline an efficient method to calculate the coefficients of the stream solution up to a prescribed order, based on the SDE system provided by the theorem. Then we offer an empirical comparison between two methods to compute the stream solution: the above mentioned method based on the stream IFT, and the method based on the ODEs provided by the classical IFT. This comparison clearly shows the computational benefits of first method (stream IFT) over the second one (classical IFT) in terms of running time. An important point is that, when applied to polynomials, the syntactic size of stream derivatives is approximately _half_ the size of classical derivatives.
We conclude the paper in Section 7 with a brief discussion on possible directions for future research.
Related workThe stream calculus in the form considered here has been introduced by Rutten in a series of works, in particular [18] and [19]. In [18], streams and operators on streams are introduced via coinductive definitions and behavioural differential equations, later called stream differential equations, involving initial conditions and derivatives of streams. Several applications are also presented to: difference equations, analytical differential equations, and some problems from discrete mathematics and combinatorics. In [19] streams, automata, languages and formal power series are studied in terms of the notions of coalgebra homomorphisms and bisimulation.
A recent development of the stream calculus that is related to the present work is [6], where the authors introduce a polynomial format for SDEs and an algorithm to automatically check polynomial equations, with respect to a _generic_ notion of product for streams satisfying certain conditions. These results can be applied to convolution and _shuffle_ products, among the others.
In formal language theory, context-free grammars can be viewed as instances of polynomial systems: see [13]. A coinductive treatment of this type of system is found in Winter's work [26, Ch.3]. Note that, on one hand, the polynomial format we consider here is significantly more expressive than context-free grammars, as we can deal with such equations as \(x^{2}+y^{2}-1=0\) (see Example 1 in Section 3) that are outside the context-free format. On the other hand, here we confine to _univariate_ streams, which can be regarded as weighted languages on the alphabet \(\{x\}\), whereas in language theory alphabets of any finite size can be considered. How to extend the present results to multivariate streams is an open problem.
In enumerative combinatorics [25, 9], formal power series defined via polynomial equations are named _algebraic series_. [9, Sect.4] discusses several aspects of algebraic series, including several methods of reduction, involving the theory of resultants and Groebner bases. We compare our approach to algebraic series in Section 3, Remark 2.
Background
### Streams
We let \(\Sigma\langle\mathbb{K}\rangle:=\mathbb{K}^{\omega}\), ranged over by \(\sigma,\tau,...\), denote the set of _streams_, that are infinite sequences of elements from \(\mathbb{K}\): \(\sigma=(r_{0},r_{1},r_{2},...)\) with \(r_{i}\in\mathbb{K}\). Often \(\mathbb{K}\) is understood from the context and we shall simply write \(\Sigma\) rather than \(\Sigma\langle\mathbb{K}\rangle\). When convenient, we shall explicitly consider a stream \(\sigma\) as a function from \(\mathbb{N}\) to \(\mathbb{K}\) and, e.g., write \(\sigma(i)\) to denote the \(i\)-th element of \(\sigma\). By slightly overloading the notation, and when the context is sufficient to disambiguate, the stream \((r,0,0,...)\) (\(r\in\mathbb{K}\)) will be simply denoted by \(r\), while the stream \((0,1,0,0,...)\) will be denoted by \(x\); see [19] for motivations behind these notations. Given two streams \(\sigma\) and \(\tau\), we define the streams \(\sigma+\tau\) (sum) and \(\sigma\times\tau\) (convolution product) by
\[(\sigma+\tau)(i):=\sigma(i)+\tau(i)\qquad\qquad(\sigma\times\tau)(i):=\sum_{0 \leq j\leq i}\sigma(j)\cdot\tau(i-j) \tag{1}\]
for each \(i\geq 0\), where the \(+\) and \(\cdot\) on the right-hand sides above denote sum and product in \(\mathbb{K}\), respectively. Sum enjoys the usual commutativity and associativity properties, and has the stream \(0=(0,0,...)\) as an identity. Convolution product is commutative, associative, has \(1=(1,0,0,...)\) as an identity, and distributes over \(+\). Multiplication of \(\sigma=(r_{0},r_{1},...)\) by a scalar \(r\in\mathbb{K}\), denoted \(r\sigma=(r\,r_{0},r\,r_{1},...)\), is also defined and makes \((\Sigma,+,0)\) a vector space over \(\mathbb{K}\). Therefore, \((\Sigma,+,\times,0,1)\) forms a \(\mathbb{K}\)-algebra. We also record the following facts for future use: \(x\times\sigma=(0,\sigma(0),\sigma(1),...)\) and \(r\times\sigma=(r\,\sigma(0),r\,\sigma(1),...)\), where \(r\in\mathbb{K}\). In view of the second equation above, \(r\,\times\,\sigma\) coincides with \(r\sigma\).
For each \(\sigma\), we let its _derivative_\(\sigma^{\prime}\) be the stream defined by \(\sigma^{\prime}(i)=\sigma(i+1)\) for each \(i\geq 0\). In other words, \(\sigma^{\prime}\) is obtained from \(\sigma\) by removing the first element \(\sigma(0)\). The equality \(x\times\sigma=(0,\sigma(0),\sigma(1),...)\) above leads to the so called fundamental theorem of the stream calculus, whereby for each \(\sigma\in\Sigma\)
\[\sigma=\sigma(0)+x\times\sigma^{\prime}\,. \tag{2}\]
Every stream \(\sigma\) s.t. \(\sigma(0)\neq 0\) has a unique inverse w.r.t. convolution, denoted \(\sigma^{-1}\), that satisfies the equations:
\[(\sigma^{-1})^{\prime}=-\sigma(0)^{-1}\cdot(\sigma^{\prime}\times\sigma^{-1}) \qquad\qquad\qquad(\sigma^{-1})(0)=\sigma(0)^{-1}\,. \tag{3}\]
### Polynomial stream differential equations
Let us fix a finite, non empty set of symbols or _variables_\(\mathcal{Y}=\{y_{1},\ldots,y_{n}\}\) and a distinct variable \(x\notin\mathcal{Y}\). Notationally, when fixed an order on such variables, we use the notation \(\mathbf{y}:=(y_{1},...,y_{n})\). We fix a generic field \(\mathbb{K}\) of characteristic \(0\); \(\mathbb{K}=\mathbb{R}\) and \(\mathbb{K}=\mathbb{C}\) will be typical choices. We let \(\mathcal{P}:=\mathbb{K}[x,y_{1},...,y_{n}]\), ranged over by \(p,q,...\), be the set of polynomials with coefficients in \(\mathbb{K}\) and indeterminates in \(\{x\}\cup\mathcal{Y}\). As usual, we shall denote polynomials as formal finite sums of distinct monomials with coefficients in \(\mathbb{K}\): \(p=\sum_{i\in I}r_{i}m_{i}\), for \(r_{i}\in\mathbb{K}\) and \(m_{i}\) monomials over \(\{x\}\cup\mathcal{Y}\). For the sake of uniform notation, we shall sometimes let \(y_{0}\) denote \(x\), so we can write a generic monomial in \(\mathcal{P}\) as \(y_{0}^{k_{0}}\cdots y_{n}^{k_{n}}\), for \(k_{i}\in\mathbb{N}\) for every \(i\). By slight abuse of notation, we shall write the zero polynomial and the empty monomial as \(0\) and \(1\), respectively.
Over \(\mathcal{P}\), one defines the usual operations of sum \(p+q\) and product \(p\cdot q\), with \(0\) and \(1\) as identities, and enjoying commutativity, associativity and distributivity, which make \(\mathcal{P}\) a ring. Multiplication of \(p\in\mathcal{P}\) by a scalar \(r\in\mathbb{K}\), denoted \(rp\), is also defined and makes \((\mathcal{P},+,0)\) a vector space over \(\mathbb{K}\). Therefore, \((\mathcal{P},+,\times,0,1)\) as well forms a \(\mathbb{K}\)-algebra. For each \(n\)-tuple of streams \(\boldsymbol{\sigma}=(\sigma_{1},...,\sigma_{n})\), there is a unique \(\mathbb{K}\)-algebra homomorphism \(\phi_{\boldsymbol{\sigma}}:\mathcal{P}\longrightarrow\Sigma\) such that \(\phi_{\boldsymbol{\sigma}}(x)=(0,1,0,...)\) and \(\phi_{\boldsymbol{\sigma}}(y_{i})=\sigma_{i}\) for \(i=1,...,n\). For any \(p\in\mathcal{P}\), we let \(p(x,\boldsymbol{\sigma}):=\phi_{\boldsymbol{\sigma}}(p)\in\Sigma\): we identify this as the result of the substitution of the variables \(x\) and \(\mathbf{y}\) in \(p\) with the streams \(x=(0,1,0,...)\) and \(\boldsymbol{\sigma}\), respectively.
**Definition 1** (Sde [19]).: _Given a tuple of polynomials \((p_{1},...,p_{n})\in\mathcal{P}^{n}\) and \(\mathbf{r}_{0}=(r_{1},...,r_{n})\in\mathbb{K}^{n}\), the corresponding system of (polynomial) stream differential equations (SDEs) \(\mathcal{D}\) and initial conditions are written as follows_
\[\mathcal{D}=\{y^{\prime}_{1}=p_{1},...,y^{\prime}_{n}=p_{n}\}\qquad\qquad \rho=\{y_{1}(0)=r_{1},...,y_{n}(0)=r_{n}\}\,. \tag{4}\]
_The pair \((\mathcal{D},\rho)\) is also said to form a (polynomial) SDE initial value problem for the variables \(\mathbf{y}\). A solution of (4) is a tuple of streams \(\boldsymbol{\sigma}=(\sigma_{1},...,\sigma_{n})\in\Sigma^{n}\) such that \(\sigma^{\prime}_{i}=p_{i}(x,\boldsymbol{\sigma})\) (on the right-hand side, \(x\) denotes a stream) and \(\sigma_{i}(0)=r_{i}\) for \(i=1,...,n\)._
For a proof of the following theorem (in a more general context), see e.g. [11, 6].
**Theorem 1** (existence and uniqueness of solutions).: _Every polynomial SDE initial value problem of the form (4) has a unique solution._
**Remark 1** (stream coefficients computation).: We record for future use that a SDE initial value problem \((\mathcal{D},\rho)\) like (4) implies a recurrence relation, hence an algorithm, to compute the coefficients of the solution streams \(\sigma_{i}\). Indeed, denote by \(\sigma_{:k}\) the stream that coincides with \(\sigma\) when restricted to \(\{0,...,k\}\) and is \(0\) elsewhere. This notation is extended to a tuple \(\boldsymbol{\sigma}\) componentwise. Then we have, for each \(i=1,...,n\) and \(k\geq 0\):
\[\sigma_{i}(0) =y_{i}(0) \tag{5}\] \[\sigma_{i}(k+1) =\sigma^{\prime}_{i}(k)\,=\,p_{i}(x,\boldsymbol{\sigma})(k)\,=\,p _{i}(x,\boldsymbol{\sigma}_{:k})(k) \tag{6}\]
where the last step follows from the fact that the \(k\)-th coefficient of \(p_{i}(x,\boldsymbol{\sigma})\) only depends on the first \(k\) coefficients of \(\boldsymbol{\sigma}\) (see (1)). As an example, consider
\[y^{\prime}=y^{2}\qquad\qquad y(0)=1\]
for which we get the recurrence: \(\sigma(0)=1\) and \(\sigma(k+1)=\sigma^{2}(k)=\sum_{j=0}^{k}\sigma(j)\cdot\sigma(k-j)\). From the computational point of view, in the case of one single equation (\(n=1\)), this is far from optimal; in the case of \(n>1\) equations, the situation is more complicated. We defer to Section 6 further considerations on the computation of stream coefficients, including details on an effective implementation of (6).
## 3 An implicit function theorem for the stream calculus
Let \(\mathcal{E}\subseteq\mathcal{P}\) be a finite, nonempty set of polynomials. A _stream solution_ of \(\mathcal{E}\) is a tuple of streams \(\boldsymbol{\sigma}=(\sigma_{1},...,\sigma_{n})\) such that \(p(x,\boldsymbol{\sigma})=0\) for each \(p\in\mathcal{E}\). We want to show that, under certain syntactic conditions, any stream solution of \(\mathcal{E}\) can be uniquely defined via
a polynomial SDE initial value problem \((\mathcal{D},\rho)\). Instrumental to establish this result is a close stream analog of the well known Implicit Function Theorem (IFT) from calculus.
Let us introduce some extra notation on polynomials and streams. Beside the variables \(x\) and \(\mathbf{y}=(y_{1},...,y_{n})\), we shall consider a set of new, distinct _initial value indeterminates_\(\mathbf{y}_{0}=(y_{01},...,y_{0n})\) and _primed indeterminates_\(\mathbf{y^{\prime}}=(y^{\prime}_{1},...,y^{\prime}_{n})\).
**Definition 2** (syntactic stream derivative).: _The syntactic stream derivative operator \((\cdot)^{\prime}\) on \(\mathcal{P}\) is first inductively defined on monomials as:_
\[(1)^{\prime}:=0\qquad(x)^{\prime}:=1\qquad(y_{i})^{\prime}:=y^{\prime}_{i} \qquad(y_{i}\cdot m)^{\prime}:=y^{\prime}_{i}\cdot m+y_{0i}\cdot(m)^{\prime}\]
_It is then extend to polynomials in \(\mathcal{P}\) by linearity._
As an example, \((xy_{1}^{2}+y_{1}y_{2})^{\prime}=y_{1}^{2}+y^{\prime}_{1}y_{2}+y_{01}y^{\prime}_ {2}\). Note that \(p^{\prime}\) lives in the polynomial ring \(\mathbb{R}[x,\mathbf{y}_{0},\mathbf{y},\mathbf{y}^{\prime}]\supseteq\mathcal{P}\). We shall write \(p^{\prime}\) as \(p^{\prime}(x,\mathbf{y}_{0},\mathbf{y},\mathbf{y}^{\prime})\) when wanting to make the indeterminates that may occur in \(p^{\prime}\) explicit. With this notation, it is easy to check that \((\cdot)^{\prime}\) commutes with substitution: for every \(p(x,\mathbf{y})\) and \(\boldsymbol{\sigma}\), we have that \((p(x,\boldsymbol{\sigma}))^{\prime}=p^{\prime}(x,\boldsymbol{\sigma}(0), \boldsymbol{\sigma},\boldsymbol{\sigma}^{\prime})\), where, as usual, the \(x\) on the right-hand side denotes a stream.
We start with a technical lemma for converting SDEs in rational form into polynomial form. The proof is an easy application of (3) and is omitted.
**Lemma 1** (from rational to polynomial SDEs).: _Let \(f_{i}(x,\mathbf{y}_{0},\mathbf{y})\) for \(i=1,...,n\) and \(g(x,\mathbf{y}_{0},\mathbf{y})\) be polynomials, and \(\mathbf{r}_{0}\in\mathbb{R}^{n}\) such that \(g(0,\mathbf{r}_{0},\mathbf{r}_{0})\neq 0\). Let \(\boldsymbol{\sigma}=(\sigma_{1},....,\sigma_{n})\) be any tuple of streams satisfying the following system of rational SDEs and initial conditions for \(\mathbf{y}=\boldsymbol{\sigma}\):_
\[y^{\prime}_{i}=f_{i}(x,\mathbf{r}_{0},\mathbf{y})\cdot g(x,\mathbf{r}_{0}, \mathbf{y})^{-1}\qquad\qquad y_{i}(0)=r_{0i}\qquad\qquad(i=1,...,n)\,. \tag{7}\]
_Then, for a new variable \(w\), there is a polynomial \(h(x,\mathbf{y}_{0},\mathbf{y},w)\), not depending on \(\boldsymbol{\sigma}\), such that \((\boldsymbol{\sigma},\tau)\), with \(\tau:=g(x,\mathbf{r}_{0},\boldsymbol{\sigma})^{-1}\), is the unique solution of the following initial value problem of \(n+1\) polynomial SDEs and initial conditions for the variables \((\mathbf{y},w)\):_
\[y^{\prime}_{i} =f_{i}(x,\mathbf{r}_{0},\mathbf{y})\cdot w y_{i}(0)=r_{0i}\qquad \qquad\qquad(i=1,...,n) \tag{8}\] \[w^{\prime} =-g(0,\mathbf{r}_{0},\mathbf{r}_{0})^{-1}\cdot h(x,\mathbf{r}_{0},\mathbf{y},w)\cdot w w(0)=g(0,\mathbf{r}_{0},\mathbf{r}_{0})^{-1}\,. \tag{9}\]
_In particular, \(h(x,\mathbf{y}_{0},\mathbf{y},w)\) is obtained from \(g^{\prime}=g^{\prime}(x,\mathbf{y}_{0},\mathbf{y},\mathbf{y}^{\prime})\) by replacing each \(y^{\prime}_{i}\) with \(f_{i}(x,\mathbf{y}_{0},\mathbf{y})\cdot w\), for \(i=1,...,n\). Conversely, for any \((\boldsymbol{\sigma},\tau)\) satisfying (8) and (9), we have that \(\boldsymbol{\sigma}\) also satisfies (7)._
An important technical ingredient in the proof of the IFT for streams is an operator of _stream partial derivative_\(\frac{\mathfrak{d}}{\mathfrak{d}y_{i}}\) on polynomials: this will allow us to formulate a stream analog of the chain rule from calculus1.
Footnote 1: The chain rule from calculus is: \(\frac{\mathrm{d}}{\mathrm{d}x}f(y_{1}(x),...,y_{n}(x))=\sum_{i=1}^{n}\frac{ \partial}{\partial y_{i}}f(y_{1}(x),...,y_{n}(x))\cdot\frac{\mathrm{d}}{ \mathrm{d}x}y_{i}(x)=\nabla_{\mathbf{y}}\ f(y_{1}(x),...,y_{n}(x))\cdot(\frac{ \mathrm{d}}{\mathrm{d}x}y_{1}(x),...,\frac{\mathrm{d}}{\mathrm{d}x}y_{1}(x))^{T}\).
**Lemma 2** (stream chain rule).: _For every \(p\in\mathcal{P}\), any \(y^{\prime}_{i}\in\mathbf{y}^{\prime}\) can only occur linearly in \(p^{\prime}\). In other words, there is a unique \((n+1)\)-tuple of polynomials in \(\mathbb{R}[x,\mathbf{y}_{0},\mathbf{y}]\), say \((q_{0},q_{1},...,q_{n})\), such that \(p^{\prime}=q_{0}+\sum_{i=1}^{n}q_{i}\cdot y^{\prime}_{i}\). As a consequence, for any \(\boldsymbol{\sigma}\) and \(\mathbf{r}_{0}=\boldsymbol{\sigma}(0)\), we have: \((p(x,\boldsymbol{\sigma}))^{\prime}=q_{0}(x,\mathbf{r}_{0},\boldsymbol{\sigma}) +\sum_{i=1}^{n}q_{i}(x,\mathbf{r}_{0},\boldsymbol{\sigma})\cdot\sigma^{\prime}_ {i}\)._
Proof. If \(p\) is a monomial, we simply inspect the definition of syntactic stream derivative: for \(p=1\), set all \(q_{i}\)'s to \(0\); for \(p=x\), set \(q_{0}=1\) and all other \(q_{i}\)'s to \(0\); for \(p=y_{i}\), set \(q_{i}=1\) and all other \(q_{j}\)'s to \(0\); for \(p=y_{i}\cdot m\), set \(q_{i}=m\), \(q_{0}=y_{0i}\cdot(m)^{\prime}\), and all other \(q_{j}\)'s to \(0\). If \(p\) is a linear combination of monomials, extend the previous by linearity. \(\Box\)
For reasons that will be apparent in a while, we introduce the following suggestive notation for the polynomials \(q_{i}\) uniquely determined by \(p\) according to Lemma 2:
\[\frac{\eth p}{\eth x}:=q_{0}\qquad\qquad\frac{\eth p}{\eth y_{i}}:=q_{i}\ \ \ (i=1,...,n)\qquad\qquad\nabla_{\bf y}\ p:=\left(\frac{\eth p}{\eth y_{1}},..., \frac{\eth p}{\eth y_{n}}\right)\,.\]
With this notation, the equality for \(p^{\prime}\) in the lemma can be written in the form of a chain rule:
\[p^{\prime}:=\frac{\eth p}{\eth x}+(\nabla_{\bf y}\ p)\cdot{\bf y^{\prime}}^{T}\,. \tag{10}\]
Also, it is easy to check that \(\frac{\eth p}{\eth x}\in\mathcal{P}\), so that one may write \(\frac{\eth p}{\eth x}(x,{\bf y})\) if wanting to emphasize the dependence on indeterminates. Both \(\frac{\eth p}{\eth x}\) and \(\nabla_{\bf y}\ p\) can be easily computed from \(p\) via polynomial manipulations. A couple of useful rules are the following. For \(x\) not occurring in \(\alpha\) and \(j\geq 1\), we have \(\frac{\eth}{\eth x}x^{j}\alpha=x^{j-1}\alpha\) and \(\frac{\eth}{\eth x}\alpha=0\); while for \(y_{i}\neq x\), we have \(\frac{\eth}{\eth y_{i}}x^{j}\alpha=0\). As an example, for \(p=x^{2}y_{1}y_{2}^{3}+2y_{1}y_{2}^{2}+2x+1\), we have: \(\frac{\eth p}{\eth x}=xy_{1}y_{2}^{3}+2\), \(\frac{\eth p}{\eth y_{1}}=2y_{2}^{2}\) and \(\frac{\eth p}{\eth y_{2}}=2y_{01}(y_{02}+y_{2})\). This simple example enlightens the difference between ordinary and stream partial derivatives.
Now we assume \(|\mathcal{E}|=n\), say \(\mathcal{E}=\{p_{1},...,p_{n}\}\). Fixing some order on its elements, we will sometimes regard \(\mathcal{E}\) as a _vector_ of polynomials, and use the notation \(\mathcal{E}(x,{\bf y})\) accordingly. In particular, we let \(\nabla_{\bf y}\ \mathcal{E}\) denote the \(n\times n\) matrix of polynomials whose rows are \(\nabla_{\bf y}\ p_{i}\), for \(i=1,...,n\). Evidently, this is the stream analog of the _jacobian_ of \(\mathcal{E}\). Moreover, we let \(\frac{\eth\mathcal{E}}{\eth x}:=\left(\frac{\eth p_{1}}{\eth x},...,\frac{ \eth p_{n}}{\eth x}\right)\). The following lemma is an immediate consequence of the previous one.
**Lemma 3**.: _Let \(\boldsymbol{\sigma}=(\sigma_{1},...,\sigma_{n})\) be a solution of \(\mathcal{E}\) and \({\bf r}_{0}=\boldsymbol{\sigma}(0)\). Then_
\[(\nabla_{\bf y}\ \mathcal{E})(x,{\bf r}_{0},\boldsymbol{\sigma})\cdot \boldsymbol{\sigma^{\prime}}^{T}+\left(\frac{\eth\mathcal{E}}{\eth x}(x, \boldsymbol{\sigma})\right)^{T}=0. \tag{11}\]
Let us recall a few facts from the theory of matrices and determinants in a commutative ring, applied to the ring \(\Sigma\). By definition, a matrix of streams \(A\in\Sigma^{n\times n}\) is invertible iff there exists a matrix of streams \(B\in\Sigma^{n\times n}\) s.t. \(A\times B=B\times A=I\) (the identity matrix of streams); this \(B\), if it exists, is unique and denoted by \(A^{-1}\). It is easy to show that \(A\in\Sigma^{n\times n}\) is invertible if and only if \(A(0)\in\mathbb{K}^{n\times n}\) is invertible2. By general results on determinants, \(\det(A\times B)=\det(A)\cdot\det(B)\) (Binet's theorem). For streams, this implies that, if \(A\) is invertible, then \(\det(A)\) as a stream is invertible, that is \(\det(A)(0)\neq 0\). Moreover, again by virtue of these general results, the formula for the element of row \(i\) and column \(j\) of \(A^{-1}\) is given by:
Footnote 2: Note this is true only because we insist that the inverse matrix must also lie in \(\Sigma^{n\times n}\). Working in the field of formal _Laurent_ series, which strictly includes \(\Sigma\), this would be false: e.g. \(x(0)=0\), but \(x\) has \(x^{-1}\) as an inverse.
\[A^{-1}(i,j)=(-1)^{i+j}\det(A)^{-1}\cdot\det(A_{ji}) \tag{12}\]
where \(A_{ji}\) denotes the \((n-1)\times(n-1)\)_adjunct_ matrix obtained from \(A\) by deleting its \(j\)-th row and \(i\)-th column. Also note that, for a \(n\times n\) matrix of polynomials, say \(P=P(x,\mathbf{y}_{0},\mathbf{y})\), \(\det(P)\) is a polynomial in \(x,\mathbf{y}_{0},\mathbf{y}\), and \(\det(P(x,\mathbf{r}_{0},\boldsymbol{\sigma}))=(\det(P))(x,\mathbf{r}_{0}, \boldsymbol{\sigma})\).
**Theorem 2** (IFT for streams).: _Let \(\mathbf{r}_{0}\in\mathbb{R}^{n}\) be such that \(\mathcal{E}(0,\mathbf{r}_{0})=0\) and \((\nabla_{\mathbf{y}}\)\(\mathcal{E})(0,\mathbf{r}_{0},\mathbf{r}_{0})\) is invertible as a matrix in \(\mathbb{K}^{n\times n}\). Then there is a unique stream solution \(\boldsymbol{\sigma}\) of \(\mathcal{E}\) s.t. \(\boldsymbol{\sigma}(0)=\mathbf{r}_{0}\). Moreover, \((\nabla_{\mathbf{y}}\)\(\mathcal{E})(x,\mathbf{r}_{0},\boldsymbol{\sigma})\) is invertible as a matrix in \(\Sigma^{n\times n}\) and \(\boldsymbol{\sigma}\) satisfies the following system of \(n\) rational_ SDE_s and initial conditions:_
\[\boldsymbol{\sigma}^{\prime T}=-(\nabla_{\mathbf{y}}\ \mathcal{E})(x, \mathbf{r}_{0},\boldsymbol{\sigma})^{-1}\cdot\left(\frac{\partial\mathcal{E}} {\partial x}(x,\boldsymbol{\sigma})\right)^{T} \boldsymbol{\sigma}(0)=\mathbf{r}_{0}\,. \tag{13}\]
_Moreover, from (13) it is possible to build a system of \(n+1\) polynomial_ SDE_s in \(n+1\) variables and corresponding initial conditions, whose unique solution is \((\boldsymbol{\sigma},\tau)\), for a suitable \(\tau\)._
Proof. We will first show that the initial value problem given in (13) is satisfied by every, if any, stream solution \(\boldsymbol{\sigma}\) of \(\mathcal{E}\) such that \(\boldsymbol{\sigma}(0)=\mathbf{r}_{0}\). Indeed, consider any such \(\boldsymbol{\sigma}\). As \((\nabla_{\mathbf{y}}\ \mathcal{E})(x,\mathbf{r}_{0},\boldsymbol{\sigma})(0)=( \nabla_{\mathbf{y}}\ \mathcal{E})(0,\mathbf{r}_{0},\mathbf{r}_{0})\), which is invertible by hypothesis, the above considerations on matrix invertibility imply that there exists \((\nabla_{\mathbf{y}}\ \mathcal{E})(x,\mathbf{r}_{0},\boldsymbol{\sigma})^{-1}\) in \(\Sigma^{n\times n}\). Multiplying equality (11) from Lemma 3 to the left by \((\nabla_{\mathbf{y}}\ \mathcal{E})(x,\mathbf{r}_{0},\boldsymbol{\sigma})^{-1}\), we obtain that \(\boldsymbol{\sigma}\) satisfies (13). Now define the following (matrix of) polynomials:
* \(g(x,\mathbf{y}_{0},\mathbf{y}):=\det(\nabla_{\mathbf{y}}\ \mathcal{E})\)
* \(\tilde{A}:=[\tilde{a}_{ij}]\) with \(\tilde{a}_{ij}:=(-1)^{i+j}\det((\nabla_{\mathbf{y}}\ \mathcal{E})_{ji})\)
* \(f_{i}(x,\mathbf{y}_{0},\mathbf{y}):=-\tilde{A}_{i}\cdot\left(\frac{\partial \mathcal{E}}{\partial x}\right)^{T}\), where \(\tilde{A}_{i}\) denotes the \(i\)-th row of \(\tilde{A}\).
Applying our previous observation on the determinant of a matrix of polynomials, we have that \(\det((\nabla_{\mathbf{y}}\ \mathcal{E})(x,\mathbf{r}_{0},\boldsymbol{\sigma}))=g(x, \mathbf{r}_{0},\boldsymbol{\sigma})\), and similarly \((-1)^{i+j}\det((\nabla_{\mathbf{y}}\ \mathcal{E})(x,\mathbf{r}_{0}, \boldsymbol{\sigma}))_{ji})=\tilde{a}_{ij}(x,\mathbf{r}_{0},\boldsymbol{ \sigma})\). Therefore, by the formula for the inverse matrix (12), equation (13) can be written componentwise as follows
\[\sigma_{i}^{\prime}=f_{i}(x,\mathbf{r}_{0},\boldsymbol{\sigma})\cdot g(x, \mathbf{r}_{0},\boldsymbol{\sigma})^{-1} \sigma_{i}(0)=r_{i0} (i=1,...,n)\,. \tag{14}\]
This is precisely the rational form in (7). Then Lemma 1 implies that there is a set \(\mathcal{D}\) of \(n+1\) polynomial SDEs in the indeterminates \(\mathbf{y},w\), and corresponding initial conditions \(\rho:=(\mathbf{r}_{0},g(0,\mathbf{r}_{0},\mathbf{r}_{0})^{-1})\), satisfied when letting \(\mathbf{y},w=\boldsymbol{\sigma},\tau\), where \(\tau=g(x,\mathbf{r}_{0},\boldsymbol{\sigma})^{-1}\):
\[y_{i}^{\prime} =f_{i}(x,\mathbf{r}_{0},\mathbf{y})\cdot w y_{i}(0) =r_{i0} (i=1,...,n) \tag{15}\] \[w^{\prime} =-g(0,\mathbf{r}_{0},\mathbf{r}_{0})^{-1}\cdot h(x,\mathbf{r}_{0},\mathbf{y},w)\cdot w w (0) =g(0,\mathbf{r}_{0},\mathbf{r}_{0})^{-1} \tag{16}\]
with \(h\) obtained from \(g\) as described in Lemma 1. Note the SDEs \(\mathcal{D}\) we have arrived at are purely syntactic and do not depend on the existence of any specific \(\boldsymbol{\sigma}\). Now, by Theorem 1 there is a (unique) solution, say \((\boldsymbol{\sigma},\tau)\), of the polynomial SDE initial value problem \((\mathcal{D},\rho)\) defined by (15)-(16).
We now show that \(\boldsymbol{\sigma}\) is a stream solution of \(\mathcal{E}\). By the last part of Lemma 1, \(\boldsymbol{\sigma}\) satisfies (14), which, as discussed above, is just another way of writing (13). Now we have
\[\mathcal{E}(0,\boldsymbol{\sigma})(0) =\mathcal{E}(0,\mathbf{r}_{0})=0\] \[\mathcal{E}(0,\boldsymbol{\sigma})^{\prime} =(\nabla_{\mathbf{y}}\;\mathcal{E})(x,\mathbf{r}_{0}, \boldsymbol{\sigma})\,\cdot\,\boldsymbol{\sigma}^{\prime T}\;+\;\left(\frac{ \eth\mathcal{E}}{\eth x}(x,\boldsymbol{\sigma})\right)^{T}\] \[=-\left(\nabla_{\mathbf{y}}\;\mathcal{E})(x,\mathbf{r}_{0}, \boldsymbol{\sigma})\,\cdot\,(\nabla_{\mathbf{y}}\;\mathcal{E})(x,\mathbf{r}_ {0},\boldsymbol{\sigma})^{-1}\cdot\left(\frac{\eth\mathcal{E}}{\eth x}(x, \boldsymbol{\sigma})\right)^{T}\;+\;\left(\frac{\eth\mathcal{E}}{\eth x}(x, \boldsymbol{\sigma})\right)^{T}\] \[=-\left(\frac{\eth\mathcal{E}}{\eth x}(x,\boldsymbol{\sigma}) \right)^{T}+\left(\frac{\eth\mathcal{E}}{\eth x}(x,\boldsymbol{\sigma}) \right)^{T}\] \[=0\]
where the second equality is just the chain rule and the third equality follows from (13). As \(\mathcal{E}(0,\boldsymbol{\sigma})(0)=0\) and \(\mathcal{E}(0,\boldsymbol{\sigma})^{\prime}=0\), by e.g. the fundamental theorem of the stream calculus (2) it follows that \(\mathcal{E}(0,\boldsymbol{\sigma})=0\). This completes the _existence_ part of the statement.
As to _uniqueness_, consider any tuple of streams \(\boldsymbol{\zeta}\in\Sigma^{n}\) that is a stream solution of \(\mathcal{E}\) and such that \(\boldsymbol{\zeta}(0)=\mathbf{r}_{0}\). As shown above, \((\boldsymbol{\zeta},\xi)\), with \(\xi=g(x,\mathbf{r}_{0},\boldsymbol{\zeta})^{-1}\), satisfies the polynomial SDE initial value problem \((\mathcal{D},\rho)\) defined by (15)-(16). By uniqueness of the solution (Theorem 1), \((\boldsymbol{\zeta},\xi)=(\boldsymbol{\sigma},\tau)\).
The above theorem guarantees existence and uniqueness of a solution of \(\mathcal{E}\), provided that there exists a unique tuple of "initial conditions" \(\mathbf{r}_{0}\in\mathbb{K}^{n}\) for which \(\mathcal{E}\) satisfies the hypotheses of Theorem 2. The existence and uniqueness of such a \(\mathbf{r}_{0}\) must be ascertained by other means. In particular, it is possible that the algebraic conditions \(\mathcal{E}(0,\mathbf{r}_{0})=0\) and \(\det((\nabla_{\mathbf{y}}\;\mathcal{E})(x,\mathbf{r}_{0},\mathbf{r}_{0}))\neq 0\) are already sufficient to uniquely determine \(\mathbf{r}_{0}\). There are powerful tools from algebraic geometry that can be applied to this purpose, such as elimination theory: we refer the interested reader to [8] for an introduction. For now we shall content ourselves with a couple of elementary examples. An extended example is presented in Section 5.
**Example 1**.: Consider the single equation \(\mathcal{E}=\{p\}\) where \(p(x,y):=y-(1+xy^{2})\), letting \(y=y_{1}\). Note that \(p(0,r_{0})=0\) uniquely identifies the initial condition \(r_{0}=1\). Moreover, \(\nabla_{y}\;p=-y^{2}\) is invertible at \(y=r_{0}=1\): hence Theorem 2 applies. The system (13) followed by the transformation of Lemma 1 becomes the following polynomial system of SDEs and initial conditions:
\[y^{\prime} =y^{2}w y(0) =1\] \[w^{\prime} =0 w(0) =1\,.\]
Note that the SDEs and initial condition for \(w\) define the constant stream \(1=(1,0,0,...)\), hence the above system can be simplified to the single SDE and initial condition: \(y^{\prime}=y^{2}\) and \(y(0)=1\). The unique stream solution of this initial value problem is \(\sigma=(1,1,2,5,14,42,...)\), the stream of Catalan numbers. Hence \(\sigma\) is the only stream solution of \(\mathcal{E}\).
More generally, any set of _guarded_ polynomial equations of the form \(\mathcal{E}=\{y_{i}-(c_{i}+xp_{i})\,:\,i=1,...,n\}\) satisfies the hypotheses of Theorem 2 precisely when \(\mathbf{r}_{0}=(c_{1},...,c_{n})\). Indeed, \(\mathcal{E}(0,\mathbf{r}_{0})=0\), while \(\nabla_{\mathbf{y}}\;\mathcal{E}=I\), the \(n\times n\) identity matrix, which is clearly invertible.
The SDEs and initial conditions \((\mathcal{D},\rho)\) determined by the theorem are given by \(y^{\prime}_{i}=p_{i}\) and \(y_{i}(0)=c_{i}\) for \(i=1,...,n\), plus the trivial \(w^{\prime}=0\) and \(w(0)=1\), that can be omitted.
For a non guarded example, consider \(\mathcal{E}=\{p\}\) where \(p:=x^{2}+y^{2}-1\). Here \(p(0,r_{0})=0\) gives two possible values, \(r_{0}=\pm 1\). Let us fix \(r_{0}=1\). We have \(\nabla_{y}\ p=y+y_{0}\), which is \(\neq 0\) when evaluated at \(y=y_{0}=r_{0}\). Applying Theorem 2 yields the following SDEs and initial conditions:
\[y^{\prime} =-xw y(0) =1\] \[w^{\prime} =\frac{xw^{2}}{2} w(0) =\frac{1}{2}\,.\]
The unique solution of this initial value problem is the stream \(\sigma=(1,0,-1/2,0,-1/8,0,-1/16,...)\); these are the Taylor coefficients of the function \(\sqrt{1-x^{2}}\) around \(x=0\). This stream is therefore the unique solution of \(\mathcal{E}\) with \(r_{0}=1\). If we fix \(r_{0}=-1\), we obtain \(-\sigma\) as unique solution, as expected.
**Remark 2** (relation with algebraic series).: Recall from [9, 25] that a stream \(\sigma\) is _algebraic_ if there exists a nonzero polynomial \(p(x,y)\) in the variables \(x,y\) such that \(p(x,\sigma)=0\) (again, the \(x\) in the left-hand side of this equation is a stream). For \(|\mathcal{E}|>1\), algebraicity of the solution is not in general guaranteed. [9, Th.8.7] shows that a sufficient condition for algebraicity in this case is that \(\mathcal{E}\) be _zero-dimensional_, i.e., \(\mathcal{E}\) has finitely many solutions when considered as a set of polynomials with coefficients in \(\mathbb{C}(x)\), the fraction field of univariate polynomials in \(x\) with coefficients in \(\mathbb{C}\). In this case, in fact, for each variable \(y_{i}\) one can apply results from elimination theory to get a single nonzero polynomial \(p(x,y_{i})\) satisfied by \(\sigma_{i}\). See also the discussion in Section 5.
On the other hand, we do not require zero-dimensionality of \(\mathcal{E}\) in Theorem 2. Moreover, for the case of polynomials with rational coefficients, [6, Cor.5.3] observes that the unique solution of a polynomial SDE initial value problem like (4) is a tuple of algebraic streams. Then, an immediate corollary of Theorem 2 is that, under the conditions stated for \(\mathcal{E}\) and \(\mathbf{r}_{0}\), the unique stream solution of \(\mathcal{E}\) is algebraic, even for positive-dimensional systems -- at least in the case of polynomials with rational coefficients. As an example, consider the following system of three polynomials in the variables \(x\) and \(\mathbf{y}=(y_{1},y_{2},y_{3})\):
\[\mathcal{E} =\{\,y_{1}{y_{3}}^{4}+x^{2}-{y_{2}}^{2}+y_{2}\;,\;-{y_{1}}^{2}y_ {2}+xy_{3}+y_{1}\;, \tag{17}\] \[\qquad-{y_{1}}^{3}{xy_{3}}^{5}+{y_{1}}^{4}{y_{3}}^{4}-{y_{1}}^{2} x^{3}y_{3}+x^{2}{y_{1}}^{3}+x^{2}{y_{2}}{y_{3}}^{2}-x^{2}{y_{3}}^{2}+{y_{1}}^{2}- xy_{3}-y_{1}\}\;.\]
Considered as a system of polynomials with coefficients in \(\mathbb{C}(x)\), \(\mathcal{E}\) is not zero-dimensional -- in fact, its dimension is \(1\). It is readily checked, though, that for \(\mathbf{r}_{0}=(1,1,1)\) we have \(\mathcal{E}(0,\mathbf{r}_{0})=0\) and \(\det((\nabla_{\mathbf{y}}\ \mathcal{E})(0,\mathbf{r}_{0},\mathbf{r}_{0}))=12\neq 0\). From Theorem 2, we conclude that the unique stream solution \(\boldsymbol{\sigma}\) of \(\mathcal{E}\) satisfying \(\boldsymbol{\sigma}(0)=\mathbf{r}_{0}\) is algebraic.
## 4 Relations with the classical IFT
We now discuss a relation of our IFT with the classical IFT from calculus. We start with the following lemma. In the rest of the section, we fix \(\mathbb{K}=\mathbb{R}\).
**Lemma 4**.: _Let \(p(x,\mathbf{y})\) be a polynomial, \(\mathbf{r}_{0}\in\mathbb{R}^{n}\), and \(y_{i}\) in \(\mathbf{y}\). Consider the ordinary \(\frac{\partial p}{\partial y_{i}}(x,\mathbf{y})\) and stream \(\frac{\partial p}{\partial y_{i}}(x,\mathbf{y}_{0},\mathbf{y})\) partial derivatives. Then \(\frac{\partial p}{\partial y_{i}}(0,\mathbf{r}_{0})=\frac{\partial p}{\partial y _{i}}(0,\mathbf{r}_{0},\mathbf{r}_{0})\)._
Proof. Let \(p=x\cdot p_{0}+q\) where \(x\) does not occur in \(q\). Write \(q\) as a sum of \(k\) monomials, \(q=\sum_{j=1}^{k}\alpha_{j}y_{i}^{k_{j}}\), where both \(x\) and \(y_{i}\) do not occur in any of the monomials \(\alpha_{j}\). Moreover, let us write each \(\alpha_{j}\) as \(\alpha_{j}=\beta_{j}\cdot\gamma_{j}\), where \(\beta_{j}\) (resp. \(\gamma_{j}\)) contains all the \(y\)'s with index smaller (resp., greater) than \(i\).
For the ordinary partial derivative, we have that
\[\frac{\partial p}{\partial y_{i}}(x,\mathbf{y})=x\cdot\frac{\partial p_{0}}{ \partial y_{i}}+\sum_{j=1}^{k}k_{j}\alpha_{j}y_{i}^{k_{j}-1}\,.\]
For the stream partial derivative, let us denote with \(\mathbf{c}_{y_{i}}^{h}\) the quantity \(\sum_{j=0}^{h}y_{0i}^{j}y_{i}^{h-j}\), with \(c_{y_{i}}^{0}:=1\) and \(c_{y_{i}}^{-1}:=0\). Taking into account the rules for \(\etheth\) and writing \(m(\mathbf{u})\) for the evaluation of a monomial \(m\) at \(\mathbf{y}=\mathbf{u}\), we have
\[\frac{\eth p}{\eth y_{i}}(x,\mathbf{y}_{0},\mathbf{y})=\sum_{j=1}^{k}\beta_{j }(\mathbf{y}_{0})(\mathbf{c}_{y_{i}}^{k_{j}-1})\gamma_{j}(\mathbf{y}_{0})\,.\]
By denoting with \(\mathbf{c}_{y_{i}}^{h}(r_{1},r_{2})\) the term \(\mathbf{c}_{y_{i}}^{h}\) with \(r_{1}\) (\(\in\mathbb{R}\)) in place of \(y_{0i}\) and \(r_{2}\) (\(\in\mathbb{R}\)) in place of \(y_{i}\), we have that \(\mathbf{c}_{y_{i}}^{h}(r,r)=(h+1)\,r^{h}\), for any \(r\in\mathbb{R}\). Upon evaluation of the above polynomials at \(x=0\), \(\mathbf{y}_{0}=\mathbf{r}_{0}\), \(\mathbf{y}=\mathbf{r}_{0}\), we get
\[\frac{\partial p}{\partial y_{i}}(0,\mathbf{r}_{0}) =\sum_{j=1}^{k}k_{j}\alpha_{j}(\mathbf{r}_{0}){r_{0i}}^{k_{j}-1}\] \[\frac{\eth p}{\eth y_{i}}(0,\mathbf{r}_{0},\mathbf{r}_{0}) =\sum_{j=1}^{k}\beta_{j}(\mathbf{r}_{0})(\mathbf{c}_{y_{i}}^{k_{j }-1}(r_{0i},r_{0i}))\gamma_{j}(\mathbf{r}_{0})\] \[=\sum_{j=1}^{k}\beta_{j}(\mathbf{r}_{0})(k_{j}{r_{0i}}^{k_{j}-1}) \gamma_{j}(\mathbf{r}_{0})\,=\,\sum_{j=1}^{k}k_{j}\alpha_{j}(\mathbf{r}_{0}){r _{0i}^{k_{j}-1}}\,.\]
\(\square\)
The above lemma implies that the classical and stream jacobian matrices evaluated at \(x=0,\mathbf{y}=\mathbf{r}_{0}\) are the same: \((\nabla_{\mathbf{y}}\ \mathcal{E})(0,\mathbf{r}_{0})=(\nabla_{\mathbf{y}}\ \mathcal{E})(0,\mathbf{r}_{0},\mathbf{r}_{0})\). In particular, the first is invertible if and only if the latter is. Therefore, the classical and stream IFT can be applied exactly under the same hypotheses on \(\mathcal{E}\) and \(\mathbf{r}_{0}\). What is the relationship between the solutions provided by the two theorems? The next theorem precisely characterizes this relationship. In its statement and proof, we make use of the following concept. Consider the set \(\mathcal{A}\) of functions \(\mathbb{R}\to\mathbb{R}\) that are real analytic around the origin, i.e., those functions that admit a Taylor expansion with a positive radius of convergence around \(x=0\). It is well-known that \(\mathcal{A}\) forms a \(\mathbb{R}\)-algebra. Now consider the function \(\mathcal{T}\) that sends each \(f\in\mathcal{A}\) to the stream \(\mathcal{T}[f]\) of its Taylor coefficients around \(0\), that is \(\mathcal{T}[f](j)=f^{(j)}(0)/j!\) for each \(j\geq 0\).3 It is easy to check that \(\mathcal{T}\) acts as a \(\mathbb{R}\)-algebra homomorphism from \(\mathcal{A}\) to \(\Sigma\); in particular, by denoting with '\(\cdot\)' the (pointwise) product of functions, we have that \(\mathcal{T}[f\cdot g]=\mathcal{T}[f]\times\mathcal{T}[g]\).
Footnote 3: Let \(\mathcal{G}\) be the partial function that sends each stream \(\sigma\) to its ordinary generating function \(\mathcal{G}[\sigma](x)=\sum_{j\geq 0}\sigma(j)x^{j}\), provided the latter has a positive radius of convergence; then, \(\mathcal{T}=\mathcal{G}^{-1}\).
**Theorem 3** (stream IFT, classical version).: _Let \(\mathbf{r}_{0}\in\mathbb{R}^{n}\) be such that \(\mathcal{E}(0,\mathbf{r}_{0})=0\) and \((\nabla_{\mathbf{y}}\ \mathcal{E})(0,\mathbf{r}_{0})\) is invertible as a matrix in \(\mathbb{R}^{n\times n}\). Then there is a unique stream solution \(\boldsymbol{\sigma}\) of \(\mathcal{E}\) s.t. \(\boldsymbol{\sigma}(0)=\mathbf{r}_{0}\). In particular, \(\boldsymbol{\sigma}=\mathcal{T}[f]\), for \(f:\mathbb{R}\to\mathbb{R}^{n}\) a real analytic function at the origin, which is the unique solution around the origin of the following system of \(n\) rational ODE\(s\) and initial conditions:_
\[\frac{\mathrm{d}}{\mathrm{d}x}f(x)=-(\nabla_{\mathbf{y}}\ \mathcal{E})(x,f(x))^{-1} \cdot\left(\frac{\partial\mathcal{E}}{\partial x}(x,f(x))\right)^{T}\qquad \qquad f(0)=\mathbf{r}_{0}\,. \tag{18}\]
Proof. Under the conditions on \(\mathcal{E}\) and \(\mathbf{r}_{0}\) stated in the hypotheses, the classical IFT implies the existence of a unique real analytic function \(f:\mathbb{R}\to\mathbb{R}^{n}\), say \(f=(f_{1},...,f_{n})\), such that \(f(0)=\mathbf{r}_{0}\) and \(\mathcal{E}(x,f(x))\) is identically \(0\). Moreover, it tells us that \(f\) satisfies the system of (polynomial) nonlinear ODEs and initial conditions in (18). Note that \(\det((\nabla_{\mathbf{y}}\ \mathcal{E})(0,\mathbf{r}_{0}))\neq 0\) and the continuity of \(\det(\mathcal{E}(x,f(x)))\) around the origin, guaranteed by the IFT [17, Th.9.28], in turn guarantee that \((\nabla_{\mathbf{y}}\ \mathcal{E})(x,f(x))\) is nonsingular in a neighborhood of \(x=0\). Let \(\boldsymbol{\sigma}=(\sigma_{1},...,\sigma_{n})\) be the stream of the coefficients of the Taylor series of \(f\) expanded at \(x=0\), taken componentwise: \(\boldsymbol{\sigma}=\mathcal{T}[f]:=(\mathcal{T}[f_{1}],...,\mathcal{T}[f_{ n}])\). Now \(\boldsymbol{\sigma}\) is a stream solution of \(\mathcal{E}\), as a consequence of the fact that \(\mathcal{T}\) is a \(\mathbb{R}\)-algebra homomorphism between \(\mathcal{A}\) and \(\Sigma\): indeed, for each \(p(x,\mathbf{y})\in\mathcal{E}\), \(0=p(x,f(x))\) implies \((0,0,...)=\mathcal{T}[0]=\mathcal{T}[p(x,f(x))]=p(\mathcal{T}[x],\mathcal{T}[ f])=p(x,\boldsymbol{\sigma})\). Uniqueness of \(\boldsymbol{\sigma}\) is guaranteed by Theorem 2, because \((\nabla_{\mathbf{y}}\ \mathcal{E})(0,\mathbf{r}_{0})=(\nabla_{\mathbf{y}}\ \mathcal{E})(0, \mathbf{r}_{0},\mathbf{r}_{0})\) (see Lemma 4) and it is invertible by hypothesis. \(\square\)
A corollary of the above theorem is that one can obtain the unique stream solution of \(\mathcal{E}\) also by computing the Taylor coefficients of the solution \(f\) of (18). Such coefficients can be computed without having to explicitly solve the system of ODEs. We will elaborate on this point in Section 6.
**Example 2**.: Consider again the system \(\mathcal{E}=\{y-(1+xy^{2})\}\) in the single variable \(y=y_{1}\), with the initial condition \(r_{0}=1\), seen in Example 1. Since \((\nabla_{y}\ \mathcal{E})(x,y)=1-2xy\) is nonzero at \((0,r_{0})\), we can apply Theorem 3. The ODE and initial condition in (18) in this case are, letting \(f=y\): \(\frac{\mathrm{d}}{\mathrm{d}x}y(x)=\frac{y^{2}}{1-2xy}\) and \(y(0)=1\). This system can be solved explicitly. Alternatively, one can compute the coefficients of the Taylor expansion of the solution, e.g. by successive differentiation: \(y(x)=\sum_{j\geq 0}\frac{y^{(j)}(0)}{j!}x^{j}=1+1x+2x^{2}+5x^{3}+14x^{4}+42x ^{5}+\cdots\). Such coefficients form again the stream \(\sigma\) of Catalan numbers that is therefore the unique stream solution of \(\mathcal{E}\) with \(\sigma(0)=r_{0}=1\).
## 5 An extended example: three-coloured trees
We consider a polynomial system \(\mathcal{E}\) implicitly defining the generating functions of 'three-coloured trees', Example 14 in [9, Sect.4]. For each of the three considered colours (variables), [9, Sect.4] shows how to reduce \(\mathcal{E}\) to a single nontrivial equation. This implies algebraicity of the series implicitly defined by \(\mathcal{E}\): the reduction is conducted using results from elimination theory [8]. Here we will show how to directly transform \(\mathcal{E}\) into a system of polynomial SDEs and initial conditions, \((\mathcal{D},\rho)\), as implied by the stream IFT (Theorem 2). As the coefficients in \(\mathcal{E}\) are rational, reduction to SDEs directly implies algebraicity (Remark 2), besides giving a method of calculating the streams coefficients. We will
also consider reduction of \(\mathcal{E}\) to a system of polynomial ODEs, as implied by the classical version of the IFT (Theorem 3), and compare the obtained SDE and ODE systems.
Three-coloured trees are binary trees (plane and rooted) with nodes coloured by any of three colours, \(a,b,c\), such that any two adjacent nodes have different colours and external nodes are coloured by the \(a\)-colour. Let \(\mathcal{A},\mathcal{B},\mathcal{C}\) denote the sets of three-coloured trees with root of the \(a,b,c\) color respectively, and \(A,B,C\) the corresponding ordinary generating functions: the \(n\)-th coefficient of \(A\) is the number of trees with \(a\)-coloured root and \(n\) external nodes; similarly for \(B\) and \(C\). Below, we report from [9, Sect.4,eq.(40)] the polynomial system \(\mathcal{E}\); to adhere to the notation of Section 2, we have replaced the variables \((A,B,C)\) with \(\mathbf{y}=(y_{1},y_{2},y_{3})\).4
Footnote 4: We note that there is a slip in the first equation appearing in [9], by which the term \((B+C)^{2}=(y_{2}+y_{3})^{2}\) appears with the wrong sign. The correct sign is used here.
\[\mathcal{E}:\;\begin{cases}y_{1}-x-(y_{2}+y_{3})^{2}&=0\\ y_{2}-(y_{3}+y_{1})^{2}&=0\\ y_{3}-(y_{1}+y_{2})^{2}&=0\,.\end{cases} \tag{19}\]
System (19) has been derived via the symbolic method ([9]), a powerful technique to translate formal definitions of combinatorial objects, into equations on generating functions, and thus to count combinatorial objects. It is based on the analysis of the internal structure of the combinatorial objects considered, in this case three-coloured trees. For instance, let us consider the root of a three-coloured tree and assume that it is coloured by the \(a\)-colour; since in three-coloured trees adjacent nodes have different colours, then the root can have as children two subtrees either with the root of \(b\)-type or of \(c\)-type, and recursively on all their subtrees. Similarly with trees whose roots are coloured by the \(b\)-colour or by the \(c\)-colour. In addition, all the external nodes are coloured by \(a\)-colour. Considering this structure, system (19) can be easily generated.
Since the number of external nodes of any empty tree is \(0\), we set \(\mathbf{r}_{0}=(0,0,0)\). It is immediate to verify that \(\mathcal{E}(0,\mathbf{r}_{0})=0\) and \((\nabla_{\mathbf{y}}\ \mathcal{E})(0,\mathbf{r}_{0},\mathbf{r}_{0})=\left[ \begin{smallmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{smallmatrix}\right]=I\), that is obviously invertible, hence Theorem 2 holds, and we generate system (13) in Theorem 2. In particular, after applying Lemma 1, we get the following polynomial system of SDEs and initial conditions:
\[\begin{cases}y_{1}^{\prime}\ =2wy_{1}y_{2}+wy_{2}y_{3}-w&y_{1}(0)=0\\ y_{2}^{\prime}\ =-2wy_{1}^{2}-4wy_{1}y_{2}-wy_{1}y_{3}-wy_{1}-2wy_{2}y_{3}&y_{2}(0)=0\\ y_{3}^{\prime}\ =-wy_{1}y_{2}-wy_{1}-2wy_{2}&y_{2}(0)=0\\ w^{\prime}\ =4w^{2}y_{1}^{2}y_{2}^{2}+4w^{2}y_{1}^{2}y_{2}y_{3}-8w^{2}y_{1}^{2}y_{3}^{ 2}-6w^{2}y_{1}^{2}y_{3}+8w^{2}y_{1}y_{2}^{3}+&w(0)=-1\\ \
On the other hand, applying the classic IFT (Theorem 3) to system (19), there is a unique real analytic solution \(f(x)=\mathbf{y}(x)=(y_{1}(x),y_{2}(x),y_{3}(x))\) of the ODE initial value problem (18) s.t. \(\mathbf{y}(0)=\mathbf{r}_{0}=(0,0,0)\). The system in question can be computed starting from the classical jacobian of \(\mathcal{E}\), \((\nabla_{\mathbf{y}}\ \mathcal{E})(x,\mathbf{y})=\left[\begin{smallmatrix}1&-2y_{2}+-2y_{3}&-2y_{ 3}-2y_{2}\\ -2y_{1}-2y_{3}&1&-2y_{3}-2y_{1}\\ -2y_{1}-2y_{2}&-2y_{2}-2y_{1}&1\end{smallmatrix}\right].\)
Since \(\frac{\partial\mathcal{E}}{\partial x}(x,\mathbf{y})_{|x=0,\mathbf{y}=\mathbf{ r}_{0}}=(-1,0,0)\), (18) yields the following system of rational ODEs and initial conditions:
\[\frac{\mathrm{d}}{\mathrm{d}x}\mathbf{y}(x)=-(\nabla_{\mathbf{y}}\ \mathcal{E})^{-1}\cdot\left(\frac{\partial\mathcal{E}}{\partial x}\right)^{T }=\tilde{\Delta}^{-1}\cdot\left[\begin{smallmatrix}4y_{1}^{2}+4y_{1}y_{2}+4y_ {1}y_{3}+4y_{2}y_{3}-1\\ -4y_{1}^{2}-4y_{1}y_{3}-4y_{1}y_{2}-2y_{1}-4y_{2}y_{3}-2y_{3}\\ -4y_{1}^{2}-4y_{1}y_{2}-4y_{1}y_{3}-2y_{1}-4y_{2}y_{3}-2y_{2}\end{smallmatrix} \right]\ \ \mathbf{y}(0)=\mathbf{r}_{0}. \tag{21}\]
where \(\tilde{\Delta}:=-16y_{1}^{2}y_{2}-16y_{1}^{2}y_{3}-4y_{1}^{2}-16y_{1}y_{2}^{2} -32y_{1}y_{2}y_{3}-12y_{1}y_{2}-16y_{1}y_{3}^{2}-12y_{1}y_{3}-16y_{2}^{2}y_{3}- 4y_{2}^{2}-16y_{2}y_{3}^{2}-12y_{2}y_{3}-4y_{3}^{3}+1\) is the determinant of \(\nabla_{\mathbf{y}}\ \mathcal{E}\).
Considering a series solution of the system, we obtain, for the first component of the solution \(f\):
\[y_{1}(x)=x+4x^{4}+16x^{5}+56x^{6}+256x^{7}+1236x^{8}+5808x^{9}+O(x^{10})\]
whose coefficients match those of \(\sigma_{1}\) for (20).
## 6 Classical vs. stream IFT: computational aspects
We compare the stream and the classical version of the IFT from a computational point of view. First, we discuss how the recurrence (6) can be effectively implemented for _any_ polynomial SDE initial value problem of the form (4), not necessarily arising from an application of Theorem 2. The basic idea is to always reduce products involving more than two terms to binary products, for which the convolution formula (1) can be applied. In order to perform this reduction systematically, let us consider the set \(T\) of all subterms \(t=t(x,\mathbf{y})\) that occur in the polynomials \(p_{i}\) in \(\mathcal{D}\). We assume that \(T\) also includes all the constants appearing in \(\mathcal{D}\), the constant \(1\), and all the variables \(y_{0}\,(:=x),y_{1},....,y_{n}\). For each term \(t\) in \(T\), a stream \(\sigma_{t}\) is introduced via the following recurrence relation that defines \(\sigma_{t}(k)\). Formally, the definition goes by lexicographic induction on \((k,t)\), with the second elements ordered according the "_subterm of_" relation. For the sake of notation, below we let \(p_{0}=1\), and let the case \(t=c\cdot t_{1}\) for \(c\in\mathbb{K}\) be subsumed by the last clause, where \(c\) is treated as the constant stream \((c,0,0,...)\). Finally, \(k>0\).
\[\begin{array}{rcll}\sigma_{t}(0)&=&t(0,\mathbf{r}_{0})\\ \sigma_{y_{i}}(k)&=&\sigma_{p_{i}}(k-1)&(i=0,...,n)\\ \sigma_{c}(k)&=&0&(c\in\mathbb{K})\\ \sigma_{t_{1}+t_{2}}(k)&=&\sigma_{t_{1}}(k)+\sigma_{t_{2}}(k)\\ \sigma_{t_{1}\cdot t_{2}}(k)&=&\sum_{j=0}^{k}\sigma_{t_{1}}(j)\cdot\sigma_{t_ {2}}(k-j)\,.\end{array} \tag{22}\]
The correctness of the above algorithm, as stated by the next lemma, is obvious.
**Lemma 5**.: _Let \(\boldsymbol{\sigma}=(\sigma_{1},...,\sigma_{n})\) be the unique stream solution of a problem \((\mathcal{D},\rho)\) of the form (4). With the above definition of \(\sigma_{t}\), we have \(\sigma_{i}=\sigma_{y_{i}}\), for \(i=1,...,n\)._
In a practical implementation of this scheme, one can avoid recursing over the structure of \(t\), as follows. At the \(k\)-th iteration (\(k>0\)), the values \(\sigma_{t}(k)\) are computed and stored by examining the terms \(t\in T\) according to a total order on \(T\) compatible with the "_subterm of_" relation. In this way, whenever either of the last two clauses is applied, one can access the required values \(\sigma_{t_{1}}(j),\sigma_{t_{2}}(j)\) up to \(j=k\) already computed and stored away in the current iteration. The computation of the \(k\)-th coefficient \(\boldsymbol{\sigma}(k)\), given the previous ones, requires therefore \(O(Pk+S)\) multiplications and additions, where \(P\) and \(S\) are the number of overall occurrences in \(T\) of the product and sum operators, respectively. Overall, this means \(O(Pk^{2}+Sk)\) operations for the first \(k\) coefficients. This complexity is minimized by choosing a format of polynomial expressions that minimizes \(P\): for example, a Horner scheme (note that Horner schemes exist also for multivariate polynomials). Memory occupation grows linearly as \(O(k(P+S))\).
Another method to generate the coefficients of the stream solution is applying the classical version of the IFT (Theorem 3), and rely on the ODE initial value problem in (18). However, this choice appears to be computationally less convenient. Indeed, apart from the rare cases where (18) can be solved explicitly, one must obtain the coefficients of the solution by expanding it as a power series -- indeed its Taylor series. Once the rational system (18) is reduced to a polynomial form, which is always possible by introducing one extra variable, the coefficients of this power series can be computed by a recurrence relation similar to that discussed in Lemma 5 for (6). The catch is that the size of the resulting set of terms \(T\) is _significantly larger_ for the ODE system (18) than it is for the SDE system (13). To understand why, consider that, under the given hypotheses, the SDE system in (13) is equivalent to \(\mathcal{E}^{\prime}=0\), while the ODE system in (18) is equivalent to \(\frac{\mathrm{d}}{\mathrm{d}x}\mathcal{E}=0\). Now, the terms appearing in \(\mathcal{E}^{\prime}\) are approximately _half the size_ of those appearing in \(\frac{\mathrm{d}}{\mathrm{d}x}\mathcal{E}\). This is evident already when comparing with one another the stream and the ordinary derivatives of a bivariate polynomial \(p(x,y)=q_{m}(y)x^{m}+\cdots+q_{1}(y)x+q_{0}(y)\):
\[p(x,y)^{\prime} =\ q_{m}(y)x^{m-1}+\cdots+q_{1}(y)+q_{0}^{\prime}(y)\] \[\frac{\mathrm{d}}{\mathrm{d}x}p(x,y) =\left(q_{m}(y)mx^{m-1}+x^{k}\frac{\mathrm{d}}{\mathrm{d}x}q_{m} (y)\right)+\cdots+\left(q_{1}(y)+x\frac{\mathrm{d}}{\mathrm{d}x}q_{1}(y) \right)+\frac{\mathrm{d}}{\mathrm{d}x}q_{0}(y)\,.\]
A small experiment conducted with two different systems of polynomials, the three-coloured trees (19) and the one-dimensional system (17), is in agreement with these qualitative considerations. For each of these systems, we have computed a few hundreds coefficients of the solution, using both the methods in turn: SDEs via the recurrence relation of Lemma 5 (Theorem 2), and ODEs via a power series solution (Theorem 3). In the second case, we have used Maple's dsolve command with the series option5. For both systems, we plot in Figure 1 the execution time as a function of the number of computed coefficients.
Footnote 5: Python and Maple code for this example available at [https://github.com/Luisa-unifi/IFT](https://github.com/Luisa-unifi/IFT)
**Remark 3** (Newton method).: In terms of complexity w.r.t. \(k\) (number of computed coefficients), _Newton iteration_ applied to formal power series [14, 24, 10, 7] does asymptotically better than the \(O(k^{2})\) algorithm outlined above. In particular, [7, Th.3.12] shows that, under the same hypotheses of IFT, the first \(k\) coefficients of the solution of a system of algebraic equation can be computed by Newton iteration in time \(O(k\log k)\); on the downside, each iteration of Newton involves in general finding the solution of a \(n\times n\) linear system.
## 7 Conclusion
In this paper we have presented an implicit function theorem for the stream calculus, a powerful set of tools for reasoning on infinite sequences of elements from a given field. Our theorem is directly inspired from the analogous one from classical calculus. We have shown that the stream IFT has clear computational advantages over the classical one.
The present work can be extended in two directions. First, one would like to go beyond the polynomial format and allow systems of equations \(\mathcal{E}\) involving, for example, functions that are in turn defined via SDEs. Second, one would like to extend the present results to the case of multivariate streams, that is consider a _vector_\(\mathbf{x}=(x_{1},...,x_{m})\) of independent variables, akin to the more general version of the classical IFT. Both extensions seem to pose nontrivial technical challenges.
|
2307.03985 | Spectroscopic Devices for Asteroseismology With Small Telescopes in
NARIT | The National Astronomical Research Institute of Thailand (NARIT) has a
manifold network of small telescopes installed worldwide. These telescopes
serve educational and research purposes and are equipped mainly with CCD
detectors for direct imaging and photometry. To extend the possible field of
applications, several telescopes were fitted with commercially available
medium-resolution spectrographs eShel from Shelyak. With these devices,
researchers in NARIT obtained a versatile tool for stellar spectroscopy. Here
we describe the current status of available equipment, possible ways of
upgrading, and briefly introduce the achieved results of the asteroseismologic
study of fast-rotating stars. | Somsawat Rattanasoon, Eugene Semenko, David Mkrtichian, Saran Poshyachinda | 2023-07-08T14:24:37Z | http://arxiv.org/abs/2307.03985v1 | # Spectroscopic Devices for Asteroseismology With Small Telescopes in NARIT
###### Abstract
National Astronomical Research Institute of Thailand (NARIT) has a manifold network of small telescopes installed worldwide. These telescopes serve educational and research purposes and are equipped mainly with CCD detectors for direct imaging and photometry. To extend the possible field of applications, several telescopes were fitted with commercially available medium-resolution spectrographs eShel from Shelyak. With these devices, researchers in NARIT obtained a versatile tool for stellar spectroscopy. Here we describe the current status of available equipment, possible ways of upgrading, and briefly introduce the achieved results of the asteroseismologic study of fast-rotating stars.
Keywords -- _spectroscopy, instrumentation, asteroseismology_
## 1 Motivation
A fibre-fed medium-resolution echelle spectrograph eShel has been designed and distributed for small telescopes by Shelyak Instruments (France) since 2008 (Thizy and Cochard, 2011). A typical device consists of a stationary spectrograph block linked by a fibre with 50 \(\mu\)m core to the Fibre Injection and Guiding Unit (FIGURE) installed at the telescope side. FIGU is also connected through a 200-\(\mu\)m fibre channel to the Calibration Unit comprising halogen, LED, and ThAr lamps. Spectrograph and its components are commercially available on the company's website [https://www.shelyak.com/](https://www.shelyak.com/). Earlier models of eShel registered spectra within the wavelength range 430-700 nm with the resolution \(R>10,000\). In 2018, after the upgrade, which affected many components of eShel, the working range was significantly extended.
NARIT has a distributed network of small telescopes with apertures up to 1 m. For the spectroscopy of relatively bright stars, these telescopes can optionally be equipped with eShel. At the moment, NARIT has three devices with serial numbers 6H-115 (2010), 6H-128 (2016), and 6H-171 (2018). All spectrographs were acquired in their original complete set, thus having limited capabilities. To enable observations of fainter objects and to increase sensitivity in the blue part of the spectrum, we initiated a substantial upgrade of a device with SN 6H-171.
## 2 Modification and Tests
The improved device received a new high-OH fibre with enhanced throughput in the blue part of the spectrum, a new doublet collimator (Shelyak provided both components), a new imaging lens, and a professional-grade CCD. All components, except fibre, are shown in Fig. 1. As a detector, we use a water-cooled Andor iKon-L system based on a \(2048\times 2048\) pixels CCD array with \(13.5\,\mu\)m pixel pitch. To match the plate scale to the increased pixel size, among several lenses with comparable focus lengths available in the market, we choose a commercial lens Sony FE \(135\,\)mm F1.8 GM, primarily due to its outstanding optical quality. Subsequent testing of the whole assembly also showed excellent transmission of the selected lens within the required range of wavelengths. The imaging lens is attached to the CCD camera through a specially designed adapter with an enclosed shutter.
Technical parameters of the original and upgraded versions of eShel are summarized in Table 1.
An upgraded variant of the spectrograph was installed for tests in a spectrograph room of the Thai National Observatory (TNO) at Doi Inthanon (Chiang Mai, Thailand) in a temperature-controlled environment. The FIGU was mounted to the left Nasmyth port of the 1-m telescope of TNO. Tests were performed in December 2022 and January 2023 under affordable
\begin{table}
\begin{tabular}{l l l} \hline
**Parameter** & **Original Value** & **New value** \\ \hline FIGU \(F\#\) & F6 & original \\ Fibre core & 50 \(\mu\)m & \({}^{*}\)50 \(\mu\)m \\ \(f_{\text{col}}\), \(F\#_{\text{col}}\) & 125 mm, F5 & \({}^{*}\)125 mm, F5 \\ \(d_{\text{eehelle}}\), \(\theta_{\text{b}}\) & 79 mm\({}^{-1}\), \(63^{\circ}\) & original \\ Absolute orders \# & 32–52 & 24–57 \\ Imaging lens & Canon EF 85 F1.8 & Sony FE 135 mm F1.8 GM \\ Detector (sensor) & ATIK 460 EX (Sony ICX694) & Andor iKon-L (E2V CCD42-40) \\ Sensor format & \(2749\times 2199\)@\(4.54\mu\)m & \(2048\times 2048\)@\(13.5\mu\)m \\ \hline \end{tabular}
\end{table}
Table 1: Technical parameters of upgraded eShel. Asterisk indicates the new elements with the same listed characteristics.
Figure 1: Main elements of eShel upgraded in NARIT.
weather conditions and were aimed at the verification of the optical performance of the assembly. Observational data include a standard set of calibrations (bias, flat, ThAr) and spectra of the selected stars and daytime sky. Two-dimensional raw FITS images were reduced using the pipeline PyYAP ([http://github.com/ich-heisse-eugene/PyYAP](http://github.com/ich-heisse-eugene/PyYAP)), specially adapted to a new device.
## 3 Results
Test images taken with the upgraded device showed remarkable aberrations arising from the misaligned optical elements of the spectrograph. As this problem appeared in the direction perpendicular to dispersion, it influenced the overall throughput and the level of scattered light. Still, it didn't affect the spectral resolution and transmission of the device. Thus we leave the evaluation of the total throughput and stability for future works and concentrate here primarily on studying these unaffected characteristics.
### Transmission
Analysis of observational data revealed significantly improved spectrum quality due to better control of aberrations and enhanced transmission in the Sony lens. In the images, the point spread function remains nearly stable across the field of view in the 380-850 nm wavelength range. As a result, the shortwave limit of the working spectral range has been extended by 70 nm, from 450 nm to 380 nm. In the infrared, the working range of the current setup is limited by 900 nm. In Fig. 2, we show four samples of the observed spectrum of the daytime sky.
Figure 2: Four samples of the solar spectrum taken at TNO during day time.
### Resolving power
The resolving power of the modified eShel was evaluated by fitting the Gaussian function to the emission lines of the ThAr spectrum. This procedure is implemented as a standard step of processing in PyYAP.
Inspection of the ThAr spectra showed that the focus of the imaging camera remained stable during all observational nights. Within the spectrograph's working wavelength range, the resolving power \(R=\lambda/\Delta\lambda\) varied from \(10,000\) to \(12,500\), with the median \(R=11,700\) evaluated from 355 lines in a single image. The resolving power does not variate significantly between nights: the full width at half maximum (FHWM) of the mean ThAr line equals 3.7 pixels, close to the optimal sampling.
## 4 Scientific Application
A medium-resolution fibre-fed spectrograph, in combination with a 1-meter class telescope, can be a powerful instrument for the spectroscopy of relatively bright sources. Literature has many examples of using eShel in stellar physics and the physics of the Solar system objects. Due to its compact design and high positional stability, this spectrograph appears even in the observations of the extrasolar planets. The thing is that the accuracy of the radial velocity measurements reported in Kovacs et al. (2014), Pribulla et al. (2015), and Engel et al. (2017) was better than \(100\,\mathrm{m}\,\mathrm{s}^{-1}\) for the stars brighter than 11 magnitudes and exposure time under one hour. Such characteristics enable the detection and observation of hot Jupiters around the brightest stars. Kovacs et al. (2014) also gave the example of how to use eShel for observation of pulsating stars (cepheids).
The proposed upgrade opens new perspectives for the family of small telescopes in NARIT, as we have several spectrographs which, after the improvement, can be installed at any of our telescopes. In this way, it becomes possible to move part of scientific proposals aimed at studying exoplanets, active solar-like stars, binary and multiple stars from the main 2.4-m Thai National Telescope to smaller instruments without losing the efficiency and observing time. However, the main stimulus which led us to this technical work was the capability of using this device for asteroseismology of the brightest fast-rotating pulsating stars.
To demonstrate the efficiency of eShel in asteroseismological observations, in Fig. 3, we show an example of non-radial pulsations discovered in a 4-magnitude fast-rotating star. A typical pattern of waves propagating across the averaged spectral profile is in the left panel of 3. The right panel shows the 2D periodogram used for the identification of frequencies of pulsation. In this example, the star has been observed continuously with short exposures for more than five hours with the original version of eShel and the 1-m telescope of NARIT. The upgraded version of the spectrograph will allow us to increase the signal-to-noise ratio (SNR) of observational data and, thus, expand the number of potential targets or increase the temporal resolution of data with shorter exposure time preserving the same level of SNR.
#### ORCID identifiers of the authors
0000-0002-1912-1342 -- Eugene Semenko
0000-0001-5094-3910 -- David Mkrtichian
#### Author contributions
SR, ES, and DM are responsible for formulating the project, its technical implementation, and carrying out the observations. ES and DM are responsible for data reduction and analysis. SP
contributed to the project administration. All authors equally contributed to the text of the article.
### Conflicts of interest
The authors declare no conflict of interest.
|
2304.04562 | Degrees of closed points on hypersurfaces | Let $k$ be any field. Let $X \subset \mathbb{P}_k^N$ be a degree $d \geq 2$
hypersurface. Under some conditions, we prove that if $X(K) \neq \emptyset$ for
some extension $K/k$ with $n:=[K:k] \geq 2$ and $\gcd(n,d)=1$, then $X(L) \neq
\emptyset$ for some extension $L/k$ with $\gcd([L:k], d)=1$, $n \nmid [L:k]$,
and $[L:k] \leq nd-n-d$. Moreover, if a $K$-solution is known explicitly, then
we can compute $L/k$ explicitly as well. As an application, we improve upon a
result by Coray on smooth cubic surfaces $X \subset \mathbb{P}^3_k$ by showing
that if $X(K) \neq \emptyset$ for some extension $K/k$ with $\gcd([K:k], 3)=1$,
then $X(L) \neq \emptyset$ for some $L/k$ with $[L:k] \in \{1, 10\}$. | Francesca Balestrieri | 2023-04-07T17:30:24Z | http://arxiv.org/abs/2304.04562v2 | # Degrees of closed points on diagonal-full hypersurfaces
###### Abstract.
Let \(k\) be any field. Let \(X\subset\mathbb{P}^{N}_{k}\) be a diagonal-full degree \(d\) hypersurface, where \(d\) is an odd prime. We prove that if \(X(K)\neq\emptyset\) for some extension \(K/k\) with \(n:=[K:k]\) prime and \(\gcd(n,d)=1\), then \(X(L)\neq\emptyset\) for some extension \(L/k\) with \(\gcd([L:k],nd)=1\) and \([L:k]\leq nd-n-d\). Moreover, if a \(K\)-solution is known explicitly, then we can compute \(L/k\) explicitly as well. When \(n\) or \(d\) is not prime, we can still say something about the possible values of \([L:k]\). As an example, we improve upon a theorem by Coray on smooth cubic surfaces \(X\subset\mathbb{P}^{3}_{k}\), in the case when \(X\) is diagonal-full, by showing that if \(X(K)\neq\emptyset\) for some extension \(K/k\) with \(\gcd([K:k],3)=1\), then \(X(L)\neq\emptyset\) for some \(L/k\) with \([L:k]\in\{1,10\}\).
MSC2020: 11E76, 11D25, 11D41
## 1. Introduction
Springer's theorem for quadratic forms famously states that, if a quadratic form \(\varphi\) on a finite-dimensional vector space over a field \(k\) is isotropic over some extension \(L/k\) of odd degree, then it is already isotropic over \(k\) (see [10] for the case when the characteristic is not \(2\) and [1, Corollary 18.5] for any characteristic). Equivalently, in more geometric terms, if \(X\subset\mathbb{P}^{N}_{k}\) is a degree \(2\) hypersurface, then \(X(L)\neq\emptyset\) for some extension \(L/k\) of odd degree implies that \(X(k)\neq\emptyset\). A natural question to ask is whether Springer's theorem generalises to higher degree forms.
**Question 1.1**.: Given a degree \(d\geq 3\) hypersurface \(X\subset\mathbb{P}^{N}_{k}\) over a field \(k\), is it true that if \(X(L)\neq\emptyset\) for some extension \(L/k\) with \(\gcd([L:k],d)=1\), then \(X(k)\neq\emptyset\)?
When \(d\geq 4\), the general answer to Question 1.1 seems to be _no_, while, when \(d=3\), Cassels and Swinnerton-Dyer have conjectured that the answer to Question 1.1 should be _yes_. Some progress towards the conjecture by Cassels and Swinnerton-Dyer has been obtained by Coray (see [11]), who proved, for any smooth cubic surface \(X\subset\mathbb{P}^{3}_{k}\) over a perfect field \(k\), that if \(X(K)\neq\emptyset\) for some extension \(K/k\) with \(\gcd([K:k],3)=1\), then \(X(L)\neq\emptyset\) for some extension \(L/k\) with \([L:k]\in\{1,4,10\}\). In recent work, Ma has been able to remove the condition on the field being perfect, proving Coray's result for any field (see [13]). Moreover, when \(k\) is a global field, Rivera and Viray have shown that, if the Brauer-Manin obstruction is the only one for the Hasse principle for rational points on smooth cubic surfaces in \(\mathbb{P}^{3}\) over \(k\) (and, by a conjecture by Colliot-Thelene and Sansuc, this should always be the case), then the conjecture by Cassels and Swinnerton-Dyer holds for such surfaces (see [12]).
In this paper, we are concerned with the following much weaker version of Question 1.1.
**Question 1.2**.: Let \(X\subset\mathbb{P}^{N}_{k}\) be a degree \(d\geq 3\) hypersurface over a field \(k\). If \(X(K)\neq\emptyset\) for some finite extension \(K/k\) with \(\gcd([K:k],d)=1\), can we find some (somewhat explicit) finite extension \(L/k\) with \(\gcd([L:k],d)=1\), \([K:k]\nmid[L:k]\), and \(X(L)\neq\emptyset\)?
Our main theorem gives a positive answer to Question 1.2 for the class of _diagonal-full_ (see Definition 2.3) hypersurfaces of degree \(d\), under some assumptions on \(d\) and \([K:k]\).
**Theorem** (Theorem 3.1).: _Let \(k\) be any field. Let \(X\subset\mathbb{P}^{N}_{k}\) be a diagonal-full degree \(d\) hypersurface over \(k\), where \(d\) is an odd prime. If \(X(K)\neq\emptyset\) for some extension \(K/k\) with \(n:=[K:k]\) prime and \(\gcd(n,d)=1\), then \(X(L)\neq\emptyset\) for some extension \(L/k\) with \(\gcd([L:k],nd)=1\) and \([L:k]\leq nd-n-d\). Moreover, if a \(K\)-solution is known explicitly, then \(L/k\) can be computed explicitly as well._
When \(n\) or \(d\) is not prime, the proof of Theorem 3.1 can still say something about the possible values of \([L:k]\). As an example, we prove the following result, which implies an improvement upon Coray's and Ma's theorems when considering diagonal-full forms.
**Theorem** (Theorem 3.10).: _Let \(k\) be a field and let \(X\subset\mathbb{P}^{N}_{k}\) be a cubic diagonal-full hypersurface over \(k\). If \(X(K)\neq\emptyset\) for some simple field extension \(K/k\) with \([K:k]=4\), then \(X(L)\neq\emptyset\) for some extension \(L/k\) with \([L:k]\in\{1,5\}\)._
**Corollary 1.3**.: _Let \(k\) be a field and let \(X\subset\mathbb{P}^{3}_{k}\) be a smooth diagonal-full cubic surface over \(k\). If \(X(K)\neq\emptyset\) for some extension \(K/k\) with \(\gcd([K:k],3)=1\), then \(X(L)\neq\emptyset\) for some \(L/k\) with \([L:k]\in\{1,10\}\)._
Proof.: By Coray's and Ma's results, we know, under the hypothesis of the corollary, that there exists some \(L/k\) with \([L:k]\in\{1,4,10\}\) and \(X(L)\neq\emptyset\). If \([L:k]=4\), then either \(L/k\) is simple, in which case, by Theorem 3.10, there is some other \(L^{\prime}/k\) with \([L^{\prime}:k]\in\{1,5\}\) and \(X(L^{\prime})\neq\emptyset\), or \(L/k\) is not simple. If \(L/k\) is not simple, then, since it is finite, it must be a tower of simple extensions \(L/k(\alpha)/k\) with \([L:k(\alpha)]=[k(\alpha):k]=2\). Then \(X_{k(\alpha)}\) is a smooth diagonal-full cubic surface as well, and \(X_{k(\alpha)}(L)\neq\emptyset\), where \([L:k(\alpha)]=2\); this implies that \(X(k(\alpha))\neq\emptyset\). Repeating the same argument with \(k(\alpha)\) and \(k\), we get that \(X(k)\neq\emptyset\) and we can let \(L^{\prime}=k\). In any case, we have found some \(L^{\prime}/k\) with \([L^{\prime}:k]\in\{1,5\}\) and \(X(L^{\prime})\neq\emptyset\). If \(L^{\prime}=k\) we are done, and if \([L^{\prime}:k]=5\), then any quadratic extension \(L^{\prime\prime}/L^{\prime}\) (thus with \([L^{\prime\prime}:k]=10\)) satisfies \(X(L^{\prime\prime})\neq\emptyset\).
## 2. Preliminaries on degree \(d\) forms
Hypersurfaces \(X\subset\mathbb{P}^{N}_{k}\) of degree \(d\) over a field \(k\) are equivalent to degree \(d\) (homogeneous) forms in \(N+1\) variables over \(k\). Since we are going to prove our main theorems in the language of forms, we start by recalling some basic definitions.
**Definition 2.1**.: Let \(\varphi\) be a form of degree \(d\) on a finite-dimensional vector space \(V\) over a field \(k\). We say that \(\varphi\) is isotropic if there exists some non-zero \(v\in V\) with \(\varphi(v)=0\). Otherwise, we say that \(\varphi\) is anisotropic.
**Remark 2.2**.: If \(X\subset\mathbb{P}^{N}_{k}\) is a degree \(d\) hypersurface over a field \(k\) corresponding to the degree \(d\) form \(\varphi\) on \(k^{N+1}\), then, for any extension \(L/k\), we have that \(X(L)\neq\emptyset\) if and only if \(\varphi_{L}\) is isotropic.
If \((i_{0},...,i_{N})\in\mathbb{Z}^{N+1}_{\geq 0}\) and \(x:=(x_{0},...,x_{N})\), we denote by \(\underline{x}^{(i_{0},...,i_{N})}\) the monomial in which \(x_{j}\) appears with exponent \(i_{j}\) if \(i_{j}>0\) and does not appear at all if \(i_{j}=0\).
**Definition 2.3**.: Let \(\varphi\) be a form of degree \(d\) on a finite-dimensional vector space \(V\cong k^{N+1}\) over a field \(k\), say
\[\varphi(x_{0},...,x_{N})=\sum_{\begin{subarray}{c}(i_{0},...,i_{N})\in\mathbb{Z }^{N+1}_{\geq 0}:\\ i_{0}+,...+i_{N}=d\end{subarray}}a_{(i_{0},...,i_{N})\underline{x}^{(i_{0},...,i_ {N})}},\]
with \(a_{(i_{0},...,i_{N})}\in k\). We say that \(\varphi\) is diagonal-full if \(a_{(d,0,...,0)},a_{(0,d,0,...,0)},...,a_{(0,...,0,d)}\neq 0\).
(In more geometric terms, a degree \(d\) hypersurface \(X\subset\mathbb{P}_{k}^{N}\) is diagonal-full if \(X\) is given by an equation
\[\sum_{\begin{subarray}{c}(i_{0},...,i_{N})\in\mathbb{Z}_{\geq 0}^{N+1}:\\ i_{0}+...+i_{N}=d\end{subarray}}\underline{x}^{(i_{0},...,i_{N})}=0\]
with \(a_{(i_{0},...,i_{N})}\in k\) and \(a_{(d,0,...,0)},a_{(0,d,0,0,...,0)},...,a_{(0,...,0,d)}\neq 0\).)
**Example 2.4**.: Any non-degenerate diagonal form is diagonal-full.
**Definition 2.5**.: We let \(D(\varphi_{V}):=\{\varphi(v)\neq 0:v\in V\}\).
The following is a straightforward modification of [1, Theorem 18.3, proof of \((2)\Rightarrow(3)\)].
**Lemma 2.6**.: _Let \(\varphi\) be a form of degree \(d\) on a finite-dimensional vector space \(V\) over \(k\) and let \(f\in k[t]\) be a non-constant polynomial. If there exists some \(a\in k^{\times}\) such that \(af\in\langle D(\varphi_{k(t)})\rangle\), then \(\varphi_{k(g)}\) is isotropic for each irreducible polynomial \(g\) occurring to a power coprime to \(d\) in the factorisation of \(f\), where \(k(g):=k[t]/(g(t))\)._
Proof.: Since \(af\in\langle D(\varphi_{k(t)})\rangle\), there exist some \(0\neq h\in k[t]\) and \(v_{1},...,v_{m}\in V[t]\) such that
\[afh^{d}=\prod_{i=1}^{m}\varphi(v_{i}).\]
If it exists, let \(p\in k[t]\) be a non-constant monic irreducible factor of \(f\) appearing with exponent \(\lambda\) coprime to \(d\) in the factorisation of \(f\) into irreducible polynomials, i.e. say \(f=p^{\lambda}f^{\prime}\) with \(p\) monic irreducible, \(\deg(p)\geq 1\), \(p\nmid f^{\prime}\), and \(\gcd(\lambda,d)=1\). Write \(v_{i}=p^{k_{i}}v_{i}^{\prime}\), where \(k_{i}\geq 0\) and \(p\nmid v_{i}^{\prime}\), for each \(i=1,...,m\). Then
\[ap^{\lambda}f^{\prime}h^{d}=afh^{d}=\prod_{i=1}^{m}\varphi(v_{i})=\prod_{i=1}^ {m}p^{dk_{i}}\varphi(v_{i}^{\prime})=p^{d\sum_{i=1}^{m}k_{i}}\prod_{i=1}^{m} \varphi(v_{i}^{\prime}).\]
Since
\[\lambda+d\nu_{p}(h)=\nu_{p}(ap^{\lambda}f^{\prime}h^{d})=\nu_{p}\left(\prod_{i =1}^{m}p^{dk_{i}}\varphi(v_{i}^{\prime})\right)=d\sum_{i=1}^{m}k_{i}+\sum_{i=1 }^{m}\nu_{p}(\varphi(v_{i}^{\prime})),\]
where \(\nu_{p}(-)\) denotes the valuation at \(p\), and since \(\gcd(\lambda,d)=1\), it follows that \(\nu_{p}(\varphi(v_{j}^{\prime}))\geq 1\) for some \(j\in\{1,...,m\}\). This means that \(\varphi(v_{j}^{\prime})\equiv 0\) mod \(p\). Since by construction \(p\nmid v_{j}^{\prime}\), we also have that \(v_{j}^{\prime}\not\equiv 0\) mod \(p\). Hence, \(\varphi_{k(p)}\) is isotropic, as required.
**Lemma 2.7**.: _Let \(d\) be a positive integer. Let \(k\) be a field and let \(\varphi\) be a diagonal-full form of degree \(d\) on a finite-dimensional vector space \(V\cong k^{N+1}\) over \(k\). Suppose that \(\varphi\) is anisotropic. Let \(0\neq r\in V[t]\). Then \(\deg(\varphi(r))=d\deg(r)\), where \(deg(r):=\max_{i=0,...,N}(\deg(r_{i}))\)._
Proof.: Since \(r\in V[t]\) and since \(V\cong k^{N+1}\), we can write \(r=(r_{0},...,r_{N})\) with \(r_{i}\in k[t]\) for each \(i=0,...,N\). Let \(\deg(r):=\max_{i=0,...,N}(\deg(r_{i}))\), and let
\[I_{\deg(r)}:=\{i\in\{0,...,N\}:\deg(r_{i})=\deg(r)\}.\]
Since \(\varphi\) is diagonal-full, we can write it as
\[\varphi(x_{0},...,x_{N})=\sum_{\begin{subarray}{c}(i_{0},...,i_{N})\in\mathbb{Z }_{\geq 0}^{N+1}:\\ i_{0}+...+i_{N}=d\end{subarray}}a_{(i_{0},...,i_{N})\underline{x}}^{(i_{0},..., i_{N})},\]
with \(a_{(i_{0},\ldots,i_{N})}\in k\) and \(a_{(d,0,\ldots,0)},a_{(0,d,0,\ldots,0)},...,a_{(0,\ldots,0,d)}\neq 0\). If \(\deg(\varphi(r))\neq d\deg(r)\), then some cancellation must have occured among the leading coefficients (not all \(0\), since \(\varphi\) is diagonal-full) of those polynomials \(a_{(i_{0},\ldots,i_{N})}\underline{r(t)}^{(i_{0},\ldots,i_{N})}\) of degree \(d\deg(r)\). (We note that, since \(d\deg(r)\) is the maximal degree that can possibly be attained, the polynomial \(\underline{r(t)}^{(i_{0},\ldots,i_{N})}\) has degree \(d\deg(r)\) if and only if \(i_{j}=0\) for all \(j\notin I_{\deg(r)}\).) In particular, if we let \(0\neq\tilde{r}\in k^{N+1}\cong V\) be defined by
\[\tilde{r}_{i}=\begin{cases}r_{i}^{*}&\text{ if }i\in I_{\deg(r)},\\ 0&\text{ if }i\notin I_{\deg(r)},\end{cases}\]
where \(r_{i}^{*}\in k\) denotes the leading coefficient of \(r_{i}(t)\), then \(\tilde{r}\) must satisfy
\[\varphi(\tilde{r})=\sum_{\begin{subarray}{c}(i_{0},\ldots,i_{N})\in\mathbb{Z} _{\geq 0}^{N+1}:\\ i_{0}+\ldots+i_{N}=d\end{subarray}}a_{(i_{0},\ldots,i_{N})}\bar{\Sigma}^{(i_{0},\ldots,i_{N})}=0,\]
which would imply that \(\varphi\) is isotropic, a contradiction. Hence, \(\deg(\varphi(r))=d\deg(r)\), as required.
## 3. Proof of the main theorems
In this section, using fairly simple arguments, we prove (in the language of forms) the two main theorems of the paper.
**Theorem 3.1**.: _Let \(k\) be any field. Let \(X\subset\mathbb{P}_{k}^{N}\) be a diagonal-full degree \(d\) hypersurface over \(k\), where \(d\) is an odd prime. If \(X(K)\neq\emptyset\) for some extension \(K/k\) with \(n:=[K:k]\) prime and \(\gcd(n,d)=1\), then \(X(L)\neq\emptyset\) for some extension \(L/k\) with \(\gcd([L:k],nd)=1\) and \([L:k]\leq nd-n-d\). Moreover, if a \(K\)-solution is known explicitly, then \(L/k\) can be computed explicitly as well._
Proof.: If \(\varphi\) is isotropic over \(k\), we can take \(L=k\) and \([L:k]=1\) is coprime to \(nd\). So, from now on, we assume that \(\varphi\) is anistropic over \(k\).
Since \([K:k]\) is prime, \(K/k\) is a simple extension. Let \(K=k(\alpha)\) and let \(f\in k[t]\) be the minimal (irreducible) polynomial of \(\alpha\) over \(k\). Since, by assumption, \(\varphi_{k(f)}\) is isotropic, it follows that there exists some \(v\in V[t]\) such that \(\varphi(v)\equiv 0\bmod f\) but \(v\not\equiv 0\bmod f\). By the division algorithm, there exist some \(0\neq h\in k[t]\) and \(w,r\in V[t]\) such that
\[hv=fw+r\]
and with \(\deg(h)<\deg(f)=n\) and \(\deg(r)<\deg(f)=n\). Since
\[h^{d}\varphi(v)=\varphi(hv)=\varphi(fw+r)=f^{d}\varphi(w)+f(\text{other stuff})+ \varphi(r)\]
and since \(f\mid\varphi(v)\), it follows that \(f\mid\varphi(r)\).
If \(r=0\), then \(f\mid hv\). But since \(f\) is irreducible and \(f\nmid v\), it follows that \(f\mid h\), which is a contradiction since \(\deg(h)<\deg(f)\). Hence, \(r\neq 0\). Let \(\varphi(r)=fg\) for some \(g\in k[t]\). Since \(r\neq 0\) and since, by assumption, \(\varphi\) is anisotropic, it follows that \(\varphi(r)\neq 0\): indeed, since \(r(t)\neq 0\), there is some \(\tilde{t}\in k\) such that the specialisation \(r(\tilde{t})\in V\) is also not \(0\); if, however, \(\varphi(r)=0\), then we would have in particular that \(\varphi(r(\tilde{t}))=0\), which would imply that \(\varphi\) is isotropic over \(k\), a contradiction. Since \(\varphi(r)\neq 0\), it follows that \(g\neq 0\). Hence, we have that \(fg\in\langle D(\varphi_{k(t)})\rangle\). Since \(\varphi(r)=fg\) and \(\deg(r)<\deg(f)\), it follows that
\[\deg(g)+\deg(f)=\deg(\varphi(r))<d\deg(f)=dn,\]
that is, \(\deg(g)<n(d-1).\) Notice also that \(\deg(g)\geq 1,\) since otherwise we would get, by Lemma 2.7, that \(d\deg(r)=\deg(\varphi(r))=\deg(f)=n,\) which is a contradiction to the fact that \(\gcd(d,n)=1.\)
In the remainder of the proof, we aim to show that there exists an irreducible factor \(p\) of \(fg\) of exponent \(\lambda\) coprime to \(d\) and with \(\gcd(\deg(p),dn)=1\) and \(\deg(p)>1\) (with the goal of then applying Lemma 2.6 to it). Let the factorisation of \(g\) into irreducible factors be
\[g=g^{*}\prod_{i=1}^{r}p_{i}^{\lambda_{i}}\]
where \(g^{*}\in k^{\times}\) and, for each \(i=1,...,r,\) the distinct polynomials \(p_{i}\in k[t]\) are monic and irreducible, with \(\deg(p_{i})=:u_{i}\) and \(\lambda_{i}\geq 1\). Then
\[\deg(g)=\sum_{i=1}^{r}u_{i}\lambda_{i}<n(d-1).\]
We now introduce some terminology and notation.
**Definition 3.2**.: Let \(n^{*}\in\{1,...,d-1\}\) be the unique integer such that \(n^{*}\equiv-n\bmod d.\) We define the set
\[S_{d,n}:=\left\{n^{*}+jd:j\in\mathbb{Z}_{\geq 0}\text{ and }n^{*}+jd<n(d-1)\right\}.\]
**Definition 3.3**.: Let \(u\in S_{d,n}\). We call any partition \(u=\lambda_{1}u_{1}+...+\lambda_{r}u_{r}\) in which there exists some \(i\) with \(u_{i}=1\) and \(\gcd(\lambda_{i},d)=1\) an inadmissible partition. We call all the other partitions of \(u\) admissible.
**Claim 3.4**.: _Let \(\lambda_{1}u_{1}+...+\lambda_{r}u_{r}\) be an admissible partition of \(u\in S_{d,n}\). Then there exists at least one \(i\in\{1,...,r\}\) with \(\lambda_{i}\) coprime to \(d\) and \(u_{i}>1\) coprime to both \(n\) and \(d\)._
Proof.: Indeed, suppose, for a contradiction, that this is not the case. Then, for any \(i\in\{1,...,r\}\), either \(\lambda_{i}\) is not coprime to \(d\) or \(u_{i}\) is not comprime to both \(n\) and \(d\). (We note that if \(\gcd(\lambda_{i},d)=1\), then \(u_{i}>1\) since the partition is admissible.) Since \(u\in S_{d,n}\) and since \(d\) is prime and \(\gcd(d,n)=1\), it follows that \(u\) is coprime to \(d\). Hence, since \(u=\sum_{i=1}^{r}u_{i}\lambda_{i}\), there exists at least one \(i\) with \(\lambda_{i}\) coprime to \(d\). Let
\[I_{d}:=\{i\in\{1,...,r\}:\lambda_{i}\text{ is coprime to }d\},\]
which is non-empty, as noted above. Since the partition is admissible, we must have that \(u_{i}>1\) for all \(i\in I_{d}\). Moreover, by assumption, we must have that \(u_{i}\) is not coprime to both \(n\) and \(d\) for all \(i\in I_{d}\). Consider the subset of \(I_{d}\) defined by
\[J_{d}:=\{j\in I_{d}:u_{j}\text{ is coprime to }d\}.\]
Since we are assuming that, for any \(i\), either \(\lambda_{i}\) is not coprime to \(d\) or \(u_{i}\) is not coprime to both \(n\) and \(d\), using the fact that \(n\) is prime we must have that \(u_{j}\) is divisible by \(n\) for all \(j\in J_{d}\), and thus that \(\sum_{j\in J_{d}}\lambda_{j}u_{j}\in n\mathbb{Z}\). Moreover, again using our assumptions, it follows by definition that if \(i\in\{1,...,r\}-J_{d}\), then \(\lambda_{i}u_{i}\in d\mathbb{Z}\). Hence, we can write \(u\) as
\[u=\underbrace{\sum_{i\notin I_{d}}\lambda_{i}u_{i}}_{\in d\mathbb{Z}}+ \underbrace{\sum_{j\in(I_{d}-J_{d})}\lambda_{j}u_{j}}_{\in d\mathbb{Z}}+ \underbrace{\sum_{j\in J_{d}}\lambda_{j}u_{j}}_{\in n\mathbb{Z}}. \tag{3.1}\]
Using the fact that \(\gcd(u,d)=1\), it follows that \(J_{d}\neq\emptyset\). Moreover, we must also have that \(\sum_{j\in J_{d}\neq\emptyset}\lambda_{j}u_{j}\notin d\mathbb{Z}\).
Write \(\sum_{j\in J_{d}}\lambda_{j}u_{j}=nc\), for some \(c\in\mathbb{Z}_{>0}\). Recall that \(u=n^{*}+md\), for some \(m\in\mathbb{Z}_{\geq 0}\) such that \(u<(d-1)n\). It follows that \(c<d-1\) (note that \(d\geq 3\)). Moreover, reducing (3.1) modulo \(d\), we get
\[\begin{array}{ccc}&n^{*}&\equiv nc&(\bmod\,d)\\ \therefore&-n&\equiv nc&(\bmod\,d)\\ \therefore&n(c+1)&\equiv 0&(\bmod\,d).\end{array}\]
Since \(n\) is coprime to \(d\), it follows that \(c+1\equiv 0\pmod{d}\). But \(c\in\{1,...,d-2\}\), meaning that \(c+1\in\{2,...,d-1\}\) is coprime to \(d\), a contradiction. Hence, for each admissible partition \(u=\lambda_{1}u_{1}+...+\lambda_{r}u_{r}\), there exists at least one \(i\in\{1,...,r\}\) with \(\lambda_{i}\) coprime to \(d\) and \(u_{i}>1\) coprime to both \(n\) and \(d\).
We now resume the proof of the main theorem. We make two claims.
**Claim 3.5**.: _In the notation and assumptions as above,_
1. \(\deg(g)\in S_{d,n}\)_;_
2. \(\deg(g)=\sum_{i=1}^{r}\lambda_{i}u_{i}\) _is an admissible partition._
Proof.:
1. By Lemma 2.7, we have that \(\deg(\varphi(r))=d\deg(r)\). Since \(\varphi(r)=fg\), it follows that \(\deg(g)=-n+d\deg(r)\). Since, moreover, \(1\leq\deg(g)<n(d-1)\), it follows that \(\deg(g)\in S_{d,n}\), as claimed.
2. Suppose, for a contradiction, that \[\deg(g)=\sum_{i=1}^{r}\lambda_{i}u_{i}\] is an inadmissible partition. This means that there exists some \(i\) with \(u_{i}=1\) and \(\gcd(\lambda_{i},d)=1\). Since \(u_{i}=\deg(p_{i})\), this means that \(g\) has a monic linear factor \(p_{i}\in k[t]\) with exponent coprime to \(d\). We note that \(p_{i}\nmid f\), since \(f\) is irreducible and \(\deg(p_{i})=1<\deg(f)=n\). Hence, \(p_{i}\) is a monic linear factor of \(fg\) appearing with exponent coprime to \(d\) in the factorisation of \(fg\). By Lemma 2.6, this means that \(\varphi_{k(p_{i})}\) is isotropic. But \(k(p_{i})\cong k\), which implies that \(\varphi\) is isotropic, a contradiction to the assumption that \(\varphi\) is anisotropic. Hence, \(\deg(g)=\sum_{i=1}^{r}\lambda_{i}u_{i}\) is an admissible partition, as claimed.
By Claims 3.5 and 3.4, there exists some \(i\in\{1,...,r\}\) with \(\gcd(\lambda_{i},d)=1\) and \(u_{i}>1\) with \(\gcd(u_{i},nd)=1\). This corresponds to an irreducible factor \(p_{i}\) of degree \(u_{i}\) of \(g\) with exponent \(\lambda_{i}\) coprime to \(d\). We notice that \(p_{i}\nmid f\), since both \(f\) and \(p_{i}\) are irreducible and \(\deg(p_{i})=u_{i}\neq n=\deg(f)\). Hence, \(p_{i}\) is a monic irreducible factor of \(fg\) of exponent \(\lambda_{i}\) coprime to \(d\). By Lemma 2.6, this implies that \(\varphi_{k(p_{i})}\) is isotropic. By letting \(L:=k(p_{i})=k[t]/(p_{i}(t))\), we see that \([L:k]=u_{i}\) satisfies \(\gcd([L:k],nd)=1\), as required.
In order to show that any \(L/k\) found by using the above method satisfies \([L:k]\leq nd-n-d\), it suffices to show that \(u_{\max}:=\max S_{d,n}=nd-n-d\), because then we can just notice that, for any \(u\in S_{d,n}\) with \(u\neq u_{\max}\), if \((a_{1},...,a_{r})\) is an admissible (in the sense that \(a_{i}>1\) for all \(i=1,...,r\)) partition into positive integers for \(u\), then \((a_{1},...,a_{r},jd)\) is an admissible partition for \(u_{\max}\), for some \(j\in\mathbb{Z}_{\geq 1}\), and so \([L:k]\) will necessarily come from some admissible partition of \(u_{\max}\).
**Claim 3.6**.: _For any positive integers \(n,d\geq 2\) with \(\gcd(d,n)=1\) we have_
\[\max S_{d,n}=nd-n-d.\]
Proof.: We assume first that \(n<d\). If \(n^{*}\in\{1,2,...,d-1\}\) is such that \(n^{*}\equiv-n\bmod d\), then, since \(n<d\), we have \(n^{*}=d-n\). Hence,
\[\begin{array}{ll}S_{d,n}&=\{d-n+jd:j\in\mathbb{Z}_{\geq 0}\text{ and }d-n+jd<(d-1)n\}\\ &=\{d-n+jd:j\in\{0,1,...,n-2\}\}\end{array}\]
and so \(\max S_{d,n}=d-n+(n-2)d=dn-n-d\).
Assume now that \(d<n\). If \(n^{*}\in\{1,2,...,d-1\}\) is such that \(n^{*}\equiv-n\bmod d\), then, since \(d<n\), we can write \(n^{*}=\alpha d-n\) where \(\alpha\) is the unique positive integer strictly between \(\frac{n}{d}\) and \(\frac{d+n}{d}\). Hence,
\[\begin{array}{ll}S_{n,d}&=\{\alpha d-n+jd:j\in\mathbb{Z}_{\geq 0}\text{ and } \alpha d-n+jd<(d-1)n\}\\ &=\{\alpha d-n+jd:j\in\{0,1,...,n-\alpha-1\}\}\end{array}\]
and so \(\max S_{n,d}=\alpha d-n+(n-\alpha-1)d=dn-n-d\).
So, in any case, \(\max S_{n,d}=dn-n-d\), as required.
Finally, if we have an explicit non-trivial solution over \(K\), then, in the above proof, we also have an explicit \(v\in V[t]\), which implies that \(h,w,r\) are also explicit, and thus that \(g\) is explicit as well. Then the factorisation \(g=g^{*}\prod_{i=1}^{r}p_{i}^{\lambda_{i}}\) into its irreducible factors is also explicit, and we get all its irreducible factors \(p_{i}\) with \(\gcd(\deg(p_{i}),nd)=1\) and \(\gcd(\lambda_{i},d)=1\); for each such factor, \(L=k[t]/(p_{i}(t))\) is explicitly computed.
**Remark 3.7**.: We remark that the statement of Theorem 3.1 is completely symmetric in \(n\) and \(d\) (taking also into account Springer's theorem in the case when either \(d\) or \(n\) is equal to \(2\)).
**Example 3.8**.: Let \(\varphi\) be a diagonal-full cubic form on a finite-dimensional vector space \(V\) over a field \(k\) with \(\varphi_{K}\) isotropic for some simple extension \(K/k\) of degree \(n:=[K:k]=2\). Since \((d,n)=(3,2)\), we have \(S_{d,n}=\{1\}\). By following the proof of Theorem 3.1, this implies that \(\varphi\) is already isotropic over \(k\).
**Example 3.9**.: Let \(\varphi\) be a diagonal-full cubic form on a finite-dimensional vector space \(V\) over a field \(k\) with \(\varphi_{K}\) isotropic for some simple extension \(K/k\) of degree \(n:=[K:k]=5\). Since \((d,n)=(3,5)\), we have \(S_{d,n}=\{1,4,7\}\). By considering the partitions into positive integers of each \(u\in S_{d,n}\), we can find possible values for \([L:k]\). We note that all the partitions into positive integers of \(u\in\{1,4\}\) appear as subpartitions of \(u=7\), so we just need to consider \(u=7\).
* \(u=7\). If a partition of \(7\) into positive integers involves \(1\) or \(2\), then, following the notation in the proof of Theorem 3.1, we know that there exists some \(i\in\{1,...,r\}\) with \(u_{i}\in\{1,2\}\) and \(\gcd(\lambda_{i},3)=1\), which implies that there exists some \(L/k\) with \([L:k]\in\{1,2\}\) and \(\varphi_{L}\) isotropic; if \([L:k]=2\), then we can use Example 3.8 to conclude that \(\varphi\) is isotropic over \(k\). Hence, it suffices to consider those partitions of \(7\) not involving \(1\) or \(2\). The only partitions of \(7\) into positive integers not involving \(1\) or \(2\) are \((7)\) and \((4,3)\). Hence, in this case, there always exists some \(i\in\{1,...,r\}\) with \(\gcd(\lambda_{i},3)=1\) and \(u_{i}\in\{1,2(\leftrightarrow 1),4,7\}\).
Hence, we conclude that there is always an extension \(L/k\) with \([L:k]\in\{1,4,7\}\) and \(\varphi_{L}\) isotropic.
### The case when \(n\) or \(d\) is not prime
When \(n:=[K:k]\) or \(d\) is not prime, the proof of Claim 3.4 might fail. However, even in this case, we can use the proof of Theorem 3.1, with some care, to determine the possible degrees of \(L/k\) with \(\varphi_{L}\) isotropic. We illustrate this procedure by specialising to the case when \(d=3\) and \(n=4\).
**Theorem 3.10**.: _Let \(k\) be a field and let \(\varphi\) be a cubic diagonal-full form on a finite-dimensional vector space \(V\) over \(k\). If there exists a simple extension \(K/k\) with \([K:k]=4\) such that \(\varphi_{K}\) is isotropic, then there exists a finite extension \(L/k\) with \([L:k]\in\{1,5\}\) such that \(\varphi_{L}\) is isotropic._
Proof.: The first part of the proof of Theorem 3.10 is identical to that of Theorem 3.1, so we just sketch it. Let \(K=k(\alpha)\) and let \(f\in k[t]\) be the minimal (irreducible) polynomial of \(\alpha\) over \(k\). Since, by assumption, \(\varphi_{k(f)}\) is isotropic, it follows that there exists some \(v\in V[t]\) such that \(\varphi(v)\equiv 0\bmod f\) but \(v\not\equiv 0\bmod f\). By the division algorithm, there exist some \(0\neq h\in k[t]\) and \(w,r\in V[t]\) such that
\[hv=fw+r,\]
with \(\deg(r)<\deg(f)=4\), with \(f\mid\varphi(r)\), and with \(\varphi(r)\neq 0\). We write \(\varphi(r)=fg\) for some \(g\in k[t]\), which we can show satisfies \(0<\deg(g)<n(d-1)=8\).
Let the factorisation of \(g\) into irreducible factors be
\[g=g^{*}\prod_{i=1}^{r}p_{i}^{\lambda_{i}}\]
where \(g^{*}\in k^{\times}\) and, for each \(i=1,...,r\), the distinct polynomials \(p_{i}\in k[t]\) are monic and irreducible, with \(\deg(p_{i})=:u_{i}\) and \(\lambda_{i}\geq 1\). Then
\[\deg(g)=\sum_{i=1}^{r}u_{i}\lambda_{i}<8.\]
Let us compute \(S_{d,n}\) for \((d,n)=(3,4)\). We have \(n^{*}=2\). Hence,
\[S_{3,4}=\{2+3j:j\in\mathbb{Z}_{\geq 0}\text{ and }2+3j<8\}=\{2,5\}.\]
We notice that \(\deg(g)\in S_{3,4}=\{2,5\}\), since, by Lemma 2.7, we have that \(\deg(\varphi(r))=3\deg(r)\) and since \(\varphi(r)=fg\), implying that \(0<\deg(g)=-4+3\deg(r)<8\).
We remark that if \(\deg(g)=\sum_{j=1}^{r}u_{j}\lambda_{j}\) is such that \(u_{i}\in\{1,2\}\) and \(\gcd(\lambda_{i},3)=1\) for some \(i\in\{1,...,r\}\), then we know that \(g\) has an irreducible factor \(p_{i}\) of degree either \(1\) or \(2\) appearing in the factorisation of \(g\) with exponent \(\lambda_{i}\). Moreover, such a factor \(p_{i}\) cannot divide \(f\), since \(f\) is irreducible and \(\deg(f)=4\), while \(\deg(p_{i})<\deg(f)=4\). Hence, \(p_{i}\) is an irreducible factor of \(fg\) appearing with exponent \(\lambda_{i}\), and thus Lemma 2.6 yields that \(L:=k[t]/(p_{i}(t))\) is a field of degree \(1\) or \(2\) with \(\varphi_{L}\) isotropic. But if \([L:k]=2\), it is easy to check that \(S_{3,2}=\{1\}\) and thus \(\varphi\) is already isotropic over \(k\). Hence, since if \(\deg(g)=\sum_{j=1}^{r}u_{j}\lambda_{j}\) satisfies \(u_{i}\in\{1,2\}\) and \(\gcd(\lambda_{i},3)=1\) for some \(i\in\{1,...,r\}\) then \(\varphi\) is already isotropic over \(k\), in the considerations below we will omit considering any partitions of \(2\) or \(5\) into positive integers having a \(1\) or a \(2\) in them, since any such partition would imply that \(u_{i}\lambda_{i}\in\{1,2\}\) for some \(i\), meaning that \(u_{i}\in\{1,2\}\).
We distinguish two cases, depending on whether \(\deg(g)\) is \(2\) or \(5\).
* **Case \(\deg(g)=2\).** Since any partition of \(2\) into positive integers involves a \(1\) or a \(2\), this implies that, in \(2=\sum_{j=1}^{r}u_{j}\lambda_{j}\), there is always some \(u_{i}\in\{1,2\}\) with \(\gcd(\lambda_{i},3)=1\). Hence, by the discussion above, \(\varphi\) is already isotropic over \(k\).
* **Case \(\deg(g)=5\).** Since the only partition of \(5\) into positive integers that does not involve a \(1\) or a \(2\) is \((5)\), we have, for this partition, that \(r=1\) and \(u_{1}\lambda_{1}=5\), implying that \(u_{1}\in\{1,5\}\) and \(\gcd(\lambda_{1},3)=1\). Notice that \(p_{1}\nmid f\) since \(f\) is irreducible of degree \(4\) and \(\deg(p_{1})=u_{1}\in\{1,5\}\); hence, \(p_{1}\) is an irreducible factor in \(fg\) of exponent \(\lambda_{i}\) coprime to \(3\) and by Lemma 2.6, there is some \(L/k\) with \([L:k]\in\{1,5\}\) and \(\varphi_{L}\) isotropic. It follows that, by considering all the partitions of \(5\) into positive integers, if \(\deg(g)=5=\sum_{j=1}^{r}u_{j}\lambda_{j}\), then we can always find some \(L/k\) with \([L:k]\in\{1,5\}\) and \(\varphi_{L}\) isotropic.
Hence, putting together all the possibilities from the two cases above, we conclude that there is always an extension \(L/k\) with \([L:k]\in\{1,5\}\) and \(\varphi_{L}\) isotropic, as required.
A similar proof as the one of Theorem 3.10 yields a procedure that can also give information about the possible degrees \([L:k]\) for other values of \(n\) (and \(d\)).
**Example 3.11**.: Let \(\varphi\) be a diagonal-full cubic form on a finite-dimensional vector space \(V\) over a field \(k\) with \(\varphi_{K}\) isotropic for some simple extension \(K/k\) of degree \(n:=[K:k]=10\). Since \((d,n)=(3,10)\), we have \(S_{d,n}=\{2,5,8,11,14,17\}\). By considering the partitions into positive integers of each \(u\in S_{d,n}\), and by using the knowledge that we have about the cases \((d,n)\in\{(3,2),(3,4)\}\), we can find possible values for \([L:k]\). We note that all the partitions into positive integers of \(u\in\{2,5,8,11,14\}\) appear as subpartitions of \(u=17\), so we just need to consider \(u=17\).
* \(u=17\). If a partition of \(17\) into positive integers involves \(1\),\(2\), or \(4\), then we know that there exists some \(L/k\) with \([L:k]\in\{1,5\}\) and \(\varphi_{L}\) isotropic. The only partitions of \(17\) into positive integers not involving \(1\), \(2\), or \(4\) are \((17)\), \((14,3)\), \((12,5)\), \((11,6)\), \((11,3,3)\), \((10,7)\), \((9,8)\), \((9,5,3)\), \((8,6,3)\), \((8,3,3,3)\), \((7,7,3)\), \((7,5,5)\), \((6,6,5)\), \((6,5,3,3)\), and \((5,3,3,3,3)\). Hence, in this case, there always exists some \(i\in\{1,...,r\}\) with \(\gcd(\lambda_{i},3)=1\) and \(u_{i}\in\{1,2(\leftrightarrow 1),4(\leftrightarrow 1\text{ or }5),5,7,8,11,14,17\}\).
Hence, we conclude that there is always an extension \(L/k\) with \([L:k]\in\{1,5,7,8,11,14,17\}\) and \(\varphi_{L}\) isotropic.
Summary of the procedure.To summarise, the general procedure for any positive integers \(d,n\geq 2\) with \(\gcd(d,n)=1\) is the following.
1. If \(\varphi\) is isotropic, we are done. Assume that \(\varphi\) is anisotropic over \(k\) and that \(\varphi_{K}\) is isotropic for some simple extension \(K/k\) of degree \(n:=[K:k]\geq 2\) coprime to \(d\).
2. Let \(f\) be the (irreducible) minimal polynomial of \(K/k\), so that \(\deg(f)=n\).
3. There is some \(0\neq r\in V[t]\) with \(\deg(r)<n\) and \(0\neq\varphi(r)=fg\), for some \(g\in k[t]\) with \(0<\deg(g)<n(d-1)\) and \(\deg(g)\in S_{d,n}\), since \(\deg(\varphi(r))=d\deg(r)\).
4. Let \(g=g^{*}\prod_{i=1}^{r}p_{i}^{\lambda_{i}}\) be the factorisation of \(g\) into irreducible polynomials over \(k\), where \(g^{*}\) is the leading coefficient of \(g\) and the \(p_{i}\)'s are distinct monic irreducible polynomials of degree \(u_{i}:=\deg(p_{i})\). Then \(\deg(g)=\sum_{i=1}^{r}u_{i}\lambda_{i}\in S_{d,n}\).
5. Let \(u_{\max}\in S_{d,n}\) be the largest element; we have seen that \(u_{\max}=nd-n-d\) (see the proof of Claim 3.6). For any \(u\in S_{d,n}\) with \(u\neq u_{\max}\), any partition \((a_{1},...,a_{t})\) of \(u\) into positive integers is a subpartition of the partition \((a_{1},...,a_{t},jd)\) of \(u_{\max}\), for some \(j\geq 1\).
6. Let \((a_{1},...,a_{r})\) be a partition of \(u_{\max}\) into positive integers. For each \(a_{i}\) with \(\gcd(a_{i},d)=1\), writing \(a_{i}=u_{i}\lambda_{i}\) yields that \(u_{i}\) can be any positive divisor of \(a_{i}\); if \(u_{i}\mid a_{i}\) and \(n\nmid u_{i}\), then we have found a \(p_{i}\nmid f\) (since \(f\) is irreducible and \(\deg(p_{i})=u_{i}\neq n=\deg(f)\)
appearing in the factorisation of \(fg=\varphi(r)\) with exponent \(\lambda_{i}\) coprime to \(d\). By Lemma 2.6, any such \(p_{i}\) yields a field \(L:=k[t]/(p_{i}(t))\) with \(\gcd([L:k],d)=1\), \(n\nmid[L:k]\) and \(\varphi_{L}\) isotropic.
**Remark 3.12**.: If there exists a partition \((a_{1},...,a_{r})\) of \(u_{\max}\) into positive integers with \(\gcd(a_{i},d)>1\) for all \(i=1,...,r\), then unfortunately we cannot get any new information from the procedure. Moreover, if there exists a partition \((a_{1},...,a_{r})\) of \(u_{\max}\) into positive integers such that, for any \(a_{i}\) with \(\gcd(a_{i},d)=1\), we have that \(n\mid a_{i}\), then unfortunately we cannot get any new information in this case as well, because the existence of such a partition implies that \(n\) could possibly divide \([L:k]\) and that possibly \(K\subset L\).
7. Hence, by considering all the possible partitions of \(u_{\max}\) into positive integers, if the situations described in Remark 3.12 do not occur, then we know that there exists some \(L/k\) with \(\gcd([L:k],d)=1\), \(n\nmid[L:k]\), and \(\varphi_{L}\) isotropic, where \([L:k]\) is in the set of all possible degrees found by considering all the partitions of \(u_{\max}\).
|
2305.14458 | Dancing Between Success and Failure: Edit-level Simplification
Evaluation using SALSA | Large language models (e.g., GPT-4) are uniquely capable of producing highly
rated text simplification, yet current human evaluation methods fail to provide
a clear understanding of systems' specific strengths and weaknesses. To address
this limitation, we introduce SALSA, an edit-based human annotation framework
that enables holistic and fine-grained text simplification evaluation. We
develop twenty one linguistically grounded edit types, covering the full
spectrum of success and failure across dimensions of conceptual, syntactic and
lexical simplicity. Using SALSA, we collect 19K edit annotations on 840
simplifications, revealing discrepancies in the distribution of simplification
strategies performed by fine-tuned models, prompted LLMs and humans, and find
GPT-3.5 performs more quality edits than humans, but still exhibits frequent
errors. Using our fine-grained annotations, we develop LENS-SALSA, a
reference-free automatic simplification metric, trained to predict sentence-
and word-level quality simultaneously. Additionally, we introduce word-level
quality estimation for simplification and report promising baseline results.
Our data, new metric, and annotation toolkit are available at
https://salsa-eval.com. | David Heineman, Yao Dou, Mounica Maddela, Wei Xu | 2023-05-23T18:30:49Z | http://arxiv.org/abs/2305.14458v2 | # Dancing Between Success and Failure:
###### Abstract
Large language models (e.g., GPT-3.5) are uniquely capable of producing highly rated text simplification, yet current human evaluation methods fail to provide a clear understanding of systems' specific strengths and weaknesses. To address this limitation, we introduce Salsa, an edit-based human annotation framework that enables holistic and fine-grained text simplification evaluation. We develop twenty one linguistically grounded edit types, covering the full spectrum of success and failure across dimensions of conceptual, syntactic and lexical simplicity. Using Salsa, we collect 12K edit annotations on 700 simplifications, revealing discrepancies in the _distribution_ of transformation approaches performed by fine-tuned models, few-shot LLMs and humans, and finding GPT-3.5 performs more quality edits than humans, but still exhibits frequent errors. Using our fine-grained annotations, we develop Lens-Salsa, a reference-free automatic simplification metric, trained to predict sentence- and word-level quality simultaneously. Additionally, we introduce word-level quality estimation for simplification and report promising baseline results. Our training material, annotation toolkit, and data are released at [http://salsa-eval.com](http://salsa-eval.com).
## 1 Introduction
Text simplification aims to improve a text's readability or content accessibility while preserving its fundamental meaning (Stajner, 2021; Chandrasekar et al., 1996). Traditional human evaluation for text simplification often relies on individual, shallow sentence-level ratings (Sulem et al., 2018; Alva-Manchego et al., 2021), easily affected by the annotator's preference or bias. Maddela et al. (2022) recently proposes a more reliable and consistent human evaluation method by ranking and rating multiple simplifications altogether. However, as text simplification involves performing a series of transformations, or _edits_, such as paraphrasing, removing irrelevant detail or splitting a long sentence into multiple shorter ideas (Xu et al., 2012), sentence-level scoring remains difficult to interpret since it is not reflective of fine-grained information about the types of edits being performed.
Fine-grained human evaluation through span selection has been explored for machine translation (Lommel et al., 2014) and open-ended text generation (Dou et al., 2022). Yet, these evaluation methods are error-driven - i.e., focusing solely on evaluating _failure_ - which punishes creative and diverse generations with minor errors in favor of generic ones. Additionally, machine translation and open-ended generation tasks usually retain none of the input words, while text simplification must balance the editing and preservation of words in the original input (Xu et al., 2016). We thus evaluate simplification quality as the aggregation of edit _successes_ and _failures_ (see Figure 1).
We introduce Salsa & - **S**uccess and **FA**ilure-driven **L**inguistic **S**implification **A**nnotation - an _edit-level_ human evaluation framework capturing
Figure 1: Simplification generated by few-shot GPT-3.5. Our edit-level Salsa annotation communicates a fine-grained evaluation of successes and failures.
a broad range of simplification transformations. Salsa is built on a comprehensive typology (SS3) encompassing 21 _quality_ (e.g., elaboration, generalization, paraphrasing), _error_ (e.g., hallucination, coreference deletion), and _trivial_ (e.g., add articles such as "the") edit types. To enable annotation with Salsa, we develop a easy-to-use interface and tutorial. Using Salsa, we collect 13K edit annotations from 700 simplifications written by five state-of-the-art language models and two humans. With these annotations, we conduct a large-scale analysis of model and automatic metric performance, and further introduce word-level quality estimation for simplification.
Our **main findings** are as follows:
* Few-shot GPT-3.5 simplification far surpasses other models, particularly in sentence-level syntax and content editing. However, its simplifications are not _tuned_ to the types of operations performed by human simplification. (SS5)
* Some fine-tuned models such as the MUSS (Martin et al., 2022) produce more diverse edits than GPT-3.5, yet suffer from incredibly high errors, while others (T5, Raffel et al., 2020) learn to minimize loss by making very few changes. (SS5)
* Compared to lexical and syntax edits, edits modifying sentence content, such as generalization and elaboration, are difficult for current automatic metrics to evaluate. (SS6)
* Fine-tuned on Salsa annotations, our reference-free metric, Lens-Salsa, capture the subtleties of different simplification approaches more accurately than existing metrics. (SS6)
* Leveraging our data, we present the word-level quality estimation task for text simplification and establish initial baselines for future modeling efforts. (SS7)
Our results demonstrate Salsa provides an interpretable and exhaustive evaluation of text simplification. We release our interactive annotation interface, annotator training material, and data at [http://salsa-eval.com](http://salsa-eval.com) to facilitate further development of text generation models, automatic metrics, and edit-based tasks.
## 2 Related Work
**Model Evaluation.** Simplification work broadly agrees some typology of simplification operations exists (Siddharthan, 2014), starting with early rule-based systems which explicitly defined specific syntax operations (Dras, 1999). Past work has experimented with designing models to control the extent of each operation by using a pipeline to perform each operation independently (Maddela et al., 2021; Raffel et al., 2020), predicting edit operations (Dong et al., 2019) or augmenting fine-tuned models with learned control tokens (Martin et al., 2022, 2020). However, evaluation only considers a sentence in its entirety rather than rating individual operations, either using automatic metrics, shown to be an inadequate representation of quality (Alva-Manchego et al., 2021; Sulem et al., 2018), or surface-level Likert ratings, typically asking crowd-sourced annotators to rate on scales of fluency, adequacy and simplicity. These scores are difficult to interpret as independent dimensions of quality and capture no detail into the type of simplification being written (Briakou et al., 2021; Hashimoto et al., 2019). Additionally, despite current systems' often producing simplification errors (Choshen and Abend, 2018), annotating error has primarily been performed through inspection, and has not been incorporated into human or automatic evaluation (Gooding, 2022).
**Linguistic Inspection.** Manual inspection attempts to understand the behavior of simplification models or datasets, characterized by detailed typologies and often conducted by authors or domain experts. Cardon et al. (2022) performs detailed inspection of the ASSET simplification test corpus and use their data to study the behavior of automatic metrics. Stodden and Kallmeyer (2022) and Jiang et al. (2020) propose an interactive linguistic inspection interfaces for sentence alignment and corpus annotation. However, these interfaces are not designed for human evaluation of model outputs and do not provide edit-level ratings for measuring performance.
**Fine-grained Human Evaluation.** Human evaluation performed on a span-level has been previously proposed for a variety of NLP tasks. In translation, the Multidimensional Quality Metrics (MQM) (Lommel et al., 2014), categorizes error into accuracy and fluency sub-types and is later extended by Freitag et al. (2021) to weight errors by severity and combine into a single quality score. Dou et al. (2022) proposes Scarecrow to capture errors appearing in open-ended text generation. However, as these span-based evaluation schemes exclusively annotate error, they encourage generic generations and punish interesting or diverse out
puts. For summarization, the FRANK typology (Pagnoni et al., 2021) aggregates errors into broader categories to benchmark metrics that measure factuality. Inspired by FRANK, Devaraj et al. (2022) introduces a framework to evaluate factuality for text simplification. To our knowledge, error-driven evaluation in text simplification has not yet been proposed beyond the context of factuality.
## 3 The Salsa & Framework
We introduce Salsa, an edit-based human evaluation framework for text simplification defined by a typology of 21 linguistically-grounded edit types with the aim of capturing both successes and failures (i.e., both quality changes and errors, see Fig. 1) through evaluating each edit. Our annotation pipeline consists of: edit selection (SS3.1), categorizing the edits' impact on sentence information (SS3.2), classifying the fine-grained edit type (SS3.3) and rating by efficacy or severity (SS3.4). We implement our Salsa framework through an interactive annotation interface (Fig. 2). The Salsa typology is organized by a decision tree, as illustrated by Figure 3, where annotators only answer 3-4 intuitive questions about each edit.
### Edit Selection
We formulate _edit selection_ as a sequence tagging problem similar to phrase alignment (Yao et al., 2013; Lan et al., 2021), but different in that (1) spans are labeled by the primitive edit operation being performed: either single-operation insertion, deletion, substitution, word-/clause-re order; or multi-operation sentence split and structure changes, (2) a single span may belong to multiple edit operations to account for overlapping edits. An insertion or deletion edit exclusively modifies content, while a substitution either modifies or paraphrases content. A reorder, split or structure edit exclusively performs a content-agnostic syntax transformation. As split and structure edits are multi-operation (i.e., require a combination of primitive operations to perform), they are defined by a set of underlying single-operation _constituent_ edits. For example, this change from passive to active voice via a structure change written by zero-shot GPT-3.5 involves an insertion, substitution, reorder and four deletion edits:
### Categorizing by Information Change
Each selected edit is then labeled with its impact on the underlying sentence information: _less_, _same_, _more_ or _different_ information. Given the type of operation and change to information, we subsequently organize each edit into three linguistic families as defined by Siddharthan (2014):
**Lexical edits** perform simple changes in "wording". This includes paraphrasing (i.e., substitution that keeps the same information) and inconsequential trivial changes (e.g., inserting 'the').
**Syntax edits** capture transformations to the _distribution_ of information, rather than substance. A split converts a candidate sentence to two sentences, a re-order edit re-arranges clauses or wording within a clause, and a structural edit modifies the voice, tense or clausal structure. Examples of structural edit sub-types are in Appendix B.
**Conceptual edits** modify underlying ideas con
Figure 2: The Salsa annotation process consists of (1) selecting edits, (2) identifying information change, (3) classifying edit type and (4) rating efficacy/severity (also see Fig. 3).
veyed by the text. A successful conceptual edit requires elaboration to add clarifying information absent from the input or generalization to delete unnecessary/complicated ideas. Therefore, a substitution, insertion or deletion may alter the content.
### Edit Type Classification
After being categorized into lexical, syntax or conceptual edit families, we further classify each edit operation into 21 fine-grained success (quality), failure (error) or trivial edit types (see Fig. 3). Each specific edit type may only be introduced by certain operations (e.g., a deletion cannot introduce a hallucination error). Successful edits only have one 'type' of success but a failed edit may introduce multiple error types. For example, a successful information insertion will always be _elaboration_, but an unsuccessful information insertion may be one of four errors. We also separately identify edits containing a _grammar error_, as sentence grammar is independent of its semantics [10]. Appendix A enumerates each Salsa edit type.
### Rating Edit Efficacy / Severity
As each quality and error edit has a varying degree of impact on the overall simplification quality, we define three levels to measure the efficacy of quality edits and severity of error edits: 1 - minor, 2 - somewhat, and 3 - major.
**Overall simplification score.** Similar to MQM evaluation in machine translation [11], we collapse edit annotations into a single simplification score to allow for direct system comparison. We calculate sentence-level simplification score \(score(S)\) as a weighted sum of edit ratings:
\[\sum_{e\in E}\exp\left(\frac{len(e_{\text{C}})+len(e_{\text{S}})}{len(C)+len(S )}\right)\cdot w(e)\cdot r(e)\]
where \(S\) is the simplification of complex sentence \(C\), \(E\) is the set of edits, \(e_{C}\) and \(e_{S}\) are the parts of edit \(e\) performed on \(C\) and \(S\) respectively, \(w(e)\) is the edit weight, \(r(e)\) is the edit rating (severity / efficacy), and \(len\) denotes character length.1 For weight scheme \(w(e)\), we fit a linear regression model by considering the sentence-level human ratings gathered in SimpeVal\({}_{2022}\)[12] as a gold standard, as reported in Table 1.
Footnote 1: We normalize the edit length and use \(\exp\) to add weight for longer edits (see example in Appendix C).
The absolute values of the quality weights are generally higher than the error weights, as simplifications tend to make far more quality edits than error edits in all three linguistic families (see Figures 4 and 5 in SS5). However, the weight for syntactic simplification er
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Edit Family** & **Quality Weight** & **Error Weight** \\ \hline Conceptual Simplification & 3.5 & -1 \\ Syntactic Simplification & 3 & -5 \\ Lexical Simplification & 4.8 & -1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Edit weighting \(w(e)\) for sentence-level Salsa score, fit to maximize agreement on SimpeVal\({}_{2022}\) human ratings.
Figure 3: The multi-stage Salsa edit evaluation framework, implemented by our edit annotation interface (Fig. 2). Underlying spans are classified into twenty one success (\(\uparrow\)) and failure (\(\downarrow\)) types, rated by efficacy or error severity.
others (-5), as these errors often completely disrupt the sentence. As the type of simplification depends on the needs of each particular user group (Stajner, 2021), weights could be adjusted according to the simplification domain (Cemri et al., 2022; Basu et al., 2023) or use case (Trienes et al., 2022).
## 4 Human Annotation
We describe our use of Salsa to collect 13,180 edit ratings across 2,100 human annotations on 700 simplifications written by 5 state-of-the-art models and two humans.
### Simplification Data
We collect annotations on SimPEval\({}_{2022}\)(Maddela et al., 2022), a challenging simplification benchmark with 360 simplifications written by four state-of-the-art models and two humans on 60 manually selected complex Wikipedia sentences originally written between Oct 2022 and Nov 2022. We further expand the dataset with 40 additional sentences from Wikipedia written in Dec 2022 and adding simplifications from a fine-tuned T5-11B. As these sentences are selected to be more complex than previous simplification benchmarks, it allows systems to demonstrate their full capabilities in performing different simplification operations. Our SimPEval\({}_{2022}\) inputs contain significantly longer sentences (\(\mu=37.87\), \(\sigma=12.73\)) than the previous ASSET benchmark (\(\mu=19.72\), \(\sigma=7.95\)).
**Simplification Systems.** We aim for a broad coverage of model approaches:
Muss(Martin et al., 2022), a BART-large model conditioned on explicit parameter tokens from Martin et al. (2020), fine-tuned on Wiki-Large (Zhang and Lapata, 2017) and mined paraphrase data. MUSS is the SOTA model before GPT-3.5.
T5(Raffel et al., 2020), an encoder-decoder transformer pre-trained on 745 GB of web text. We use T5-3B and T5-11B variants and fine-tune on the aligned Wiki-Auto dataset (Jiang et al., 2020), shown to be higher quality than Wiki-Large.
GPT-3.5, a series of GPT-3 models pre-trained on text and code dated before Q4 2021. We use the best available text-davinci-003 model, based on InstructGPT (Ouyang et al., 2022), fine-tuned with human demonstrations and reinforcement learning with human feedback. We include both zero- and few-shot (5-shot) generation, using the same prompt setup as SimPEval\({}_{2022}\).
Humans. We ask two in-house annotators to write simplifications for the 40 newly selected sentences, replicating instructions used in SimPEval\({}_{2022}\).
### Collecting Human Annotations
As crowd-sourced annotators have shown to have inconsistent quality (Shmueli et al., 2021), we hire 6 undergraduate students from a US university. All annotators were native English speakers and paid $15 / hour. Annotators were trained with an in-depth tutorial consisting of broad explanations of simplification concepts, over 100 examples covering each of the 21 Salsa edit types and interactive exercises. After finishing the tutorial, annotators completed two rounds of onboarding annotations and were provided feedback by the authors. To concretely measure agreement for each stage of the SALSA framework, we collect annotations in three stages: (1) we have three annotators select edits, (2) a fourth annotator adjudicates the edits into a single selection and (3) the original three annotators classify and rate the adjudicated edits. During each stage, the authors closely monitor each set of annotations to ensure quality and continually provide feedback to annotators. The final result is 2100 annotations, with the average time for a single annotation taking 4.23 minutes. Figure 2 illustrates our annotation interface, with further screenshots of our tutorial included in Appendix G.
### Inter-Annotator Agreement
We calculate edit selection agreement by each token, as a single token may be annotated to multiple edits simultaneously, with Table 2 reporting agreement per-edit, further organized by their type of information change. Agreement is highly dependent on the edit type, as we observe high agreement for deletion (\(\alpha\)=0.75), paraphrase (substitution with
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Edit** & **Sub-type** & **Kripp.**\(\alpha\) & **3 Agree\%** & **2 Agree\%** \\ \hline \hline Insertion & More Information & 0.45 & 14\% & 40\% \\ Deletion & Less Information & 0.75 & 42\% & 65\% \\ Substitution & More Information & 0.15 & 1\% & 11\% \\ & Less Information & 0.31 & 7\% & 26\% \\ \hline \hline
**Reorder** & Word-level & 0.12 & 0\% & 13\% \\ & Component-level & 0.41 & 11\% & 38\% \\ Split & Sentence Split & 0.66 & 32\% & 55\% \\ Structure & Structure & 0.25 & 5\% & 25\% \\ \hline \hline
**Substitution** & Same Information & 0.53 & 21\% & 51\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Edit selection inter-annotator agreement measured per token. As Krippendorff’s \(\alpha\)(2018) includes unlabeled tokens, we also report the percentage of annotated tokens where at least 2 and 3 annotators agree.
the same information, \(\alpha\)=0.53), and sentence split (\(\alpha\)=0.66) edits. We also find low agreement for substitution with more information (\(\alpha\)=0.15), due to the subjectivity among annotators on determining whether new tokens contain 'novel' information, as it was often confused with insertion. Disagreement on reordering (\(\alpha\)=0.12) and structure (\(\alpha\)=0.25) may be attributed to their low frequencies and the ambiguity of overlapping syntactic and content edits, as the highly-compressed SimPeval outputs often make substantial edits whose annotations have multiple correct interpretations. We also report % two and three annotators agree, which we find are similar to fine-grained evaluation frameworks in other text generation tasks Dou et al. (2022). We include further agreement analysis and examples in Appendix D.
## 5 Simplification Systems Edit Analysis
We aggregate our Salsa annotations to explore patterns in fine-tuned, LLM- and human-written simplifications. Figures 4 and 5 summarize the frequency of quality and error edit types. As edits vary in size, we calculate also _edit coverage_ as the length of each edit in proportion to the entire simplification using the same method as SS3.4. Figure 6 reports edit coverage for different edit efficacy and severity ratings. We average the annotations of both human simplifications for our analysis. The following are our main findings:
**All systems make far more quality than error edits, but these errors are sparse (Fig. 4 and 5).** We observe only 16% of these models' edits were errors, but these errors were distributed across all simplifications. 73% of simplifications generated by MUSS have at least one error, compared to 62% / 56% by zero- / few-shot GPT-3.5. Human simplifications have the lowest error rate of 48%.
**Human mainly produces bad deletion errors, which are often subjective (Fig. 5).** After excluding _bad deletion_ errors, humans' error rate drops from 48% to 25%, compared to few-shot GPT-3.5 only decreasing from 56% to 43%. The only anomaly in errors is _bad deletion_, which may attribute to the subjectivity in judging deletion:
```
EXAMPLE
```
Unlike the first film adaptation, in which director Samuel Fuller removed... Unlike the first film adaptation, Samuel Fuller removed... ```
We see that some annotators mark this as a bad deletion while others consider it appropriate as they be
Figure 4: Successful edits on SimPeval per-model, organized by edit type. MUSS successfully paraphrases at a human rate but fails to capture more complex simplification techniques. Compared to GPT-3.5, human content simplification utilizes more generalization, a similar distribution of syntax edits, and slightly less paraphrasing.
Figure 5: Failed edits on SimPeval per-model, organized by edit type. Compared to humans, both GPT-3.5 setups make more syntax and lexical errors. Although humans perform bad deletion, errors at a higher frequency than GPT-3.5, this is reflective of the inherent ambiguity in judging the relevancy of the deleted content. T5-3B, T5-11B and MUSS (w.r.t. syntax edits) make fewer errors than GPT-3.5 simply because they perform less overall edits.
lieve this information is not entirely relevant since the sentence is communicating whether a film is a meaningful adaptation of a book.
**Fine-tuned T5-3B and T5-11B generate conservative simplifications (Fig. 4, 5, and 6).** Compared to all other systems, both T5 models make minimal changes, while still exhibiting high rates of error. This is likely due to their training data, Wiki-Auto, containing shorter sentences, usually requiring simpler simplification techniques, making it difficult for models to generalize on longer and more complex sentences. This underscores the need for explicit simplification design in fine-tuned models, such as the control tokens Martin et al. (2020) used by MUSS.
**GPT-3.5 writes quality edits at a higher frequency than humans, but human edits are longer and more effective (Fig. 4 and 6).** Both zero-shot and few-shot GPT-3.5 make a larger number of content (elaboration and generalization) edits, but humans make longer edits and a higher percentage of high-efficacy edits. Human simplification typically inserts or deletes entire clauses, while GPT-3.5 edits single modifiers or words, which have less impact on sentence quality or simplicity.
**Models elaborate, while humans generalize (Fig. 4).** When simplifying content, all models (excluding T5) tend to elaborate at a higher ratio than humans, for example, GPT-3.5 attempts to insert content 17% more often. As LLMs have shown to encode world knowledge in their parameters Petroni et al. (2019); Brown et al. (2020), GPT-3.5 elaboration is far more effective than MUSS, for example:
**EXAMPLE** _Few-shot GPT-3.5_
After defeating PSD candidate Viorica Dancila by a landslide in 2019, **his** second term..
In 2019, Klaus Johannis defeated PSD candidate Viorica Dancila by a large margin. His second term..
**Split edits are straightforward, Structure edits are far more complex (Fig. 4 and 5).** Surprisingly, sentence split is shown to be the easiest edit for all models to accomplish, with a similar number made by MUSS, GPT-3.5, and human, with even the conservative T5 models making a comparable number of split edits. However, the more complex structure and re-ordering edits are rarely seen in fine-tuned models, we speculate this may be attributed to (i) SimpleEval's sentences are more compressed than models' training data and (ii) GPT-3.5 has a unique ability to perform complicated syntax rewriting, also reflective of findings in abstractive summarization Goyal et al. (2022). Despite GPT-3.5's improvement, the structure error rate demonstrates it has not yet reached human-level ability. Additionally, we observe zero-shot GPT-3.5 produces structure errors (see below example) at a 19% rate above few-shot.
**EXAMPLE** _Zero-shot GPT-3.5_
The sentence included a fine of 5400...
You will receive a fine of 5400...
We also find human simplifications are more conservative with re-ordering than models, but its attempts to simplify using re-ordering often appear arbitrary:
**EXAMPLE** _Human written_
On \(\exists\)November 2022, the British Secretary...
On November 3rd, 2022, the British Secretary...
**Paraphrasing is a crucial, but tricky mechanism**
Figure 6: Edit coverage of efficacy (+) and severity (-) ratings on SimPeVal, separated by approach to simplification. Edit coverage is defined as \((len(e_{\text{C}})+len(e_{\text{S}}))/(len(C)+len(S))\) (see §3.4). The final _all_ column ignores frequency to compare ratio of quality and error among models. High quality simplification is _tuned_ to human performance, rather than maximizing the number of edits. We also report frequencies of only edit ratings in Figure 14 in Appendix.
**(Fig. 4 and 5).** MUSS, GPT-3.5, and human all paraphrase in at least 75% of sentences. Despite low performance in conceptual and syntactic simplification, MUSS paraphrases at a human-like rate likely due to its training on over one million paraphrase sentence pairs mined from web crawl data. Although zero-/few-shot GPT-3.5 paraphrase at a higher rate than humans, they often are not necessary as shown here:
**EXAMPLE**
_Few-shot GPT-3.5_
The club said on social media that customers subdued
the gunman...
The club reported on social media that customers were able...
We include further discussion and analysis of edit-level evaluation with Salsa in Appendix E.
## 6 Automatic Metric Evaluation
While automatic metrics are traditionally evaluated based on correlation with human ratings on high-level aspects such as semantic similarity and simplicity, their ability to capture the subtleties of lexical, syntactic, and conceptual simplification is not well understood. Using our comprehensive annotations collected, we study how well current automatic metrics capture these distinct simplification approaches. Additionally, we introduce Lens-Salsa, a reference-free metric fine-tuned on Salsa annotations.
**Existing Automatic Metrics.** We consider five automatic metrics: BLEU (Papineni et al., 2002), SARI (Xu et al., 2016), the most widely-used text simplification metric, BERTScore(Zhang et al., 2020), measuring semantic similarity based on BERT embedding (Devlin et al., 2019), ComC-mqm, a machine translation metric (Rei et al., 2020) trained on MQM error ratings (Freitag et al., 2021), and Lens(Maddela et al., 2022), a recently proposed text simplification metric fine-tuned on SimPEval that contains rank-based human evaluation ratings of 24 systems' simplifications of TurkCorpus (Xu et al., 2016).
**Lens-Salsa.** The automatic simplification metrics mentioned above require human-written references, which may not exist in practice and are costly to collect. To this end, we introduce Lens-Salsa, a _reference-free_ simplification metric enabled by the edit-level information provided by Salsa annotations. Inspired by the CometKwi machine translation metric design (Rei et al., 2022), Lens-Salsa is first pre-trained on the SimPEval sentence-level scores using UniTE (Wan et al., 2022), a multi-task learning objective with three input formats: _Simp + Ref_, _Simp + Comp_, and _Simp + Comp + Ref_. We then fine-tuned Lens-Salsa on Salsa annotations, with _Simp + Comp_ as the input format. We use a dual-objective to predict both the sentence-level score (calculated by Lens) and word-level quality score \(\hat{y}_{i}\in[-3,3]\), which is the efficacy or severity rating of each word \(w_{i}\).
**Results.** Table 3 reports the Pearson correlation of each metric with the human sub-scores across each Salsa dimension. We calculate the human sub-score for each dimension of simplification as discussed in SS3.4. We find our Lens-Salsa is uniquely sensitive to Salsa edit-level ratings, despite not being trained to predict the Salsa sentence-level score. In fact, fine-tuning on word-level quality scores substantially improved performance (\(+0.07\) correlation on _all edits_ compared to no fine-tuning). Only Lens and Lens-Salsa obtain substantial correlation with human Salsa scores (0.27 and 0.34 respectively), with other metrics demonstrating spurious and even negative correlation with human judgements. Although trained on span-based MQM ratings, Comet-mqm fails to capture monolingual simplification quality, demonstrating the need for simplification-specific quality estimation. Despite their strong performance, we find Lens-based automatic metrics mainly evaluate lexical and syntactic simplification edits, rather than conceptual edits, which may be attributed to the SimPEval training data consisting of shorter, paraphrase-based simplifications. Lastly, all metrics have a higher correlation with quality than error edits. We posit this is primarily due to the sparsity of errors exhibited in the generations generated by the current high-performing systems.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline & Lexical & -0.185 & 0.030 & 0.015 & 0.086 & **0.289** & 0.284 \\ & Syntax & -0.117 & 0.097 & 0.008 & 0.024 & 0.206 & **0.244** \\ & Conceptual & -0.240 & -0.147 & -0.325 & -0.187 & -0.000 & **0.173** \\ \hline \multirow{3}{*}{\begin{tabular}{l} \end{tabular} } & Lexical & -0.259 & -0.162 & -0.134 & -0.004 & -0.059 & **0.015** \\ & Syntax & -0.147 & -0.094 & -0.136 & -0.073 & -0.042 & **-0.013** \\ & Conceptual & -0.128 & -0.099 & -0.293 & -0.169 & -0.016 & **0.062** \\ \hline \multirow{3}{*}{
\begin{tabular}{l} \end{tabular} } & All Error & -0.263 & -0.190 & -0.329 & -0.170 & -0.035 & **0.046** \\ & All Quality & -0.201 & 0.056 & -0.018 & 0.033 & 0.304 & **0.318** \\ \cline{1-1} & All Edits & -0.286 & -0.035 & -0.235 & -0.129 & 0.266 & **0.336** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Pearson correlation between automatic metrics and Salsa sub-scores (§3.4). We exclude human written simplifications, which are used for references. **Best**; Second Best.
Word-Level Quality Estimation
Word-level Quality Estimation (QE), defined as predicting the quality of each token in the output, carries substantial value in evaluating and refining text simplification. Despite its utility as well explored in machine translation (Basu et al., 2018; Zerva et al., 2022), word-level QE has not been studied for text simplification due to a lack of appropriately annotated data. In this section, we leverage our Salsa annotations to demonstrate baseline approaches and show significant potential for future work. As the presence of deletion edits is exclusive to the complex sentence, the task setup is classifying each token in both the complex and simplified sentences as _quality_, _error_, or _ok_.
**Data.** We label each word by the average efficacy/severity rating of its associated edit: \(<0\) as error, \(=0\) as ok, and \(>0\) as quality. Words that are not part of any edits default to the ok label. For edit types such as reorder and substitution that span both sentences, we only label the words in simplified sentences, leaving the words in original sentences with ok labels. Given that split and structure edits are formulated of composite edits, such as deletion and substitution, we deconstruct them into their composite edits before proceeding into the labeling process. For tokens that appear in multiple edits, we use the lowest rating to assign its label. After the entire process, 6.8K, 1.8K, and 27K words are labeled as quality, error, or ok respectively for training.
**Models.** We fine-tune Transformer-based models such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) to perform quality estimation as a sequence tagging task. During inference, we take the hidden states of the first token for each word as the input to the classification head. We also include Lens-Salsa, as one of its training dual-objective is to predict word-level quality. For additional implementation details, please refer to Appendix F.
**Results.** Table 4 shows the F1 scores separated by edit type. RoBERTa models perform better than BERT, even with a smaller size. The overall baseline performance mirrors current models in machine translation (Yang et al., 2022). Interestingly, while word-level QE offers substantial benefits for sentence-level metrics as shown in SS6, the dual objective of sentence-level does not reciprocate similar benefits back, which is potentially due to the pre-training data only focuses on sentence-level QE. Given the imbalance in label distribution, we posit data augmentation could improve performance in detecting error tokens.
## 8 Conclusion
In this work, we introduce Salsa, a novel human evaluation framework that incorporates, edit-based labeling, error and quality evaluation, and dimensions of lexical, syntax and conceptual simplification. We demonstrate Salsa benefits in granularity, accuracy, and consistency. We use Salsa to collect a 13K edit annotation dataset on simplifications written by modern models as well as humans, and analyze the strengths and limitations of GPT-3.5, fine-tuned models, and human simplifications. Finally, we use Salsa annotations to develop the first reference-less automatic metric for text simplification and demonstrate promising baselines for word-level quality estimation, showing productive avenues for future development of fine-grained human evaluation, automatic metric development and simplification error identification.
## Limitations
While we demonstrate promising results on sentence-level evaluation, simplification is often a document-level task (Laban et al., 2021; Sun et al., 2021). Incorporating higher-level operations such as sentence fusion, paragraph compression, and reordering would require an extension to Salsa and presents unique analytical challenges. Additionally, detailed human evaluation inherently requires greater resources to produce a high granularity of annotations. While we show this process can be streamlined with a robust annotator training, Salsa requires a similar amount of resources as widely used fine-grained evaluation in other tasks such as MQM (Lommel et al., 2014) or FRANK
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**Quality Error**} & \multicolumn{1}{c}{**Ok**} & \multirow{2}{*}{**Average**} \\ _\# Words_\(\rightarrow\) & _6.8K_ & _1.8K_ & _27K_ & \\ \hline Majority baseline & 0.00 & 0.00 & 0.87 & 0.29 \\ \hline Lens-Salsa & 0.41 & 0.20 & 0.46 & 0.36 \\ BERT-base & 0.61 & 0.32 & 0.92 & 0.62 \\ BERT-large & 0.64 & 0.38 & 0.92 & 0.65 \\ RoBERTa-base & **0.69** & 0.42 & 0.93 & 0.68 \\ RoBERTa-large & 0.68 & **0.43** & **0.93** & **0.68** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Word-level F1 scores of automatic sequence tagging on evaluation set. “Majority baseline” denotes classifying all tokens as label “ok”.
(Pagnoni et al., 2021).
## Ethics Statement
Our annotations were performed using the SimPEVAL2022 corpus, originally collected from publicly available Wikipedia articles (Maddela et al., 2022) and we further extend the dataset with complex sentences collecting using the same methodology from publicly available Wikipedia articles. As discussed in SS4.2, we perform data collection with in-house annotators from a US university. Annotators were paid $15-$18/hour. We took care to manually review all data prior to annotation as to exclude any triggering or sensitive material from our annotation data. Annotators were informed that any data they felt uncomfortable with was not required to annotate. Our interface was built using the open-source Vue.js2 library, and training of our added T5-11B system was implemented using the open-source Hugging Face Transformers3 library.
Footnote 2: [https://vuejs.org/](https://vuejs.org/)
Footnote 3: [https://huggingface.co/](https://huggingface.co/)
## Acknowledgements
We thank Tarek Naous, Nghia T. Le, Fan Bai, and Yang Chen for their helpful feedback on this work. We also thank Marcus Ma, Rachel Choi, Vishnesh J. Ramanathan, Elizabeth Liu, Govind Ramesh, Ayush Panda, Anton Lavrouk, Vinayak Athavale, and Kelly Smith for their help with human annotation. This research is supported in part by the NSF awards IIS-2144493 and IIS-2112633, ODNI and IARPA via the HIATUS program (contract 2022-22072200004). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
|
2307.02443 | An Exploratory Literature Study on Sharing and Energy Use of Language
Models for Source Code | Large language models trained on source code can support a variety of
software development tasks, such as code recommendation and program repair.
Large amounts of data for training such models benefit the models' performance.
However, the size of the data and models results in long training times and
high energy consumption. While publishing source code allows for replicability,
users need to repeat the expensive training process if models are not shared.
The main goal of the study is to investigate if publications that trained
language models for software engineering (SE) tasks share source code and
trained artifacts. The second goal is to analyze the transparency on training
energy usage. We perform a snowballing-based literature search to find
publications on language models for source code, and analyze their reusability
from a sustainability standpoint.
From 494 unique publications, we identified 293 relevant publications that
use language models to address code-related tasks. Among them, 27% (79 out of
293) make artifacts available for reuse. This can be in the form of tools or
IDE plugins designed for specific tasks or task-agnostic models that can be
fine-tuned for a variety of downstream tasks. Moreover, we collect insights on
the hardware used for model training, as well as training time, which together
determine the energy consumption of the development process. We find that there
are deficiencies in the sharing of information and artifacts for current
studies on source code models for software engineering tasks, with 40% of the
surveyed papers not sharing source code or trained artifacts. We recommend the
sharing of source code as well as trained artifacts, to enable sustainable
reproducibility. Moreover, comprehensive information on training times and
hardware configurations should be shared for transparency on a model's carbon
footprint. | Max Hort, Anastasiia Grishina, Leon Moonen | 2023-07-05T17:13:00Z | http://arxiv.org/abs/2307.02443v1 | # An Exploratory Literature Study on Sharing and Energy Use of Language Models for Source Code
###### Abstract
Context: Large language models trained on source code can support a variety of software development tasks, such as code recommendation and program repair. Large amounts of data for training such models benefit the models' performance. However, the size of the data and models results in long training times and high energy consumption. While publishing source code allows for replicability, users need to repeat the expensive training process if models are not shared.
Goals: The main goal of the study is to investigate if publications that trained language models for software engineering (SE) tasks share source code and trained artifacts. The second goal is to analyze the transparency on training energy usage.
Methods: We perform a snowballing-based literature search to find publications on language models for source code, and analyze their reusability from a sustainability standpoint.
Results: From a total of 494 unique publications, we identified 293 relevant publications that use language models to address code-related tasks. Among them, 27 % (79 out of 293) make artifacts available for reuse. This can be in the form of tools or IDE plugins designed for specific tasks or task-agnostic models that can be fine-tuned for a variety of downstream tasks. Moreover, we collect insights on the hardware used for model training, as well as training time, which together determine the energy consumption of the development process.
Conclusion: We find that there are deficiencies in the sharing of information and artifacts for current studies on source code models for software engineering tasks, with 40% of the surveyed papers not sharing source code or trained artifacts. We recommend the sharing of source code as well as trained artifacts, to enable sustainable reproducibility. Moreover, comprehensive information on training times and hardware configurations should be shared for transparency on a model's carbon footprint.
sustainability, reuse, replication, energy, DL4SE.
## I Introduction
The FAIR data principles are designed to support and enhance the reusability of digital research objects following four guiding principles: to be findable, accessible, interoperable, and reusable [1]. While the initial focus of FAIR was on scientific data, the principles have been transferred to research software [2]. Publishing source code supports the replicability of software but may incur repeated training costs, if a software product is data-driven. Training costs can be especially high for tools that are trained on large amounts of data, such as Machine Learning (ML) models, which have achieved state-of-the-art performance in various disciplines (e.g., text and image understanding, video content prediction [3, 4]). In particular, Deep Learning (DL) often achieves performance improvements by increasing the amount of training data and the size of the model, leading to long training times and substantial energy consumption [5], with an increase in computational costs for state-of-the-art models by a factor of 300 000 between 2012 and 2018 [6, 7]. This trend not only raises barriers for researchers with limited computational resources [8], it is also harmful to the environment [5, 6].
One class of DL models that benefit from training on large amounts of data are Large Language Models (LLMs). LLMs have been able to learn semantic information via training on texts from the Internet and achieve high performance on Natural Language Processing (NLP) tasks [5, 9]. Similarly, by training language models on a large corpus of source code (e.g., as provided by GitHub1), one can learn semantic information of source code [10] and apply the models on SE tasks, such as code generation, bug prediction and fixing, to alleviate developers from tedious work [11]. This research area is referred to as DL4SE, and the models are referred to as _Source Code Models_ (SCMs).
Footnote 1: www.github.com
Training an SCM can take more than 100 days and incur high costs from hardware and energy requirements [12, 13]. From an energy usage point of view, only sharing the source code to train the model is wasteful, because replication or reuse requires repeating the expensive and energy-consuming training process. Instead, trained models should be considered _digital artifacts_ that must be shared to lower the bar for building on existing work [14]. For instance, fine-tuning an existing task-agnostic model requires only a fraction of the computational costs of training such a model from scratch [12].
Despite the benefits of sharing the trained models and source code, a large number of studies in DL, including many in DL4SE, do not make code or models publicly available. Liu et al. [15] surveyed deep learning studies in SE conferences and journals. They found that 74.2% of the studies did not share the source code and data for replication. Failing to share the data or trained artifacts contradicts the software sustainability-quality characteristics [16]. Software sustainability is defined from economic, environmental, social and technical dimen
sions in that software should generate economic value, enable equal and sustained access to social resources, minimize harm to the environment, and ensure technical improvements and maintainability [17]. In this study, we focus on the technical and environmental aspects of sustainability in software, namely reusability and efficiency [16, 18]. To investigate the reusability and resource efficiency of source code models, we perform an exploratory literature search of existing DL4SE publications. For each publication, we investigate whether code and trained models are available, and what the training and energy requirements are. In other words, we focus on the following two research questions:
**RQ1:**: How many DL4SE publications share source code and/or trained models or related trained artifacts?
**RQ2:**: How much energy was used to train these models?
**Contributions:**: The contributions of this paper include:
* We conduct an exploratory study on the sustainability and reusability of (large) language models for source code. We analyze to what extent publications make trained artifacts available, so that software developers and researchers can reuse and profit from large models trained with high energy consumption without incurring such training costs themselves.
* We investigate the information provided in 79 publications with shared artifacts;
* We estimate the energy needed for training models from 30 publications that provided sufficient information;
* We summarize the lessons learned while studying the academic literature with this focus on sustainability;
* We provide recommendations to help researchers make their models more sustainable and support clearly communicating the relevant aspects in their publications.
## II Related Work
### _Sustainable Software Engineering_
Sustainable software engineering addresses sustainability in two regards: (1) creating software that empowers sustainable applications and (2) creating software in a sustainable resource-efficient way. The former is referred to as Information Technology for Green (IT for Green) [19] or sustainability _BY_ software [16]. The latter is called Green IT [20] or sustainability _IN_ software [16]. In this study, we focus on sustainability _IN_ software and use the term _sustainable software_ or _sustainable software engineering_ to define reusable shared software that is built with resource usage considerations in mind. Development of sustainable software can be supported by integrating sustainability goals in the development process [20]. One way to improve the sustainability of software is to optimize its performance by refactoring the source code, which can have positive impacts on the accompanying energy consumption [21]. For example, Verdecchia et al. [22] showed that refactoring code smells in Java applications can reduce energy consumption by almost 50%.
In addition to observing the energy consumed by applying software and potential positive effects by providing sustainable solutions, the energy consumed during the development process is of relevance as well, as pointed out by the GREENSOFT model [23]. The GREENSOFT model presents a life cycle for software products. Accordingly, a green software product should be sustainable during the course of the life cycle, including the software engineering process and the tasks developers address during implementation and maintenance. To alleviate their workload, they can use tools to automate and support software engineering tasks. In this regard, Martinez et al. [24] addressed the field of green software research by measuring energy consumption induced by development and maintenance activities, in particular Automated Program Repair (APR). APR is used to fix software bugs, which usually incur a high monetary cost to resolve, without requiring manual intervention of developers. While APR tools tend to report their performance in terms of number of bugs they are able to fix, Martinez et al. [24] considered their energy consumption as an additional quality measure. To evaluate the trade-off between accuracy and energy consumption of APR tools, they computed the energy cost for each point of accuracy (i.e., energy consumption divided by accuracy).
### _Energy Consumption of Machine Learning Models_
The energy consumption of training and developing ML models is becoming a growing concern [25], with models requiring large amounts of computational resources to train, causing financial costs and CO\({}_{2}\) emissions [7, 26]. Recently, implementation challenges and leaderboards have been introduced to incentivize the development of energy efficient models [27, 28]. Another proposition is to measure the performance of ML models not only with regard to accuracy, but also to consider energy consumption and trade-offs between the two metrics.
To account for the sustainability-accuracy trade-off, Gutierrez et al. [29] analyzed the impact of changing solvers for ML models. Having applied the models to credit card fraud data, they found configurations that required 2.9x more energy while improving accuracy by only 0.016. This illustrates that developers can make trade-offs between energy consumption and ML quality measures, such as precision and recall. In the same line of research, Georgiou et al. [7] compared the energy consumption of two frameworks (TensorFlow, PyTorch) for the development of DL by implementing and comparing the performance of six machine learning models. Energy consumption varied significantly in both the training and inference stages, with TensorFlow requiring less energy for training and PyTorch less energy for inference. However, the framework documentation did not provide information on hardware specifications to allow developers to select models and frameworks with regard to energy requirements.
Verdecchia et al. [25] modified the underlying datasets for training DL models to reduce energy consumption during training. Results showed that reducing dataset size, either in the number of features or number of data points, improves the energy efficiency by up to 92%, while having a negligible effect on accuracy reduction for selected algorithms. Garcia-Martin et al. [30] investigated the impact of parameter tun
ing on energy consumption and accuracy for the Very Fast Decision Tree algorithm. In some cases, small reductions in accuracy (\(<0.1\)) can reduce energy consumption by more than 70%. For an overview of publications addressing Green AI (AI systems developed with sustainability and costs considered), we refer to the systematic review by Verdecchia et al. [31].
### _Energy Consumption of Large Language Models_
To support responsible NLP, Zhou et al. [12] proposed the platform Hulk for benchmarking pre-trained language models in terms of time and cost. Processing time and costs are measured according to cloud services' hardware specifications and resource consumption. The cost of NLP models is evaluated at three stages: pre-training, fine-tuning, and inference. Pre-training is the most expensive stage in the development of language models: it can take several days and can cost up to 75,000S. However, once pre-trained, a model can be fine-tuned for several tasks, which requires less computational resources.
Strubell et al. [5] provided insights on the financial and environmental costs of training large language models for NLP tasks. In particular, they estimate the training cost in USD and carbon emissions for four open-source models. For example, training large language models with neural architecture search can cause CO\({}_{2}\) emissions 17 times as high as the average per-capita consumption in America. Given the high cost of NLP models, Strubell et al. formulated three actionable recommendations: (1) authors should report training times to allow for a cost-benefit analysis rather to solely focus on accuracy; (2) researchers need equal access to computational resources; (3) efficient hardware and algorithms should be prioritized.
## III Literature Search
To find relevant literature, we adopt and adapt a snowballing search procedure [32, 33, 34]. We make small adjustments to the search procedure described by Wohlin et al. [33], as we aim to build on four recent surveys in the domain of deep learning models for software engineering tasks. The surveys examine different research questions than ours but consider the same domain, which makes them good starting points. Moreover, we apply the inclusion and exclusion criteria after each snowballing step to control the scope of the search, as it can quickly become too wide and cover all of SE. Figure 1 presents an overview of the search procedure and the number of publications collected. The search and subsequent information extraction were conducted by the first two authors; the third author helped mitigate classification discrepancies where needed. In step, we select the four survey papers that seed the study. We include both published work and arXiv preprints to ensure timeliness. The four seed surveys are:
* Chen and Monperrus [35]: A literature study on embeddings learned from source code. Embeddings have been trained on different levels of granularity (e.g., binary code, tokens, functions). A list of 21 publicly available embeddings is provided.
* Sharma et al. [36]: A survey of ML techniques for analysing source code. A total of 364 studies published from 2002-2021, divided over 12 SE tasks. For each task, data collection, feature extraction and model training stages are outlined. They listed 61 tools for analyzing source code and applying ML techniques.
* Watson et al. [37]: A literature review of deep learning approaches in SE research. A total of 128 deep learning publications spanning 23 SE tasks have been reviewed.
* Niu et al. [38]: A survey on pre-trained models on source code applied to SE tasks. They presented a total of 20 pre-trained code models that have been applied to 18 tasks.
These four initial studies contain references to a total of 676 publications (step ). After deduplication, we consider a total of 202 unique publications for further investigation based on their title (step ). We deem a paper of interest for further analysis if the title matches the following inclusion criteria:
_IC-1:_: the publication addresses an SE task, and
_IC-2:_: the publication applies a deep learning technique.
To filter relevant publications, we read all 202 publications, published from 2012-2022, and flag publications for exclusion if they did not train language models for source code, i.e., the exclusion criterion for step is:
_EC-1:_: the publication does not train a source code model.
This step leaves us with 108 publications for further analysis.
In step, we extract information from the 108 publications to determine how they share artifacts. First, we investigated if source code is available. For this purpose, we analyzed the respective publications for links or references to external sources (e.g., a GitHub repository for source code, or Zenodo for datasets and tools). Among the 108 publications, 33 publi
Fig. 1: Overview of the search procedure.
cations did not provide source code ("unavailable" in Fig. 1).2 Next, we determined if the shared artifacts do not only provide source code, but include fully functional tools or checkpoints for ML models that are ready to use, without the need to be trained. This was the case for 35 out of the 108 publications ("reusable" in Fig. 1). The remaining 40 publications provided source code but no trained artifacts ("reproducible" in Fig. 1).
Footnote 2: To ensure that no artifacts were overlooked, we performed additional Google searches for publications that did not mention artifacts for replication.
The initial survey-based search is followed by repeated backward snowballing in steps & &. During snowballing, we collect additional relevant publications that have been cited by the 35 publications which provided trained artifacts collected prior. Snowballing is performed incrementally on publications that share trained artifacts, until no new publications are found (i.e., we perform multiple iterations, and stop when a fixed point is reached, which happened after four iterations). This yielded 292 additional publications (published from 2002-2023) that fit the inclusion criteria, bringing the total to 494 (202 from step & and 292 from step & ). After further inspection, 107 of the 292 additional publications did not train a language model and were excluded. Furthermore, 83 of those publications did not provide source code, and 58 of the publications shared source code but not the trained artifacts. Thus, repeated snowballing adds 44 publications that share trained artifacts, bringing the total to 79 publications with shared artifacts, published from 2015 to 2022.
We classify these 79 publications with respect to the 11 SE tasks presented in Table I, which were inspired by Niu et al. [38]. To address these tasks, source code models were trained on 18 different programming languages. The most frequent languages include Java (45 publications), Python (32 publications), and C and/or C++ (18 publications). Figure 2 presents the number of publications for each combination of programming language and SE task (e.g., an approach trained on Java source code for code completion).
We found that there are two types of trained artifacts that were publicly available: (1) trained ML models and tools; (2) source code embeddings. While trained models and tools are aimed at a specific task, source code embeddings are task-agnostic and provide comprehensive code representations for training future models with less effort than generating pre-trained embeddings [12]. Section IV (task-specific tools) and Section V (task-agnostic embeddings) present detailed information for the two types of shared artifacts.
**Answer to RQ1:** Out of the reviewed 293 publications, 33% shared source code and 27% shared trained artifacts.
## IV Task-Specific Code Models
This section presents approaches with shared artifacts that are designed to address specific tasks. In total, we collected 52 task-specific publications, which are summarized in Table II. Publications are presented with regards to the task they address and their respective programming language is shown, as well as hardware configuration and training time, if provided.
Among the 52 publications, two publications shared artifacts for more than one task. Hoang et al. [57] proposed CC2Vec, an approach for representing code changes. For each of the three tasks (log message generation, bug fixing patch identification,
Fig. 2: Number of publications with shared artifacts for combinations of task and programming language.
and just-in-time defect prediction), they trained and shared a separate model. Huang et al. [63] first introduced a new dataset called CoSQA, consisting of 20,604 human-annotated labels for natural language and source code pairs. Additionally, they proposed a model, CoCLR, trained on two tasks: code search and question answering. Their GitHub repository provides model checkpoints for both of these tasks.
The most frequently addressed tasks are concerned with faulty programs: code repair and defect prediction. Ten publications proposed approaches for code repair and nine publications addressed defect prediction. The task with the fewest available artifacts is code translation. Only Lachaux et al. [70] shared their TransCoder models for translating between three programming languages (Java, C++, Python). To allow for the translation of each pair of languages, they shared two models: 1) translate C++ \(\rightarrow\) Java, Java \(\rightarrow\) C++, Java \(\rightarrow\) Python; 2) C++ \(\rightarrow\) Python, Python \(\rightarrow\) C++, Python \(\rightarrow\) Java.
The most popular programming languages, among 11 unique languages considered by the 52 publications, are Java (23 out of 52 publications), C/C++ (14 out of 52 publications), and Python (14 out of 52 publications). In detail, 42 publications considered one programming language, while
ten publications were applied to more than one language: six publications considered two programming languages, one publication considered three languages, and three publications considered four languages. This results in an average of 1.33 programming languages considered per publication.
In addition to programming languages considered, we collect training details, such as hardware used and training time for each publication. However, those are not always provided. There are 22 out of 52 publications without hardware details (42%) and 26 out of 52 without training time (50%), 33% shared neither information (17 out of 52 publications). The training time of 26 publications with such details ranges from two hours or less [41, 55, 78, 79, 85] to hundreds of hours [47, 53]. While it is common to perform training on GPUs, there are four publications that did not use any GPU for their training procedure, published from 2015-2019 [41, 53, 77, 86]. Commonly, publications used a single GPU for training [39, 40, 43, 44, 49, 55, 63, 68, 78, 75, 78, 79, 80, 87, 88], sometimes in combination with CPUs. The highest amount of GPUs have been used by Syvatkovskiy et al. [47]. They utilized 5 Lambda V100 boxes, with 16 V100 GPUs each, resulting in 80 GPUs.
While we focus on the training procedure and the energy associated with creating and sharing an ML model, we note the application of such models can vary highly for different SE tasks. Usually, the reported tested times are lower than the required training time (e.g., more than 100 times quicker than training [40, 75, 76]), but in particular, program repair experiments can require long testing times. For example, Chen et al. [55] applied Sequencer for 130 hours to find patches for 75 bugs. White et al. [56] applied their program repair tool DeepRepair for 2,616 days. Data extraction and preparation steps can also require considerable amounts of time and compute resources, ranging from 5-12 days [73, 78, 81].
The majority of task-specific publications provided access to the full trained models, some of which one needs to request access to [51, 76]. Moreover, there are approaches shared as online tools [44, 49, 86] or IDE extensions [47, 48, 83, 85]. There are also 12 out of 52 publications that did not share the full model, but trained embedding files, which are used by the model. These are marked in Table II with the \({}^{\dagger}\) symbol.
## V Task-Agnostic Code Models
This section presents task-agnostic code models which share means of representing source code as embeddings, for a variety of downstream tasks. These models are able to transform code snippets to embeddings, which can be fine-tuned to SE tasks. For example, Lu et al. [108] provided fine-tuning details for the CodeXGLUE benchmark, with information for task-specific training and inference time for each task.3 The fine-tuning time ranges from 2 GPU hours (defect detection) to 60 hours (text-to-code generation, documentation translation).
Footnote 3: [https://microsoft.github.io/CodeXGLUE/](https://microsoft.github.io/CodeXGLUE/)
In total, we collected 27 task-agnostic models, as shown in Table III. For each publication, we list the model name and the programming languages it was trained on. If available, we list details on hardware configuration and training times. Among the 27 publications, 52% did not provide training time details (14 out of 27) and 26% did not provide their hardware configurations (7 out of 27). For publications without hardware details, training time is not reported as well.
Among the publications that shared training time details, the shortest duration is found for code2vec [101], which was
trained for 1.5 days and a single GPU. However, training large models can usually take weeks, up to 87 days for CodeTrans [98] and 3.5 months for BLOOM [13]. The long training time of BLOOM can be explained by the fact that it was trained on the highest number of programming languages (13 programming languages) in addition to 46 natural languages. Thereby, BLOOM is also the model trained on the highest number of programming languages, as it was trained on 13 out 14 programming languages we observed. BLOOM was not trained on LISP, which was only considered by CodeTrans [98] On average, each task-agnostic model is trained on source code data from 3.6 programming languages. Moreover, 10 out of the 27 publications train on a single programming language, which in 6 out 10 cases is Java.
In comparison to task-specific models, task-agnostic models are trained on more programming languages, 3.6 in comparison to 1.3 programming languages on average, and require a higher computational effort. In addition, publications that provide task-agnostic models for embedding source code are more likely to share hardware configurations than publications with task-specific models. The proportion of publications without training time details is comparable for both types (50% and 52%, for task-specific and task-agnostic models, respectively). Another difference is that task-agnostic models use more sophisticated hardware for training, with each publication using either GPUs or TPUs. Only one publication considered CPUs in addition to GPUs for training [103].
## VI Discussion
To discuss the various facets of RQ2, we consider three aspects: (A) How much energy do task-specific and task-agnostic models consume? (B) To what extent do studies on source code models take sustainability concerns into account? (C) When is sharing a model more efficient than re-training?
### _Energy Usage of Task-specific vs. Task-agnostic Models_
First, we perform a comparison of the energy consumed by training task-specific and task-agnostic models. For this purpose, we collect all publications that provide hardware and training time details, such that we can estimate the consumed energy in kilowatt-hours (kWh). In total, 30 publications provide sufficient information.4
Footnote 4: Note that we did not contact authors to provide missing information.
To estimate energy consumption, we used the Green Algorithms calculator [115].5 This calculator is designed to estimate the carbon footprint and energy needed to run algorithms based on the number and type of CPU/GPU cores, runtime, available memory, and platform run on (PC, local server, cloud). It is also possible to consider the location for training and running algorithms, because the energy mix in the grid impacts the carbon footprint. In contrast to the _Machine Learning Emissions Calculator_[116], the Green Algorithms calculator provides averaged options when details are missing (e.g., "world" if the country is unknown, "Any" CPU type if the type is not known), which is beneficial for estimating energy consumption if these details are missing. Our estimates report the energy needed in kWh with the default location set to "world", because server locations are seldom reported.
Footnote 5: [https://www.green-algorithms.org/](https://www.green-algorithms.org/)
We share energy usage estimations in the last column of Table II and Table III for task-specific and task-agnostic artifacts, respectively. Hardware specifications required by the Green Algorithms calculator are incomplete in the majority of studies considered. Most of the models are trained using a type of accelerator, such as GPU or TPU. Four studies reported cloud provider utilization, while the other studies used different server configurations. To this end, we make assumptions about the missing specifications based on the standard CPU and GPU values stated in product descriptions on web pages of Intel and NVIDIA. In case the calculator does not cover a specific CPU type, we fetch Thermal Design Power (TDP) information from the manufacturers' website, to estimate the power used per core. In addition, for publications that used both CPU and GPU for training their models, we consider both to be active during the entirety of the training time, unless stated differently. We use the specifications reported in Table IV unless stated otherwise by the publications.
Figure 3 illustrates the energy consumed for training for each of the 30 publications. Of these, 12 provided task-agnostic models and 18 task-specific models. Among the task-specific models, 5 only provided partially trained artifacts (e.g., embeddings that are used for later training), and therefore require additional training effort before usage.
**Answer to RQ2-A:** 30 out of 79 publications share sufficient information to estimate their energy consumption during training. Among these, the training of task-agnostic models used more sophisticated hardware (GPUs and TPUs) and required more energy.
### _Sustainability Concerns Considered in DL4SE Studies_
In Section VI-A, we outlined publications that provided sufficient information to estimate the energy required to replicate their models (i.e., hardware and training time). While this is important to understand how high the energy requirements are, it does not illustrate whether the resource usage is sustainable, or whether sustainability was taken into account. Only in a few cases do authors consider the sustainability of the training process and the carbon footprint caused. Here, we present all three publications that, in addition to providing pre-trained artifacts, mention sustainability concerns when training. All of these three trained and provided large task-agnostic models, two of which required "hundreds of petaflop/s-days of compute" [10] or more than a million GPU hours for training [13].
Chen et al. [10] trained Codex on Azure, which purchases carbon credits and uses renewable energies to reduce the carbon footprint. Using the pre-trained Codex model for repeated inference could exceed training costs.
Wang et al. [94] stated that the experimental design followed the objective of avoiding unnecessary computation, by creating smaller-sized models in comparison to existing ones, such as Codex. Moreover, training has been conducted on the Google Cloud Platform, which purchases carbon credits to offset the 49.25kg CO\({}_{2}\) caused by training CodeT5.
Le Scao et al. [13] considered various sustainability aspects during the creation of BLOOM: equipment manufacturing, model training, model deployment. The 81 tons of CO\({}_{2}\) needed for training BLOOM can be attributed to 14% equipment manufacturing, 30% training, 55% idle energy consumption. Training benefits from France's energy grid, which uses nuclear energy in a large proportion, as a low-carbon energy source. Further details on the carbon footprint of BLOOM are provided in a dedicated study by Luccioni et al. [117].
**Answer to RQ2-B:** Three publications covered sustainability concerns of the training process in addition to providing trained models. For example, they used cloud providers that purchase carbon credits or calculated CO\({}_{2}\) emissions resulting from training the shared models.
### _When is Sharing Models More Efficient than Re-training?_
In this section, we provide an exemplary scenario to compute and compare the energy required for training and storing a task-specific and task-agnostic model. We also show the energy used for downloading shared artifacts, to illustrate the energy-saving capabilities of sharing models trained on code.
In accordance with the energy estimates for the training process in Section VI-A, we used the calculator provided by Lannelongue et al. [115]. To determine the energy consumption of training and sharing language models, we followed Lakim et al. [118] who provided an assessment of the carbon footprint for the Arabic language model _Noor_. Data storage energy consumption estimates are based on the cloud storage energy consumption reported by Posani et al. [119], with a mean operating peak power of 11.3 W/TB. This measure includes a redundancy factor of 2 (i.e., an additional copy is stored) and Power Usage Effectiveness (PUE) of 1.6. Per year, this results in the energy consumption of 99 kWh per TB of data. Following the formula by Baliga et al. [120], Posani et al. [119] estimated the energy consumption of data transfers to be 23.9kJ/GB, with 1kJ being equal to 1/3600 kWh.
In Table V, we illustrate the exemplary energy consumption of sharing a tool (500 MB) and a large task-agnostic model (5 GB) over the span of one year. Note that we only consider the energy consumed by training and data storage. Other aspects, such as the manufacturing of hardware components, are omitted. Therefore, our example presents a reduced estimate of the complete energy consumption of the entire model lifecycle.
One also needs to consider the rebound effect depending on the number of downloads when estimating potential energy savings [121]. If trained models are downloaded because it is easy rather than necessary, then excess downloads can cause higher energy consumption than the initial model training caused. In our example, this is the case after 1,247 downloads for the task-specific model and 20,544 downloads for the task-agnostic model. While 20,544 downloads may sound like a large number, CodeBERT [122] was downloaded 1,982,300
Fig. 3: Energy used for training publicly available models for code. We distinguish partially shared and fully shared task-specific models, and fully shared task-agnostic models.
times from Hugging Face in January 2023.6
Footnote 6: [https://huggingface.co/microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base)
**Answer to RQ2-C:** A rebound effect happens when a shared model is downloaded too many times. For example, energy usage for storage and downloading of a 500 MB-size task-specific model is higher than re-training it after ca. 1,130 downloads.
## VII Threats to Validity
This section discusses the threats to validity of this mapping study based on the categories identified by Zhou et al. [123].
**Internal Validity:** Internal validity refers to threats to the validity of results presented in this work, for example, due to missing relevant publications during the literature search stage [124]. To mitigate this threat, we use a systematic process, starting our literature search with four comprehensive surveys on machine learning approaches for the SE domain. These provide an overview of relevant publications from 2022 and prior. Moreover, we apply four stages of snowballing to find additional references. This allows us to gather previous approaches which shared their artifacts, but there is a chance that we miss more recent works that have not been cited by any publication in our corpus, as we did not perform forward snowballing. While this can slightly alter our results, we are hopeful that recent works are more likely to share artifacts than publications from the past 10 years.
**External Validity:** External validity addresses the domain to which our findings can be generalized to. While our study focuses on the sustainability of shared artifacts for LLMs on code, our results confirm observations of related studies, such as high energy consumption in training LLMs for NLP tasks [5] and a lack of shared artifacts of DL studies for SE tasks [15]. Therefore, we hope our findings and recommendations are beneficial beyond LLM models for code.
**Construct Validity:** Construct validity is concerned with the quality of measures chosen to study the construct of interest. In our case, we were first interested in whether and which artifacts are shared (i.e., none, source code, trained models). For this purpose, we considered the absolute amount of publications with respect to the amount of shared artifacts, which coincides with the construct we want to measure.
Afterwards, we estimated the energy requirements in kWh for training language models, for which we used the Green Algorithms calculator [115]. For the validity of estimates, we assume the correctness of the calculator and the information specified in the respective publications (i.e., type of CPU/GPU and training time). If the information provided was not sufficient, we had to make choices for available memory and hardware parameters. All such choices are provided in Table IV, to make our kWh estimates reproducible.
**Conclusion Validity:** Conclusion validity describes whether the operations performed and obtained results in this study (e.g., literature search, data collection) can be reproduced [123]. Section III outlines our literature search procedure, starting from four existing surveys, followed by iterative snowballing steps. We list our inclusion and exclusion criteria, to allow for a reproducibility. Moreover, we provide a link to the collected publications and extracted information in the _Data Availability_ section, such that our search results can be verified. To allow for the reproducibility of observations and results, we provide all the relevant extracted information in Tables II and III. When information is insufficient, we provided all assumptions over hardware specifications in Table IV.
## VIII Lessons Learned
**1.** In general, shared information on the amount of energy consumed by the training and use of SCMs is limited (RQ1; some notable exceptions, such as BLOOM [13, 117], RQ2-B). Specifically, CPU and GPU details are missing or incomplete in the majority of papers, while they are crucial for energy usage estimation. Even with hardware details and training time available, it is hard to make accurate estimates of energy consumption and CO\({}_{2}\) footprint since other missing factors, such as the server location, impact the estimation (RQ2-A).
**2.** From the data that is shared, we see that SCMs are extremely energy-intensive to train due to computational requirements, in particular, when compared to the energy required for downloading shared artifacts (RQ2-C). It is therefore important that researchers share their artifacts (RQ1), including pre-trained and fine-tuned models, as well as explore ways to reduce their energy consumption, such as training in clouds with low CO\({}_{2}\) emissions (e.g., hydro-powered).
**3.** In general, we find that the larger the model, the higher the energy consumed for its training (RQ2-A), increasing the importance to share model artifacts to ensure sustainability.
**4.** Not only the energy consumption of training but also that of long-term storage of pre-trained models and datasets, as well as of their downloads should be considered (RQ2-C).
**5.** On the positive side, SCMs provide ample opportunities for collaborative and cooperative efforts. Sharing artifacts in the end can lead to higher sustainability than when all users would develop their models independently. More work and data is needed to be able to analyze this trade-off, which is why there is a need for a series of guidelines or a checklist to help people systematically report on the environmental/sustainability impact of their techniques.
## IX Recommendations
**1.** Define the scope of the research and the intended application of task-agnostic or task-specific SCMs to ensure a good understanding of the intended tasks and reuse potential.
**2.** Establish a set of clear and transparent metrics for energy consumption and sustainability to ensure systematic, accurate, and reliable reporting.
**3.** Specify details of the hardware and software configuration used for the training and inference of SCMs, including the exact types of the processors and accelerators, memory and the number of cores for CPU (e.g., Intel i7-8700 CPU, 6 cores, 32GB memory), the model and memory for GPUs (e.g., 1
NVIDIA Titan X GPU, 12GB), as well as storage media and infrastructure (RQ2-A).
**4.** Provide energy consumption measurements [5, 125] or estimations for both training and inference (RQ2-A). Use existing proven calculators [115, 116] and provide complete details in the paper, not just the final result, so that the computation can be repeated if an improved calculator becomes available.
**5.** Document the CO\({}_{2}\) footprint associated with energy consumption, considering energy sources and carbon offsetting applied. For cloud infrastructures, this means including the provider and region, because these details vary by location.
**6.** Assess other environmental impacts of SCMs, including the amount of data and storage required and the impact on the (network) infrastructure (RQ2-C).
**7.** Provide (and promote) open access to data and models to foster collaboration and reduce duplication of efforts, thereby reducing the energy and resource requirements for SCM development and fine-tuning.
Observe that several of these recommendations overlap with the recommendations for reproducible machine learning [126], which also cover additional aspects.
## X Conclusion
In this exploratory study, we have performed a snowballing study (i.e., four iterations of backwards snowballing) to find publications on language models for SE tasks, from which we gathered 494 publications of interest. After applying our inclusion and exclusion criteria, we are left with 293 studies, which we investigated further with regard to their reusability and sustainability (e.g., are trained artifacts shared?). We showed that there are deficiencies in the existing studies that train language models on source code regarding the transparency of sustainability aspects. Among the 293 publications, only 27% provide trained artifacts to enable the reuse of their models without incurring the same amount of training effort; 40% of the reviewed publications provide neither source code nor trained artifacts.
We collect training information from the surveyed publications, including the hardware configurations and training time. This allows us to estimate how much time and resources can be saved by reusing the artifacts or how many resources are needed to replicate the models. We have estimated the energy consumption for 30 publications that provided sufficient information (i.e., number and type of processors, training time), while only two publications provided details on energy consumption and CO\({}_{2}\) of the model training [13, 94].
We stress the importance of describing hardware configurations and processing times, so that even if energy consumption is not reported, one can estimate the required resources and judge whether one wants to spend effort to replicate ML models. This agrees with Bender et al. [127], who called for the research community to prioritize the environmental and financial cost of deep learning systems, by reporting or evaluating them with regard to resource usage. Optimally, if a publication creates an ML tool or model with the clear intention of its reuse, it can be beneficial to make trained artifacts available. As shown, making small tools available for download and reuse can prevent unnecessary energy consumption as opposed to training tools from scratch.
**Future Work:** One possible direction for future investigation is an analysis of the literature that cites the energy calculators [115, 116] mentioned earlier to assess if their use indeed leads to better communication of sustainability aspects. This could add further evidence to our recommendations.
## Data Availability
To support open science and allow for replication and verification of our work, an overview of the collected publications and the extracted information is made available via Zenodo.7
Footnote 7: Replication package on Zenodo: [https://doi.org/10.5281/zenodo.8058668](https://doi.org/10.5281/zenodo.8058668).
## Acknowledgements
The research presented in this paper was financially supported by the Research Council of Norway through the secureIT project (grant #288787). Max Hort is supported through the ERCIM 'Alain Bensoussan' Fellowship Programme.
|
2306.15216 | On the Periods of Twisted Moments of the Kloosterman Connection | This paper aims to study the Betti homology and de Rham cohomology of twisted
symmetric powers of the Kloosterman connection of rank two on the torus. We
compute the period pairing and, with respect to certain bases, interpret these
associated period numbers in terms of the Bessel moments. Via the rational
structures on Betti homology and de Rham cohomology, we prove the
$\mathbb{Q}$-linear and quadratic relations among these Bessel moments. | Ping-Hsun Chuang, Jeng-Daw Yu | 2023-06-27T05:37:18Z | http://arxiv.org/abs/2306.15216v2 | # On the periods of twisted moments of the Kloosterman connection
###### Abstract.
This paper aims to study the Betti homology and de Rham cohomology of twisted symmetric powers of the Kloosterman connection of rank two on the torus. We compute the period pairing and, with respect to certain bases, interpret these associated period numbers in terms of the Bessel moments. Via the rational structures on Betti homology and de Rham cohomology, we prove the \(\mathbb{Q}\)-linear and quadratic relations among these Bessel moments.
###### Contents
* 1 Introduction
* 1.1 Historical results and our results
* 1.2 Approach
* 2 The Kloosterman connection and its twisted symmetric powers
* 2.1 Self-duality and pairing on \(\operatorname{Kl}_{2}\)
* 2.2 Rational structures and pairings on \((\mathcal{O}_{\mathbb{G}_{m}},\operatorname{d}+\frac{1}{2}\frac{\operatorname {d}x}{z})\).
* 2.3 Algebraic and topological pairings on \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\)
* 3 The de Rham cohomology
* 3.1 Dimension of \(H^{1}_{\operatorname{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\)
* 3.2 Compactly supported de Rham cohomology
* 3.3 Poincare pairing
* 4 The local system and the associated homology
* 4.1 Rapid decay cycles
* 4.2 Moderate decay cycles
* 4.3 Betti intersection pairing
* 5 Twisted moments as periods
* 5.1 Bessel moments and regularized Bessel moments
* 5.2 Period pairing and compactly supported period pairing
* 5.3 \(\mathbb{Q}\)-linear and quadratic relations on Bessel moments
* A The Bessel operator and determinants of Bessel moments
* A.1 Symmetric power of the modified Bessel differential operator
* A.2 Two-scale Bessel moments
* A.3 The Vanhove operators
* A.4 Singularities of \(\omega_{n+1}(x)\)
## 1. Introduction
Let \(\operatorname{Kl}_{2}\) be the Kloosterman connection (of rank two) on the torus \(\mathbb{G}_{m,z}=\operatorname{Spec}\left(\mathbb{Q}[z,z^{-1}]\right)\) corresponding to the differential operator \((z\partial_{z})^{2}-z\). (For details, see section 2.) In [10], in order to study the symmetric powers \(\operatorname{Sym}^{k}\operatorname{Kl}_{2}\), Fresan, Sabbah, and Yu consider the following settings. Let \([2]:\mathbb{G}_{m,t}\to\mathbb{G}_{m,z}\) be the double cover induced by \(z\mapsto t^{2}\). One obtains the pullback connection
\[\widetilde{\operatorname{Kl}}_{2}=[2]^{+}\operatorname{Kl}_{2}.\]
The structure of \(\widetilde{\operatorname{Kl}}_{2}\) is much simpler since it is the restriction to \(\mathbb{G}_{m}\) of the Fourier transform of a regular holonomic module on the affine line. In addition, the symmetric power \(\operatorname{Sym}^{k}\operatorname{Kl}_{2}\) appears in the pushforward \(\left[2\right]_{+}\operatorname{Sym}^{k}\widetilde{\operatorname{Kl}}_{2}\) naturally by the decomposition
\[\left[2\right]_{+}\operatorname{Sym}^{k}\widetilde{\operatorname{Kl}}_{2} \cong\operatorname{Sym}^{k}\operatorname{Kl}_{2}\oplus\sqrt{z}\operatorname {Sym}^{k}\operatorname{Kl}_{2},\]
where \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}=\left(\mathcal{O}_{ \mathbb{G}_{m}},\operatorname{d}+\frac{\operatorname{d}z}{2z}\right)\otimes \operatorname{Sym}^{k}\operatorname{Kl}_{2}\). In [10], they compute the de Rham cohomology and Betti homology for \(\operatorname{Sym}^{k}\operatorname{Kl}_{2}\). In this paper, we study the analogues for \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\).
### Historical results and our results
Let \(I_{0}(t)\) and \(K_{0}(t)\) be modified Bessel functions. Define the Bessel moments
\[\operatorname{IKM}_{k}(a,b)=\int_{0}^{\infty}I_{0}(t)^{a}K_{0}(t)^{k-a}t^{b} \mathrm{d}t \tag{1}\]
provided that \(0\leq a\leq k\) are non-negative integers, \(b\in\mathbb{Z}\), and the convergence of this integral. The particular Bessel moments of the form \(\operatorname{IKM}_{a+b}(a,2c-1)\) appear in two-dimensional quantum field theory as Feynman integrals [1, 1]. In mathematical viewpoints, these moments are realized as period integrals of \(\operatorname{Sym}^{k}\operatorname{Kl}_{2}\). For the details, we refer to [10]. In that paper, they developed the Hodge theory on symmetric powers of the generalized Kloosterman connection \(\operatorname{Kl}_{n+1}\) of rank \((n+1)\).
#### Sum rule identities
In [1, 1], the authors provide the following conjecture on the \(\mathbb{Q}\left(\pi\right)\) linear relation of Bessel moments which is called "sum rule" in their paper.
**Conjecture 1**.: _For each pair of integers \((n,k)\) with \(n\geq 2k\geq 2\), the following combination of Bessel moments vanish_
\[\sum_{m=0}^{\lfloor n/2\rfloor}(-1)^{m}\binom{n}{2m}\pi^{n-2m}\mathrm{IKM}_{2 n}(n-2m,n-2k)=0.\]
Later in [1, (1.5)], Zhou uses the Hilbert transformation to prove this conjecture. Moreover, he also proves a "sum rule":
**Formula 2**.: _For each pair of integers \((n,k)\) with \(n-1\geq 2k\geq 2\), the following conbination of Bessel moments vanish_
\[\sum_{m=1}^{\lfloor(n+1)/2\rfloor}(-1)^{m}\binom{n}{2m-1}\pi^{n-2m+1}\mathrm{ IKM}_{2n}(n-2m+1,n-2k-1)=0.\]
When the involved exponents of \(t\) in (1) are odd, these two identities are both reproved by Fresan, Sabbah, and Yu [10] using the algebraic geometry method. In this paper, we provide proof of these two identities involving even powers of \(t\) using a similar approach in section 5. For example, we have the following result:
**Formula 3** (Corollary 26).: _For \(k=4r+4\) and \(k^{\prime}=\lfloor\frac{k-1}{2}\rfloor\),_
\[\sum_{j=0}^{r}\binom{k/2}{2j}(-1)^{j}\pi^{2j}\mathrm{IKM}_{k}(2j,2i)=\begin{cases} (-1)^{r}\pi^{2r+2}\mathrm{IKM}_{k}(2r+2,2i)&\text{if }0\leq i\leq r,\\ (-1)^{r}\pi^{2r+2}\mathrm{IKM}_{k}^{\mathrm{reg}}(2r+2,2i)&\text{if }r+1\leq i \leq k^{\prime}.\end{cases}\]
The notation \(\mathrm{IKM}^{\mathrm{reg}}(2r+2,2i)\) above is the regularized Bessel moments (see Lemma 19). Roughly speaking, the regularized Bessel moments are the divergence Bessel moments but modifying their singularities. Therefore, our sum rule generalizes the sum rule in [10, 1].
\(\mathbb{Q}\)_-dimension of Bessel moments_. In [10], Zhou consider the \(\mathbb{Q}\)-vector subspace spanned by the Bessel moments in \(\mathbb{C}\). This vector subspace is finite-dimensional due to the sum rule. Similarly, we have the following upper bound of the dimension.
**Theorem 4**.: _For any \(k\) and any \(0\leq a\leq k^{\prime}=\lfloor\frac{k-1}{2}\rfloor\), the dimension of the \(\mathbb{Q}\)-vector space generated by the Bessel moments has upper bound:_
\[\dim\mathrm{span}_{\mathbb{Q}}\left\{\mathrm{IKM}_{k}(a,2j)\mid j\in\{0\} \cup\mathbb{N}\right\}\leq k^{\prime}+1.\]
_For \(k\) even, the dimension of the \(\mathbb{Q}\)-vector space generated by the regularized Bessel moments has an upper bound:_
\[\dim\mathrm{span}_{\mathbb{Q}}\left\{\mathrm{IKM}_{k}^{\mathrm{reg}}\left(k/2,2j\right)\mid j\in\{0\}\cup\mathbb{N}\right\}\leq k^{\prime}+1.\]
Note that our statement involves the regularized Bessel moments. This conclusion is a more general result than the one given by Zhou.
#### Quadratic relations of Bessel moments
In [11], the authors prove a general result of quadratic relations between periods given by a self-dual connection. We apply this result and obtain the quadratic relation between the Bessel moments. Using the basis of cohomologies, let \(B\) be the topological pairing on Betti homology, \(D\) be the Poincare pairing on de Rham cohomology, and \(P\) be the period pairing between these two homology and cohomology. Then, we write down the quadratic relations on these periods:
\[PD^{-1}P_{\mathrm{c}}=(-1)^{k}(2\pi\sqrt{-1})^{k+1}B. \tag{2}\]
Note that \(P\) and \(P_{\mathrm{c}}\) consist of \(\mathbb{Q}\)-linear combinations of Bessel moments and regularized Bessel moments. Moreover, due to the rational structure of Betti homology and de Rham cohomology, the corresponding pairing matrices \(D,B\) consist of rational numbers.
#### Determinants of Bessel moment matrix
Another interesting result is to compute the determinants of certain matrices consisting of Bessel moments. In [1, Conjecture 4,7], Broadhurst conjectures two formulae of the determinants of the following two \(k\times k\) matrices \(\mathbf{M}_{k}\) and \(\mathbf{N}_{k}\) involving the Bessel moments:
\[\mathbf{M}_{k} =(\mathrm{IKM}_{2k+1}(a,2b-1))_{1\leq a,b\leq k},\] \[\mathbf{N}_{k} =(\mathrm{IKM}_{2k+2}(a,2b-1)_{1\leq a,b\leq k}.\]
Later, in [10], Zhou uses an analytic method to prove these two determinant formulae. Using the similar method as Zhou, in Corollary 37, we give explicit determinant formulae for the following two \(r\times r\) matrices:
\[M_{r} =(\mathrm{IKM}_{2r-1}(i-1,2j-2))_{1\leq i,j\leq r},\] \[N_{r} =(\mathrm{IKM}_{2r}(i-1,2j-2))_{1\leq i,j\leq r}.\]
**Formula 5**.: _For \(r\geq 1\), we have_
\[\det M_{r} =\sqrt{\pi}^{r(r+1)}\sqrt{2}^{r(r-3)}\prod_{a=1}^{r-1}\frac{a^{r-a} }{\sqrt{2a+1}^{2a+1}},\] \[\det N_{r} =\frac{\sqrt{\pi}^{(r+1)^{2}}}{\Gamma\left(\frac{r+1}{2}\right)} \frac{1}{\sqrt{2}^{r(r+3)}}\prod_{a=1}^{r-1}\frac{(2a+1)^{r-a}}{(a+1)^{a+1}}.\]
### Approach
In [1], Bloch and Esnault study the irregular connections on curves and provide the associated homology theory. Due to their results, we study the de Rham cohomology and Betti homology of \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\) on \(\mathbb{G}_{m}\) and provide explicit bases in order to find the periods.
In section 2, we introduce the twisted symmetric power of Kloosterman connection \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\), which is the main object in this paper. We discuss the rational structures on the de Rham cohomology and Betti homology of the connection. Moreover, since the connection is self-dual, we introduce its algebraic and topological self-pairings. These pairings will play an important role in our computations.
In section 3, we study the de Rham cohomology and the de Rham cohomology with compact support on \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\) and write down some elements in these two cohomologies. Next, we introduce the Poincare pairing between them and compute the pairing with respect to the elements we have written down. Using the dimension result of de Rham cohomology, along with the non-vanishing determinant of the Poincare pairing, in Corollary 13, we conclude that the elements we write down in de Rham cohomology are bases.
We study parallelly the Betti homology on \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\) in section 4. Since our ambient space is a non-compact space \(\mathbb{C}^{\times}\), we need to modify our Betti homology theory by allowing the chain to go to \(0\) or \(\infty\). By controlling the growth behaviors of the horizontal sections, we study the moderate decay Betti homology and rapid decay Betti homology on \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\). Similarly, We first write down some elements in the moderate decay homology and rapid decay homology and compute their topological pairing explicitly. Moreover, by the duality of de Rham cohomology and Betti homology, the dimension of Betti homology is the same as the de Rham cohomology. Together with the topological pairing, we conclude that they are bases in Corollary 18.
Finally, in section 5, we compute the period pairing between the de Rham cohomologies and the Betti homologies and interpret them in terms of the Bessel moments. Note that our variety \(\mathbb{G}_{m}=\operatorname{Spec}\mathbb{Q}[z,z^{-1}]\) and the connection \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\) are defined over \(\mathbb{Q}\) and therefore, the de Rham cohomology and Betti homology are naturally endowed with a \(\mathbb{Q}\)-vector space structure. From the dimension constraint of homologies, after computing the periods pairing, we obtain the \(\mathbb{Q}\)-linear relation of Bessel moments (Formula 3) and an upper bound of \(\mathbb{Q}\)-dimension of space spanned by the Bessel moments (Theorem 4). In addition, the self duality of \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\) gives quadratic relations between these Bessel moments (2).
In appendix A.1, we provide an accurate analysis of the symmetric powers of the modified Bessel differential operator. The first usage belongs to section 3, which enables us to determine the dimension of the de Rham cohomology \(H^{1}_{\operatorname{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k }\operatorname{Kl}_{2}\right)\). The second usage belongs to appendix A.2, which allows us to analyze the leading term of the Vanhove operator. This helps us to obtain the determinant formula (Formula 5).
## 2. The Kloosterman connection and its twisted symmetric powers
### Self-duality and pairing on \(\operatorname{Kl}_{2}\)
The connection \(\operatorname{Kl}_{2}=(\mathcal{O}_{\mathbb{G}_{m}}v_{0}\oplus\mathcal{O}_{ \mathbb{G}_{m}}v_{1},\nabla)\) consists of a free sheaf of rank \(2\) on \(\mathbb{G}_{m}=\operatorname{Spec}\mathbb{Q}[z,z^{-1}]\) with basis of sections \(v_{0}\) and \(v_{1}\). The connection \(\nabla\)
on \(\mathrm{Kl}_{2}\) is given by
\[z\nabla\left(v_{0},v_{1}\right)=\left(v_{0},v_{1}\right)\begin{pmatrix}0&z\\ 1&0\end{pmatrix}\mathrm{d}z.\]
We refer to [10, SS3.1][10, SS2] for more details. Note that \(\mathrm{Kl}_{2}\) is self-dual in the sense that there exists an algebraic horizontal pairing \(\langle\,\ \rangle_{\mathrm{alg}}\) on \(\mathrm{Kl}_{2}\):
\[\left(\left\langle v_{i},v_{j}\right\rangle_{\mathrm{alg}}\right)_{0\leq i,j \leq 1}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\]
such that \(\lambda:\mathrm{Kl}_{2}\to\mathrm{Kl}_{2}^{\vee}\) by \(\left(v_{0},v_{1}\right)\mapsto\left(v_{1}^{\vee},-v_{0}^{\vee}\right)\) makes the following diagram commutes.
Recall that the modified Bessel function \(I_{0}(t)\) and \(K_{0}(t)\) satisfy the differential equation \((t^{2}\partial_{t}^{2}+t\partial_{t}-t^{2})y=0\) and the Wronskian relation
\[I_{0}(t)K_{0}^{\prime}(t)-I_{0}^{\prime}(t)K_{0}(t)=\frac{-1}{t}. \tag{3}\]
After rescaling the modified Bessel function by
\[A_{0}(z)=-2I_{0}(2\sqrt{z}),\qquad A_{1}(z)=z\partial_{z}A_{0}(z);\] \[B_{0}(z)=2K_{0}(2\sqrt{z}),\qquad B_{1}(z)=z\partial_{z}B_{0}(z),\]
the horizontal sections of \(\nabla\) on \(\mathrm{Kl}_{2}\) have basis
\[e_{0}=\frac{1}{2}(A_{0}v_{1}-A_{1}v_{0}),\qquad e_{1}=\frac{1}{2\pi\sqrt{-1}}( B_{0}v_{1}-B_{1}v_{0}). \tag{4}\]
To see this, note that \(A_{0}\) and \(B_{0}\) are annihilated by the operator \(\left(z\partial_{z}\right)^{2}-z\) and real-valued on the principal branch \(\mathbb{R}_{>0}\). This gives
\[\partial_{z}A_{1}(z)=A_{0}(z),\qquad\partial_{z}B_{1}(z)=B_{0}(z).\]
Together with the Wronskian relation \(A_{0}B_{1}-A_{1}B_{0}=2\), the two sections \(e_{0}\) and \(e_{1}\) are indeed the horizontal sections. Let \(\mathrm{Kl}_{2}^{\nabla}\) be the local system of \(\mathbb{Q}\)-vector space generated by \(e_{0},e_{1}\) and there exists a topological pairing \(\langle\,\ \rangle_{\mathrm{top}}=2\pi\sqrt{-1}\langle\,\ \rangle_{\mathrm{ alg}}\):
\[\left(\left\langle e_{i},e_{j}\right\rangle_{\mathrm{top}}\right)_{0\leq i,j \leq 1}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}.\]
Rational structures and pairings on \((\mathcal{O}_{\mathbb{G}_{m}},\mathrm{d}+\frac{1}{2}\frac{\mathrm{d}z}{z})\)
Consider the cyclic Galois cover \([2]:\mathbb{G}_{m,t}\to\mathbb{G}_{m,z}\) induced by \(z\mapsto t^{2}\), where \(\mathbb{G}_{m}=\mathrm{Spec}\,\mathbb{Q}[x,x^{-1}]\). Let \(T=(\mathcal{O}_{\mathbb{G}_{m,t}},\mathrm{d})\) be the trivial connection on \(\mathbb{G}_{m,t}\). View \(\mathcal{O}_{\mathbb{G}_{m,z}}\) as the \(\mathbb{Q}[t,t^{-1}]\)-module \(\mathbb{Q}[t^{2},t^{-2}]\). The pushforward connection \(\left[2\right]_{+}T\) decompose into
\[\left(\mathcal{O}_{\mathbb{G}_{m,z}},\mathrm{d}\right)\oplus\left(t\cdot \mathcal{O}_{\mathbb{G}_{m,z}},\mathrm{d}\right).\]
The second component \(\big{(}t\cdot\mathcal{O}_{\mathbb{G}_{m,z}},\mathrm{d}\big{)}\) is isomorphic to \((\mathcal{O}_{\mathbb{G}_{m,z}},\mathrm{d}+\frac{1}{2}\frac{\mathrm{d}z}{z})\) via the following diagram
(5)
The dual connection of \((\mathcal{O}_{\mathbb{G}_{m,z}},\mathrm{d}+\frac{1}{2}\frac{\mathrm{d}z}{z})\) is given by \((\mathcal{O}_{\mathbb{G}_{m,z}},\mathrm{d}-\frac{1}{2}\frac{\mathrm{d}z}{z})\), and the two are isomorphic via multiplication by \(z\), that is, the following diagram commutes.
This induces an algebraic horizontal pairing \(\langle\,\ \rangle_{\mathrm{alg}}\) on \((\mathcal{O}_{\mathbb{G}_{m,z}},\mathrm{d}+\frac{1}{2}\frac{\mathrm{d}z}{z})\) given by
\[\langle 1,1\rangle_{\mathrm{alg}}=z.\]
On the other hand, the rational structure of the local system of horizontal sections of \((\mathcal{O}_{\mathbb{G}_{m,t}},\mathrm{d})\) is generated by \(1\). Under the isomorphism (5), the rational structure of local system of horizontal sections of \((\mathcal{O}_{\mathbb{G}_{m,z}},\mathrm{d}+\frac{1}{2}\frac{\mathrm{d}z}{z})\) is generated by \(\frac{1}{t}=\frac{1}{\sqrt{z}}\). Its dual connection \((\mathcal{O}_{\mathbb{G}_{m,z}},\mathrm{d}-\frac{1}{2}\frac{\mathrm{d}z}{z})\) has local system of horizontal sections generated by \(\sqrt{z}\). This induces a rational topological pairing \(\langle\,\ \rangle_{\mathrm{top}}\) on \((\mathcal{O}_{\mathbb{G}_{m,z}},\mathrm{d}+\frac{1}{2}\frac{\mathrm{d}z}{z})\)
\[\langle\frac{1}{\sqrt{z}},\frac{1}{\sqrt{z}}\rangle_{\mathrm{top}}=1.\]
### Algebraic and topological pairings on \(\sqrt{z}\,\mathrm{Sym}^{k}\,\mathrm{Kl}_{2}\)
The \(k\)-th symmetric product of \(\mathrm{Kl}_{2}\), \(\mathrm{Sym}^{k}\,\mathrm{Kl}_{2}\), is a rank \(k+1\) free sheaf on \(\mathcal{O}_{\mathbb{G}_{m}}\) with basis of sections
\[v_{0}^{a}v_{1}^{k-a}=\frac{1}{|\mathfrak{S}_{k}|}\sum_{\sigma\in\mathfrak{S}_{ k}}\sigma\left(v_{0}^{\otimes a}\otimes v_{1}^{\otimes k-a}\right)\qquad a=0,1, \ldots,k,\]
where \(\mathfrak{S}_{k}\) is the symmetric group on \(k\) elements. It is endowed with the induced connection from \((\mathrm{Kl}_{2},\nabla)\). After twisting with the connection \(\big{(}\mathcal{O}_{\mathbb{G}_{m}},\mathrm{d}+\frac{1}{2}\frac{\mathrm{d}z}{ z}\big{)}\), we define
\[\sqrt{z}\,\mathrm{Sym}^{k}\,\mathrm{Kl}_{2}=\left(\mathcal{O}_{\mathbb{G}_{m}}, \mathrm{d}+\frac{1}{2}\frac{\mathrm{d}z}{z}\right)\otimes\mathrm{Sym}^{k}\, \mathrm{Kl}_{2}.\]
The induced connection \(\nabla\) on \(\sqrt{z}\,\mathrm{Sym}^{k}\,\mathrm{Kl}_{2}\) is given by
\[\nabla v_{0}^{a}v_{1}^{k-a}=(k-a)v_{0}^{a+1}v_{1}^{k-a-1}\mathrm{d}z+\frac{a}{ z}v_{0}^{a-1}v_{1}^{k-a+1}\mathrm{d}z+\frac{1}{2z}v_{0}^{a}v_{1}^{k-a}\mathrm{d}z. \tag{6}\]
Note that \(\sqrt{z}\,\mathrm{Sym}^{k}\,\mathrm{Kl}_{2}\) is the same sheaf as \(\mathrm{Sym}^{k}\,\mathrm{Kl}_{2}\) but endowed with a different connection.
Via the self-duality on \(\mathrm{Kl}_{2}\) and on \(\big{(}\mathcal{O}_{\mathbb{G}_{m}},\mathrm{d}+\frac{1}{2}\frac{\mathrm{d}z}{ z}\big{)}\), we have the perfect algebraic pairing \(\langle\,\ \rangle_{\mathrm{alg}}\) on \(\sqrt{z}\,\mathrm{Sym}^{k}\,\mathrm{Kl}_{2}\):
\[\sqrt{z}\,\mathrm{Sym}^{k}\,\mathrm{Kl}_{2}\times\sqrt{z}\,\mathrm{Sym}^{k}\, \mathrm{Kl}_{2}\xrightarrow{\langle\,\ \rangle_{\mathrm{alg}}}\ (\mathcal{O}_{\mathbb{G}_{m}},\mathrm{d})\]
given by
\[\big{\langle}v_{0}^{k-a}v_{1}^{a},v_{0}^{k-b}v_{1}^{b}\big{\rangle}_{\mathrm{ alg}}=z\delta_{k,a+b}(-1)^{a}\frac{\mathrm{d}!b!}{k!}=(2\pi\sqrt{-1})^{k}\, \big{\langle}e_{0}^{k-a}e_{1}^{a},e_{0}^{k-b}e_{1}^{b}\big{\rangle}_{\mathrm{ alg}}\,.\]
The local system \((\sqrt{z}\,\mathrm{Sym}^{k}\,\mathrm{Kl}_{2})^{\nabla}\) is a \(\mathbb{Q}\)-vector space generated by the horizontal sections
\[\frac{1}{\sqrt{z}}e_{0}^{a}e_{1}^{k-a}=\frac{1}{\sqrt{z}}\sum_{\sigma\in \mathfrak{S}_{k}}\sigma\left(e_{0}^{\otimes a}\otimes e_{1}^{\otimes k-a} \right),\quad a=0,1,\cdots,k, \tag{7}\]
which are the product of the horizontal sections to the connection \(\left(\mathcal{O}_{\mathbb{G}_{m}},\mathrm{d}+\frac{1}{2}\frac{\mathrm{d}z}{z}\right)\) and \(\operatorname{Sym}^{k}\operatorname{Kl}_{2}\). The topological pairing \(\langle\,\ \rangle\) on \(\operatorname{Kl}_{2}^{\nabla}\) induces a topological pairing on \((\operatorname{Sym}^{k}\operatorname{Kl}_{2})^{\nabla}\) and thus on \((\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2})^{\nabla}\):
This pairing reads
\[\left\langle\frac{1}{\sqrt{z}}e_{0}^{a}e_{1}^{k-a},\frac{1}{\sqrt{z}}e_{0}^{b} e_{1}^{k-b}\right\rangle_{\mathrm{top}}=\delta_{k,a+b}(-1)^{k-a}\frac{\mathrm{d} \mathrm{i}b!}{k!}.\]
## 3. The de Rham cohomology
Dimension of \(H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\)
**Proposition 6**.: _For the connections \(\operatorname{Sym}^{k}\operatorname{Kl}_{2}\) and \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\), we have_
\[\dim H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m},\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)=\dim H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m}, \sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\right)=\left\lfloor \frac{k+1}{2}\right\rfloor=k^{\prime}+1.\]
Proof.: In [10, SS2.9.13], we have the following result for the Euler characteristic of a connection.
**Lemma 7**.: _On \(\mathbb{G}_{m}\) with parameter \(z\), write \(\theta_{z}=z\frac{\mathrm{d}}{\mathrm{d}z}\), and consider a non-zero operator \(L=\sum z^{i}P_{i}(\theta_{z})\). Define integers \(a,b\) by_
\[a:=\max\left\{i\mid P_{i}\neq 0\right\};\ \ \ \ b:=\min\left\{i\mid P_{i}\neq 0 \right\}.\]
_Then \(\chi\left(\mathbb{G}_{m},\mathcal{D}/\mathcal{D}L\right)=-\left(a-b\right)\)._
Now, the differential operator on \(\mathbb{G}_{m}\) associated with \(\operatorname{Kl}_{2}\) is given by \(L=\theta_{z}^{2}-z\).1 Then the differential operator for \(\operatorname{Sym}^{k}\operatorname{Kl}_{2}\) is given by \(k\)-th symmetric power \(L_{k+1}\) of \(L\). Proposition 35 gives \(a=\left\lfloor\frac{k+1}{2}\right\rfloor\) and \(b=0\). Hence, we conclude that
Footnote 1: We may view this connection as the changing variable of modified Bessel operator \(L=\theta^{2}-t^{2}\) by \(z=t^{2}/4\), that is, by \(\partial_{t}=\sqrt{z}\partial_{z}\).
\[\chi\left(\mathbb{G}_{m},\operatorname{Sym}^{k}\operatorname{Kl}_{2}\right)= -\left\lfloor\frac{k+1}{2}\right\rfloor.\]
Moreover, we know that the solution to the differential equation \(L_{k+1}Y=0\) is given by \(A_{0}^{i}B_{0}^{k-i}\) for \(i=0,1,2,\cdots k\). Note that \(A_{0}\) is holomorphic at \(0\) and has exponential growth near infinity, and \(B_{0}\) has a log pole at \(0\). These imply all of the solutions \(A_{0}^{i}B_{0}^{k-i}\) are not algebraic solutions, and thus \(H^{0}_{\mathrm{dR}}\left(\mathbb{G}_{m},\operatorname{Sym}^{k}\operatorname{ Kl}_{2}\right)=0\). Hence, combining the fact that \(H^{2}_{\mathrm{dR}}\left(\mathbb{G}_{m},\operatorname{Sym}^{k}\operatorname{ Kl}_{2}\right)=0\) by Artin vanishing theorem, we conclude that \(H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m},\operatorname{Sym}^{k}\operatorname{ Kl}_{2}\right)\) has dimension \(\left\lfloor\frac{k+1}{2}\right\rfloor\).
For \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\), note that the corresponding differential operator becomes \(\frac{1}{\sqrt{z}}L_{k+1}\sqrt{z}\). Now, using the fact that \(\frac{1}{\sqrt{z}}\theta_{z}\sqrt{z}=\theta_{z}+\frac{1}{2}\), we have \(\frac{1}{\sqrt{z}}L\sqrt{z}=\sum z^{i}P_{i}\left(\theta_{z}+\frac{1}{2}\right)\) whenever \(L=\sum z^{i}P_{i}(\theta_{z})\). Therefore, we again obtain that \(a=\left\lfloor\frac{k+1}{2}\right\rfloor\), \(b=0\) and thus
\[\chi\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2} \right)=-\left\lfloor\frac{k+1}{2}\right\rfloor.\]
Moreover, the solutions to the differential operator \(\frac{1}{\sqrt{z}}L_{k+1}\sqrt{z}\) are \(\frac{1}{\sqrt{z}}A_{0}^{i}B_{0}^{k-i}\) for \(i=0,1,\cdots,k\) and similarly, \(H^{0}_{\mathrm{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)=0\). Hence, we conclude that \(H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\) has dimension \(\left\lfloor\frac{k+1}{2}\right\rfloor\) by Artin vanishing theorem.
**Remark 8**.: _In [12, SS2.9.13], Katz provides the proof only on the base field \(\mathbb{C}\). However, we need to determine the dimension of de Rham cohomology as a \(\mathbb{Q}\)-vector space. Thanks to [13, SS3.1.11], we know that the change of de Rham cohomology on a characteristic zero field extension is just changing the base field:_
\[H^{i}_{\mathrm{dR}}\left(\mathbb{G}_{m}\right)\otimes_{\mathbb{Q}}\mathbb{C}=H ^{i}_{\mathrm{dR}}\left(\mathbb{G}_{m,\mathbb{C}}\right)\]
_and the dimension is preserved._
### Compactly supported de Rham cohomology
Consider the \(k^{\prime}+1\) elements \(\left\{v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}\right\}_{i=0}^{k^{\prime}}\) in \(H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\). We will prove these elements form a \(\mathbb{Q}\)-basis through computation of the Poincare pairing. (See Corollary 13.) The Poincare dual of the de Rham cohomology is the de Rham cohomology with compact support. Note that an element in \(H^{1}_{\mathrm{dR,c}}\left(\mathbb{G}_{m}\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\) is represented by a triple \(\left(\xi,\eta,\omega\right)\), where \(\omega\in H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym} ^{k}\operatorname{Kl}_{2}\right)\) and \(\xi,\eta\) are formal solutions to \(\nabla\xi=\nabla\eta=\omega\) at \(0\) and \(\infty\) respectively. The solutions are provided by the following lemma.
**Lemma 9**.: _Suppose that \(k\equiv 0,1,3\pmod{4}\). For \(0\leq i\leq k^{\prime}\), there exists \(\left(\xi_{i},\eta_{i}\right)\in\left(\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)_{\widetilde{0}}\oplus\left(\sqrt{z} \operatorname{Sym}^{k}\operatorname{Kl}_{2}\right)_{\widetilde{\infty}}\) such that \(\nabla\xi_{i}=\nabla\eta_{i}=v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}\)._
_On the other hand, if \(k\equiv 2\pmod{4}\), say \(k=4r+2\). For \(0\leq i\leq k^{\prime}\) with \(i\neq r\), there exists \(\left(\xi_{i},\eta_{i}\right)\in\left(\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)_{\widetilde{0}}\oplus\left(\sqrt{z} \operatorname{Sym}^{k}\operatorname{Kl}_{2}\right)_{\widetilde{\infty}}\) such that_
\[\nabla\xi_{i}=\nabla\eta_{i}=v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}-\gamma_{k,i- r}v_{0}^{k}z^{r}\frac{\mathrm{d}z}{z},\]
_where \(\gamma_{k,n}\in\mathbb{Q}\) are the coefficients in the asymptotic expansion of \(\left(-A_{0}\left(z\right)B_{0}\left(z\right)\right)^{k/2}\) given by (11) below._
Proof.: Near \(0\), we want to find
\[\xi_{i}=\sum_{a=0}^{k}\xi_{i,a}(z)v_{0}^{a}v_{1}^{k-a}\in\bigoplus_{a=0}^{k} \mathbb{Q}\llbracket z\rrbracket v_{0}^{a}v_{1}^{k-a}\]
such that \(\nabla\xi_{i}=v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}\). Using the connection formula (6) on \(\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}\), we need to solve:
\[\frac{\mathrm{d}}{\mathrm{d}z}\xi_{i,k}(z)+(k-1)\xi_{i,k-1}(z)+ \frac{1}{2z}\xi_{i,k}(z) =z^{i-1},\] \[\frac{\mathrm{d}}{\mathrm{d}z}\xi_{i,a}(z)+(k-a+1)\xi_{i,a-1}(z)+ \frac{a+1}{z}\xi_{i,a+1}(z)+\frac{1}{2z}\xi_{i,a}(z) =0\text{ for }a=1,2,\cdots,k-1,\] \[\frac{\mathrm{d}}{\mathrm{d}z}\xi_{i,0}(z)+\frac{1}{z}\xi_{i,1}(z )+\frac{1}{2z}\xi_{i,0}(z) =0.\]
Write \(\xi_{i,a}=\sum_{n=0}^{\infty}\xi_{i,a,n}z^{n}\). We solve \(\xi_{i,a,n}\) recursively on \(n\). Suppose that we have solved \(\xi_{i,a,j}\) for \(j<n\). Compare the coefficient of \(z^{n-1}\) of the above system of equations and get
\[\left(\begin{array}{cccc}&&&n+\frac{1}{2}\\ &&&n+\frac{1}{2}&k\\ &&&n+\frac{1}{2}&k-1\\ &&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Since the first square matrix is invertible, \(\xi_{i,a,n}\) is determined uniquely. Thus, we find \(\xi_{i}\in\bigoplus\limits_{a=0}^{k}\mathbb{Q}\llbracket z\rrbracket v_{0}^{a}v_{1 }^{k-a}\) such that \(\nabla\xi_{i}=v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}\). In \(k\equiv 2\pmod{4}\) case, we only need to replace \(\xi_{i}\) by \(\xi_{i}-\gamma_{k,i-r}\xi_{r}\).
Next, we turn to investigate the formal solutions at \(\infty\) using horizontal frames. Note that the modified Bessel functions have the asymptotic expansions at \(\frac{1}{t}\)
\[I_{0}(t) \sim e^{t}\frac{1}{\sqrt{2\pi t}}\sum_{n=0}^{\infty}\frac{((2n-1)!!)^{2}}{2^{3n}n!}\frac{1}{t^{n}}, |\arg t|<\frac{1}{2}\pi \tag{9}\] \[K_{0}(t) \sim e^{-t}\sqrt{\frac{\pi}{2t}}\sum_{n=0}^{\infty}(-1)^{n}\frac{ ((2n-1)!!)^{2}}{2^{3n}n!}\frac{1}{t^{n}}, |\arg t|<\frac{3}{2}\pi\] (10) \[I_{0}(t)K_{0}(t) \sim\frac{1}{2t}\sum_{n=0}^{\infty}\frac{((2n-1)!!)^{3}}{2^{3n}n!}\frac{1}{t^{2n}}. \tag{8}\]
Let \(w=\frac{1}{z}\) be the local coordinate at \(z=\infty\). For \(k\) even, by the last asymptotic expansion, we have
\[(-A_{0}(z)B_{0}(z))^{k/2}\sim w^{1/4}\sum_{n=0}^{\infty}\gamma_{k,n}w^{n}, \tag{11}\]
where \(\gamma_{k,0}=1\) and \(\gamma_{k,n}>0\) for all \(n>0\). For convenience, we set \(\gamma_{k,j}=0\) for all \(j<0\).
Following the notation in (4), let us set \(\overline{e}_{1}=\pi\sqrt{-1}e_{1}\). Then \(\frac{e_{0}^{a}\overline{e}_{1}^{k-a}}{\sqrt{z}}\) are horizontal sections. Using the Wronskian relation \(A_{0}B_{1}-A_{1}B_{0}=2\), we have \(v_{0}=B_{0}e_{0}-A_{0}\overline{e}_{1}\). Let \(w=\frac{1}{z}\) and we obtain
\[v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}=-\sum_{a=0}^{k}\binom{k}{a}\frac{(-A_{0}) ^{a}B_{0}^{k-a}}{w^{i+3/2}}\frac{e_{0}^{k-a}\overline{e}_{1}^{a}}{\sqrt{z}} \mathrm{d}w.\]
By the expression (11), since there is a factor \(w^{1/4}\), we first solve \(\eta_{i,a}\) in \(\mathbb{Q}((w^{1/4}))\) such that
\[\mathrm{d}\eta_{i,a}=\frac{(-A_{0})^{a}B_{0}^{k-a}}{w^{i+3/2}}.\]
Finally, we will prove that the formal solution lies in \(\mathbb{Q}((w))\) by showing that it is invariant by the Galois action \(\sigma\in\mathrm{Gal}(\mathbb{Q}((w^{1/4}))/\mathbb{Q}((w)))\). Near \(\infty\), we have the expansion
\[\frac{(-A_{0})^{a}B_{0}^{k-a}}{w^{i+3/2}}=\begin{cases}\sqrt{\pi}^{k-2a}e^{-2( k-2a)/\sqrt{w}}w^{k/4-i-3/2}\cdot F_{i,a},&a\neq\frac{k}{2}\\ w^{k/4-i-3/2}\left(\sum_{n=0}^{\infty}\frac{((2n-1)!!)^{3}}{2^{3n}n!}w^{n} \right)^{k/2},&a=\frac{k}{2}\end{cases}\]
where \(F_{i,a}\in 1+\sqrt{w}\mathbb{Q}\llbracket\sqrt{w}\rrbracket\).
When \(a\neq\frac{k}{2}\), we can find an antiderivative \(\eta_{i,a}\) of \(\frac{(-A_{0})^{a}B_{0}^{k-a}}{w^{i+3/2}}\) with the expansion
\[\eta_{i,a}=\frac{\sqrt{\pi}^{k-2a}}{k-2a}e^{-2(k-2a)/\sqrt{w}}w^{k/4-i}\cdot G _{i,a} \tag{12}\]
for some \(G_{i,a}\in 1+\sqrt{w}\mathbb{Q}\llbracket\sqrt{w}\rrbracket\). We analyze \(\eta_{i,a}\frac{e_{0}^{k-a}\overline{e}_{1}^{a}}{\sqrt{z}}\). Write \(e_{0}^{k-a}\overline{e}_{1}^{a}\) back to the expression in basis \(v_{0}^{b}v_{1}^{k-b}\):
\[e_{0}^{k-a}\overline{e}_{1}^{a} =2^{-k}(A_{0}v_{1}-A_{1}v_{0})^{k-a}\cdot(B_{0}v_{1}-B_{1}v_{0})^ {a}\] \[=2^{-k}e^{2(k-a)\sqrt{z}}\frac{1}{\sqrt{\pi}^{k-a}}(F_{1}v_{0}-F _{2}v_{1})^{k-a}\cdot e^{-2a\sqrt{z}}\sqrt{\pi}^{a}(G_{1}v_{1}-G_{2}v_{0})^{a}\] \[=2^{-k}e^{2(k-2a)\sqrt{z}}\sqrt{\pi}^{2a-k}(F_{1}v_{0}-F_{2}v_{1} )^{k-a}(G_{1}v_{1}-G_{2}v_{0})^{a},\]
where \(F_{1},F_{2},G_{1},G_{2}\in z^{1/4}\mathbb{Q}\llbracket z^{-1/4}\rrbracket\). Thus,
\[\eta_{i,a}\frac{e_{0}^{k-a}\overline{e}_{1}^{a}}{\sqrt{z}}=\frac{2^{-k}}{k-2a}w^ {k/4-i+1/2}G_{i,a}(F_{1}v_{0}-F_{2}v_{1})^{k-a}(G_{1}v_{1}-G_{2}v_{0})^{a},\]
where \(F_{1},F_{2},G_{1},G_{2}\in z^{1/4}\mathbb{Q}\llbracket z^{-1/4}\rrbracket\). Then, we conclude that \(\eta_{i,a}\frac{e_{0}^{k-a}\overline{e}_{1}^{a}}{\sqrt{z}}\) has no exponential factor as a combination of monomials \(v_{0}^{k-b}v_{1}^{b}\).
Let \(\sigma:w^{1/4}\mapsto\sqrt{-1}w^{1/4}\) be the generator of the Galois group of the extension \(\mathbb{C}\left(w^{1/4}\right)\) of \(\mathbb{C}\left(w\right)\). The \(\sigma\) action on \(A_{i},B_{i}\) is given by
\[\sigma\left(A_{j},B_{j}\right)=\left(\frac{1}{\pi\sqrt{-1}}B_{j},-\pi\sqrt{-1 }A_{j}\right)\text{ for }j=0,1,\]
and thus on \(e_{0},\overline{e}_{1}\) by
\[\sigma\left(e_{0},\overline{e}_{1}\right)=\left(\frac{1}{\pi\sqrt{-1}} \overline{e}_{1},-\pi\sqrt{-1}e_{0}\right);\text{ \ }\sigma\left(e_{0}^{k-a}\overline{e}_{1}^{a}\right)=(\sqrt{-1})^{-k}\pi^{2a-k}e _{0}^{a}\overline{e}_{1}^{k-a}.\]
Moreover, we have
\[\sigma\left(\eta_{i,a}\frac{e_{0}^{k-a}\overline{e}_{1}^{a}}{\sqrt{z}}\right) =\eta_{i,k-a}\frac{e_{0}^{a}\overline{e}_{1}^{k-a}}{\sqrt{z}}.\]
Hence, when \(k\equiv 1,3\pmod{4}\), we take an element
\[\eta_{i}=-\sum_{a=0}^{k}\binom{k}{a}\eta_{i,a}\frac{e_{0}^{k-a}\overline{e}_{1 }^{a}}{\sqrt{z}}\in\bigoplus_{a=0}^{k}\mathbb{Q}\llbracket z\rrbracket v_{0}^{ a}v_{1}^{k-a}.\]
This gives \(\nabla\eta_{i}=v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}\).
When \(k=4r+4\) and \(a=2r+2\), the exponents of \(w\) of expansion of \(\frac{(-A_{0})^{a}B_{0}^{k-a}}{w^{i+3/2}}\) are in \(\frac{1}{2}+\mathbb{Z}\) and one takes
\[\eta_{i,2r+2}\sim\frac{w^{r-i+1/2}}{r-i+1/2}G_{i}\]
where \(G_{i}\in 1+w\mathbb{Q}\llbracket w\rrbracket\). More precisely, we have
\[G_{i}=1+\sum_{n=1}^{\infty}\frac{r-i+1/2}{r-i+1/2+n}\gamma_{k,n}w^{n}.\]
Moreover, \(\eta_{i,2r+2}\frac{(e_{0}\overline{e}_{1})^{2r+2}}{\sqrt{z}}\) has no exponential factor as a combination of monomials \(v_{0}^{k-b}v_{1}^{b}\) and is invariant under \(\sigma\). Hence, when \(k\equiv 4\pmod{4}\), we take an element
\[\eta_{i}=-\sum_{a=0}^{k}\binom{k}{a}\eta_{i,a}\frac{e_{0}^{k-a}\overline{e}_{ 1}^{a}}{\sqrt{z}}\in\bigoplus_{a=0}^{k}\mathbb{Q}\llbracket z\rrbracket v_{0}^{ a}v_{1}^{k-a}\]
This gives \(\nabla\eta_{i}=v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}\).
Now, suppose that \(k=4r+2\), a positive integer congruent to \(2\) modulo \(4\), and \(a=2r+1\). Using the expansion (11), we have the residue:
\[\operatorname{Res}_{w}\frac{(-A_{0})^{a}B_{0}^{k-a}}{w^{i+3/2}}=\gamma_{k,i-r},\]
which vanishes if and only if \(i\leq r-1\). Therefore, for \(i\geq r\), there exists
\[\eta_{i,2r+1}\sim\frac{1}{r-i}w^{r-i}\cdot H_{i}\]
such that
\[\mathrm{d}\eta_{i,2r+1}=\left(w^{-i-3/2}-\gamma_{k,i-r}w^{-r-3/2}\right)(-A_{0 }B_{0})^{2r+1}\mathrm{d}w\]
where \(H_{i}\in 1+w\mathbb{Q}\llbracket w\rrbracket\). Also, \(\eta_{i,2r+1}\frac{(e_{0}\overline{e}_{1})^{2r+1}}{\sqrt{z}}\) is invariant under \(\sigma\). Moreover, \(\eta_{i,2r+1}\frac{(e_{0}\overline{e}_{1})^{2r+1}}{\sqrt{z}}\) has no exponential factor as a combination of monomials \(v_{0}^{k-b}v_{1}^{k}\). Thus, we have
\[v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}-\gamma_{k,i-r}v_{0}^{k}z^{r}\frac{\mathrm{d }z}{z}=\nabla\left(-\sum_{\begin{subarray}{c}a=0\\ a\neq k/2\end{subarray}}^{k}\binom{k}{a}(\eta_{i,a}-\gamma_{k,i-r}\eta_{r,a}) \frac{e_{0}^{k-\overline{e}_{1}^{a}}}{\sqrt{z}}-\binom{k}{k/2}\eta_{i,2r+1} \frac{e_{0}^{2r+1}\overline{e}_{1}^{2r+1}}{\sqrt{z}}\right)\]
and hence we find an element \(\eta_{i}\) in \(\bigoplus_{a=0}^{k}\mathbb{Q}\llbracket z\rrbracket v_{0}^{a}v_{1}^{k-a}\) such that \(\nabla\eta_{i}=v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}-\gamma_{k,i-r}v_{0}^{k}z^{ r}\frac{\mathrm{d}z}{z}\).
Now, we define some elements in the de Rham cohomology and the de Rham cohomology with compact support.
**Definition 10**.: _In the de Rham cohomology \(H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k}\operatorname {Kl}_{2}\right)\), the classes \(\omega_{i}\) are given as follows._
1. _When_ \(k\equiv 0,1,3\bmod 4\)_, define the_ \(k^{\prime}+1\) _elements:_ \[\omega_{k,i}=v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}\text{ for }i=0,1,2,\cdots,k^{\prime}.\]
2. _When_ \(k\equiv 2\bmod 4\)_, define the_ \(k^{\prime}\) _elements:_ \[\omega_{k,i}=\begin{cases}v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}&k=4r+2\text{ and }0\leq i<r;\\ v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}-\gamma_{k,i-r}v_{0}^{k}z^{r}\frac{ \mathrm{d}z}{z}&k=4r+2\text{ and }r+1\leq i\leq 2r.\end{cases}\]
From the above lemma, we define the elements in the compactly supported de Rham cohomology.
**Definition 11**.: _In the compactly supported de Rham cohomology \(H^{1}_{\mathrm{dR},\mathrm{c}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym} ^{k}\operatorname{Kl}_{2}\right)\),_
1. _when_ \(k\equiv 0,1,3\pmod{4}\)_, for_ \(0\leq i\leq k^{\prime}\)_, we define_ \(k^{\prime}+1\) _elements_ \[\widetilde{\omega}_{k,i}=(\xi_{i},\eta_{i},\omega_{k,i})\text{ for }0\leq i\leq k^{\prime},\] _where_ \(\nabla\xi_{i}=\nabla\eta_{i}=\omega_{k,i}\)_._
2. _When_ \(k\equiv 2\pmod{4}\)_, we define_ \(k^{\prime}\) _elements_ \[\widetilde{\omega}_{k,i}=(\xi_{i},\eta_{i},\omega_{k,i})\text{ for }0\leq i\leq r-1\text{ and }r+1\leq i\leq k^{\prime},\] _where_ \(\nabla\xi_{i}=\nabla\eta_{i}=\omega_{k,i}\)_._
3. _In the case that_ \(k\equiv 2\pmod{4}\)_, write_ \(k=4r+2\)_, we define_ \[\widehat{m}_{2r+1}=\left(0,\frac{2^{k}(e_{0}\overline{e}_{1})^{2r+1}}{\sqrt{z }},0\right)\in H^{1}_{\mathrm{dR},\mathrm{c}}\left(\mathbb{G}_{m},\sqrt{z} \operatorname{Sym}^{k}\operatorname{Kl}_{2}\right).\]
Note that \(\widehat{m}_{2r+1}\) is defined only in the case when \(k\equiv 2\pmod{4}\). In the next subsection, we will prove these elements really form a basis of \(H^{1}_{\mathrm{dR},\mathrm{c}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym }^{k}\operatorname{Kl}_{2}\right)\). (See Corollary 13.) Further, we define the middle part de Rham cohomology, \(H^{1}_{\mathrm{mid}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\), to be the image of \(H^{1}_{\mathrm{dR},\mathrm{c}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym }^{k}\operatorname{Kl}_{2}\right)\) in \(H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\). We therefore have
\[\omega_{k,i}\in H^{1}_{\mathrm{mid}}\left(\mathbb{G}_{m},\sqrt{z} \operatorname{Sym}^{k}\operatorname{Kl}_{2}\right)\text{ for }0\leq i\leq k^{\prime}\text{ when }k\equiv 0,1,3 \pmod{4};\] \[\omega_{k,i}\in H^{1}_{\mathrm{mid}}\left(\mathbb{G}_{m},\sqrt{z} \operatorname{Sym}^{k}\operatorname{Kl}_{2}\right)\text{ for }0\leq i\leq k^{\prime},\,i\neq r\text{ when }k=4r+2.\]
We may regard \(H^{1}_{\operatorname{mid}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\) as a quotient of \(H^{1}_{\operatorname{dR},c}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\) containing the class of elements \(\widetilde{\omega}_{k,i}\). In the next subsection, we will prove these elements form a basis. (See Corollary 13.)
### Poincare pairing
We have the following Poincare pairing between the de Rham cohomology and the compactly supported de Rham cohomology. Note that the algebraic pairing \(\left\langle\,\ \right\rangle_{\operatorname{alg}}\) is introduced in section 2.3.
\[H^{1}_{\operatorname{dR},c}\left(\mathbb{G}_{m},\sqrt{z} \operatorname{Sym}^{k}\operatorname{Kl}_{2}\right)\otimes H^{1}_{ \operatorname{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\xrightarrow{\text{\text{\text{\text{\text{ \text{\text{\text{\text{\text{\text{\text{\text{\text{\text
Next, we discuss the residue at \(z=\infty\). When \(k\equiv 1,3\pmod{4}\) and for any \(0\leq i,j\leq k^{\prime}\), we compute
\[\operatorname{Res}_{w}\left\langle\eta_{i},v_{0}^{k}z^{j}\frac{ \mathrm{d}z}{z}\right\rangle_{\text{alg}} =-\sum_{a=0}^{k}\binom{k}{a}\operatorname{Res}_{w}\left\langle \eta_{i,a}\frac{e_{0}^{k-a}\overline{e}_{1}^{a}}{\sqrt{z}},v_{0}^{k}z^{j}\frac {\mathrm{d}z}{z}\right\rangle_{\text{alg}}\] \[=\sum_{a,b=0}^{k}\binom{k}{a}\binom{k}{b}\operatorname{Res}_{w} \left\langle\eta_{i,a}\frac{e_{0}^{k-a}\overline{e}_{1}^{a}}{\sqrt{z}},\frac{( -A_{0})^{b}B_{0}^{k-b}}{w^{j+3/2}}\frac{e_{0}^{k-b}\overline{e}_{1}^{b}}{\sqrt {z}}\mathrm{d}w\right\rangle_{\text{alg}}\] \[=\frac{1}{2^{k}}\sum_{a=0}^{k}(-1)^{a}\binom{k}{a}\operatorname{ Res}_{w}\left(\eta_{i,a}\frac{1}{w^{j+3/2}}(-A_{0})^{k-a}B_{0}^{a}\mathrm{d}w\right)\] \[=\frac{1}{2^{k}}\sum_{a=0}^{k}\frac{(-1)^{a}\binom{k}{a}}{k-2a} \operatorname{Res}_{w}\left(w^{(k-1)/2-i-j-1}G_{i,a}F_{j,k-a}\mathrm{d}w\right)\] \[=\begin{cases}0&\text{if }i+j\leq k^{\prime}-1,\\ (-2)^{k^{\prime}}\frac{k^{\prime}!}{k!}&\text{if }i+j=k^{\prime},\\ *&\text{if }i+j\geq k^{\prime}+1.\end{cases}\]
where \(G_{i,a},F_{j,k-a}\in 1+\sqrt{w}\mathbb{Q}\llbracket\sqrt{w}\rrbracket\) and the last equality follows from [12, lemma 3.18].
When \(k\equiv 0\pmod{4}\), write \(k=4r+4\). For any \(0\leq i,j\leq k^{\prime}\), we compute
\[\operatorname{Res}_{w}\left\langle\eta_{i},v_{0}^{k}z^{j}\frac{ \mathrm{d}z}{z}\right\rangle_{\text{alg}} =\frac{1}{2^{k}}\sum_{\begin{subarray}{c}a=0\\ a\neq\frac{2}{2}\end{subarray}}^{k}\frac{(-1)^{a}\binom{k}{a}}{k-2a} \operatorname{Res}_{w}\left(w^{(k-1)/2-i-j-1}G_{i,a}F_{j,k-a}\mathrm{d}w\right)\] \[\qquad+\frac{(-1)^{k/2}}{2^{k}}\binom{k}{k/2}\operatorname{Res}_ {w}\left(\eta_{i,2r+2}\frac{1}{w^{j+3/2}}(-A_{0}B_{0})^{2r+2}\mathrm{d}w\right)\] \[=\frac{1}{2^{k}}\sum_{\begin{subarray}{c}a=0\\ a\neq\frac{2}{2}\end{subarray}}^{k}\frac{(-1)^{a}\binom{k}{a}}{k-2a} \operatorname{Res}_{w}\left(w^{(k-1)/2-i-j-1}G_{i,a}F_{j,k-a}\mathrm{d}w\right)\] \[\qquad+\frac{\binom{k}{k/2}}{2^{k}(r-i+1/2)}\operatorname{Res}_ {w}\left(w^{k^{\prime}-i-j-1}\cdot G_{i}F_{2r+2}\mathrm{d}w\right)\] \[=\begin{cases}0&\text{if }i+j\leq k^{\prime}-1,\\ \frac{\binom{k}{k/2}}{2^{k}(r-i+1/2)}&\text{if }i+j=k^{\prime},\\ *&\text{if }i+j\geq k^{\prime}+1.\end{cases}\]
where \(G_{i,a},F_{j,k-a}\in 1+\sqrt{w}\mathbb{Q}\llbracket\sqrt{w}\rrbracket\), \(G_{i}\in 1+w\mathbb{Q}\llbracket w\rrbracket\) and \(F_{2r+2}=\left(\sum_{n=0}^{\infty}\frac{((2n-1)!!)^{3}}{2^{5n}n!}w^{n}\right) ^{2r+2}\).
When \(k\equiv 2\pmod{4}\), the computation is similar to the case \(k\equiv 0\pmod{4}\).
Finally, we compute
\[\operatorname{Res}_{w}\left\langle\frac{2^{k}(e_{0}\overline{e}_{1} )^{2r+1}}{\sqrt{z}},v_{0}^{k}z^{j}\frac{\mathrm{d}z}{z}\right\rangle_{\mathrm{alg}} =-\sum_{b=0}^{k}\binom{k}{b}\operatorname{Res}_{w}\left\langle \frac{2^{k}(e_{0}\overline{e}_{1})^{2r+1}}{\sqrt{z}},\frac{(-A_{0})^{b}B_{0}^{k- b}}{w^{j+3/2}}\frac{e_{0}^{k-b}\overline{e}_{1}^{b}}{\sqrt{z}}\mathrm{d}w \right\rangle_{\mathrm{alg}}\] \[=-\binom{k}{k/2}\operatorname{Res}_{w}\left\langle\frac{2^{k}(e_ {0}\overline{e}_{1})^{2r+1}}{\sqrt{z}},\frac{(-A_{0}B_{0})^{2r+1}}{w^{j+3/2}} \frac{(e_{0}\overline{e}_{1})^{2r+1}}{\sqrt{z}}\mathrm{d}w\right\rangle_{ \mathrm{alg}}\] \[=(-1)^{2r+2}\operatorname{Res}_{w}\left(\frac{(-A_{0}B_{0})^{2r+1 }}{w^{j+3/2}}\mathrm{d}w\right)\] \[=\operatorname{Res}_{w}\left(w^{k/4-j-3/2}\sum_{n=0}^{\infty} \gamma_{k,n}w^{n}\mathrm{d}w\right)\] \[=\begin{cases}0&\text{if }j<r,\\ \gamma_{k,j-r}&\text{if }j\geq r.\end{cases}\]
Combining these residues, we obtain this proposition.
Putting the dimension result in Proposition 6 and the non-vanishing determinant of Poincare pairing (Proposition 12) together, we have the following corollary.
**Corollary 13** (Basis in de Rham side).: _Let \(k\) be a positive integer._
1. \(H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\) _has basis_ \(\left\{v_{0}^{k}z^{j}\frac{\mathrm{d}z}{z}\right\}_{j=0}^{k^{\prime}}\)_._
2. \(H^{1}_{\mathrm{dR},c}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)\) _has basis_ \[\begin{cases}\left\{\widetilde{\omega}_{k,j}\right\}_{j=0}^{k^{\prime}}&\text{ if }k\equiv 0,1,3\pmod{4},\\ \left\{\widetilde{\omega}_{k,j}\right\}_{j=0}^{r-1}\cup\left\{\widetilde{ \omega}_{k,j}\right\}_{j=r+1}^{k^{\prime}}\cup\left\{\widehat{m}_{2r+1}\right\} &\text{ if }k\equiv 2\pmod{4}\text{ with }k=4r+2.\end{cases}\)__
3. \(H^{1}_{\mathrm{dR},\mathrm{mid}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym }^{k}\operatorname{Kl}_{2}\right)\) _has basis_ \[\begin{cases}\left\{\omega_{k,i}\right\}_{i=0}^{k^{\prime}}&\text{ if }k\equiv 0,1,3\pmod{4},\\ \left\{\omega_{k,i}\right\}_{i=0}^{r-1}\cup\left\{\omega_{k,i}\right\}_{i=r+1} ^{k^{\prime}}&\text{ if }k\equiv 2\pmod{4}\text{ with }k=4r+2.\end{cases}\]
## 4. The local system and the associated homology
Recall the basis \(e_{0},e_{1}\) of the local system \(\operatorname{Kl}_{2}^{\nabla}\) from (4). Note that the modified Bessel function \(I_{0}(t)\) is entire. On the other hand, \(K_{0}(s)\) extends analytically to a multivalued function on \(\mathbb{C}^{\times}\) satisfying the monodromy \(K_{0}(e^{2\pi\sqrt{-1}}t)=K_{0}(t)-2\pi\sqrt{-1}I_{0}(t)\). This implies \(e_{0},e_{1}\) undergo the monodromy action \(T:(e_{0},e_{1})\mapsto(e_{0},e_{1}+e_{0})\) near \(0\). Then the basis in (7) of the local system \((\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2})^{\nabla}\) satisfies \(T:\frac{1}{\sqrt{z}}e_{0}^{a}e_{1}^{k-a}\mapsto\frac{-1}{\sqrt{z}}e_{0}^{a}( e_{1}+e_{0})^{k-a}\) near \(0\).
### Rapid decay cycles
Write \(k^{\prime}=\left\lfloor\frac{k-1}{2}\right\rfloor\). Denote the chains on \(\mathbb{C}^{\times}\):
\[\sigma_{0} =\text{the unit circle, starting at }1\text{ and oriented counterclockwise};\] \[\sigma_{+} =\text{the interval }[1,\infty),\text{ starting at }1\text{ toward }+\infty.\]
By the asymptotic expansion (8), (9), the horizontal sections \(\frac{e_{0}^{a}e_{1}^{k-a}}{\sqrt{z}}\) decay exponentially along \(\sigma_{+}\) for \(a=0,1,\ldots,k^{\prime}\). Then, we have the following lemma describing the elements in the rapid decay homology.
**Lemma 14**.: _The rapid decay cycles \(\left\{\delta_{b}\right\}_{b=0}^{k^{\prime}}\) in \(H_{1}^{\mathrm{rd}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k}\operatorname {Kl}_{2}^{\nabla}\right)\) are defined by_
\[\delta_{b}=\sigma_{+}\otimes\frac{1}{\sqrt{z}}e_{0}^{b}e_{1}^{k-b}-\frac{1}{2} \sigma_{0}\otimes\frac{1}{\sqrt{z}}e_{0}^{b}e_{1}^{k-b}+\sum_{n=1}^{k-b}d_{k-b }(n)\sigma_{0}^{2n}\otimes\frac{1}{\sqrt{z}}e_{0}^{b}e_{1}^{k-b}, \tag{13}\]
_where \(d_{n}(i)\) satisfies_
\[\sum_{i=1}^{n}d_{n}(i)\left(2i\right)^{m}=-\frac{1}{2},\text{ for }m=1,2,\cdots,n.\]
_In fact, by Cramer's rule, one solves \(d_{n}(i)=\frac{(-1)^{i}}{n!2^{n+1}}\binom{n}{i}\frac{(2n-1)!!}{2i-1}\) uniquely._
Proof.: It remains to prove that \(d_{n}(i)\) makes \(\delta_{b}\) into a cycle, that is, \(\partial\delta_{b}=0\).
\[\partial\delta_{b} =\frac{1}{\sqrt{z}}e_{0}^{b}e_{1}^{k-b}-\frac{1}{2}\left(\frac{1 }{\sqrt{z}}e_{0}^{b}e_{1}^{k-b}+\frac{1}{\sqrt{z}}e_{0}^{b}(e_{1}+e_{0})^{k-b}\right)\] \[\qquad+\sum_{n=1}^{k-b}d_{k-b}(n)\left(\frac{1}{\sqrt{z}}e_{0}^{b }e_{1}^{k-b}-\frac{1}{\sqrt{z}}e_{0}^{b}(e_{1}+2ne_{0})^{k-b}\right)\] \[=\frac{1}{\sqrt{z}}e_{0}^{b}e_{1}^{k-b}-\frac{1}{2}\left(\frac{1 }{\sqrt{z}}e_{0}^{b}e_{1}^{k-b}+\sum_{j=0}^{k-b}\binom{k-b}{j}\frac{1}{\sqrt{z }}e_{0}^{b+j}e_{1}^{k-b-j}\right)\] \[\qquad+\sum_{n=1}^{k-b}d_{k-b}(n)\left(\frac{1}{\sqrt{z}}e_{0}^{b }e_{1}^{k-b}-\sum_{m=0}^{k-b}\binom{k-b}{m}(2n)^{m}\frac{1}{\sqrt{z}}e_{0}^{b +m}e_{1}^{k-b-m}\right)\] \[=-\frac{1}{2}\sum_{j=1}^{k-b}\binom{k-b}{j}\frac{1}{\sqrt{z}}e_{0 }^{b+j}e_{1}^{k-b-j}-\sum_{m=1}^{k-b}\binom{k-b}{m}\sum_{n=1}^{k-b}d_{k-b}(n) (2n)^{m}\frac{1}{\sqrt{z}}e_{0}^{b+m}e_{1}^{k-b-m}\] \[=\sum_{j=1}^{k-b}\binom{k-b}{j}\left(-\frac{1}{2}-\sum_{n=1}^{k-b }d_{k-b}(n)(2n)^{j}\right)\frac{1}{\sqrt{z}}e_{0}^{b+j}e_{1}^{k-b-j}=0.\qed\]
From this lemma, we have \(k^{\prime}+1\) elements in the rapid decay homology \(H_{1}^{\mathrm{rd}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}^{\nabla}\right)\). At the end of this section, we will prove these elements form a basis (see Corollary 18).
### Moderate decay cycles
Define one more chain
\[\mathbb{R}_{+}=\text{the half line }[0,\infty),\text{ starting at }0\text{ toward }+\infty.\]
Note that the modified Bessel function \(K_{0}\left(t\right)\) has log pole at \(0\), so the horizontal sections \(\frac{e_{0}^{a}e_{1}^{k-a}}{\sqrt{z}}\) decay moderately along \(\mathbb{R}_{+}\) near \(0\) for \(a=0,1,\ldots,\lfloor\frac{k}{2}\rfloor\). Moreover, by the expression (10), \((I_{0}K_{0})^{a}\) decay polynomially along \(\mathbb{R}_{+}\) near \(\infty\). Then, we define the moderate decay cycles in \(H_{1}^{\mathrm{mod}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}^{\nabla}\right)\)
\[\gamma_{a}=\mathbb{R}_{+}\otimes\frac{1}{\sqrt{z}}e_{0}^{a}e_{1}^{k-a},\text{ for }a=0,1,2,\cdots,\lfloor\frac{k}{2}\rfloor. \tag{14}\]
They are indeed a cycle. The proof is the same as the above lemma by taking the limit as the radius of \(\sigma_{0}\) tends to \(0\). This gives a natural map
\[H_{1}^{\mathrm{rd}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}^{\nabla}\right)\xrightarrow[]{}H_{1}^{\mathrm{mod}} \left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}^{ \nabla}\right)\]
for \(b=0,1,\cdots,k^{\prime}\).2
Footnote 2: In fact, this is an isomorphism when \(k\equiv 0,1,3\pmod{4}\) and has one-dimensional kernel when \(k\equiv 2\pmod{4}\). See Corollary 18.
The following lemma shows when \(k\equiv 2\pmod{4}\), \(\sum_{j=0}^{(k-2)/4}\binom{k/2}{2j}\delta_{2j}\) belongs to the kernel of this map.
**Lemma 15**.: _Let \(\rho:\left\{(x,y)\in\mathbb{R}^{2}\mid 0<x,y,x+y<1\right\}\to\mathbb{C}\) be the open simplicial \(2\)-chain_
\[\rho\left(x,y\right)=\tan\frac{\pi\left(x+y\right)}{2}\exp\left(4\sqrt{-1}\tan^ {-1}\frac{y}{x}\right)\]
_that covers \(\mathbb{C}\) once. If \(k\) is even, the singular chain_
\[\Delta=\rho\otimes\left(\frac{1}{\sqrt{z}}(e_{1}-e_{0})^{k/2}\otimes e_{1}^{k /2}\right)\]
_has moderate growth. In \(H_{1}^{\mathrm{mod}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}^{\nabla}\right)\), one has_
\[\sum_{j=0}^{k/4}\binom{k/2}{2j}\mathbb{R}_{+}\otimes\frac{1}{ \sqrt{z}}e_{0}^{2j}e_{1}^{k-2j} =\sum_{j=0}^{k/4}\binom{k/2}{2j}\gamma_{2j}=0\text{ if }k\equiv 0 \bmod 4;\] \[\sum_{j=0}^{(k-2)/4}\binom{k/2}{2j}\mathbb{R}_{+}\otimes\frac{1} {\sqrt{z}}e_{0}^{2j}e_{1}^{k-2j} =\sum_{j=0}^{(k-2)/4}\binom{k/2}{2j}\gamma_{2j}=0\text{ if }k\equiv 2 \bmod 4.\]
Proof.: One computes \(\partial\Delta\):
\[\partial\Delta =\mathbb{R}_{+}\otimes\left(\frac{1}{\sqrt{z}}(e_{1}-e_{0})^{k/2} \otimes e_{1}^{k/2}\right)+\mathbb{R}_{+}\otimes\left(\frac{1}{\sqrt{z}}e_{1} ^{k/2}\otimes(e_{0}+e_{1})^{k/2}\right)\] \[=\sum_{i=0}^{k/2}(-1)^{i}\binom{k/2}{i}\mathbb{R}_{+}\otimes \frac{1}{\sqrt{z}}e_{0}^{i}e_{1}^{k-i}+\sum_{i=0}^{k/2}\binom{k/2}{i}\mathbb{ R}_{+}\otimes\frac{1}{\sqrt{z}}e_{0}^{i}e_{1}^{k-i}.\]
Thus, the result follows.
Here, we have written down the \(1+\lfloor\frac{k}{2}\rfloor\) elements \(\{\gamma_{a}\}_{a=0}^{\lfloor k/2\rfloor}\) in the moderate decay homology \(H_{1}^{\mathrm{mod}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}\operatorname {Kl}_{2}^{\nabla}\right)\). At the end of this section, we will prove that these elements form a basis modulo the linear relation given in the above lemma (see Corollary 18).
Similar to the middle part de Rham cohomology in the previous section, we define the middle part Betti homology \(H_{1}^{\mathrm{mid}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}^{\nabla}\right)\) to be the image of \(H_{1}^{\mathrm{rd}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}^{\nabla}\right)\) in \(H_{1}^{\mathrm{mod}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}^{\nabla}\right)\). More precisely, we have
\[\gamma_{i}\in H_{1}^{\mathrm{mid}}\left(\mathbb{G}_{m},\sqrt{z} \operatorname{Sym}^{k}\operatorname{Kl}_{2}^{\nabla}\right)\text{ for }0\leq i\leq k^{\prime}\text{ when }k\equiv 0,1,3 \pmod{4};\] \[\gamma_{i}\in H_{1}^{\mathrm{mid}}\left(\mathbb{G}_{m},\sqrt{z} \operatorname{Sym}^{k}\operatorname{Kl}_{2}^{\nabla}\right)\text{ for }1\leq i\leq k^{\prime}\text{ when }k\equiv 2 \pmod{4}.\]
Also, we may regard \(H_{1}^{\mathrm{mid}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}^{\nabla}\right)\) as the quotient of \(H_{1}^{\mathrm{rd}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}^{\nabla}\right)\) containing the class of elements \(\delta_{b}\). At the end of this section, we will prove these elements form a basis (see Corollary 18).
### Betti intersection pairing
We use the topological pairing introduced in section 2.3 to define the Betti intersection pairing
where is the topological intersection. At each topological intersection, the corresponding topological pairing is the pairing of sections at that point.
To compute the topological pairing with respect to the elements we had written down, we need to introduce the Euler numbers and Euler polynomials. The Euler polynomials are given by the following power series, and we define the numbers for as in [10],
The first few are
Note that we have the inversion formula for Euler polynomials,
Taking value at, we get
(15)
**Proposition 16**.: _We have the Betti intersection pairing_
_for and._
Proof.: Fix some and let. To compute the pairing, we move the ray by adding the scalar and let the circle start at. Then the component in the deformed meets topologically times at the same point. At the -th intersection, the factor becomes and we have
By adding these contributions, we obtain
where
where
Kim [10] gave the following relation for \(T_{n}(k)\):
\[T_{n}(k)=\frac{(-1)^{k+1}}{2}\sum_{\ell=0}^{n-1}\binom{n}{\ell}E_{\ell}k^{n-\ell} +\frac{E_{n}}{2}\left(1+(-1)^{k+1}\right).\]
Now, we have the following computation
\[\sum_{n=1}^{k-b}d_{k-b}(n)T_{k-b-a}(2n) =\sum_{n=1}^{k-b}d_{k-b}(n)\left[\frac{-1}{2}\sum_{\ell=0}^{k-a-b- 1}\binom{k-b-a}{\ell}E_{\ell}(2n)^{k-a-b-\ell}\right]\] \[=\frac{-1}{2}\sum_{\ell=0}^{k-a-b-1}\binom{k-a-b}{\ell}E_{\ell} \sum_{n=1}^{k-b}d_{k-b}(n)(2n)^{k-a-b-\ell}\] \[=\frac{1}{4}\sum_{\ell=0}^{k-b-a-1}\binom{k-b-a}{\ell}E_{\ell}\]
and thus by (15),
\[\left\langle\delta_{b},\gamma_{a}\right\rangle_{\text{Betti}}=\frac{-1}{2}E_{ k-a-b}.\qed\]
Consider the \((k^{\prime}+1)\times(k^{\prime}+1)\) pairing matrix
\[B_{k}=\begin{cases}\left(\left\langle\delta_{b},\gamma_{a}\right\rangle_{ \text{Betti}}\right)_{0\leq b\leq k^{\prime},\ 0\leq a\leq\lfloor\frac{k}{2}\rfloor}&\text{ if $k$ is odd},\\ \left(\left\langle\delta_{b},\gamma_{a}\right\rangle_{\text{Betti}}\right)_{0 \leq b\leq k^{\prime},\ 1\leq a\leq\frac{k}{2}}&\text{ if $k$ is even}.\end{cases}\]
Using the pairing formula
\[\left\langle\delta_{b},\gamma_{a}\right\rangle_{\text{Betti}}=\frac{(-1)^{a+1} }{2}\frac{(k-a)!(k-b)!}{k!}\frac{E_{k-a-b}}{(k-a-b)!},\]
when \(k\) is even, we have
\[B_{k}=\left(\begin{array}{cccc}\frac{(-1)^{2}}{2}\frac{(k-1)!k!}{k!}\frac{E _{k-1}}{(k-1)!}&\cdots&\frac{(-1)^{k/2+1}}{2}\frac{(k/2)!k!}{k!}\frac{E_{k/2}}{ (k/2)!}\\ \vdots&\ddots&\vdots\\ \frac{(-1)^{2}}{2}\frac{(k-1)!(k/2+1)!}{k!}\frac{E_{k/2}}{(k/2)!}&\cdots&\frac {(-1)^{k/2+1}}{2}\frac{(k/2)!(k/2+1)!}{k!}\frac{E_{1}}{(1)!}\end{array}\right)\]
and that
\[B_{k-1}=\left(\begin{array}{cccc}\frac{(-1)}{2}\frac{(k-1)!(k-1)!}{(k-1)!} \frac{E_{k-1}}{(k-1)!}&\cdots&\frac{(-1)^{k/2}}{2}\frac{(k/2)!(k-1)!}{(k-1)!} \frac{E_{k/2}}{(k/2)!}\\ \vdots&\ddots&\vdots\\ \frac{(-1)}{2}\frac{(k-1)!(k/2)!}{(k-1)!}\frac{E_{k/2}}{(k/2)!}&\cdots&\frac {(-1)^{k/2}}{2}\frac{(k/2)!(k/2)!}{(k-1)!}\frac{E_{1}}{(1)!}\end{array}\right).\]
Then we obtain the relation
\[B_{k}=-\frac{1}{k}\operatorname{diag}(k,k-1,\cdots,k/2+1)\cdot B_{k-1}. \tag{16}\]
Thus, \(B_{k}\) and \(B_{k-1}\) have the same rank whenever \(k\) is even. Moreover, we may compute the determinant of \(B_{k}\) explicitly given in the following proposition.
**Proposition 17**.: _The determinant of \(B_{k}\) is given by the following._
1. _When_ \(k\) _is odd, we have_ \[\det B_{k}=2^{-k-1}\prod_{a=1}^{k^{\prime}}a^{k^{\prime}+1-2a}(2a+1)^{k^{ \prime}-2a}.\]
2. _When_ \(k\) _is even, we have_ \[\det B_{k}=(-1)^{(k^{\prime}+1)(k^{\prime}+3)}2^{-k}\prod_{a=1}^{k^{\prime}}(a+1) ^{k^{\prime}-2a-1}(2a+1)^{k^{\prime}+1-2a}.\]
_In particular, they are all non-vanishing._
Proof.: Set \(\mathcal{E}_{2n-1}=(-1)^{n}2^{2n-1}E_{2n-1}\). Apply the result [1, Eq. H12] in the following computations.
When \(k=2k^{\prime}+1\) is odd, we have
\[\det B_{k} =\frac{(-1)^{(k^{\prime}+1)(k^{\prime}+2)/2}}{(2\cdot k!)^{k^{ \prime}+1}}\left[\prod_{i=k^{\prime}+1}^{k}i!\right]^{2}\det\left(\begin{array} []{ccc}\frac{E_{k}}{k!}&\cdots&\frac{E_{k^{\prime}+1}}{(k^{\prime}+1)!}\\ \vdots&\ddots&\vdots\\ \frac{E_{k^{\prime}+1}}{(k^{\prime}+1)!}&\cdots&\frac{E_{k}}{(1)!}\end{array}\right)\] \[=\frac{1}{(2\cdot k!)^{k^{\prime}+1}}\left[\prod_{i=k^{\prime}+1}^ {k}i!\right]^{2}\frac{1}{2^{(k^{\prime}+1)^{2}}}\det\left(\begin{array}{ccc }\frac{E_{k}}{k!}&\cdots&\frac{E_{k^{\prime}+1}}{(k^{\prime}+1)!}\\ \vdots&\ddots&\vdots\\ \frac{E_{k^{\prime}+1}}{(k^{\prime}+1)!}&\cdots&\frac{E_{k}}{(1)!}\end{array}\right)\] \[=\frac{1}{2^{(k^{\prime}+1)(k^{\prime}+2)}(k!)^{k^{\prime}+1}} \left[\prod_{i=k^{\prime}+1}^{k}i!\right]^{2}2^{k^{\prime}2}\frac{k^{\prime}!} {k!}\prod_{j=1}^{k^{\prime}}\frac{(j-1)!^{2}}{(2j-1)!^{2}}\] \[=\frac{1}{2^{k+1}}\prod_{a=1}^{k^{\prime}}a^{k^{\prime}+1-2a}(2a+ 1)^{k^{\prime}-2a}.\]
When \(k=2k^{\prime}+2\) is even, we have
\[\det B_{k} =\frac{(-1)^{(k^{\prime}+1)(k^{\prime}+4)/2}}{(2\cdot k!)^{k^{ \prime}+1}}\left[\prod_{i=k^{\prime}+1}^{k-1}i!\,(i+1)!\right]\det\left( \begin{array}{ccc}\frac{E_{k-1}}{(k-1)!}&\cdots&\frac{E_{k^{\prime}+1}}{(k^ {\prime}+1)!}\\ \vdots&\ddots&\vdots\\ \frac{E_{k^{\prime}+1}}{(k^{\prime}+1)!}&\cdots&\frac{E_{1}}{(1)!}\end{array}\right)\] \[=\frac{(-1)^{(k^{\prime}+1)(k^{\prime}+4)/2}}{(2\cdot k!)^{k^{ \prime}+1}}\left[\prod_{i=k^{\prime}+1}^{k-1}i!\,(i+1)!\right]\] \[\qquad\qquad\qquad\qquad\qquad\cdot\frac{(\sqrt{-1})^{(k^{\prime }+1)(k^{\prime}+2)}}{2^{(k^{\prime}+1)^{2}}}\det\left(\begin{array}{ccc}\frac {\mathcal{E}_{k-1}}{(k-1)!}&\cdots&\frac{\mathcal{E}_{k^{\prime}+1}}{(k^{ \prime}+1)!}\\ \vdots&\ddots&\vdots\\ \frac{E_{k^{\prime}+1}}{(k^{\prime}+1)!}&\cdots&\frac{\mathcal{E}_{1}}{(1)!} \end{array}\right)\] \[=\frac{(-1)^{(k^{\prime}+1)(k^{\prime}+3)}}{2^{(k^{\prime}+1)(k^ {\prime}+2)}(k!)^{k^{\prime}+1}}\left[\prod_{i=k^{\prime}+1}^{k-1}i!\,(i+1)! \right]2^{k^{\prime}2}\frac{k^{\prime}!}{(k-1)!}\prod_{j=1}^{k^{\prime}}\frac{ (j-1)!^{2}}{(2j-1)!^{2}}\] \[=\frac{(-1)^{(k^{\prime}+1)(k^{\prime}+3)}}{2^{k}}\prod_{a=1}^{k^ {\prime}}(a+1)^{k^{\prime}-2a-1}(2a+1)^{k^{\prime}+1-2a}.\qed\]
Finally, we conclude the basis of Betti homologies.
**Corollary 18**.: _The natural map_
\[H_{1}^{\mathrm{rd}}\left(\mathbb{G}_{m},\sqrt{z}\mathrm{Sym}^{k}\,\mathrm{ Kl}_{2}^{\nabla}\right)\]
_sending \(\delta_{b}\) to \(\gamma_{b}\) is an isomorphism when \(k\equiv 0,1,3\pmod{4}\) and has a one-dimensional kernel when \(k\equiv 2\pmod{4}\). Moreover, we find the following._
1. \(H_{1}^{\mathrm{rd}}\left(\mathbb{G}_{m},\sqrt{z}\,\mathrm{Sym}^{k}\,\mathrm{Kl}_{2 }^{\nabla}\right)\) _has basis_ \(\left\{\delta_{b}\right\}_{b=0}^{k^{\prime}}\)_._
2. \(H_{1}^{\mathrm{mod}}\left(\mathbb{G}_{m},\sqrt{z}\,\mathrm{Sym}^{k}\,\mathrm{ Kl}_{2}^{\nabla}\right)\) _has basis_ \[\begin{cases}\left\{\gamma_{a}\right\}_{a=0}^{k^{\prime}}&\text{ if $k$ is odd.}\\ \left\{\gamma_{a}\right\}_{a=1}^{k/2}&\text{ if $k$ is even.}\end{cases}\]
3. \(H_{1}^{\mathrm{mid}}\left(\mathbb{G}_{m},\sqrt{z}\,\mathrm{Sym}^{k}\,\mathrm{ Kl}_{2}^{\nabla}\right)\) _has basis_ \[\begin{cases}\left\{\gamma_{a}\right\}_{a=0}^{k^{\prime}}&\text{ if $k\equiv 0,1,3 \pmod{4}$}\\ \left\{\gamma_{a}\right\}_{a=1}^{k^{\prime}}&\text{ if $k\equiv 2\pmod{4}$} \end{cases}\]
Proof.: From the duality of Betti homology and de Rham cohomology, the dimension of rapid decay homology and moderate decay homology are both \(k^{\prime}+1\) by Proposition 6. The above non-vanishing determinant of \(B_{k}\) shows that the natural map is an isomorphism when \(k\equiv 1,3\pmod{4}\), and thus the elements form a basis. When \(k\equiv 2\pmod{4}\), Lemma 15 describes the one-dimensional kernel of the natural map. Moreover, \(B_{k}\) has full rank \(k/2\) when \(k\) is even by the relation (16). Hence, we conclude that the natural map is an isomorphism when \(k\equiv 0\pmod{4}\).
## 5. Twisted moments as periods
In this section, we compute the periods pairing of the basis of de Rham cohomology and Betti homology in Corollary 13 and Corollary 18. Also, we interpret these periods as the Bessel moments and regularized Bessel moments.
### Bessel moments and regularized Bessel moments
The Bessel moments are defined by
\[\mathrm{IKM}_{k}(a,b)=\int_{0}^{\infty}I_{0}^{a}(t)K_{0}^{k-a}(t)t^{b}\mathrm{ d}t.\]
provided the convergence of the integral, that is, for non-negative integers \(k,a,b\) satisfying \(a\leq k^{\prime}\), \(b\geq 0\) or \(a=\frac{k}{2}\), \(0\leq b<k^{\prime}\). The justification is given in the following lemma. Moreover, if \(a=\frac{k}{2}\) and \(b\geq k^{\prime}\), by analyzing the singular integral, we could define the regularized Bessel moments \(\mathrm{IKM}_{k}^{\mathrm{reg}}\left(\frac{k}{2},b\right)\) by subtracting the singular part of the integral. The precise definition is also given in the following lemma.
**Lemma 19**.: _The integration of the Bessel moments_
\[\mathrm{IKM}_{k}(a,b)=\int_{0}^{\infty}I_{0}^{a}(t)K_{0}^{k-a}(t)t^{b} \mathrm{d}t\]
_converge for non-negative integers \(k,a,b\) satisfying \(a\leq k^{\prime}\), \(b\geq 0\) or \(a=\frac{k}{2}\), \(0\leq b<k^{\prime}\). Moreover, in the case that \(a=\frac{k}{2}\) and \(b\geq k^{\prime}\) with \(b\) even, we may modify the singular integral. In other words, the following two limits exist for \(2j\geq k^{\prime}\):_
\[\mathrm{IKM}_{k}^{\mathrm{reg}}\left(\frac{k}{2},2j\right) :=\lim_{t\to\infty}\left(\int_{0}^{t}(I_{0}K_{0})^{2r+2}s^{2j} \mathrm{d}s-\sum_{m=0}^{j-r-1}\frac{\gamma_{k,j-r-1-m}t^{2m+1}}{2^{k-2j+2m}(2 m+1)}\right)\ \ \text{ if $k=4r+4$},\] \[\mathrm{IKM}_{k}^{\mathrm{reg}}\left(\frac{k}{2},2j\right) :=\lim_{t\to\infty}\left(\int_{0}^{t}(I_{0}K_{0})^{2r+2}s^{2j} \mathrm{d}s-\frac{2\gamma_{k,j-r}}{2^{k-2j}}\int_{0}^{t}\frac{\mathrm{d}s}{s} -\sum_{m=0}^{j-r-1}\frac{\gamma_{k,j-r-1-m}t^{2m+2}}{2^{k-2j+2m+1}(2m+2)}\right) \ \ \text{if $k=4r+2$}.\]
Proof.: Near \(0\), we have the asymptotic
\[I_{0}(t) =1+O\left(t^{2}\right); \tag{18}\] \[K_{0}(t) =-\left(\gamma+\log\frac{t}{2}\right)+O\left(t^{2}\log t\right), \tag{17}\]
where \(\gamma\) is the Euler constant. Then, the integration \(\int_{0}^{1}I_{0}^{a}(t)K_{0}^{k-a}(t)t^{b}\mathrm{d}t\) converge for all \(0\leq a\leq\frac{k}{2}\) and any \(b\geq 0\).
Near \(\infty\), from (8) and (9), when \(0\leq a\leq k^{\prime}\), \(I_{0}^{a}(t)K_{0}^{k-a}(t)\) is exponential decay and hence the integral \(\int_{1}^{\infty}I_{0}^{a}(t)K_{0}^{k-a}(t)t^{b}\mathrm{d}t\) converges.
When \(a=\frac{k}{2}\), near \(\infty\), by (11), we have the asymptotic expansion
\[\left(I_{0}K_{0}\right)^{k/2}t^{2j}=\frac{1}{2^{k/2}}\sum_{n=0}^{\infty} \gamma_{k,n}4^{n}t^{-2n-k/2+2j}.\]
Using the fact that \(\int_{1}^{\infty}t^{\alpha}\mathrm{d}t\) converges if and only if \(\alpha<-1\), the singular part of the integral \(\int_{0}^{t}\left(I_{0}K_{0}\right)^{k/2}s^{2j}\mathrm{d}s\) is
\[\sum_{m=0}^{j-r-1}\frac{\gamma_{k,j-r-1-m}}{2^{k-2j+2m}}\frac{t^{ 2m+1}}{(2m+1)}\text{ if }k =4r+4 \tag{20}\] \[\frac{2\gamma_{k,j-r}}{2^{k-2j}}\int_{0}^{t}\frac{\mathrm{d}s}{s} +\sum_{m=0}^{j-r-1}\frac{\gamma_{k,j-r-1-m}}{2^{k-2j+2m+1}}\frac{t^{2m+2}}{(2m +2)}\text{ if }k =4r+2 \tag{19}\]
Hence, after subtracting the singular part of the integral, we obtained the regularized Bessel moments.
**Remark 20**.: _For \(a=\frac{k}{2}\) and \(b\geq k^{\prime}\) with odd \(b\), the integral \(\int_{0}^{\infty}I_{0}^{a}(t)K_{0}^{k-a}(t)t^{b}\mathrm{d}t\) also diverges. We may similarly define the regularized Bessel moments in this case. However, we will only use the modified Bessel moments with \(b\) even, so we only write the case when \(b\) is even for convenience._
### Period pairing and compactly supported period pairing
Define the period pairing using topological pairing introduced in section 2.3
by
\[\left\langle\sigma\otimes\frac{1}{\sqrt{z}}e_{0}^{b}e_{1}^{k-b},\omega\right \rangle_{\mathrm{per}}=\int_{\sigma}\frac{1}{\sqrt{z}}\left\langle e_{0}^{b}e _{1}^{k-b},\omega\right\rangle_{\mathrm{top}}.\]
Also, we define the compactly supported period pairing
\[H_{\mathrm{dR,c}}^{1}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \operatorname{Kl}_{2}\right)_{\mathbb{C}}\times H_{1}^{\mathrm{mod}}\left( \mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k}\operatorname{Kl}_{2}^{\mathbb{ V}}\right)_{\mathbb{C}}\xrightarrow{\langle\,\ \cdot\ \rangle_{\mathrm{per},c}}\mathbb{C}\]
by
\[\left\langle(\xi,\eta,\omega),\sigma\otimes\frac{1}{\sqrt{z}}e_{0}^{b}e_{1}^{ k-b}\right\rangle_{\mathrm{per},c}=\int_{\sigma}\frac{1}{\sqrt{z}}\left\langle \omega,e_{0}^{b}e_{1}^{k-b}\right\rangle_{\mathrm{top}}-\frac{1}{\sqrt{z}} \left\langle\eta,e_{0}^{b}e_{1}^{k-b}\right\rangle_{\mathrm{top}}+\frac{1}{ \sqrt{z}}\left\langle\xi,e_{0}^{b}e_{1}^{k-b}\right\rangle_{\mathrm{top}}.\]
**Remark 21**.: _Note that the order of homology and cohomology in these two pairing are different. This is because we want to write down the matrix expression of quadratic relation (22) preventing the transpose notation._
**Proposition 22**.: _The period pairing of the rapid decay cycle \(\delta_{b}\) in (13) and the de Rham cohomology \(\omega_{k,j}\) in Definition 10 is given by_
\[\left\langle\delta_{b},v_{0}^{k}z^{j}\frac{\mathrm{d}z}{z}\right\rangle_{\mathrm{ per}}=(\pi\sqrt{-1})^{b}(-1)^{k-b}2^{k-2j}\mathrm{IKM}_{k}(b,2j)\]
_for \(0\leq b\leq k^{\prime}\) and \(0\leq j\leq k^{\prime}\)._
Proof.: Denote \(\varepsilon\sigma_{0}\) to be the scaling of the chain \(\sigma_{0}\), that is, \(\varepsilon\sigma_{0}\) is a chain of a circle of radius \(\varepsilon\). Then, by the scaling on the chain, we have
\[\left\langle\delta_{b},v_{0}^{k}z^{j}\frac{\mathrm{d}z}{z}\right\rangle_{ \mathrm{per}} =\left\langle\varepsilon\delta_{b},v_{0}^{k}z^{j}\frac{\mathrm{d }z}{z}\right\rangle_{\mathrm{per}}\] \[=\int_{\varepsilon\left(\sigma_{+}-\frac{1}{2}\sigma_{0}+\sum_{n =1}^{k-b}d_{k-b}(n)\sigma_{0}^{2n}\right)}\frac{1}{\sqrt{z}}\left\langle e_{0}^ {b}e_{1}^{k-b},v_{0}^{k}z^{j}\frac{\mathrm{d}z}{z}\right\rangle_{\mathrm{top}}\] \[=(-1)^{k-b}(\pi\sqrt{-1})^{b}\int_{\varepsilon\left(\sigma_{+}- \frac{1}{2}\sigma_{0}+\sum_{n=1}^{k-b}d_{k-b}(n)\sigma_{0}^{2n}\right)}\sqrt{z }(-A_{0})^{b}B_{0}^{k-b}z^{j-1}\mathrm{d}z\] \[=(\pi\sqrt{-1})^{b}(-1)^{k-b}2^{k}\int_{\varepsilon\left(\sigma_{ +}-\frac{1}{2}\sigma_{0}+\sum_{n=1}^{k-b}d_{k-b}(n)\sigma_{0}^{2n}\right)}z^{j -1}\sqrt{z}I_{0}(2\sqrt{z})^{b}K_{0}(2\sqrt{z})^{k-b}\mathrm{d}z\] \[=(\pi\sqrt{-1})^{b}(-1)^{k-b}2^{k}\int_{\varepsilon}^{\infty}z^{j -1}\sqrt{z}I_{0}(2\sqrt{z})^{b}K_{0}(2\sqrt{z})^{k-b}\mathrm{d}z\] \[\qquad-\frac{1}{2}(\pi\sqrt{-1})^{b}(-1)^{k-b}2^{k}\int_{ \varepsilon\sigma_{0}}z^{j-1}\sqrt{z}I_{0}(2\sqrt{z})^{b}K_{0}(2\sqrt{z})^{k- b}\mathrm{d}z\] \[\qquad+\sum_{n=1}^{k-b}d_{k-b}(n)(\pi\sqrt{-1})^{b}(-1)^{k-b}2^{k }\int_{\varepsilon\sigma_{0}^{2n}}z^{j-1}\sqrt{z}I_{0}(2\sqrt{z})^{b}K_{0}(2 \sqrt{z})^{k-b}\mathrm{d}z.\]
Changing of coordinate by \(z=\frac{t^{2}}{4}\), the first term becomes
\[(\pi\sqrt{-1})^{b}(-1)^{k-b}2^{k-2j}\int_{2\sqrt{\varepsilon}}^{\infty}I_{0}( t)^{b}K_{0}(t)^{k-b}t^{2j}\mathrm{d}t.\]
When \(\varepsilon\to 0^{+}\), this term tends to \((\pi\sqrt{-1})^{b}(-1)^{k-b}2^{k-2j}\mathrm{IKM}_{k}(b,2j)\).
For the other two terms, note that \(\int_{\varepsilon\sigma_{0}^{p}}z^{j-1}\sqrt{z}I_{0}(2\sqrt{z})^{b}K_{0}(2 \sqrt{z})^{k-b}\mathrm{d}z\) tends to zero as \(\varepsilon\to 0^{+}\) for the following reason. As \(s\to 0^{+}\), we have the asymptotic expansion (17) and (18). Then, as \(\varepsilon\to 0^{+}\) for all \(j\geq 0\), we have the estimate
\[\left|\int_{\varepsilon\sigma_{0}^{p}}z^{j-1}\sqrt{z}I_{0}(2\sqrt {z})^{b}K_{0}(2\sqrt{z})^{k-b}\mathrm{d}z\right| \leq\int_{\varepsilon\sigma_{0}^{p}}\left|z^{j-1}\sqrt{z}I_{0}(2 \sqrt{z})^{b}K_{0}(2\sqrt{z})^{k-b}\right|\left|\mathrm{d}z\right|\] \[\leq\int_{0}^{2\pi p}\varepsilon^{j-1}\sqrt{\varepsilon}\left|I_{ 0}\left(2\sqrt{\varepsilon e^{i\theta}}\right)^{b}K_{0}\left(2\sqrt{ \varepsilon e^{i\theta}}\right)^{k-b}\right|\varepsilon\mathrm{d}\theta\] \[\leq\varepsilon^{j}\sqrt{\varepsilon}\int_{0}^{2\pi p}\left|\gamma +\log\sqrt{\varepsilon e^{i\theta}}\right|^{k-b}\mathrm{d}\theta\] \[=\varepsilon^{j}\sqrt{\varepsilon}\int_{0}^{2\pi p}\left|\gamma +\log\sqrt{\varepsilon}+\frac{1}{2}i\theta\right|^{k-b}\mathrm{d}\theta\to 0.\qed\]
**Proposition 23**.: _The compactly supported period pairing of the compactly supported de Rham cohomology \(\widetilde{\omega}_{k,j}\) in Definition 11 and moderate decay cycle \(\gamma_{a}\) in (14) is given by_
\[\left\langle\widetilde{\omega}_{k,j},\gamma_{a}\right\rangle_{\mathrm{per},c}=2 ^{k-2j}(-1)^{k-a}(\pi\sqrt{-1})^{a}\cdot\mathrm{IKM},\]
_where \(0\leq a\leq\lfloor k/2\rfloor,0\leq j\leq k^{\prime}\) with \(j\neq r\) if \(k\equiv 2\pmod{4}\), and_
\[\mathrm{IKM}=\begin{cases}\mathrm{IKM}_{k}^{\text{reg}}(a,2j)&\text{if $4\mid k,a=k/2,r+1 \leq j\leq k^{\prime}$},\\ \mathrm{IKM}_{k}(a,2j)-\gamma_{k,j-k^{\prime}/2}2^{2j-k^{\prime}}\mathrm{IKM}_ {k}(a,k^{\prime})&\text{if $4\mid(k+2),0\leq a\leq k^{\prime},r+1\leq j\leq k^{ \prime}$},\\ \mathrm{IKM}_{k}^{\text{reg}}(a,2j)-\gamma_{k,j-k^{\prime}/2}2^{2j-k^{\prime}} \mathrm{IKM}_{k}^{\text{reg}}(a,k^{\prime})&\text{if $4\mid(k+2),a=k^{\prime}+1,r+1\leq j \leq k^{\prime}$},\\ \mathrm{IKM}_{k}(a,2j)&\text{otherwise}.\end{cases}\]
_Moreover, when \(k=4r+2\), we have_
\[\left\langle\widehat{m}_{2r+1},\gamma_{a}\right\rangle_{\text{per},c}=\delta_ {a,2r+1}(\pi\sqrt{-1})^{a}2^{k}\frac{1}{\binom{k}{k/2}}.\]
_Proof_. When \(k\equiv 1,3\pmod{4}\). We compute the compactly supported period pairing
\[\left\langle\left(\xi_{j},\eta_{j},\omega_{k,j}\right),\gamma_{a} \right\rangle_{\text{per},c}=\int_{\mathbb{R}_{+}}\frac{1}{\sqrt{z}}\left\langle v _{0}^{k},e_{0}^{a}e_{1}^{k-a}\right\rangle_{\text{top}}z^{j}\frac{\mathrm{d}z }{z}-\frac{1}{\sqrt{z}}\left\langle-\sum_{c=0}^{k}\binom{k}{c}\eta_{j,c}\frac {e_{0}^{k-c}\overline{e}_{1}^{c}}{\sqrt{z}},e_{0}^{a}e_{1}^{k-a}\right\rangle_ {\text{top}}\] \[\qquad\qquad+\frac{1}{\sqrt{z}}\left\langle\sum_{c=0}^{k}\xi_{j, c}v_{0}^{c}v_{1}^{k-c},e_{0}^{a}e_{1}^{k-a}\right\rangle_{\text{top}}\] \[=\int_{\mathbb{R}_{+}}\frac{1}{\sqrt{z}}\left\langle\sum_{c=0}^{ k}\binom{k}{c}\left(-A_{0}(z)\right)^{c}B_{0}(z)^{k-c}e_{0}^{k-c}\overline{e}_{1}^ {c},e_{0}^{a}e_{1}^{k-a}\right\rangle_{\text{top}}z^{j-1}\mathrm{d}z\] \[\qquad\qquad+(\pi\sqrt{-1})^{a}(-1)^{a}\eta_{j,a}+\frac{1}{\sqrt {z}}\left\langle\sum_{c=0}^{k}\xi_{j,c}v_{0}^{c}v_{1}^{k-c},e_{0}^{a}e_{1}^{ k-a}\right\rangle_{\text{top}}\] \[=(-1)^{a}(\pi\sqrt{-1})^{a}\int_{\mathbb{R}_{+}}\left(-A_{0}(z) \right)^{a}B_{0}(z)^{k-a}\sqrt{z}z^{j-1}\mathrm{d}z\] \[\qquad\qquad+(-1)^{a}(\pi\sqrt{-1})^{a}\eta_{j,a}+\frac{1}{\sqrt {z}}\left\langle\sum_{c=0}^{k}\xi_{j,c}v_{0}^{c}v_{1}^{k-c},e_{0}^{a}e_{1}^{ k-a}\right\rangle_{\text{top}}\] \[=(-1)^{a}(\pi\sqrt{-1})^{a}2^{k-2j}\int_{\mathbb{R}_{+}}I_{0}(s)^ {a}K_{0}(s)^{k-a}s^{2j}\mathrm{d}s+(-1)^{a}(\pi\sqrt{-1})^{a}\eta_{j,a}\] \[\qquad\qquad+\frac{2}{s}\left\langle\sum_{c=0}^{k}\xi_{j,c}v_{0} ^{c}v_{1}^{k-c},e_{0}^{a}e_{1}^{k-a}\right\rangle_{\text{top}}\]
where the last equality follows by the change of variable \(z=\frac{s^{2}}{4}\). The first term converges by Lemma 19. Since \(k>2a\), by (12), the second term tends to zero as \(s\to\infty\). The third term tends to zero as \(s\to 0\) since all \(\xi_{j,c}\in\mathbb{Q}[\frac{s^{2}}{4}]\) and the topological pairing gives a factor \(\frac{s^{2}}{4}\).
When \(k\equiv 0\pmod{4}\), write \(k=4r+4\). We compute the compactly supported period pairing
\[\left\langle\left(\xi_{j},\eta_{j},\omega_{k,j}\right),\gamma_{a} \right\rangle_{\text{per},c}=\int_{\mathbb{R}_{+}}\frac{1}{\sqrt{z}}\left\langle v _{0}^{k},e_{0}^{a}e_{1}^{k-a}\right\rangle_{\text{top}}z^{j}\frac{\mathrm{d}z }{z}-\frac{1}{\sqrt{z}}\left\langle-\sum_{c=0}^{k}\binom{k}{c}\eta_{j,c}\frac {e_{0}^{k-c}\overline{e}_{1}^{c}}{\sqrt{z}},e_{0}^{a}e_{1}^{k-a}\right\rangle_ {\text{top}}\] \[\qquad\qquad+\frac{1}{\sqrt{z}}\left\langle\sum_{c=0}^{k}\xi_{j, c}v_{0}^{c}v_{1}^{k-c},e_{0}^{a}e_{1}^{k-a}\right\rangle_{\text{top}}\] \[=2^{k-2j}(-1)^{a}(\pi\sqrt{-1})^{a}\int_{\mathbb{R}_{+}}I_{0}(s)^ {a}K_{0}(s)^{k-a}s^{2j}\mathrm{d}s\] \[\qquad\qquad+(-1)^{a}(\pi\sqrt{-1})^{a}\eta_{j,a}+\frac{2}{s} \left\langle\sum_{c=0}^{k}\xi_{j,c}v_{0}^{c}v_{1}^{k-c},e_{0}^{a}e_{1}^{k-a} \right\rangle_{\text{top}}\]
where the last equality is the change of variable \(z=\frac{s^{2}}{4}\). The third term tends to zero as \(s\to 0\) since all \(\xi_{j,c}\in\mathbb{Q}\llbracket\frac{s^{2}}{4}\rrbracket\) and the topological pairing gives a factor \(\frac{s^{2}}{4}\). By the same argument above, when \(a=0,1,\cdots,\frac{k-2}{2}=k^{\prime}\), that is, \(k>2a\), we have that the first term converges and the second term tends to zero as \(s\to\infty\).
Now, we turn to analyze the case that \(a=\frac{k}{2}\). The pairing becomes
\[\left\langle\left(\xi_{j},\eta_{j},\omega_{k,j}\right),\gamma_{a}\right\rangle _{\text{per},c}=2^{k-2j}(-\pi\sqrt{-1})^{a}\int_{0}^{s}I_{0}(t)^{a}K_{0}(t)^{k- a}s^{2j}\mathrm{d}s+(-\pi\sqrt{-1})^{a}\eta_{j,2r+2}\]
This term converges as \(s\to\infty\) for the following reason:
The singular part of the integral \(\left(I_{0}K_{0}\right)^{2r+2}s^{2j}\) is given by (19) and \(\eta_{j,2r+2}\) has expansion
\[\eta_{j,2r+2}\sim\frac{2^{2r-2j+1}}{r-j+1/2}s^{2j-2r-1}\cdot G_{i}\sim 2^{2r-2j+ 1}\sum_{n=0}^{\infty}\frac{2^{2n}\gamma_{k,n}}{r-j+1/2+n}s^{2j-2r-1-2n}\]
Thus, both of the singular terms cancel.
When \(k\equiv 2\pmod{4}\), write \(k=4r+2\). Recall in Definition 11, we have \(k^{\prime}+1\) elements in \(H^{1}_{\text{dR},c}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k}\operatorname {Kl}_{2}\right)\):
\[\widetilde{\omega}_{k,j}=\left(\xi_{j},\eta_{j},\omega_{k,j}\right),\ \ j=0,1,\cdots,\widehat{r},\cdots,k^{\prime}\text{ and }\widehat{m}_{2r+1}=\left(0,\frac{2^{k}(e_{0} \overline{e}_{1})^{2r+1}}{\sqrt{z}},0\right).\]
If we use the convention that \(\gamma_{k,p}=0\) whenever \(p<0\), we rewrite
\[\widetilde{\omega}_{i,k} =(\xi_{i},\eta_{i},\omega_{i,k})\] \[=\left(\sum_{a=0}^{k}\xi_{i,a}(z)v_{0}^{a}v_{1}^{k-a}-\gamma_{k,i -r}\sum_{a=0}^{k}\xi_{r,a}(z)v_{0}^{a}v_{1}^{k-a},\right.\] \[\left.\quad-\sum_{\begin{subarray}{c}a=0\\ a\neq k/2\end{subarray}}^{k}\binom{k}{a}(\eta_{i,a}-\gamma_{k,i-r}\eta_{r,a}) \frac{e_{0}^{k-a}\overline{e}_{1}^{a}}{\sqrt{z}}-\binom{k}{k/2}\eta_{i,2r+1} \frac{(e_{0}\overline{e}_{1})^{2r+1}}{\sqrt{z}},\] \[v_{0}^{k}z^{i}\frac{\mathrm{d}z}{z}-\gamma_{k,i-r}v_{0}^{k}z^{r} \frac{\mathrm{d}z}{z}\bigg{)}\]
In the pairing \(\left\langle\widetilde{\omega}_{j,k},\gamma_{a}\right\rangle_{\text{per},c}\), the third term
\[\frac{2}{s}\left\langle\sum_{c=0}^{k}\xi_{j,c}v_{0}^{c}v_{1}^{k-c},e_{0}^{a}e_ {1}^{k-a}\right\rangle_{\text{top}}\]
tends to zero as \(s\to 0\) since all \(\xi_{j,c}\in\mathbb{C}\llbracket\frac{s^{2}}{4}\rrbracket\) and the topological pairing gives a factor \(\frac{s^{2}}{4}\). The other two terms are equal to
\[\left(\int_{\mathbb{R}_{+}}\frac{1}{\sqrt{z}}\left\langle v_{0}^{ k},e_{0}^{a}e_{1}^{k-a}\right\rangle_{\text{top}}z^{j}\frac{\mathrm{d}z}{z}- \gamma_{k,j-r}\int_{\mathbb{R}_{+}}\frac{1}{\sqrt{z}}\left\langle v_{0}^{k},e_ {0}^{a}e_{1}^{k-a}\right\rangle_{\text{top}}z^{r}\frac{\mathrm{d}z}{z}\right)\] \[-\frac{1}{\sqrt{z}}\left\langle-\sum_{\begin{subarray}{c}b=0\\ b\neq k/2\end{subarray}}^{k}\binom{k}{b}\left(\eta_{j,b}-\gamma_{k,j-r}\eta_{r,b }\right)\frac{e_{0}^{k-b}\overline{e}_{1}^{b}}{\sqrt{z}}-\binom{k}{k/2}\eta_ {j,2r+1}\frac{(e_{0}\overline{e}_{1})^{2r+1}}{\sqrt{z}},e_{0}^{a}e_{1}^{k-a}\right\rangle_ {\text{top}}\] \[=(-1)^{a}(\pi\sqrt{-1})^{a}\left(2^{k-2j}\int_{\mathbb{R}_{+}}I_{ 0}(s)^{a}K_{0}(s)^{k-a}s^{2j}\mathrm{d}s-2^{k-2r}\gamma_{k,j-r}\int_{\mathbb{ R}_{+}}I_{0}(s)^{a}K_{0}(s)^{k-a}s^{2r}\mathrm{d}s\right)\] \[\qquad+\frac{1}{z}\left\langle\sum_{\begin{subarray}{c}b=0\\ b\neq k/2\end{subarray}}^{k}\binom{k}{b}\left(\eta_{j,b}-\gamma_{k,j-r}\eta_{r,b }\right)e_{0}^{k-b}\overline{e}_{1}^{b}+\binom{k}{k/2}\eta_{j,2r+1}(e_{0} \overline{e}_{1})^{2r+1},e_{0}^{a}e_{1}^{k-a}\right\rangle_{\text{top}}\]
We analyze the convergence of these terms. When \(2a<k\) or \(j<r\), the integral \(\int_{0}^{t}I_{0}(s)^{a}K_{0}(s)^{k-a}s^{2j}\mathrm{d}s\) converges as \(t\to\infty\) by Lemma 19. The second term is equal to
\[(-1)^{a}(\pi\sqrt{-1})^{a}\left(\eta_{j,a}-\gamma_{k,j-r}\eta_{r,a}\right).\]
By the expansion of \(\eta_{i,a}\):
\[\eta_{i,a}\sim\frac{\sqrt{\pi}^{k-2a}}{k-2a}e^{-(k-2a)s}\left(\frac{4}{s^{2}} \right)^{k/4-i}\cdot G_{i,a},\]
where \(G_{i,a}\in 1+\frac{2}{s}\mathbb{Q}\llbracket\frac{2}{s}\rrbracket\), this term tends to \(0\) as \(s\to\infty\).
When \(a=\frac{k}{2}\) and \(j\geq r\), the integral \(\int_{0}^{t}I_{0}(s)^{a}K_{0}(s)^{k-a}s^{2j}\mathrm{d}s\) has the singular part (20). The second term is equal to
\[(-1)^{2r+1}(\pi\sqrt{-1})^{2r+1}\eta_{j,2r+1}=(-1)^{2r+1}(\pi\sqrt{-1})^{2r+1 }\frac{1}{r-j}\left(\frac{4}{s^{2}}\right)^{r-j}\cdot H_{i}\]
where \(H_{i}\in 1+\frac{4}{s^{2}}\mathbb{Q}\llbracket\frac{4}{s^{2}}\rrbracket\). Thus, the singular part of this term is
\[(-1)^{2r+1}(\pi\sqrt{-1})^{2r+1}\sum_{n=1}^{j-r}\frac{-\gamma_{k,j-r-n}}{n} \left(\frac{4}{s^{2}}\right)^{-n}.\]
In consequence, the singular parts cancel.
Finally, for \(a=0,1,\cdots,\frac{k}{2}\), we have
\[\left\langle\left(0,\frac{2^{k}(e_{0}\overline{e}_{1})^{2r+1}}{ \sqrt{z}},0\right),\gamma_{a}\right\rangle_{\mathrm{per,c}} =-\frac{2^{k}}{\sqrt{z}}\left\langle\frac{(e_{0}\overline{e}_{1}) ^{2r+1}}{\sqrt{z}},e_{0}^{a}e_{1}^{k-a}\right\rangle_{\mathrm{top}}\] \[=\delta_{a,2r+1}(\pi\sqrt{-1})^{a}2^{k}\frac{1}{\binom{k}{k/2}}.\qed\]
**Corollary 24**.: _Form the period matrix \(P\) using period pairing under the basis \(\left\{\delta_{b}\right\}_{b=0}^{k^{\prime}}\) of \(H_{1}^{\mathrm{rd}}\) and \(\left\{\omega_{k,j}\right\}_{j=0}^{k^{\prime}}\) of \(H_{\mathrm{dR}}^{1}\)._
\[P=(P_{bj})=\left\langle\delta_{b},v_{0}^{k}z^{j}\frac{\mathrm{d}z}{z}\right \rangle_{\mathrm{per}}=(\pi\sqrt{-1})^{b}(-1)^{k-b}2^{k-2j}\mathrm{IKM}_{k}(b,2j).\]
_for \(0\leq b\leq k^{\prime}\) and \(0\leq j\leq k^{\prime}\). Then, \(P\) is invertible._
**Remark 25** (determinant of period matrix).: _In fact,_
\[\det P=(\pi\sqrt{-1})^{k^{\prime}(k^{\prime}+1)/2}(-1)^{(2k-k^{\prime})(k^{ \prime}+1)/2}2^{(k-k^{\prime})(k^{\prime}+1)}\det\left(\mathrm{IKM}_{k}(b,2j) \right),\]
_where_
\[\det\left(\mathrm{IKM}_{k}(b,2j)\right)=\begin{cases}\det\left(M_{k^{\prime}+ 1}\right)&\text{if $k$ is odd;}\\ \det\left(N_{k^{\prime}+1}\right)&\text{if $k$ is even.}\end{cases}\]
_The definition of \(M_{r}\) and \(N_{r}\) are given in the appendix A.2 and its determinant is given in Corollary 37 explicitly._
Proof.: We first prove the claim. Let \(\left\{\delta_{b}\right\}_{b=0}^{k^{\prime}}\) of \(H_{1}^{\mathrm{rd}}\) and \(\left\{\omega_{k,j}\right\}_{j=0}^{k^{\prime}}\) of \(H_{1}^{\mathrm{rd}}\). Then, by the definition of \(H_{1}^{\mathrm{rd}}\) and \(H_{1}^{\mathrm{rd}}\), we have
\[\left\langle\left(0,\frac{2^{k}(e_{0}\overline{e}_{1})^{2r+1}}{\sqrt{z}},0 \right),\gamma_{a}\right\rangle_{\mathrm{per,c}} =-\frac{2^{k}}{\sqrt{z}}\left\langle\frac{(e_{0}\overline{e}_{1}) ^{2r+1}}{\sqrt{z}},e_{0}^{a}e_{1}^{k-a}\right\rangle_{\mathrm{top}}\] \[=\delta_{a,2r+1}(\pi\sqrt{-1})^{a}2^{k}\frac{1}{\binom{k}{k/2}}.\qed\]
We have
\[\left\langle\left(0,\frac{2^{k}(e_{0}\overline{e}_{1})^{2r+1}}{\sqrt{z}},0 \right),\gamma_{a}\right\rangle_{\mathrm{per,c}} =-\frac{2^{k}}{\sqrt{z}}\left\langle\frac{(e_{0}\overline{e}_{1}) ^{2r+1}}{\sqrt{z}},e_{0}^{a}e_{1}^{k-a}\right\rangle_{\mathrm{top}}\] \[=\delta_{a,2r+1}(\pi\sqrt{-1})^{a}2^{k}\frac{1}{\binom{k}{k/2}}.
### \(\mathbb{Q}\)-linear and quadratic relations on Bessel moments
We have now developed all the tools and computations to see the wonderful results in \(\mathbb{Q}\)-linear and quadratic relations on Bessel moments.
**Corollary 26**.: _For \(k=4r+4\),_
\[\sum_{j=0}^{r}\binom{k/2}{2j}(-1)^{j}\pi^{2j}\mathrm{IKM}_{k}(2j,2i)=\begin{cases} (-1)^{r}\pi^{2r+2}\mathrm{IKM}_{k}(2r+2,2i)&\text{if $0\leq i\leq r$},\\ (-1)^{r}\pi^{2r+2}\mathrm{IKM}_{k}^{\mathrm{reg}}(2r+2,2i)&\text{if $r+1\leq i \leq k^{\prime}$}.\end{cases}\]
_For \(k=4r+2\),_
\[\sum_{j=0}^{r}\binom{k/2}{2j}(-1)^{j}\pi^{2j}\mathrm{IKM}_{k}(2j,2i)=\begin{cases} 0&\text{if $0\leq i<r$},\\ \gamma_{k,i-r}2^{2i-2r}\sum_{j=0}^{r}\binom{k/2}{2j}(-1)^{j}\pi^{2j}\mathrm{ IKM}_{k}(2j,2r)&\text{if $r<i\leq 2r$}.\end{cases} \tag{21}\]
Proof.: By Lemma 15, we know that
\[\sum_{j=0}^{k/4}\binom{k/2}{2j}\gamma_{2j} =0\text{ if $k\equiv 0\pmod{4}$};\] \[\sum_{j=0}^{(k-2)/4}\binom{k/2}{2j}\gamma_{2j} =0\text{ if $k\equiv 2\pmod{4}$}.\]
Then take pairing with \(\widetilde{\omega}_{k,i}\) in the compactly supported de Rham cohomology. Combining the result in Proposition 23, we obtain the desired algebraic relation.
**Remark 27**.: _The above linear algebraic relations for \(i\) in the range \(0\leq i\leq r\), under the name sum rule identities, are previously proved by analytic method in [22] (see [22, (1.3)] for \(k\equiv 2\pmod{4}\) and [22, (1.5)] for \(k\equiv 4\pmod{4}\))._
**Corollary 28**.: _For any \(k\) and any \(0\leq a\leq k^{\prime}\), the dimension of the \(\mathbb{Q}\)-vector space generated by the Bessel moments has an upper bound:_
\[\dim\operatorname{span}_{\mathbb{Q}}\left\{\mathrm{IKM}_{k}(a,2j)\mid j\in\{0 \}\cup\mathbb{N}\right\}\leq k^{\prime}+1.\]
_For \(k\) is even, the dimension of the \(\mathbb{Q}\)-vector space generated by the regularized Bessel moments has an upper bound:_
\[\dim\operatorname{span}_{\mathbb{Q}}\left\{\mathrm{IKM}_{k}^{\mathrm{reg}}(k/ 2,2j)\mid j\in\{0\}\cup\mathbb{N}\right\}\leq k^{\prime}+1.\]
_In the even case, note that when \(0\leq j\leq\left\lfloor\frac{k-1}{4}\right\rfloor=r\), we do not need to regularize the Bessel moments, that is, \(\mathrm{IKM}_{k}^{\mathrm{reg}}(k/2,2j)=\mathrm{IKM}_{k}(k/2,2j)\) (see Lemma 19)._
Proof.: We know that the dimension of \(H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k}\mathrm{ Kl}_{2}\right)\) and \(H^{1}_{\mathrm{dR},c}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \mathrm{Kl}_{2}\right)\) are \(k^{\prime}+1\).
Now, since \(\left\{v_{0}^{k}z^{j}\frac{\mathrm{d}z}{z}\right\}_{j=0,\cdots,k^{\prime}}\) form a basis of \(H^{1}_{\mathrm{dR}}\left(\mathbb{G}_{m},\sqrt{z}\operatorname{Sym}^{k} \mathrm{Kl}_{2}\right)\), we may write \(v_{0}^{k}z^{s}\frac{\mathrm{d}z}{z}\) into the \(\mathbb{Q}\)-linear combination of the basis. Then after we take the period pairing with the rapid decay cycle \(\delta_{a}\) (see Proposition 22), the \(\mathbb{Q}\)-linear relation becomes a \(\mathbb{Q}\)-linear relation to the Bessel moments
\[\left\{\mathrm{IKM}_{k}(a,2j)\mid j=0,\cdots,k^{\prime}\right\}\cup\left\{ \mathrm{IKM}_{k}(a,2s)\right\}.\]
For \(k\) is even, similarly, when \(s>k^{\prime}\), write \(\widetilde{\omega}_{k,s}\in H^{1}_{\mathrm{dR},c}\left(\mathbb{G}_{m},\sqrt{z} \operatorname{Sym}^{k}\operatorname{Kl}_{2}\right)\) into the \(\mathbb{Q}\)-linear combination of basis. Then taking the compactly supported period paring (see Proposition 23), the \(\mathbb{Q}\)-linear equivalence become the \(\mathbb{Q}\)-linear relation to the regularized Bessel moments
\[\left\{\operatorname{IKM}_{k}(a,2j)\mid j=0,\cdots,r-1\right\}\cup\left\{ \operatorname{IKM}_{k}^{\mathrm{reg}}(a,2j)\mid j=r,\cdots,k^{\prime}\right\} \cup\left\{\operatorname{IKM}_{k}^{\mathrm{reg}}(a,2s)\right\}.\qed\]
**Remark 29**.: _In order to write down the explicit \(\mathbb{Q}\)-linear combination in the above corollary, we need to trace back to the proof of Lemma 7 in [10, SS2.9.13]. In [1], they provide a recurrence to find out the \(\mathbb{Q}\)-linear combination for Bessel moments by analyzing the symmetric power of the modified Bessel differential operator. Moreover, there is a similar result in [11] for the \(\mathbb{Q}\)-linear dependence for Bessel moments \(\operatorname{IKM}_{k}(a,2j-1)\). Our result is parallel to his result._
**Proposition 30**.: _Using the basis of \(H^{\mathrm{rd}}_{1}\), \(H^{\mathrm{mod}}_{1}\), \(H^{1}_{\mathrm{dR}}\), and \(H^{1}_{\mathrm{dR},c}\) described in Corollaries 13, 18, we form the pairing matrices that we have defined in previous texts:_
1. \(B\)_, the Betti intersection pairing between_ \(H^{\mathrm{rd}}_{1}\) _and_ \(H^{\mathrm{mod}}_{1}\)_. (Proposition_ 16_.)_
2. \(D\)_, the Poincare pairing between_ \(H^{1}_{\mathrm{dR},c}\) _and_ \(H^{1}_{\mathrm{dR}}\)_. (Proposition_ 12_.)_
3. \(P\)_, the period pairing between_ \(H^{1}_{\mathrm{1}}\) _and_ \(H^{1}_{\mathrm{dR}}\)_. (Proposition_ 22_.)_
4. \(P_{c}\)_, the period pairing between_ \(H^{1}_{\mathrm{dR},c}\) _and_ \(H^{\mathrm{mod}}_{1}\)_. (Proposition_ 23_.)_
5. \(B_{\mathrm{mid}}\)_, the Betti pairing on_ \(H^{\mathrm{mid}}_{1}\)_._
6. \(D_{\mathrm{mid}}\)_, the Poincare pairing on_ \(H^{1}_{\mathrm{mid}}\)_._
7. \(P_{\mathrm{mid}}\)_, the period pairing between_ \(H^{\mathrm{mid}}_{1}\) _and_ \(H^{1}_{\mathrm{mid}}\)_._3__
Footnote 3: Note that \(B,D,P,P_{c}\) are square matrices of size \(k^{\prime}+1\) and that \(B_{\mathrm{mid}},D_{\mathrm{mid}},P_{\mathrm{mid}}\) are of size \(k^{\prime}+1-\delta_{4\mathbb{Z}+2,k}\). When \(k\equiv 0,1,3\pmod{4}\), we have \(B=B_{\mathrm{mid}}\), \(D=D_{\mathrm{mid}}\), and \(P_{\mathrm{mid}}=P=P_{c}^{4}\).
_Then we have the algebraic quadratic relations_
\[PD^{-1}P_{c} =(-1)^{k}(2\pi\sqrt{-1})^{k+1}B, \tag{23}\] \[P_{\mathrm{mid}}D_{\mathrm{mid}}^{-1}P_{\mathrm{mid}}^{t} =(-1)^{k}(2\pi\sqrt{-1})^{k+1}B_{\mathrm{mid}}. \tag{22}\]
Proof.: This quadratic relation is a general phenomenon on periods of meromorphic flat connection on complex manifolds. We refer to [11] for more details.
From this proposition, when \(k\equiv 0,1,3\pmod{4}\), we see the Bessel moments have quadratic relation given by (22). On the other hand, when \(k\equiv 2\pmod{4}\), the relation involves some combination of Bessel moments and regularized Bessel moments in the matrix \(P_{c}\). In the following discussion, we provide another expression of this relation, and we will see the pure quadratic relation involving only Bessel moments.
When \(k\equiv 2\pmod{4}\), write \(k=4r+2\) and define two \((k^{\prime}+1)\times k^{\prime}\) matrices with rational coefficients:
\[R_{k} =\left(\begin{array}{cccc}I_{r}&&0&&\\ 0&-\gamma_{k,1}&\cdots&-\gamma_{k,k^{\prime}-r}\\ 0&&I_{r}&&\end{array}\right)\] \[L_{k} =\left(\begin{array}{cccc}0&-\binom{k/2}{2}&0&-\binom{k/2}{4}& \cdots&0&-\binom{k/2}{k^{\prime}}\\ &&I_{k^{\prime}}&&\end{array}\right)\]
By the linear relations (21) in Corollary 26, we have
\[PR_{k}=L_{k}P_{\mathrm{mid}}.\]
Also, \(P_{\rm mid}\) is obtained by deleting the first row of \(L_{k}P_{\rm mid}\). Set \(\widetilde{B}=L_{k}B_{\rm mid}L_{k}^{t}\) and \(\widetilde{D}=R_{k}D_{\rm mid}^{-1}R_{k}^{t}\) which are square matrices of size \(k^{\prime}+1\) with rational coefficients. Then \(B_{\rm mid}\) is obtained by deleting the first row and column from \(\widetilde{B}\). Therefore, the quadratic relation (23) (involving linear combinations of Bessel moments) now becomes
\[P\widetilde{D}P^{t}=(-1)^{k}(2\pi\sqrt{-1})^{k+1}\widetilde{B}\]
(involving pure Bessel moments).
**Remark 31**.: _The matrices \(\widetilde{B}\) and \(\widetilde{D}\) in the above expression are singular because of the linear relations (21) in Corollary 26. Note that this expression is equivalent to the middle part quadratic relation (23) together with linear relations (21)._
**Remark 32**.: _When \(k=4r+2\), the middle part period matrix is a \(k^{\prime}\times k^{\prime}\) matrix given by_
\[P_{\rm mid}=\left(\left\langle\delta_{b},\omega_{k,i}\right\rangle_{\rm per} \right)_{b=1,\cdots,k^{\prime},\ i=0,\cdots,\hat{r},\cdots,k^{\prime}}.\]
_The determinant of this matrix \(P_{\rm mid}\) is given by_
\[\det P_{\rm mid}=\pi^{r(k+1)}\sqrt{-1}^{r(k^{\prime}-1)}\,\frac{2^{r(2r+1)}}{ r!}\prod_{a=1}^{k^{\prime}}\frac{(2a+1)^{k^{\prime}+1-a}}{(a+1)^{a+1}}.\]
Proof.: Note that \(P_{\rm mid}\) appears in the upper left of the compactly supported period pairing matrix \(P_{\rm c}\). Now, just take determinant on (22) and then use the results of Propositions 12, 17, and Remark 25.
## Appendix A The Bessel operator and determinants of Bessel moments
### Symmetric power of the modified Bessel differential operator
Consider the Weyl algebra \(\mathbb{Q}\langle t,\partial_{t}\rangle\) consisting of ordinary differential operators. Write \(\theta=t\partial_{t}\). The modified Bessel differential operator is an element in the subalgebra \(\mathbb{Q}\langle t^{2},\theta\rangle\) given by \(L_{2}=\theta^{2}-t^{2}\). The corresponding solutions are the modified Bessel functions \(I_{0}(t)\) and \(K_{0}(t)\). The \(n\)-th symmetric power \(L_{n+1}\in\mathbb{Q}\left\langle\theta,t^{2}\right\rangle\) of \(L_{2}\) has order \(n+1\) and the corresponding solutions are \(I_{0}^{a}(t)K_{0}^{n-a}(t)\) for \(0\leq a\leq n\). By [1, 1], the operator \(L_{n+1}=L_{n+1,n}\) can be obtained by the recurrence relation as follows:
\[\begin{split} L_{0,n}&=1,\\ L_{1,n}&=\theta,\\ L_{k+1,n}&=\theta L_{k,n}-t^{2}k\left(n+1-k\right)L_{ k-1,n},\ 1\leq k\leq n.\end{split} \tag{24}\]
Here we provide two more concrete results about the operator \(L_{n+1}\).
Put the degree on \(\mathbb{Q}\langle t,\theta\rangle\) as \(\deg t=\deg\theta=1\). The associated graded ring \(\operatorname{gr}\mathbb{Q}\langle t,\theta\rangle=\mathbb{Q}[\overline{t}, \overline{\theta}]\) is a polynomial ring where \(\overline{t}\) and \(\overline{\theta}\) are the images of \(t\) and \(\theta\), respectively.
**Proposition 33**.: _The image of \(L_{n+1}\) in \(\mathbb{Q}[\overline{t},\overline{\theta}]\) is the polynomial_
\[\overline{L}_{n+1}(\overline{t},\overline{\theta})=\begin{cases}\prod_{i=1}^{r} \left(\overline{\theta}^{2}-(2i-1)^{2}\overline{t}^{2}\right)&\text{if $n+1=2r$ is even},\\ \overline{\theta}\prod_{i=1}^{r}\left(\overline{\theta}^{2}-(2i)^{2}\overline{ t}^{2}\right)&\text{if $n+1=2r+1$ is odd}.\end{cases} \tag{25}\]
Proof.: Taking the images in \(\mathbb{Q}[\overline{t},\overline{\theta}]\) of the relation (24), we obtain \(\overline{L}_{n+1}=\overline{L}_{n+1,n}\) satisfying
\[\overline{L}_{0,n} =1,\] \[\overline{L}_{1,n} =\overline{\theta},\] \[\overline{L}_{k+1,n} =\overline{\theta}\,\overline{L}_{k,n}-t^{2}k\left(n+1-k\right) \overline{L}_{k-1,n},\ 1\leq k\leq n.\]
The formula (25) is then a consequence of the following combinatorics lemma.
**Lemma 34**.: _For any \(m\in\mathbb{N}\), set the recurrence for \(\lambda_{n,m}(x),n\in\mathbb{N}\cup\left\{0\right\}\),_
\[\lambda_{0,m} =1,\] \[\lambda_{1,m} =x,\] \[\lambda_{k+1,m} =x\lambda_{k,m}-k\left(m+1-k\right)\lambda_{k-1,m},\quad k\geq 1.\]
_Then we have_
\[\lambda_{m+1,m}(x)=\begin{cases}\prod_{i=1}^{r}\left(x^{2}-(2i-1)^{2}\right), &m+1=2r,\\ x\prod_{i=1}^{r}\left(x^{2}-(2i)^{2}\right),&m+1=2r+1.\end{cases} \tag{26}\]
Proof.: Notice that \(\lambda_{i,m}\) is a monic integral polynomial of degree \(i\) for any \(m\). Consider the formal generating function4:
Footnote 4: This generating function satisfies the differential equation \(-y^{4}f^{\prime\prime}(y)-(2y^{3}-my^{3})f^{\prime}(y)+(1-xy+my^{2})f(y)=1\).
\[f_{m,x}(y)=\sum_{i=0}^{\infty}\lambda_{i,m}(x)y^{i}.\]
An induction on \(i\) immediately yields the relation \(f_{m,x-1}\left(y\right)+f_{m,x+1}\left(y\right)=2f_{m-1,x}\left(y\right)\) for any \(m\) and \(x\).5 In other words, \(\lambda_{i,m}(x-1)+\lambda_{i,m}(x+1)=2\lambda_{i,m-1}(x)\) for all \(i\). Therefore we obtain
Footnote 5: Equality also holds when viewed as the solution of the corresponding differential equations.
\[\lambda_{m+1,m}(x-1)+\lambda_{m+1,m}(x+1)=2\lambda_{m+1,m-1}(x)=2x\lambda_{m,m -1}(x)\]
by the recurrence. Thus, since \(\lambda_{m+1,m}(x)\) is a monic polynomial of degree \(m+1\), it is uniquely determined by the above functional equation when the polynomial \(\lambda_{m,m-1}(x)\) is given. Hence, by the induction, it suffices to show that
\[\prod_{i=1}^{r}\left((x-1)^{2}-(2i-1)^{2}\right)+\prod_{i=1}^{r} \left((x+1)^{2}-(2i-1)^{2}\right) =2x^{2}\prod_{i=1}^{r-1}\left(x^{2}-(2i)^{2}\right);\] \[(x-1)\prod_{i=1}^{r}\left((x-1)^{2}-(2i)^{2}\right)+(x+1)\prod_{ i=1}^{r}\left((x+1)^{2}-(2i)^{2}\right) =2x\prod_{i=1}^{r}\left(x^{2}-(2i-1)^{2}\right),\]
which are straightforward to verify.
**Proposition 35**.: _Write \(L_{n+1}\) into the form \(\sum_{i}t^{i}P_{i}\left(\theta\right)\), where \(P_{i}(x)\in\mathbb{Q}\left[x\right]\). Define the integers \(a,b\) by_
\[a=\max\left\{i\mid P_{i}\neq 0\right\};\quad b=\min\left\{i\mid P_{i}\neq 0 \right\}.\]
_Then we have \(a=2\left\lfloor\frac{n+1}{2}\right\rfloor\) and \(b=0\)._
Proof.: By the recurrence (24), if we set \(\deg t=1\) and \(\deg\theta=0\), we easily see that \(L_{j}\) has degree \(2\left\lfloor\frac{j}{2}\right\rfloor\) by the interchanging relation \(\theta t=t+t\theta\). Thus, we have \(a=2\left\lfloor\frac{n+1}{2}\right\rfloor\). On the other hand, if we set \(\deg t=0\) and \(\deg\theta=1\), we see that the leading term of \(L_{k+1}\) is given by \(\theta^{k+1}\). Therefore, we conclude that \(b=0\) by Proposition 33.
### Two-scale Bessel moments
Throughout this paper, we take for granted the properties of modified Bessel functions \(I_{0}(t),K_{0}(t)\) in the treatise [20].
Recall the Bessel moments \(\operatorname{IKM}_{k}(a,b)\) are defined in section 5.1. For \(r=1,2,\cdots\), define the two \(r\times r\) matrices
\[M_{r} =\Bigl{(}\operatorname{IKM}_{2r-1}(i-1,2j-2)\Bigr{)}_{1\leq i,j \leq r},\] \[N_{r} =\Bigl{(}\operatorname{IKM}_{2r}(i-1,2j-2)\Bigr{)}_{1\leq i,j \leq r}.\]
We aim to determine the two scalars \(\det M_{r}\), \(\det N_{r}\) adapting the inductive methods explored by Zhou [17].
For the initial values, we have [20, SS13.21, Eq.(8)]
\[M_{1}=\int_{0}^{\infty}K_{0}(t)\,\mathrm{d}t=\frac{\pi}{2}, \tag{27}\]
and, by [20, SS13.72], one has
\[N_{1}=\int_{0}^{\infty}K_{0}^{2}(t)\,\mathrm{d}t =\frac{1}{2}\int_{0}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty }^{\infty}e^{-2t\cosh x\cosh y}\,\mathrm{d}x\mathrm{d}y\mathrm{d}t\] \[=\frac{1}{4}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{ \mathrm{d}x\mathrm{d}y}{\cosh x\cosh y}\] \[=\frac{\pi^{2}}{4}.\]
For \(r=0,1,\cdots\), let \(\omega_{2r+1}(x)\) be the Wronskian of the \((2r+1)\) functions \(f_{i}(x)\)
\[f_{i}(x)=\begin{cases}\int_{0}^{\infty}I_{0}(xt)I_{0}^{i-1}(t)K_{0}^{2r-i+1}(t )\,\mathrm{d}t,&1\leq i\leq r,\\ \int_{0}^{\infty}K_{0}(xt)I_{0}^{i-r-1}(t)K_{0}^{3r-i+1}(t)\,\mathrm{d}t,&r<i \leq 2r+1.\end{cases} \tag{28}\]
The functions \(f_{i}\) are well-defined analytic on the interval \((0,2)\) and hence so is \(\omega_{2r+1}\). In particular, \(\omega_{1}(x)=\int_{0}^{\infty}K_{0}(xt)\,\mathrm{d}t=\frac{\pi}{2x}\) by (27).
For \(r=1,2,\cdots\), let \(\omega_{2r}(x)\) be the Wronskian of the \(2r\) functions \(g_{i}(x)\) where
\[g_{i}(x)=\begin{cases}\int_{0}^{\infty}I_{0}(xt)I_{0}^{i-1}(t)K_{0}^{2r-i}(t) \,\mathrm{d}t,&1\leq i\leq r,\\ \int_{0}^{\infty}K_{0}(xt)I_{0}^{i-r-1}(t)K_{0}^{3r-i}(t)\,\mathrm{d}t,&r<i \leq 2r.\end{cases}\]
All entries in the Wronskian matrix are well-defined analytic functions on the interval \((0,1)\) and so is \(\omega_{2r}(x)\).
**Proposition 36**.: _The determinant \(\omega_{k}(x)\) and its evaluation at \(x=1\) are given by the following formulae:_
1. _For_ \(r=1,2,\cdots\)_,_ \[\omega_{2r+1}(x) =\frac{(-1)^{r(r+1)}}{2}\left[\frac{1}{x^{2}}\prod_{i=1}^{r}\frac {(2i)^{2}}{((2i)^{2}-x^{2})}\right]^{\frac{2r+1}{2}}\Gamma\Bigl{(}\frac{r+1}{ 2}\Bigr{)}^{2}\big{(}\det N_{r}\big{)}^{2},\] \[\omega_{2r+1}(1) =(-1)^{r(r+1)/2}\det M_{r}\cdot\det M_{r+1}.\]
2. _For_ \(r=2,3,\cdots\)_,_ \[\omega_{2r}(x) =(-1)^{\frac{r(r+1)}{2}}\left[\frac{1}{x}\prod_{i=1}^{r}\frac{(2i -1)^{2}}{((2i-1)^{2}-x^{2})}\right]^{r}\big{(}\det M_{r}\big{)}^{2},\] \[\lim_{x\to 1^{-}}2^{r}(1-x)^{r}\omega_{2r}(x) =(-1)^{r(r+1)/2}(r-1)!\det N_{r-1}\cdot\det N_{r}.\]
The above proposition leads to the recursive formulae
\[\det M_{r}\cdot\det M_{r+1} =\frac{1}{2}\left[\frac{2^{r}r!\sqrt{2r+1}}{(2r+1)!!}\right]^{2r+1} \Gamma\Big{(}\frac{r+1}{2}\Big{)}^{2}\big{(}\det N_{r}\big{)}^{2} (r\geq 1),\] \[\det N_{r-1}\cdot\det N_{r} =\frac{2^{r}}{(r-1)!}\left[\frac{(2r-1)!!\sqrt{2r}}{2^{r}r!} \right]^{2r}\big{(}\det M_{r}\big{)}^{2} (r\geq 2). \tag{29}\]
With the initial data \(M_{1}=\frac{\pi}{2},N_{1}=\frac{\pi^{2}}{4}\) and the relation
\[\Gamma\left(\frac{r}{2}\right)\Gamma\left(\frac{r+1}{2}\right)=\frac{(r-1)!}{2 ^{r-1}}\sqrt{\pi},\]
one immediately obtains the following results by induction.
**Corollary 37**.: _For positive integers \(r\), we have_
\[\det M_{r} =\sqrt{\pi}^{\,(r+1)}\sqrt{2}^{\,r(r-3)}\prod_{a=1}^{r-1}\frac{a ^{r-a}}{\sqrt{2a+1}^{2a+1}},\] \[\det N_{r} =\frac{1}{\Gamma\left(\frac{r+1}{2}\right)}\frac{\sqrt{\pi}^{(r+1 )^{2}}}{\sqrt{2}^{\,r(r+3)}}\prod_{a=1}^{r-1}\frac{(2a+1)^{r-a}}{(a+1)^{a+1}}.\]
_In particular, the two scalars \(\sqrt{(2r-1)!!}\pi^{-m_{r}}\det M_{r}\) and \(\pi^{-n_{r}}\det N_{r}\) are positive rational numbers, where \(m_{r}=\frac{r(r+1)}{2}\) and \(n_{r}=\left\lfloor\frac{(r+1)^{2}}{2}\right\rfloor\)._
### The Vanhove operators
The adjoint \(L_{n+1}^{*}\) of \(L_{n+1}\) is derived under the convolution \((t,\partial_{t})\mapsto(t,-\partial_{t})\) (so \(\theta\mapsto-(\theta+1)\)) and hence the leading term of the signed adjoint \(\Lambda_{n+1}=(-1)^{n+1}L_{n+1}^{*}\) equals \(\overline{L}_{n+1}(\overline{\theta},\overline{t})\) by Proposition 33. For \(F(xt)=I_{0}(xt),K_{0}(xt)\) and \(G(t)=I_{0}^{a}(t)K_{0}^{n-a}(t)\), we have, by integration by parts,
\[\int_{0}^{\infty}(\Lambda_{n+1}F(xt))G(t)\,\mathrm{d}t=(-1)^{n+1}\int_{0}^{ \infty}F(xt)(L_{n+1}G(t))\,\mathrm{d}t=0.\]
The _Vanhove operator_\(V_{n+1}\in\mathbb{Q}\left\langle\partial_{x},x^{\pm 1}\right\rangle\) is of order \((n+1)\) such that \(V_{n+1}F(xt)=\Lambda_{n+1}F(xt)\). So one has \(V_{n+1}f_{i}=0\) for \(f_{i}(x)\) in (28) and consequently \(\omega_{n+1}(x)\) satisfies a first order linear differential equation (See (30) below).
**Lemma 38**.: _Let \(\lambda_{n+1}(x)=\overline{L}_{n+1}(1,x^{-1})\in\mathbb{Q}[x^{-1}]\) of order \(2\left\lfloor\frac{n+1}{2}\right\rfloor\) with respect to \(x^{-1}\). Let \(\theta_{x}=x\partial_{x}\). One has_
\[V_{n+1} =\lambda_{n+1}(x)\theta_{x}^{n+1}+(n+1)\left[\lambda_{n+1}(x)+ \frac{x\lambda_{n+1}^{\prime}(x)}{2}\right]\theta_{x}^{n}+\delta_{1}\] \[=x^{n+1}\lambda_{n+1}(x)\partial_{x}^{n+1}+\frac{n+1}{2}x^{n} \left[(n+2)\lambda_{n+1}(x)+x\lambda_{n+1}^{\prime}(x)\right]\partial_{x}^{n} +\delta_{2}\]
_where \(\delta_{1},\delta_{2}\) are of order at most \((n-1)\) with respect to \(\partial_{x}\) in \(\mathbb{Q}\langle\partial_{x},x^{\pm 1}\rangle\)._
Proof.: By Vanhove [10], there exists \(\widetilde{L}_{n-1}\in\mathbb{Q}\langle\partial_{x},x^{\pm 1}\rangle\) of order \((n-1)\) such that
\[t\widetilde{L}_{n-1}F(xt)=\Lambda_{n+1}\frac{F(xt)}{t}.\]
The operator \(\widetilde{L}_{n-1}\) is of the form ([11, Eq. (4.29)])
\[\widetilde{L}_{n-1}=x^{2}\lambda(x)\theta_{x}^{n-1}+x^{2}\left[2(n-1)\lambda( x)+\frac{n-1}{2}x\lambda^{\prime}(x)\right]\theta_{x}^{n-2}+\widetilde{\delta}\]
where \(\widetilde{\delta}\) is of order at most \((n-3)\) with respect to \(\partial_{x}\) in \(\mathbb{Q}\langle\partial_{x},x^{\pm 1}\rangle\)6.
Footnote 6: Comparing \(\widetilde{L}_{n-1}(\theta_{x})\) with Zhou’s Vanhove operator \(\widetilde{L}_{n-1}(\theta_{u})\), we set his variable \(u=x^{2}\) and multiply \(\widetilde{L}_{n-1}(\theta_{u})\) by \(2^{n-1}\).
Set
\[\Delta_{n}(\theta_{t})=\Lambda_{n+1}(\theta_{t})-\Lambda_{n+1}(\theta_{t}-1).\]
Since \(\theta_{t}\frac{1}{t}=\frac{1}{t}(\theta_{t}-1)\) in \(\mathbb{Q}\left\langle\partial_{t},t\right\rangle\), we have
\[\Lambda_{n+1}(\theta_{t})F(xt) =t\Lambda_{n+1}(\theta_{t})\frac{F(xt)}{t}+\Delta_{n}(\theta_{t} )F(xt)\] \[=\big{[}t^{2}\widetilde{L}_{n-1}(\theta_{x})+\Delta_{n}(\theta_{t })\big{]}F(xt).\]
Since \(t^{2}F(xt)=\frac{1}{x^{2}}\theta_{x}^{2}F(xt)\), we have
\[t^{2}\widetilde{L}_{n-1}(\theta_{x})F(xt) =\widetilde{L}_{n-1}(\theta_{x})\frac{1}{x^{2}}\theta_{x}^{2}F(xt)\] \[=\frac{1}{x^{2}}\widetilde{L}_{n-1}(\theta_{x}-2)\theta_{x}^{2}F (xt)\]
and the differential operator reads
\[\lambda(x)\theta_{x}^{n+1}+\frac{n-1}{2}x\lambda^{\prime}(x)\theta_{x}^{n}+ \delta_{3}\]
where \(\delta_{3}\) is of order at most \((n-1)\) with respect to \(\partial_{x}\) in \(\mathbb{Q}\langle\partial_{x},x^{\pm 1}\rangle\).
On the other hand, since \(\theta_{t}F(xt)=\theta_{x}F(xt)\) and by Proposition 33, we have
\[\Delta_{n}(\theta_{t})F(xt)=\left[\Lambda_{n+1}\left(\theta\right)-\Lambda_{n+ 1}\left(\theta-1\right)\right]F\left(xt\right)=\left[((n+1)\lambda(x)+x \lambda^{\prime}(x))\,\theta_{x}^{n}+\delta_{4}\right]F(xt)\]
where \(\delta_{4}\) is of order at most \((n-1)\) with respect to \(\partial_{x}\) in \(\mathbb{Q}\langle\partial_{x},x^{-1}\rangle\). Therefore the leading two terms of \(V_{n+1}\) are determined.
_Rationality of \(\omega_{n+1}(x)\)._ Lemma 38 yields
\[\omega_{n+1}^{\prime}(x)=-\frac{n+1}{2x}\left[(n+2)+\frac{x\lambda_{n+1}^{ \prime}(x)}{\lambda_{n+1}(x)}\right]\omega_{n+1}(x). \tag{30}\]
Since \(\omega_{n+1}(x)\) takes real values on \((0,1)\), one obtains
\[\omega_{n+1}(x)=C_{n+1}\left[(-1)^{\left\lfloor\frac{n+1}{2}\right\rfloor}x^{n +2}\lambda_{n+1}(x)\right]^{-\frac{n+1}{2}}\]
for some real constant \(C_{n+1}\) for each \(n=0,1,\cdots\). We shall determine \(C_{n+1}\) by investigating the limiting behavior of \(\omega_{n+1}(x)\) as \(x\to 0^{+}\).
### Singularities of \(\omega_{n+1}(x)\)
For \(F(xt)=I_{0}(xt)\) or \(K_{0}(xt)\), we have
\[\partial_{x}F(xt)=tF^{\prime}(xt),\quad\partial_{x}^{2}F(xt)=-\frac{t}{x}F^{ \prime}(xt)+t^{2}F(xt).\]
So \(\omega_{n+1}(x)\) coincides with the determinant of the matrix \(\Omega_{n+1}(x)\) of size \((n+1)\) whose \((i,j)\)-entry is
\[\begin{cases}\int_{0}^{\infty}I_{0}(xt)I_{0}^{j-1}(t)K_{0}^{n-j+1}(t)t^{i-1}\, \mathrm{d}t,&1\leq j\leq\left\lfloor\frac{n+1}{2}\right\rfloor,i=1,3,\cdots,2 \left\lfloor\frac{n}{2}\right\rfloor+1,\\ \int_{0}^{\infty}tI_{0}^{\prime}(xt)I_{0}^{j-1}(t)K_{0}^{n-j+1}(t)t^{i-2}\, \mathrm{d}t,&1\leq j\leq\left\lfloor\frac{n+1}{2}\right\rfloor,i=2,4,\cdots,2 \left\lfloor\frac{n+1}{2}\right\rfloor,\\ \int_{0}^{\infty}K_{0}(xt)I_{0}^{j-r-1}(t)K_{0}^{n-j+r+1}(t)t^{i-1}\, \mathrm{d}t,&\left\lfloor\frac{n+1}{2}\right\rfloor<j\leq n+1,i=1,3,\cdots,2 \left\lfloor\frac{n}{2}\right\rfloor+1,\\ \int_{0}^{\infty}tK_{0}^{\prime}(xt)I_{0}^{j-r-1}(t)K_{0}^{n-j+r+1}(t)t^{i-2}\, \mathrm{d}t,&\left\lfloor\frac{n+1}{2}\right\rfloor<j\leq n+1,i=2,4,\cdots,2 \left\lfloor\frac{n+1}{2}\right\rfloor.\end{cases}\]
_Properties of \(I_{0}(t)\) and \(K_{0}(t)\)._ We collect some properties of the modified Bessel functions \(I_{0}(t)\) and \(K_{0}(t)\) in order to obtain information of \(\omega_{n+1}(x)\) as \(x\to 0^{+},1^{-}\).
The function \(I_{0}(t)\) is entire and even; it is real and increasing on the half line \([1,\infty)\). The function \(K_{0}(t)\) has a logarithmic pole at \(x=0\); it is real and decreasing on \((0,\infty)\). On the half plane \(\Re(t)>0\), we have the asymptotic approximations
\[I_{0}(t)=\frac{e^{t}}{\sqrt{2\pi t}}\left(1+O\Big{(}\frac{1}{t}\Big{)}\right), \quad K_{0}(t)=\sqrt{\frac{\pi}{2t}}e^{-t}\left(1+O\Big{(}\frac{1}{t}\Big{)}\right)\]
as \(t\to\infty\). In particular, for a positive integer \(a\),
\[[I_{0}(t)K_{0}(t)]^{a}-\frac{1}{(2t)^{a}}=O\Big{(}\frac{1}{t^{a+1}}\Big{)}\]
as \(t\to\infty\) along the real line. One has the boundedness
\[\sup_{t>0}\frac{|tK_{0}^{\prime}(t)+1|}{t(1+|\log t|)}<\infty.\]
Let \(c=0,1,\cdots\). One has the evaluation [11, SS13.21, Eq.(8)]
\[\int_{0}^{\infty}K_{0}(t)t^{c}\,\mathrm{d}t=2^{c-1}\Gamma\Big{(}\frac{c+1}{2} \Big{)}^{2}.\]
_Integrations._ With the data collected above, we list some consequences for the integrals appeared in \(\Omega_{n+1}(x)\).
For \(0\leq a<b\) and \(c\geq 0\), one obtains
\[\int_{0}^{\infty}K_{0}(xt)I_{0}^{a}(t)K_{0}^{b}(t)t^{c}\,\mathrm{d }t =O(\log x), \tag{32}\] \[\int_{0}^{\infty}I_{0}^{\prime}(xt)I_{0}^{a}(t)K_{0}^{b}(t)t^{c} \,\mathrm{d}t =O(x) \tag{31}\]
and
\[\int_{0}^{\infty}tK_{0}^{\prime}(xt)I_{0}^{a}(t)K_{0}^{b}(t)t^{c} \,\mathrm{d}t =\frac{-1}{x}\left[\int_{0}^{\infty}I_{0}^{a}(t)K_{0}^{b}(t)t^{c} \,\mathrm{d}t-\int_{0}^{\infty}\big{(}xtK_{0}^{\prime}(xt)+1\big{)}I_{0}^{a}( t)K_{0}^{b}(t)t^{c}\,\mathrm{d}t\right]\] \[=\frac{-1}{x}\int_{0}^{\infty}I_{0}^{a}(t)K_{0}^{b}(t)t^{c}\, \mathrm{d}t+O(\log x) \tag{33}\]
as \(x\to 0^{+}\). For \(0\leq c\leq a\) and as \(x\to 0^{+}\), we thus have
\[\int_{0}^{\infty}K_{0}(xt)I_{0}^{a}(t)K_{0}^{a}(t)t^{c}\,\mathrm{d }t =O\Big{(}\int_{0}^{\infty}K_{0}(xt)\,\mathrm{d}t\Big{)}=O\Big{(}\frac{1}{x} \Big{)}, \tag{35}\] \[\int_{0}^{\infty}tK_{0}^{\prime}(xt)I_{0}^{a}(t)K_{0}^{a}(t)t^{c} \,\mathrm{d}t =O\Big{(}\int_{0}^{\infty}tK_{0}^{\prime}(xt)\,\mathrm{d}t\Big{)}=O \Big{(}\frac{1}{x^{2}}\Big{)}. \tag{34}\]
If \(0\leq a<c\) and as \(x\to 0^{+}\), then
\[\int_{0}^{\infty}K_{0}(xt)I_{0}^{a}(t)K_{0}^{a}(t)t^{c}\mathrm{d }t =\int_{0}^{\infty}\frac{K_{0}(xt)t^{c-a}}{2^{a}}\mathrm{d}t+\int_{0 }^{\infty}K_{0}(xt)\Big{[}I_{0}^{a}(t)K_{0}^{a}(t)-\frac{1}{(2t)^{a}}\Big{]}t^{ c}\mathrm{d}t\] \[=\frac{2^{c-2a-1}}{x^{c-a+1}}\Gamma\Big{(}\frac{c-a+1}{2}\Big{)} ^{2}+O\Big{(}\frac{1}{x^{c-a}}\Big{)},\] \[\int_{0}^{\infty}tK_{0}^{\prime}(xt)I_{0}^{a}(t)K_{0}^{a}(t)t^{c} \mathrm{d}t =\int_{0}^{\infty}\frac{tK_{0}^{\prime}(xt)t^{c-a}}{2^{a}}\mathrm{ d}t+\int_{0}^{\infty}tK_{0}^{\prime}(xt)\Big{[}I_{0}^{a}(t)K_{0}^{a}(t)-\frac{1}{(2t)^{a}} \Big{]}t^{c}\mathrm{d}t\] \[=\frac{c-a+1}{2^{a}x}\int_{0}^{\infty}K_{0}(xt)t^{c-a}\,\mathrm{d }t+O\Big{(}\frac{1}{x^{2}}\Big{)}\] \[=O\Big{(}\frac{1}{x^{c-a+2}}\Big{)}. \tag{36}\]
On the real line, we have [12, Lemma 4.5]
\[\lim_{x\to 1^{-}}\frac{\int_{0}^{\infty}I_{0}(xt)K_{0}(t)\,\mathrm{d}t}{-\log(1-x)}= \frac{1}{2}\]
and for \(c=0,1,\cdots\),
\[\lim_{x\to 1^{-}}(1-x)^{c+1}\int_{0}^{\infty}I_{0}(xt)K_{0}(t)t^{c+1}\, \mathrm{d}t=\frac{c!}{2}=\lim_{x\to 1^{-}}(1-x)^{c+1}\int_{0}^{\infty}tI_{0}^{ \prime}(xt)K_{0}(t)t^{c}\,\mathrm{d}t.\]
Therefore for \(a\geq 1,a>c\) and \(x\to 1^{-}\), one has
\[\int_{0}^{\infty}I_{0}(xt)I_{0}^{a-1}(t)K_{0}^{a}(t)t^{c}\, \mathrm{d}t =\int_{0}^{\infty}tI_{0}(xt)K_{0}(t)\big{[}I_{0}^{a-1}(t)K_{0}^{a- 1}(t)t^{c}\big{]}\,\mathrm{d}t\] \[=O\Big{(}\int_{0}^{\infty}I_{0}(xt)K_{0}(t)\,\mathrm{d}t\Big{)}\] \[=O\big{(}\log(1-x)\big{)},\] \[\int_{0}^{\infty}tI_{0}^{\prime}(xt)I_{0}^{a-1}(t)K_{0}^{a}(t)t^ {c}\,\mathrm{d}t =\int_{0}^{\infty}tI_{0}^{\prime}(xt)K_{0}(t)\big{[}I_{0}^{a-1}( t)K_{0}^{a-1}(t)t^{c}\big{]}\,\mathrm{d}t\] \[=O\Big{(}\int_{0}^{\infty}tI_{0}^{\prime}(xt)K_{0}(t)\,\mathrm{d} t\Big{)} \tag{39}\] \[=O\Big{(}\frac{1}{1-x}\Big{)}. \tag{38}\]
If \(c\geq a\geq 1\) and \(x\to 1^{-}\), then
\[\int_{0}^{\infty}tI_{0}^{\prime}(xt)I_{0}^{a-1}(t)K_{0}^{a}(t)t^ {c}\,\mathrm{d}t =\int_{0}^{\infty}\frac{tI_{0}^{\prime}(xt)K_{0}(t)t^{c-a+1}}{2^{ a-1}}\,\mathrm{d}t\] \[\qquad+\int_{0}^{\infty}I_{0}(xt)K_{0}(t)\Big{[}I_{0}^{a-1}(t)K_ {0}^{a-1}(t)-\frac{1}{(2t)^{a-1}}\Big{]}t^{c}\,\mathrm{d}t\] \[=\frac{(c-a)!}{2^{a}(1-x)^{c-a+1}}+o\Big{(}\frac{1}{(1-x)^{c-a+1} }\Big{)},\] \[\int_{0}^{\infty}tI_{0}^{\prime}(xt)I_{0}^{a-1}(t)K_{0}^{a}(t)t^ {c}\,\mathrm{d}t =\int_{0}^{\infty}\frac{tI_{0}^{\prime}(xt)K_{0}(t)t^{c-a+1}}{2^{ a-1}}\,\mathrm{d}t\] \[\qquad+\int_{0}^{\infty}tI_{0}^{\prime}(xt)K_{0}(t)\Big{[}I_{0}^ {a-1}(t)K_{0}^{a-1}(t)-\frac{1}{(2t)^{a-1}}\Big{]}t^{c}\,\mathrm{d}t \tag{41}\] \[=\frac{(c-a+1)!}{2^{a}(1-x)^{c-a+2}}+o\Big{(}\frac{1}{(1-x)^{c-a+ 2}}\Big{)}. \tag{40}\]
_Evaluation of \(\omega_{2r+1}(x)\) at \(x=1\)._ All entries of \(\Omega_{2r+1}(x)\) can be evaluated at \(x=1\). We move \(2i\)-th row to row \(i\) in \(\Omega_{2r+1}(1)\) for \(1\leq i\leq r\) and then subtract \(j\)-th column by \((r+j+1)\)-st column for \(1\leq j\leq r\). By (3) on the upper-left block, we obtain
\[\omega_{2r+1}(1)=(-1)^{\frac{r(r+1)}{2}}\det\begin{pmatrix}M_{r}&*\\ 0&M_{r+1}\end{pmatrix}.\]
_Behavior of \(\omega_{2r+1}(x)\) as \(x\to 0^{+}.\)_ Fix \(r\geq 1\). We move row \((2i-1)\) of \(\Omega_{2r+1}(x)\) to row \(i\) for \(1\leq i\leq r\), which creates a sign \((-1)^{r(r-1)/2}\) to the determinant \(\omega_{2r+1}(x)\). As \(x\to 0^{+}\), the resulting matrix decomposes into \((r,r,1)\times(r,r,1)\) blocks of the form
\[\begin{pmatrix}N_{r}+o(1)&O(\log x)&O\big{(}\frac{1}{x^{r}}\big{)}\\ O(x)&\frac{-1}{x}N_{r}+O(\log x)&O(\frac{1}{x^{r}}\big{)}\\ O(1)&O(\log x)&\frac{1}{2x^{r+1}}\Gamma\big{(}\frac{r+1}{2}\big{)}^{2}+O\big{(} \frac{1}{x^{r}}\big{)}\end{pmatrix}\]
by direct evaluation and (32) in the left three blocks, (31) and (33) in the middle, and (34), (35), (36) and (37) in the last column. The leading term of \(\omega_{2r+1}(x)\), which is of order \(x^{-(2r+1)}\), comes from the diagonal blocks and one gets
\[\lim_{x\to 0^{+}}x^{2r+1}\omega_{2r+1}(x)=(-1)^{\frac{r(r+1)}{2}}\frac{1}{2} \Gamma\Big{(}\frac{r+1}{2}\Big{)}^{2}\text{det}^{2}N_{r}.\]
_Behavior of \(\omega_{2r}(x)\) as \(x\to 1^{-}.\)_ Fix \(r\geq 2\). We move \(2i\)-th row of \(\Omega_{2r}(x)\) to row \(i\) for \(i=1,2,\cdots,(r-1)\) and \(r\)-th column to the last, which adds a sign \((-1)^{r(r+1)/2}\) to the determinant \(\omega_{2r}(x)\). We subtract \(j\)-th column by \((r+j)\)-th for \(j=1,2,\cdots,(r-1)\). As \(x\to 1^{-}\), the resulting matrix decomposes into \((r-1,r,1)\times(r-1,r,1)\) blocks of the form
\[\begin{pmatrix}N_{r-1}+o(1)&O(1)&O\big{(}\frac{1}{(1-x)^{r-1}}\big{)}\\ 0&N_{r}+o(1)&O\big{(}\frac{1}{(1-x)^{r-1}}\big{)}\\ O(1)&O(1)&\frac{(r-1)!}{2^{r}(1-x)^{r}}+o\big{(}\frac{1}{(1-x)^{r}}\big{)} \end{pmatrix}\]
by (3) and direct evaluation in the left three blocks, direct evaluation in the middle, and (38), (39), (40) and (41) in the last column. The leading term of \(\omega_{2r}(x)\), which is of order \((1-x)^{-r}\), comes from the diagonal blocks. It yields
\[\lim_{x\to 1^{-}}(1-x)^{r}\omega_{2r}(x)=(-1)^{\frac{r(r+1)}{2}}\frac{(r-1)!} {2^{r}}\det N_{r-1}\det N_{r}.\]
_Behavior of \(\omega_{2r}(x)\) as \(x\to 0^{+}\)._ Fix \(r\geq 2\). We move row \((2i-1)\) of \(\Omega_{2r}(x)\) to row \(i\) for \(1\leq i\leq r\), which adds a sign \((-1)^{r(r-1)/2}\) to the determinant \(\omega_{2r}(x)\). As \(x\to 0^{+}\), the resulting matrix decomposes into four blocks of equal size of the form
\[\begin{pmatrix}M_{r}+o(1)&O(\log x)\\ O(x)&\frac{-1}{x}M_{r}+O(\log x)\end{pmatrix}\]
by direct evaluation and (32) in the left two blocks and (31) and (33) in the right. This leads to
\[\lim_{x\to 0^{+}}x^{r}\omega_{2r}(x)=(-1)^{\frac{r(r+1)}{2}}\text{det}^{2}M_{r}.\]
**Remark 39**.: _Proposition 36 indeed holds for \(\omega_{2}(x)\) by the same analysis if we set \(\det N_{0}=1\); it is consistent with the relation (29) for \(r=1\)._
|
2305.03124 | Games Under Network Uncertainty | We consider an incomplete information network game in which agents'
information is restricted only to the identity of their immediate neighbors.
Agents form beliefs about the adjacency pattern of others and play a
linear-quadratic effort game to maximize interim payoffs. We establish the
existence and uniqueness of Bayesian-Nash equilibria in pure strategies. In
this equilibrium agents use local information, i.e., knowledge of their direct
connections to make inferences about the complementarity strength of their
actions with those of other agents which is given by their updated beliefs
regarding the number of walks they have in the network. Our model clearly
demonstrates how asymmetric information based on network position and the
identity of agents affect strategic behavior in such network games. We also
characterize agent behavior in equilibria under different forms of ex-ante
prior beliefs such as uniform priors over the set of all networks, Erdos-Renyi
network generation, and homophilic linkage. | Promit K. Chaudhuri, Sudipta Sarangi, Hector Tzavellas | 2023-05-04T19:50:43Z | http://arxiv.org/abs/2305.03124v4 | # Games Under Network Uncertainty+
###### Abstract
We consider an incomplete information network game in which agents' information is restricted only to the identity of their immediate neighbors. Agents form beliefs about the adjacency pattern of others and play a linear-quadratic effort game to maximize interim payoffs. We establish the existence and uniqueness of Bayesian-Nash equilibria in pure strategies. In this equilibrium agents use local information, i.e., knowledge of their direct connections to make inferences about the complementarity strength of their actions with those of other agents which is given by their updated beliefs regarding the number of walks they have in the network. Our model clearly demonstrates how asymmetric information based on network position and the identity of agents affect strategic behavior in such network games. We also characterize agent behavior in equilibria under different forms of ex-ante prior beliefs such as uniform priors over the set of all networks, Erdos-Renyi network generation, and homophilic linkage.
**JEL Classifications:** C72, D81, D85
**Keywords:** Incomplete Information, Network Games, Network Uncertainty, Centrality, Local Complementarities
Introduction
Much of the work on network games assumes that agents have complete knowledge of the network structure in which they are embedded. This assumption is especially critical for games with local complementarities as in the seminal Ballester, Calvo and Zenou (2006) paper since equilibrium behavior depends on computations made on the entire network architecture. In reality however, agents typically do not know the entire network. For instance, in social media networks individuals know at most a few degrees of connection away. Moreover, as demonstrated by Breza, Chandrasekhar, and Tahbaz-Salehi (2018), agents are mostly aware only of the identity of their immediate neighbors. In this paper, we rely on these stylized facts to study the popular linear-quadratic network game of Ballester et al. (2006) played by agents who only have local information about the network. The key feature of our approach is that we carry over from complete information games the fact that network location and the identity of agents should play an important role in determining equilibrium behavior under incomplete information.
Our game proceeds as follows. Nature moves first and chooses an unweighted and undirected network on \(n\) vertices from an ex-ante distribution that is common knowledge. Agent \(i\)'s type corresponds to the \(i\)-th row of the adjacency representation of the network drawn by Nature. Agents are thus classified by their direct links and are hence able to identify the agents from whom they will directly extract network complementarities. However, they are unaware of the types of their adjacent agents, that is with whom their neighbors are connected to in the network. Given their realized type, and using Bayes rule, agents update their beliefs regarding the types of their neighbors and, therefore, their beliefs about the true topology of the network. Then they proceed to simultaneously exert actions to maximize their interim linear quadratic payoffs.
Observe that in our model, ex-ante beliefs of agents are prescribed by a probability mass function over the set of graphs on \(n\) vertices. This is in contrast to the most popular approach to modeling rational agent behavior under network uncertainty, which has been to assume that ex-ante beliefs are defined over degree distributions. That is, existing models endow agents with beliefs about the number of connections that each individual may have in the network, but not the individuals with whom these connections are present. Consequently, by abstracting away from the identity
of agents (which, in turn, contains crucial information regarding the architecture of the network itself), such an approach cannot fully explore strategic considerations in agent behavior. However, by changing the object over which agents have ex-ante beliefs to networks themselves, we are able to demonstrate the strategic interplay between local information and network structure.
We first establish the existence and uniqueness of pure strategy Bayesian Nash equilibria (BNE) for arbitrary ex-ante distributions over graphs. Interestingly, these properties hold for a bound on the modularity parameter of bilateral network interaction that is identical to the complete information variant of the model.
Turning to the characterization of the BNE, we show agents will use the information regarding their direct connections to make inferences about the complementarity strength of their actions with those of other agents. The strength of this complementarity is computed by their interim expectation regarding the number of walks they have in the network. In this sense, the BNE calculation is similar to the one performed by agents in the complete information problem, where the Nash equilibrium is proportional to the actual number of walks that agents have in the network (Ballester et al. (2006)) i.e., their Katz-Bonacich (KB) centrality.
We illustrate our model by working through a core-periphery network example (see figure 1). In particular, we endow agents with the ex-ante belief that the actual network drawn by nature assumes a core-periphery topology, and that any such network is equally likely to be selected by Nature. Under these beliefs, upon realizing their types, agents in the periphery are able to infer the architecture of the entire network. Core agents, on the other hand, do not. We show that while peripheral agents have complete information about the network they fully internalize the fact that core players' information is still incomplete. As a consequence, their actions do not conform to the optimal action choice under complete information. This example clearly illustrates the interplay between strategic behavior and information in the presence of incomplete network information.
information about the true architecture of the network, their behavior is not consistent with complete information behavior, nor with expectations over complete information equilibrium outcomes. This follows because equilibrium actions are determined by the interim expected sum of walks, which is different from the agent's ex-ante expected walks (i.e., ex-ante expected Katz-Bonacich centrality). This finding speaks to a body of applied work that structurally estimates network effects in environments in which the architecture of the network is not known. A typical approach to incorporating such unknown network effects has been to do so via expectations over complete information network effects.1 Our result, however, suggests that the issue with such applied approaches is that they fail to internalize the fact that the subjects themselves may not know the network.2 Whenever this is the case, estimators of complete information network effects may not be informative as to the true behavior of the network system of interest.
Footnote 1: Examples of work where the underlying network is unobserved by the researcher include Lewbel, Qu, and Tang (2023), de Paula, Rasul, and Souza (2018), Blume, Brock, Durlauf, and Jayaraman(2015) and Manresa (2013).
Footnote 2: It also does not exploit the fact that subjects have partial knowledge about the network.
Next, we show that the BNE of our game is not always monotonic in agents' degrees. In other words, first order connectivity alone is insufficient to characterize global patterns of equilibrium behavior. The absence of monotonicity as a general equilibrium property is in sharp contrast with existing results in network games with incomplete information. For instance, the Bayesian Nash equilibria of Galleotti et al. (2010) as well as Jackson (2019) are both monotonic in agents' degrees. Even though effort
Figure 1: Examples of core-periphery networks on 5 vertices.
monotonicity in degrees does not always hold in our game, it emerges if we assume the ex-ante distribution to be uniform over all networks. Besides monotonicity, the equilibrium under uniformity exhibits two more properties that are typically imposed as assumptions in other models. The first is anonymity, which states that while agents are aware of the number of agents they are connected to, they are unaware of the identity of these adjacent agents. The second is independence, which states that the degree of any agent who is connected to another is independent of the latter's degree.3
Footnote 3: Human networks typically tend to have degree dependence to some extent, In general if Alice and Bob are both Carol’s friends, then they also typically tend to be friends. Note that our model does not impose any such assumption.
Lastly, we consider a variation of our main model as a robustness check. Rather than assuming agents have beliefs over the network structure, we suppose they have beliefs over the links. This means that types are generated via a random network generation process instead of assuming that ex-ante beliefs are prescribed by a probability measure over the set of all graphs. Since the type space is maintained, the Harsanyi transformation of the linear quadratic game is preserved and thus, the system of best responses characterizing the BNE is also preserved. Therefore, even though the stochastic process generating types is of a different kind, the walk characterization of the equilibrium does not change.
Next, imposing independence in the generation links, we provide closed form characterizations of Bayesian-Nash equilibria when the formation process follows two well-known models. First, under Erdos-Renyi formation where all links are formed with equal probability, equilibrium effort levels are identical to those where beliefs are uniform over all networks. Therefore, the Erdos-Renyi generation process also gives rise to anonymity in equilibrium. Second, we consider homophilic linkage through a stochastic block generation process. Unlike the Erdos-Renyi process, homophilic linkage gives rise to a group identity property where agents weigh the complementarity strengths of their actions according to both intra and inter-group considerations.
To the best of our knowledge there have only been two other papers that introduce incomplete information into the model. De Marti and Zenou (2015) study a linear quadratic game of incomplete information in which agents lack information regarding model parameters other than the network itself. These include the link complementarity strength, and the return to own action. Unlike their work, we focus on incomplete information on the network. Closer to our model is the work of
Breza, Chandrasekhar, and Tahbaz-Salehi (2018) who also employ a linear quadratic game in which agents lack complete information regarding the network itself. One of their crucial assumptions, however, is that the information set of any agent (i.e. the identity of their neighbors) doesn't provide any information about their indirect connections. In other words, their expectations regarding the existence of links between their neighbors and other agents is independent of the information they are endowed with. As a result, their equilibrium gets mapped to agents' ex-ante beliefs about the network. In contrast, we find that as long as agents are endowed with beliefs about network topology itself, local connectivity provides information regarding indirect connectivity and agents will make use of it towards equilibrium play. This local information being different for each player in turn, implies that the equilibrium is no longer mapped to their ex-ante beliefs about the network.
Other than the equilibrium characterization, our model provides an approach for studying incomplete information within the framework of the canonical linear quadratic network game. The model thus paves the way for the formal study of incomplete information variants of the plethora of applications of this game including aspects like intervention (Galeotti et al. (2020)) and endogenous network formation (Konig et al. (2014)) among others.
The rest of the paper is structured as follows. Section 2 contains tools from network theory that will be used throughout the paper and sets up the game. In section 3, we characterize its BNE and illustrate its computation. Section 4 discusses the relationship between degree monotonicity and equilibrium effort. Section 5 deals with random network generation. Section 6 concludes. All proofs as well as additional discussion on certain aspects of our model are relegated to the appendix.
## 2 Model
### Preliminaries
Let \(N=\{1,2,...,n\}\) denote the set of players. Letting \(i\sim j\) denote a link between players \(i\) and \(j\), a _network_ (or _graph_) \(\mathbf{g}\) is the collection of all pairwise links that exist between the players. The links are undirected such that \(i\sim j\in\mathbf{g}\) implies \(j\sim i\in\mathbf{g}\). The network can be represented by its adjacency matrix which, with some abuse of
notation, is also denoted as \(\mathbf{g}=[g_{ij}]\), where \(g_{ij}=1\) if a link exists between players \(i\) and \(j\), and \(g_{ij}=0\) otherwise. There are no self-loops and thus \(g_{ii}=0\) for all \(i\in N\). We denote by \(\mathcal{G}_{n}\) the set of all unweighted and undirected networks on \(n\) vertices whose cardinality is \(2^{\frac{n(n-1)}{2}}\).4
Footnote 4: Note that our networks are labeled. Any two labeled graphs that would be the same under isomorphism are both in \(\mathcal{G}_{n}\).
Given the adjacency representation of a network \(\mathbf{g}\in\mathcal{G}_{n}\) we let \(\mathbf{g}_{i}\) denote its \(i^{th}\) row. That is, \(\mathbf{g}_{i}=(g_{i1},g_{i2},....,g_{in})\in\{0,1\}^{n}\), where it is understood that \(g_{ii}=0\). In the following section, it will be convenient to represent any network \(\mathbf{g}\) by the rows of its adjacency matrix:
\[\mathbf{g}^{T}=(\mathbf{g}_{1},\mathbf{g}_{2},..,\mathbf{g}_{n}) \tag{1}\]
Note that the fact that links are undirected implies \(\mathbf{g}=\mathbf{g}^{T}\). The _neighborhood_ of player \(i\) is the set of players with whom \(i\) is linked and is denoted by: \(\mathbf{N}(\mathbf{g}_{i})=\{j:g_{ij}=1\}\). The size of this set is \(i\)'s _degree_ which counts the agent's direct connections: \(d(\mathbf{g}_{i})\equiv|\mathbf{N}(\mathbf{g}_{i})|\).
A _walk_ of length \(s\) from a node \(i\) to a node \(j_{s}\) is a sequence of links in the network \(i\sim j_{1}\), \(j_{1}\sim j_{2}\),..., \(j_{s-1}\sim j_{s}\). It is denoted by \(ij_{1}j_{2}...,j_{s}\). Given two nodes \(i\) and \(j_{s}\) there may exist more than one such walk. Using the adjacency representation, the number of walks of length \(s\) from node \(i\) to node \(j_{s}\) can be computed by the \(ij_{s}\) element of the matrix \(\mathbf{g}^{s}\).
Finally, let \(\mathbf{g}^{0}=\mathbf{I}\), then for a sufficiently small \(\lambda>0\), the following _influence matrix_\(\mathbf{M}(\mathbf{g}\),\(\lambda)=\left[m_{ij}\left(\mathbf{g}\right)\right]\) is well-defined and non-negative:
\[\mathbf{M}(\mathbf{g},\lambda)\equiv\left[\mathbf{I}-\lambda\mathbf{g}\right] ^{-1}=\sum_{s=0}^{\infty}\lambda^{s}\mathbf{g}^{s}\]
Each element \(m_{ij}\left(\mathbf{g}\right)\) measures the total number of walks of all lengths from agent \(i\) to agent \(j\). Given \(\mathbf{M}(\mathbf{g}\),\(\lambda)\), the _Katz-Bonacich_ (KB) centrality of player \(i\), \(b_{i}(\mathbf{g}\), \(\lambda)\), is the \(i^{\text{th}}\)-component of the vector \(\mathbf{b}\left(\mathbf{g}\), \(\lambda\right)=\mathbf{M}(\mathbf{g},\lambda)\mathbf{1}_{n}\) where \(\mathbf{1}_{n}\) is the n-dimensional column vector of ones. It measures the total number of discounted walks of all lengths originating from player \(i\) to all other players in \(\mathbf{g}\) where longer paths are discounted more.
### The Game
We study a variant of the simultaneous move local complementarities game of Ballester et al. (2006), in which agents have incomplete information about the full architecture of the network.
We follow Harsanyi's (1967) approach to games of incomplete information by introducing Nature as a non-strategic player who chooses a network out of a set containing all possible graphs on the number of vertices equal to the number of agents. The network is chosen from an ex-ante distribution that is common knowledge among all agents. Following Nature's draw, players realize their direct connections (they can see the agents with whom they are linked), but do not know the network's architecture beyond that. In other words, they do not observe the links of their neighbors. Using the information on their direct connections, agents proceed to update their beliefs about the network chosen by Nature according to Bayes' rule. Given these updated beliefs, agents simultaneously exert actions to maximize their interim payoffs.
**Agents and Types**
\(N\) is the set of players (nodes), with \(|N|=n\). For each \(i\in N\), we let \(G_{i}\) denote the player's _type set_. To incorporate information regarding direct connections, agents' types are representative of their corresponding row in the adjacency representation of the network over which the game will be played. That is, each player's type set assumes the following form:
\[G_{i}=\{(g_{i1},g_{i2},....,g_{in})_{i}\in\{0,1\}^{n}:g_{ii}=0\}\]
where \(g_{ij}=1\) if player \(i\) is connected to \(j\) and \(0\) otherwise. Note that the outer subscript in \((g_{i1},g_{i2},....,g_{in})_{i}\) is imposed to differentiate between agents whose types consist of the same sequence of \(0\)'s and \(1\)'s. For instance, if \(n=3\) it differentiates between the type \((0,1,0)_{1}\) for agent \(1\) and \((0,1,0)_{3}\) for agent \(3\). The first refers to agent \(1\)'s links and the second to agent \(3\)'s links. The cardinality of each agent's type set is:
\[|G_{i}|\equiv\gamma=2^{n-1}\]
and we denote its elements by \(\mathbf{g}_{i}^{t_{i}}\in G_{i}\). Whenever the context is clear and we need not enumerate the elements of each type set we suppress the superscript \(t_{i}\). Given
each player's type set, we can write down the _type space_ of the game:
\[G=\raisebox{-1.0pt}{$\chi$}_{i\in N}G_{i}\]
Observe that if we invoke network representation (1), any element of \(G\) may, or may not, correspond to the adjacency matrix of an undirected and unweighted network. That is, not all elements of \(G\) have valid network representations. As an example, consider the case with 3 players, \(N=\{1,2,3\}\). The type sets of the players are given by:
\[G_{1} =\{(0,0,0)_{1},(0,1,0)_{1},(0,0,1)_{1},(0,1,1)_{1}\}\] \[G_{2} =\{(0,0,0)_{2},(1,0,0)_{2},(0,0,1)_{2},(1,0,1)_{2}\}\] \[G_{3} =\{(0,0,0)_{3},(1,0,0)_{3},(0,1,0)_{3},(1,1,0)_{3}\}\]
with corresponding type space \(G=G_{1}\times G_{2}\times G_{3}\). One element of \(G\) is \(((0,1,0)_{1}\), \((0,0,1)_{2}\), \((0,1,0)_{3})\). Observe that these entries do not correspond to rows of the adjacency matrix of an undirected and un-weighted network. According to this element, agent 1 is connected to agent 2 while agent 2 is not connected to agent 1. The corresponding adjacency matrix, therefore, would not symmetric. In this paper, we restrict attention to elements of \(G\) that have valid representations so that Nature's choice is reflective of an undirected and unweighted network.5 In what follows, we do so through the information structure.
Footnote 5: Note, however, that even though we choose to focus on undirected networks, the type space is general enough that it allows for beliefs over directed networks as well.
**Ex-Ante Beliefs**
We denote by \(p\in\Delta(G)\) the probability distribution over the type space, with \(\Delta(G)\) denoting the set of all probability distributions over \(G\). In our game, Nature moves first and chooses an element of the type space \(\mathbf{g}\in G\). As noted above, we want to restrict Nature's choice to those elements in \(G\) that have valid network representations. Towards this, we define the following set of admissible distributions, and impose the assumption that Nature draws a network from a distribution in this set.
**Definition 1**.: We say that the probability distribution \(p\in\Delta(G)\) is admissible if it satisfies:
\[p(\mathbf{g})=0,\ \forall\ \mathbf{g}\in G\ \text{s.t.}\ \mathbf{g}\neq \mathbf{g}^{T},\]
and denote the set of all admissible distributions by \(\Delta_{A}(G)\).
_Assumption 1_**:**_\(p\in\Delta_{A}(G)\) and this is common knowledge._
Observe that the imposition of assumption 1 implies that \(p(\mathbf{g})\geq 0\) if and only if \(\mathbf{g}\in\mathcal{G}_{n}\). Consequently, Nature will choose an unweighted and undirected network, and the fact that the agents are part of one such network is common knowledge.
As an example consider the uniform admissible distribution which is defined as follows:
**Definition 2**.: The probability distribution \(p\in\Delta_{A}(G)\) is _uniform_ if it satisfies:
\[p(\mathbf{g})=\begin{cases}\frac{1}{2^{\frac{n(n-1)}{2}}}&\text{if }\mathbf{g}\in \mathcal{G}_{n}\\ 0&\text{otherwise}\end{cases}\]
In the 3-player case, for instance, we have that \(|\mathcal{G}_{3}|=8\) and Nature chooses any unweighted and undirected network with probability \(p(\mathbf{g})=\frac{1}{8}\).
**Belief Updating**
Given assumption 1, agents know that Nature draws a network and proceed to update their beliefs regarding its true topology according to Bayes' Rule. These updated updated beliefs can be written as:
\[p(\mathbf{g}_{j}|\mathbf{g}_{i})=\frac{p(\mathbf{g}_{i},\mathbf{g}_{j})}{p( \mathbf{g}_{i})}=\frac{\sum_{\mathbf{g}\in G}p(\mathbf{g})\mathbb{I}\{ \mathbf{g}_{i}=\left.\mathbf{g}\right|_{i}\wedge\mathbf{g}_{j}=\left.\mathbf{g }\right|_{j}\}}{\sum_{\mathbf{g}\in G}p(\mathbf{g})\mathbb{I}\{\mathbf{g}_{i}= \left.\mathbf{g}\right|_{i}\}}\ \forall i,j\in N, \tag{2}\]
where \(\mathbb{I}\) is the indicator function. Specifically, for \(\mathbf{g}\in G\), \(\mathbb{I}\{\mathbf{g}_{i}=\left.\mathbf{g}\right|_{i}\}=1\) if \(\mathbf{g}_{i}\) is the projection of \(\mathbf{g}\) (i.e., \(\left.\mathbf{g}\right|_{i}\) ) on its \(i^{th}\) component and 0 otherwise. Intuitively, equation (2) states that agent \(i\) who is of type \(\mathbf{g}_{i}\in G_{i}\) will assign a probability to agent \(j\) being of type \(\mathbf{g}_{j}\in G_{j}\) according to (i) the number of states in the state space that contain both of these types, and (ii) the ex-ante probability that the agent's own type is realized. Given assumption 1, since agent types correspond to rows of an adjacency matrix, the probability the agent \(i\) (whose row is \(\mathbf{g}_{i}\)) will assign to an agent having a row \(\mathbf{g}_{j}\) will depend on the number of networks that contain these rows, and the probability that nature selects them.
As an example, consider the 3-player case and suppose that after Nature's draw, agent 2 is of type \((1,0,0)_{2}\). In other words, agent 2 learns that it is connected to agent 1
but is not connected to agent \(3\). Since players can only observe their neighbors, agent \(2\) does not know if agents \(1\) and \(3\) are themselves connected, and will thus have to form beliefs about the existence of a link between them. This is demonstrated in figure 2. However, the state space only contains \(2\) elements in which agent \(2\)'s type is admissible with a valid network representation: \(((0,1,0)_{1},\)\((1,0,0)_{2},\)\((0,0,0)_{3})\) and \(((0,1,1)_{1},\)\((1,0,0)_{2},\)\((1,0,0)_{3})\). In other words, there are only two graphs on \(3\) vertices that contain the link \(1\sim 2\) and do not contain the link \(2\sim 3\). If we took the ex-ante distribution to be uniform, then agent \(2\) would assign a probability of \(\frac{1}{2}\) that nature chose either of these.
Note that assumption 1 implies that beliefs are consistent, in the sense that agents will assign zero probability to others being of types that do not match the adjacency pattern induced by their own type. This is expressed formally in the remark below.
_Remark 1_.: For all \(g_{i}\in G_{i},\ p(\mathbf{g}_{j}|\mathbf{g}_{i})=0\ \forall\mathbf{g}_{j}\in G_{j}\) for which \(g_{ij}\neq g_{ji},\ \forall i,j\in N.\)
Lastly, we impose the following regularity assumption for conditioning on zero probability events.
_Assumption 2:_ For any \(\mathbf{g}_{j}\in G_{j}\), we set \(p(\mathbf{g}_{j}|\mathbf{g}_{i})=0\), for all \(\mathbf{g}_{i}\in G_{i}\) for which \(p(\mathbf{g}_{i})=0,\)\(\forall i,j\in N\)
In words, assumption 2 states that agents will assign zero probability to any type whenever its conditioned on another whose marginal probability is zero under the prior. This assumption is solely imposed for ease of notation and allows us to place zero probability mass that Nature selects specific sets of networks in examples that follow. In particular, whenever ex-ante beliefs place zero probability on specific elements of the type space being realized, Bayes rule for certain updates becomes ill
Figure 2: Representation of uncertainty for agent \(2\) whose type is \((1,0,0)_{2}\).
defined. To avoid this issue while still disallowing for certain networks to be drawn by Nature, agents' type sets and corresponding type spaces must be redefined on a full support. Nature would place a strictly positive probability on all networks of interest, while networks that would be assigned a zero probability measure under the current prior are excluded from the type space. We demonstrate this construction formally in appendix C, where we also show that this approach produces the same equilibrium as the one generated without altering the type space while imposing assumption 2.
**State Game and Equilibrium**
Given the above, conditional on a state \(\mathbf{g}\in G\) being realized, agents play the state game:
\[s_{\mathbf{g}}=(N,A,(u_{i}(\mathbf{a}_{i},\ \mathbf{a}_{-i}))_{i\in N})\]
where every agent has the same action set \(A\equiv\mathbb{R}_{+}\). Let \(\mathbf{a}_{j}=(a_{j}(\mathbf{g}_{j}^{1}),...,a_{j}(\mathbf{g}_{j}^{\gamma}))\), \(\mathbf{a}_{-i}=(\mathbf{a}_{1},...,\mathbf{a}_{i-1},\mathbf{a}_{i+1},..., \mathbf{a}_{n})\) and \(\mathbf{a}_{i}(\mathbf{g}_{i}^{-t_{i}})=(a_{i}(\mathbf{g}_{i}^{1}),...,a_{i}( \mathbf{g}_{i}^{t_{i}-1}),a_{i}(\mathbf{g}_{i}^{t_{i}+1}),...,a_{i}(\mathbf{ g}_{i}^{\gamma}))\). Interim utilities assume a linear-quadratic form:
\[u_{i}(a_{i}(\mathbf{g}_{i}^{t_{i}});\ \mathbf{a}_{i}(\mathbf{g}_{i}^{-t_{i}}),\ \mathbf{a}_{-i})=a_{i}(\mathbf{g}_{i}^{t_{i}})-\frac{1}{2}a_{i}( \mathbf{g}_{i}^{t_{i}})^{2}+\lambda a_{i}(\mathbf{g}_{i}^{t_{i}})\sum_{j=1}^{ n}g_{ij}^{t_{i}}\sum_{\mathbf{g}_{j}\in G_{j}}p(\mathbf{g}_{j}|\mathbf{g}_{i}^{t_{i}})a_ {j}(\mathbf{g}_{j}) \tag{3}\]
As in Ballester et al. (2006) the first two terms in the utility specification capture the direct benefit and cost to agent \(i\) from exerting its own action. The third term captures local complementarities with those agents that the player is connected to, with \(\lambda\) measuring the strength of this complementarity. Note, however, that unlike the complete information set up of Ballester et al. (2006), agents need to form beliefs about the actions of their neighbors.
Agents simultaneously exert actions to maximize (3). For each agent \(i\), a pure strategy \(\sigma_{i}\) maps each possible type to an action. That is,
\[\sigma_{i}=(a_{i}(\mathbf{g}_{i}^{1}),...,a_{i}(\mathbf{g}_{i}^{\gamma}))\]
This is a simultaneous move game of incomplete information so we invoke Bayes-Nash as the equilibrium notion.
**Definition 3**.: The pure strategy profile \(\sigma^{*}=(\sigma_{i}^{*},\ \sigma_{-i}^{*})\) where \(\sigma_{i}=(a_{i}^{*}(\mathbf{g}_{i}^{1}),...,a_{i}^{*}(\mathbf{g}_{i}^{\gamma}))\)
is a Bayesian-Nash equilibrium (BNE) if:
\[a_{i}^{*}(\mathbf{g}_{i}^{t_{i}})=arg\underset{a_{i}(\mathbf{g}_{i}^{t_{i}})}{ max}\ u_{i}(a_{i}(\mathbf{g}_{i}^{t_{i}}),\ \mathbf{a}_{i}^{*}(\mathbf{g}_{i}^{-t_{i}}),\ \sigma_{-i}^{*})\ \forall\ i\in N,\ \forall\ \mathbf{g}_{i}^{t_{i}}\in G_{i}\]
The above game can be summarized according to the tuple:
\[\mathbf{\Gamma}=\langle N,(G_{i})_{i\in N},p,(s_{\mathbf{g}})_{\mathbf{g}\in G}\rangle \tag{4}\]
## 3 Bayesian Nash Equilibrium
We now characterize the BNE of \(\mathbf{\Gamma}\) starting with best responses.
### Best Responses
Given the payoff structure, the best response of the \(i^{th}\) player who is of type \(\mathbf{g}_{i}^{t_{i}}\) is given by:
\[a_{i}(\mathbf{g}_{i}^{t_{i}})=1+\lambda\sum_{j=1}^{n}g_{ij}^{t_{i}}\sum_{ \mathbf{g}_{j}\in G_{j}}p(\mathbf{g}_{j}|\mathbf{g}_{i}^{t_{i}})a_{j}(\mathbf{ g}_{j})\]
The system characterizing the best responses for all players can be written in vector notation as follows:
\[\mathbf{a}=\mathbf{1}_{n\gamma}+\lambda\mathbb{B}\mathbf{a} \tag{5}\]
where \(\mathbf{1}_{n\gamma}\) is the \(n\gamma\)-dimesnional column vector of 1's, \(\mathbf{a}=\left[\mathbf{a}_{i}\right]_{i=1}^{n}\), \(\mathbf{a}_{i}=\left[a_{i}(\mathbf{g}_{i}^{t_{i}})\right]_{t_{i}=1}^{\gamma}\), \(\gamma=2^{n-1}\) is the total number of types of each player, and \(\mathbb{B}\) is a block matrix that assumes the following form:
\[\mathbb{B}=\begin{pmatrix}\mathbf{0}&G_{1\sim 2}&\ldots&G_{1\sim n}\\ G_{2\sim 1}&\mathbf{0}&\ldots&G_{2\sim n}\\ \ldots&\ldots&\ldots&\ldots\\ G_{n\sim 1}&G_{n\sim 2}&\ldots&\mathbf{0}\end{pmatrix}_{n\gamma\times n\gamma}\]
with
\[\left[G_{i\sim j}\right]_{t_{i}t_{j}}=g_{ij}^{t_{i}}p(\mathbf{g}_{j}^{t_{j}}| \mathbf{g}_{i}^{t_{i}})\quad\forall\ t_{j},t_{i}=1,..,\gamma\quad\text{and} \quad\forall\ \mathbf{g}_{j}^{t_{j}}\in G_{j},\mathbf{g}_{i}^{t_{i}}\in G_{i}\]
It can be verified that if the ex-ante distribution satisfies \(p(\mathbf{g})=1\) for a specific \(\mathbf{g}\in\mathcal{G}_{n}\) and \(p(\mathbf{g}^{\prime})=0\) for all \(\mathbf{g}^{\prime}\neq\mathbf{g}\), then \(a_{i}(\mathbf{g}_{i})=0\) for all \(i\in N\) for which \(\mathbf{g}_{i}\notin\mathbf{g}\). For this case, the system of best responses would reduce to the one that characterizes the complete information Nash equilibrium (Ballester et al. (2006)):
\[\mathbf{a}^{c}=\mathbf{1}_{n}+\lambda\mathbf{g}\mathbf{a}^{c} \tag{6}\]
where \(\mathbf{a}^{c}=(a_{1}(\mathbf{g}_{1}),..,a_{n}(\mathbf{g}_{n}))\) and \(\mathbf{g}=(\mathbf{g}_{1},..,\mathbf{g}_{n})\). In other words, in the complete information case, the matrix \(\mathbb{B}\) would reduce to the actual network over which the game is played, and agents would best respond to the actions of their adjacent agents. In the incomplete information case, however, agents do not know the types of their neighbors and best respond to updated beliefs regarding their actions. This is captured by the elements within the blocks of \(\mathbb{B}\). For instance, consider agent \(i\) and the block \(\left[G_{i\sim j}\right]_{t_{i}t_{j}}\). Its elements are of the form \(g_{ij}^{t_{i}}p(\mathbf{g}_{j}^{t_{j}}|\mathbf{g}_{i}^{t_{i}})\), which states that if agent \(i\) whose type \(\mathbf{g}_{i}^{t_{i}}\) is such that it is connected to agent \(j\), to this agent it will assign the probability being of type \(\mathbf{g}_{j}^{t_{j}}\) equal to \(p(\mathbf{g}_{j}^{t_{j}}|\mathbf{g}_{i}^{t_{i}})\). Observe, that such beliefs are not needed in the complete information case. Moreover, this updating affects equilibrium outcomes if and only if agent \(i\) is connected to agent \(j\), which may be interpreted as saying that agents form beliefs about others if and only if a link exists between them. In this sense, the matrix \(\mathbb{B}\) may be interpreted in a similar fashion to the complete information case, but instead of adjacency over agents, it provides the adjacency pattern over all network admissible types. In turn, this gives rise to a network between the types themselves.
To illustrate, suppose that \(n=3\) and let the underlying distribution be uniform on \(\mathcal{G}_{3}\). In this case, updated beliefs are given by \(p(\mathbf{g}_{j}|\mathbf{g}_{i})=\frac{1}{2},\forall\mathbf{g}_{j}\in G_{j}\) so that the
vector of actions and the matrix \(\mathbb{B}\) assume the following form:
\[\mathbf{a}=\begin{bmatrix}a_{1}((0,0,0)_{1})\\ a_{1}((0,1,0)_{1})\\ a_{1}((0,1,1)_{1})\\ a_{1}((0,1,1)_{1})\\ -\\ a_{2}((0,0,0)_{2})\\ a_{2}((1,0,0)_{2})\\ a_{2}((0,0,1)_{2})\\ a_{2}((1,0,1)_{2})\\ -\\ a_{3}((0,0,0)_{3})\\ a_{3}((1,0,0)_{3})\\ a_{3}((0,1,0)_{3})\\ a_{3}((1,1,0)_{3})\\ \end{bmatrix}\quad\mathbb{B}=\frac{1}{2}\left(\begin{array}{ccccccccc}0&0&0&0& \mid&0&0&0&\mid&0&0&0&0\\ 0&0&0&0&\mid&0&1&0&1&\mid&0&0&0&0\\ 0&0&0&0&\mid&0&0&0&0&\mid&0&1&0&1\\ 0&0&0&0&\mid&0&1&0&1&\mid&0&1&0&1\\ -&-&-&-&-&-&-&-&-&-&-&-&-&-\\ 0&0&0&0&\mid&0&0&0&\mid&0&0&0&0\\ 0&1&0&1&\mid&0&0&0&0&\mid&0&0&0&0\\ 0&0&0&0&\mid&0&0&0&\mid&0&0&1&1\\ 0&1&0&1&\mid&0&0&0&0&\mid&0&0&1&1\\ -&-&-&-&-&-&-&-&-&-&-&-&-&-\\ 0&0&0&0&\mid&0&0&0&0&\mid&0&0&0&0\\ 0&0&1&1&\mid&0&0&0&0&\mid&0&0&0&0\\ 0&0&0&0&\mid&0&0&1&1&\mid&0&0&0&0\\ 0&0&1&1&\mid&0&0&1&1&\mid&0&0&0&0\end{array}\right)\]
Consider player 2 and suppose it has realized the type \((1,0,0)_{2}\) (as visualized in figure 2). The player knows that it is connected to player 1, as \(g_{21}=1\), and that it is not connected to 3, as \(g_{23}=0\). Therefore, agent 2 will form beliefs over agent 1's types. Since the only types of agent 1 that are network admissible with the type \((1,0,0)_{2}\) are \((0,1,1)_{1}\) and \((0,1,0)_{1}\), then there exists a link between the types \((1,0,0)_{2}\) and \((0,1,1)_{1}\) as well as between \((1,0,0)_{2}\) and \((0,1,0)_{1}\). A similar argument holds for all other agents and all of their possible types. Therefore, we may think of \(\mathbb{B}\) as an adjacency matrix whose entries are representative of links between network admissible types. For this three player example, the network is shown in figure 3.
Before we proceed, we note that closest to our best response characterization is the interaction structure considered in Golub and Morris (2020). Although the signal realizations of each agent in their model can be thought of as arising from a more general information structure (which could potentially allow for network signals themselves), the network architecture itself is nonetheless common knowledge. In their general theory of networks and information, agent behavior is driven by an endowed interaction structure similar to our matrix \(\mathbb{B}\). In our model, however, this is generated endogenously as a result of optimizing behavior.
### Existence-Uniqueness
According to definition 3, the BNE is characterized by the fixed point of the system of equations in (5). We have the following classification:
**Theorem 1**.: _There exists a unique pure strategy BNE for \(\lambda\in\left[0,\frac{1}{n-1}\right)\)._
Observe that the bound on the local complementarity parameter \(\lambda\) which guarantees the existence and uniqueness of an equilibrium is identical to the complete information bound.6 Algebraically, this holds because the elements in each row of \(\mathbb{B}\) sum to at
Figure 3: Network between types of each players
most \(n-1\). This can be seen from the fact that the non-zero rows of its blocks \(\left[G_{i\sim j}\right]_{kl}\) sum to \(1\), as they correspond to conditional probability distributions over network admissible types.
The linear quadratic game played on networks is about direct and indirect complementarities. Intuitively, \(n-1\) represents the maximal number of agents that each individual can extract direct complementarities from. Since the complementarity strength arising from a single link is \(\lambda\), the maximal direct complementarity that may be extracted by a single agent is \(\lambda(n-1)\). Moreover, agents are embedded in a network, so they can also extract complementarities from their indirect connections. In the complete information case, the maximal complementarity that can be extracted by a single agent due to their \(s^{th}\) order indirect connections is \(\lambda^{s}(n-1)^{s}\).7 Therefore, summing over all \(s\in\mathbb{N}_{+}\) gives the maximal complementarity that any agent can extract from any network. This, in turn, provides a bound on the strength of \(\lambda\) for actions to be bounded.
Footnote 7: This is because any agent is connected to at most \(n-1\) others, so that \(s\) links aways from any node are at most \((n-1)^{s}\) other nodes, from which a complementarity strength of \(\lambda^{s}\) is extracted.
Now in the incomplete information case, a similar argument holds, but the bound on the maximal complementarity that may be extracted from the network is attained by decomposing it across states rather than links. For example, consider an agent \(i\) who is of type \(\mathbf{g}_{i}\), and who is connected to agent \(j\). Given updated beliefs, agent \(i\) assigns a probability \(p(\mathbf{g}_{j}|\mathbf{g}_{i})\) to agent \(j\) being of type \(\mathbf{g}_{j}\). This in turn induces a complementarity strength of \(\lambda p(\mathbf{g}_{j}|\mathbf{g}_{i})\) between the action of agent \(i\) and that of an agent \(j\) who is of the particular type \(\mathbf{g}_{j}\). Since \(\sum_{g_{j}\in G_{j}}p(\mathbf{g}_{j}|\mathbf{g}_{i})=1\), then the maximal complementarity that can be extracted from a single neighbor is \(\lambda\). A similar argument holds for indirect connections. In other words, the complementarity an agent \(i\) extracts from another \(j\), is spread out across all of \(j^{\prime}s\) types that are admissible with the realized type of agent \(i\). In this sense, the model generates network externalities on the agent-state specific level rather than the agent specific level. This has important consequences for the nature of the BNE. We turn to its characterization next.
### Walk Characterization
Recall that \(a_{i}^{*}(\mathbf{g}_{i}^{t_{i}})\) denotes the equilibrium action of agent \(i\) whose realized type is \(\mathbf{g}_{i}^{t_{i}}\in G_{i}\). The following theorem characterizes the BNE for any ex-ante distribution and any realized network.
**Theorem 2**.: _For any \(s\in\mathbb{N}_{+}\) let \(j_{1},j_{2},..,j_{s}\) denote an arbitrary collection of \(s\) indices. For any admissible probability distribution, and for any realized network \(\mathbf{g}\in G\), the equilibrium actions of agents are given by:_
\[a_{i}^{*}(\mathbf{g}_{i}^{t_{i}})=\sum_{s=0}^{\infty}\lambda^{s}\beta_{i,t_{i} }^{(s)}\ \ \ \forall\ i\in N,\ \forall\ \mathbf{g}_{i}^{t_{i}}\in G_{i}\]
_where \(\mathbf{g}_{i}^{t_{i}}=(g_{ij}^{t_{i}})_{j\in N}\) is the realized type of agent \(i\), and where:_
\[\beta_{i,t_{i}}^{(s)}=\sum_{j_{1},j_{2},..,j_{s}=1}^{n}\ \sum_{t_{j_{1}},t_{j_{2}},..,t_{j_{s-1} }=1}^{\gamma}g_{ij_{1}}^{t_{i}}g_{j_{1}j_{2}}^{t_{j_{1}}}...g_{j_{s-1}j_{s}}^{ t_{j_{s-1}}}p(\mathbf{g}_{j_{s-1}}^{t_{j_{s-1}}}|\mathbf{g}_{j_{s-2}}^{t_{j_{s-2}}} )p(\mathbf{g}_{j_{s-2}}^{t_{j_{s-2}}}|\mathbf{g}_{j_{s-3}}^{t_{j_{s-3}}})...p( \mathbf{g}_{j_{1}}^{t_{j_{1}}}|\mathbf{g}_{i}^{t_{i}})\]
Theorem 2 is best understood when compared to the complete information Nash equilibrium over the same network:
\[a_{i}^{c}(\mathbf{g}_{i}^{t_{i}})=\sum_{s=0}^{\infty}\lambda^{s}\left[\sum_{j _{1},j_{2},..,j_{s}=1}^{n}g_{ij_{1}}^{t_{i}}g_{j_{1}j_{2}}^{t_{j_{1}}}...g_{j_ {s-1}j_{s}}^{t_{j_{s-1}}}\right]\equiv\sum_{s=0}^{\infty}\lambda^{s}d_{i}^{(s) }=b_{i}\left(\mathbf{g},\,\lambda\right)\]
For each \(s\in\mathbb{N}_{+}\), \(d_{i}^{(s)}\) measures the total number of length \(s\) walks originating from player \(i\) to all others (including \(i\) itself). In the complete information scenario, each agent has knowledge of the full architecture of the network and can thus compute these walks for all lengths \(s\). Intuitively, each of these walks \(ij_{1}j_{2},..,j_{s}\), captures the complementarity of agent \(i^{\prime}s\) action with that of agent \(j_{s}\) due to the existence of a particular sequence of intermediate links \(i\sim j_{1}\), \(j_{1}\sim j_{2}\),.., \(j_{s-1}\sim j_{s}\) connecting them. Thus, each agent will take into account all of these complementarities and exert an action equal to their total strength. In turn, this produces Nash equilibrium effort levels equal to the agents' KB centralities.
In the incomplete information case, knowledge of these walks is limited to those that are of first order, as agents can only identify their neighbors. Even though information
is limited, agents are nonetheless aware of the fact that they participate in a network, and hence, internalize the fact that walks of arbitrary orders may exist between them and all other agents. Since these walks capture complementarity strengths, that in turn dictate the magnitude of actions, agents will need to form expectations as to what their actual strength is. In the statement of theorem 2, each term \(\lambda^{s}\beta_{i,t_{i}}^{(s)}\) captures this expected measure for all walks of the particular order \(s\).
To describe this expected measure in more detail, consider the case \(s=3\). With a slight rearrangement of terms we can write
\[\beta_{i,t_{i}}^{(3)} =\sum_{j,k,l=1}^{n}\sum_{t_{j},t_{k}=1}^{\gamma}g_{ij}^{t_{i}}g_ {jk}^{t_{j}}g_{kl}^{t_{k}}p(\mathbf{g}_{k}^{t_{k}}|\mathbf{g}_{j}^{t_{j}})p( \mathbf{g}_{j}^{t_{j}}|\mathbf{g}_{i}^{t_{i}})\] \[=\sum_{j,k=1}^{n}\sum_{l=1}^{n}g_{ij}^{t_{i}}\sum_{t_{j}=1}^{ \gamma}g_{jk}^{t_{j}}p(\mathbf{g}_{j}^{t_{j}}|\mathbf{g}_{i}^{t_{i}})\sum_{t_ {k}=1}^{\gamma}g_{kl}^{t_{k}}p(\mathbf{g}_{k}^{t_{k}}|\mathbf{g}_{j}^{t_{j}})\]
As per the timing of events in the game, agent \(i\) gets to know its type \(\mathbf{g}_{i}^{t_{i}}\), and hence has full knowledge of the links \(g_{ij}^{t_{i}}\). The player is, therefore, aware of the agents through which it can form a walk of length three. To fix ideas, suppose that player \(i\) wants to form a belief about the complementarity strength of its action with that of agent \(l\) due to the particular walk \(ijkl\). Recall that agent \(i\) has complete information about \(g_{ij}^{t_{i}}\), i.e. the link between it and agent \(j\). However, it does not have complete information about \(j^{\prime}s\) type, nor about \(j^{\prime}s\) neighbors' neighbors' type which in turn may or may not include a link with agent \(k\) through which the walk of interest \(ijkl\) reaches agent \(l\).
The expectation regarding the strength of this complementarity is formed in three
Figure 4: Expected walk of length 3
steps. First, the agent will condition on the fact that it has a link with agent \(j\). This occurs with probability \(p(\mathbf{g}_{i}^{t_{i}}|\mathbf{g}_{i}^{t_{i}})=1\) (since we are assuming that \(g_{ij}^{t_{i}}=1\)) and thus, we may think of \(g_{ij}^{t_{i}}\) as the expected number of ways that agent \(i\) can reach agent \(j\). Second, the agent internalizes its own type through \(p(\mathbf{g}_{j}^{t_{j}}|\mathbf{g}_{i}^{t_{i}})\), to compute expectations over the links between its neighbor \(j\) and its neighbors' neighbor \(k\). Using this, the agent counts the expected number of ways it can reach \(k\) through \(j\), conditional on the existence of the link \(g_{ij}^{t_{i}}\). This is given by \(\sum_{t_{j}=1}^{\gamma}g_{jk}^{t_{j}}p(\mathbf{g}_{j}^{t_{j}}|\mathbf{g}_{i}^{ t_{i}})\). Third, the agent internalizes the information about the possible types of its neighbor \(j\) through \(p(\mathbf{g}_{k}^{t_{k}}|\mathbf{g}_{j}^{t_{j}})\), to compute the expectations over the links between its neighbors' neighbor \(k\) and agent \(l\), (who is its neighbors' neighbors' neighbor). Using this, the agent counts the expected number of possible ways it can reach \(l\) through \(k\), conditional on the existence of the link \(g_{jk}^{t_{j}}\). This is given by \(\sum_{t_{k}=1}^{\gamma}g_{kl}^{t_{k}}p(\mathbf{g}_{k}^{t_{k}}|\mathbf{g}_{j}^ {t_{j}})\).
Given the above, the expected total number of ways player \(i\) can reach player \(l\) via a walk of length three is given by the product of (i) the actual link between it and \(j\), (ii) the number of ways it can reach \(k\) from \(j\) given the previous link \(g_{ij}^{t_{i}}\) exists and (iii) the number of ways it can reach \(l\) from \(k\) given \(g_{jk}^{t_{j}}\) exists. Repeating this process for all possible walks of length three which start from agent \(i\), summing over all possible values of \(j,k\) and \(l\), and multiplying by \(\lambda^{3}\), gives the expected complementarity strength of agent \(i^{\prime}s\) action with all other agents due to walks of length three, \(\lambda^{3}\beta_{i,t_{i}}^{(3)}\).
There are a couple of remarks that we make with regard to the nature of the preceding expected complementarity calculation.
_Remark 2_.: \(\beta_{i,t_{i}}^{(s)}\neq\mathbb{E}\left(d_{i}^{(s)}\right)\)
This remark states that the expected complementarity arising from walks of length \(s\) does not equal its ex-ante expected value. This is not surprising, since the equilibrium is an interim notion which allows for belief-updating. An important consequence of this, nonetheless, is that the BNE equilibrium of this game does not equal the ex-ante expectation of KB centrality.
Motivated by complete information equilibrium notions, applied work has proposed estimators of network effects in environments in which researchers cannot observe the network. These approaches implicitly presume that although the researcher does not have information about the network, the agents themselves do. In other words, the data generating process is presumed to arise from network interactions under complete
information. The proposed estimators are reflective of this, as they correspond to execute expectations of complete information outcomes. As demonstrated by Breza et al. (2018), however, the assumption that a researcher is unaware of the network while the subjects are aware of it, may in some cases be inconsistent. If so, and as Proposition 2 suggests, agent behavior under such information settings would not correspond to ex-ante expectations over complete information outcomes.
_Remark 3_.: \(\beta_{i,t_{i}}^{(s)}\neq\mathbb{E}\left(d_{i}^{(s)}|\;\mathbf{g}_{i}^{t_{i}}\right)\)
To shed more light on this, consider the case of \(s=3\).
\[\mathbb{E}\left(d_{i}^{(3)}|\;\mathbf{g}_{i}^{t_{i}}\right) =\mathbb{E}\left(\sum_{j,k=1}^{n}\sum_{l=1}^{n}g_{ij}g_{jk}g_{kl} \mid\mathbf{g}_{i}^{t_{i}}\right)\] \[=\sum_{j,k=1}^{n}\sum_{l=1}^{n}\mathbb{E}\left(g_{ij}g_{jk}g_{kl} \mid\mathbf{g}_{i}^{t_{i}}\right)\] \[=\sum_{j,k=1}^{n}\sum_{l=1}^{n}\sum_{t_{j}=1}^{\gamma}\sum_{t_{k }=1}^{\gamma}g_{ij}^{t_{i}}g_{jk}^{t_{j}}g_{kl}^{t_{k}}p\left(g_{jk}^{t_{j}}g _{kl}^{t_{k}}\mid\mathbf{g}_{i}^{t_{i}}\right)\]
This hypothetical interim expectation calculation fails to capture the process through which the agent \(i\) internalizes the possible types of its neighbors, its neighbors' neighbors and so on. In other words, only conditioning on its own type \(\mathbf{g}_{i}^{t_{i}}\) makes it a more restricted measure. On the other hand, \(\beta_{i,t_{i}}^{(s)}\) gives us the process by which agent \(i\) internalizes the possible types of all agents on arbitrary walks starting from the agent.
### A Core-Periphery Example
To illustrate the disparity between the complete information Nash equilibrium and incomplete information BNEs, we consider a special class of networks that are quite popular in the networks literature and allow for closed form characterizations.
**Definition 4**.: Let \(n_{co}\in\{0,...,\,n-2\}\cup\{n\}\) and \(n_{p}=n-n_{co}\). The adjacency matrix of a core-periphery network assumes the following form:
\[\mathbf{g}^{cp}=\begin{pmatrix}\mathbf{g}_{n_{co}}^{c}&\mathbf{1}_{n_{co} \times n_{p}}\\ \mathbf{1}_{n_{p}\times n_{co}}&\mathbf{0}_{n_{p}\times n_{p}}\end{pmatrix}\]
where \(\mathbf{g}_{n_{q}}^{c}\) denotes the adjacency matrix of the complete network on \(n_{q}\) vertices, and \(\mathbf{1}_{n_{k}\times n_{s}}\) and \(\mathbf{0}_{n_{k}\times n_{s}}\) respectively denote the \(n_{k}\times n_{s}\) matrices of ones and zeros.
In words, a network has a core-periphery architecture if it consists of \(n_{co}\) vertices that are connected to all others called the core, and \(n_{p}=n-n_{co}\) vertices that are only connected to the core called the periphery. The star (\(n_{co}=1\)), the empty (\(n_{co}=0\)), and the complete network (\(n_{co}=n\)) are special cases of core-periphery networks.
Under complete information, symmetry in network position induces symmetric best responses implying that all core agents incur an identical effort level and all peripheral agents also incur an identical effort level. Letting \(a_{co}^{c}\) and \(a_{p}^{c}\) denote these complete information Nash equilibrium actions of a core and a peripheral player respectively, it can be shown that:
\[a_{co}^{c}=\frac{1+\lambda n_{p}}{1-\lambda(n_{co}-1)-\lambda^{2}n_{p}n_{co}}\]
\[a_{p}^{c}=1+\lambda n_{co}a_{co}^{c}\]
To compare this complete information Nash equilibrium with an incomplete information BNE in which this particular type of network structure has a critical function, we endow individuals with the ex-ante belief that the actual network over which the game is played has a core-periphery architecture. Moreover, we assume that any such network is equally likely to be selected by Nature. Formally, let \(\mathcal{G}_{n}^{CP}\subset\mathcal{G}_{n}\) be the collection of all possible core-periphery networks on \(n\) vertices whose cardinality is \(|\mathcal{G}_{n}^{CP}|=2^{n}-(n-1)\). Then, these ex-ante beliefs can be written as follows:
\[p_{cp}(\mathbf{g})=\begin{cases}\frac{1}{|\mathcal{G}_{n}^{CP}|}&\text{if} \,\mathbf{g}\in\mathcal{G}_{n}^{CP}\\ 0&\text{otherwise}\end{cases}\]
Observe that if Nature selects a graph according to this distribution, the type of an arbitrary player \(\mathbf{g}_{i}^{t_{i}}\) can fall in one of two categories. Either \(\sum_{i}g_{ij}^{t_{i}}=n-1\) in which case the agent knows it is the core, or \(\sum_{i}g_{ij}^{t_{i}}=n_{co}<n-1\) where it realizes it is in the periphery. Since players observe the identity of their neighbors, and they know that Nature draws some core-periphery network with certainty, an individual who realizes it is in the periphery is able to infer the architecture of the entire network. On the other hand, when an individual realizes it is in the core it does not know whether its neighbors are core or peripheral players. This information structure, together with
the fact that all core-periphery networks are equally likely to be chosen by Nature, leads to the following characterization.
**Proposition 1**.: _Suppose \(p=p_{cp}\) over \(n>3\) vertices and that nature has chosen a core-periphery network with \(n_{co}\) core nodes and \(n_{p}\) peripheral nodes. Let \(a_{co_{i}}^{*}\) denote the equilibrium action of agent \(i\) who has realized that it is in the core, and \(a_{p}^{*}\) the equilibrium action of a peripheral agent. Then, the BNE is given by_
\[a_{co_{i}}^{*}=\frac{1+\lambda\mathbb{E}_{i}\left[n_{p}\right]}{ 1-\lambda\mathbb{E}_{i}\left[n_{co}-1\right]-\lambda^{2}\mathbb{E}_{i}\left[n_ {p}n_{co}\right]}\] \[a_{p}^{*}=1+\lambda n_{co}a_{co}^{*}\]
_where_
\[\mathbb{E}_{i}\left[n_{p}\right]=(n-1)\frac{\sum_{k=1}^{n-2} \binom{n-2}{k-1}}{2^{n-1}-(n-1)},\quad\mathbb{E}_{i}\left[n_{p}n_{co}\right]= (n-1)\frac{\sum_{k=1}^{n-2}k\binom{n-2}{k-1}}{2^{n-1}-(n-1)},\] \[\mathbb{E}_{i}\left[n_{co}-1\right]=(n-1)\frac{\sum_{k=2}^{n} \binom{n-2}{k-2}-\binom{n-2}{n-3}}{2^{n-1}-(n-1)}\]
Observe that the functional form of the BNE is identical to the complete information Nash equilibrium. This is a consequence of the fact that when all the probability mass is distributed over core-periphery networks, an agent who realizes it is in the core is able to infer that the types of walks that it has in the network are identical to those it would have under complete information. Examples of these walks include walks from the core to the core via the core, walks from the core to the periphery via the core, etc. While walk types of a core agent are the same as the complete information case, the agent cannot infer the actual number of walks that it has. Nonetheless, the uniform assumption implies that any two walks of a particular type provide the same complementarity strength. This in turn leads to the characterization in Proposition 1.
With regard to peripheral agents, even though they know the architecture of the entire network (and hence the actual number of walks they have in the network), they do no exert \(a_{p}^{c}\) in equilibrium. This is because they internalize that core agents cannot infer the topology of the entire network themselves. Consequently, a peripheral agent conditions upon the fact that core agents will exert \(a_{co_{i}}^{*}\) in equilibrium, and exerts
an action which is equal to the actual complementarity it is able to extract from the network i.e., \(1+\lambda n_{co}a_{co_{i}}^{*}\).
Finally, unlike the complete information case where equilibrium actions of core and peripheral agents are strictly increasing with the size of the core, incomplete information actions do not change with the network realization. The following Lemma shows that whenever the size of the core is below half the population size, agents over-exert actions relative to the complete information Nash equilibrium.
**Lemma 1**.: _For any core-periphery network on \(n\geq 7\) vertices, if the size of the core satisfies \(n_{co}\leq n/2\), then \(a_{co}^{c}<a_{co_{i}}^{*}\)._
As a corollary, it also follows that when the core size is below half the population size, incomplete information action levels are closer to the first best. Under complete information, efficient actions solve the welfare maximization problem. As shown by Belhaj, Bervoets and Deroian (2016), with linear quadratic utility over any network \(\mathbf{g}\) and link complementarity strength \(\lambda\), the efficient action level of player \(i\) is given by \(a_{i}^{e}=\boldsymbol{b}\left(\mathbf{g},\,2\lambda\right)\) which is strictly greater than the Nash equilibrium level \(a_{i}^{c}=\boldsymbol{b}\left(\mathbf{g},\,\lambda\right)\).8
Footnote 8: In appendix D, we also numerically verify that for \(n\geq 7\), \(n_{co}<n/2\), and \(\lambda\leq\frac{1}{2(n-1)}\), welfare under incomplete information is larger than under complete information.
**Corollary 1**.: _For any core-periphery network on \(n\geq 7\) vertices, if the size of the core satisfies \(n_{co}\leq n/2\), then \(a_{co}^{c}<a_{co_{i}}^{*}<a_{co}^{e}\). If \(n_{co}>\frac{n}{2}\), then \(a_{co_{i}}^{*}<a_{co}^{c}<a_{co}^{e}\). Similarly for peripheral agents._
Apart from closed-form comparisons between complete and incomplete information equilibria, this core-periphery example highlights the interplay between private information and strategic behavior in network games with local complementarity. In particular, our results in this section allude to two conflicting intuitions that concord under the basis of strategic behavior. To illustrate further, we first note that the core-periphery networks belong to a larger class of networks known as nested split graphs (NSGs). These networks are representative of connection hierarchies (Konig et al. (2014)), consisting of agents whose neighborhood sets are nested. That is, agents are only linked to others who are in turn at least as connected as them. Core periphery networks are a special case of NSGs consisting of only two groups in the hierarchy.
Within such a hierarchical structure, one might expect that individuals at the top of the hierarchy will have access to more information compared to those at the bottom. On the contrary, for the hierarchical structure defined by a core-periphery network, peripheral agents have complete information about network architecture while core agents do not. This anti-hierarchical access to information arises from the ex-ante belief that the realized network admits a core-periphery architecture. The most interesting aspect of this network is that it clearly demonstrates how asymmetric information affects strategic behavior. Note that while the peripheral agents know that they are in the periphery they fully internalize the fact that the core players are unaware of the complete network structure. Hence their actions do not conform to the optimal action choice under complete information. This information asymmetry also has another interesting consequence: even though peripheral agents have more information than core agents, equilibrium behavior is still primarily driven by the actions of core players.
## 4 Non-monotonicity, Uniformity and Identity
As in most local complementarity network games, equilibrium actions in our game are primarily driven by connectivity metrics. Agents who are highly connected, and expect the complementarity strength of their action with other agents to be high, will exert high effort levels in equilibrium. However, as in the complete information Nash equilibrium of this game, first order connectivity alone is not sufficient to characterize general patterns of equilibrium behavior. In other words, equilibrium actions are not monotonically increasing in agents degrees.
To demonstrate, consider the following counter-example. Let \(N=\{1,2,\ldots,10\}\), and consider networks \(\mathbf{g}^{1}\), \(\mathbf{g}^{2}\in\mathcal{G}_{10}\) as shown in figure 5.
Suppose that the ex-ante distribution \(p\) satisfies:
\[p(\mathbf{g})=\left\{\begin{array}{ll}\pi&\mbox{if }\mathbf{g}=\mathbf{g}^{1} \\ 1-\pi&\mbox{if }\mathbf{g}=\mathbf{g}^{2}\\ 0&\mbox{otherwise}\end{array}\right.\]
Let \(N_{i}(\mathbf{g}^{k})\) denote the neighborhood set of the \(i^{th}\) agent in the network \(\mathbf{g}^{k}\), and observe that \(N_{i}(\mathbf{g}^{1})\neq N_{i}(\mathbf{g}^{2})\), \(\forall i\in N\). Consequently, belief consistency (Remark 1) implies that when either network \(\mathbf{g}^{1}\) or \(\mathbf{g}^{2}\) is realized, on learning their types agents will be able to determine the architecture of the entire network drawn. Moreover, since \(p\in\Delta_{A}(G)\) is common knowledge, they also learn that all other players know the entire network architecture. As a result, equilibrium actions in this game of incomplete information will be identical to equilibrium actions under complete information. Setting \(\lambda=0.11\), and concentrating on the equilibrium actions of agent \(7\) in each of the two networks, we find that \(a_{7}^{*}(\mathbf{g}^{1}_{7})=1.26156\) and \(a_{7}^{*}(\mathbf{g}^{2}_{7})=1.25026\). Therefore, even though \(|N_{7}(\mathbf{g}^{2})|>|N_{7}(\mathbf{g}^{1})|\) and \(N_{7}(\mathbf{g}^{2})\supset N_{7}(\mathbf{g}^{1})\), we have that \(a_{7}^{*}(\mathbf{g}^{2}_{7})<a_{7}^{*}(\mathbf{g}^{1}_{7})\).
While action monotonicity in degrees is not a general property of this game, it may, nonetheless, arise in equilibrium when the ex-ante distribution is uniform over the set of all networks. Below we have the following characterization.
**Proposition 2**.: _Let the underlying probability distribution be uniform over all networks. Then, equilibrium actions of agents over any realized network are given by:_
\[a^{*}=1+\frac{\lambda d}{1-\frac{n\lambda}{2}},\quad\forall d\in\{0,1,2,\ldots,(n-1)\}\]
_where \(d\) is the degree of an agent in the realized network._
This proposition is similar to Proposition 2 in Galleotti et al. (2010), as well as Lemma 3 in Jackson (2019). Similar to our environment, they both study games under network uncertainty where agent information is restricted to first order connectivity. They provide Bayesian Nash equilibria where the actions of agents are monotonically increasing in their degree. Unlike our setup, however, ex-ante beliefs in their models are over degree distributions with types being represented by degrees themselves. Since degree distributions carry no vertex specific information other than degree, the key assumption driving their characterizations is anonymity. This property states that even though agents are aware of the number of agents they are connected to, they are unaware of the identity of these adjacent agents.
While our result provides the same quantitative insight as their findings, establishing a condition for monotonicity to arise in equilibrium, we do not require the anonymity assumption. As seen from Theorem 2, the BNE of this game is the result of an expected walk calculation. These walks are computed for all possible sequences of nodes originating from the agent who computes them and require agent identity to be accounted for. Clearly, these expected walks will be different for different ex-ante distributions that will place higher probability mass on specific sets of networks containing specific sets of walks. While anonymity is not imposed in our model, Proposition 2 provides a condition under which it appears to arise in equilibrium. This is due to the fact that equilibrium actions are completely characterized by agent degrees, which in turn only require first order connectivity information. However, this is a consequence of the uniform distribution assumption and the corresponding expected walks it induces. As the following lemma shows, the uniform case exhibits the special property that the expectation an agent has about every other agent's degree (who is either connected or not connected to the former) is index invariant.
**Lemma 2**.: _For any player \(i\) denote by \(\mathbb{E}_{i}(d_{j_{1}}),\mathbb{E}_{i}(d_{j_{2}}),..\) the agent's interim expectations of any of its neighbor's degree, any of its possible neighbor's neighbor's degree and so on. When the underlying probability distribution is uniform, we have:_
\[\mathbb{E}_{i}\left[d_{j_{1}}\right]=\mathbb{E}_{i}\left[d_{j_{2}}\right]= \ldots\ldots=\frac{n}{2}\]
_Consequently,_
\[\beta_{i,t_{i}}^{(s)}=\left(\frac{n}{2}\right)^{s}\]
Uniformity in ex-ante beliefs provides the least amount of information with respect to identifying which walks are present in the network, inducing the trivial belief that all other agents have a degree equal to \(\frac{n}{2}\).9 Consequently, each agent expects the complementarity strength of its action with any other agent due to a walk of length \(s\) to be equal to \(\lambda^{s}\left(\frac{n}{2}\right)^{s}\). This in turn generates equilibrium actions as in Proposition 1 and _in-equilibrium_ anonymity.
Footnote 9: It is interesting to note that Lemma 1 is also related to a well-known paradox in the network theory called the “Friendship Paradox”. In words, this paradox states the expected number of friends that a typical person’s friend has is greater than the expected number of friends for any typical agent in the population. Jackson (2019) demonstrates that in an environment in which agents have ex-ante beliefs over any degree distribution, the friendship paradox arises as an interim belief of each player. In our case, while ex-ante beliefs are of a different nature, uniformity over these beliefs induces the same interim belief. This can be see from the fact that \(\mathbb{E}_{i}\left[d_{j_{1}}\right]=\frac{n}{2}>\frac{n-1}{2}=\mathbb{E} \left[d\right]\) where \(\mathbb{E}\left[d\right]\) is the expected ex-ante degree of any agent under the uniform distribution. This inequality, however, does not hold for any distribution over networks. For instance, setting \(\pi=\frac{1}{3}\) in the example at the beginning of section 4, we have that \(\mathbb{E}_{7}\left[d_{j_{1}}\right]=2.3<3.6=\mathbb{E}\left[d\right]\)
Lastly, we note that Lemma 2 also speaks to a second assumption that drives Proposition 2 in Galleotti et al. (2010), namely degree independence. In their setup, independence of degrees implies that the belief of a player who has degree \(d\), and that of another who has degree \(d+1\), regarding the degrees of each of their neighbors are the same. In our case, the same property holds, but it arises endogenously as a result of ex-ante uniformity.
To sum up, through the uniform distribution this section provides us a precise way to see the connection between Galeotti et al. (2010) type degree models and walk based models. Regardless of the distribution, the degree model only counts the number of links an agent has; connectivity in the rest of the network does not matter and therefore it automatically invokes anonymity. Walk based models on the other hand rely on local information, i.e., agents know the identities of their direct connections. Nature draws a graph using different probability distributions and connectivity in the rest of the network matters. It turns out, however, that under the uniform distribution, agents expectations over every other agent's degree are identical. Consequently, degrees determine everything and identity no longer matters. Anonymity, therefore, arises in-equilibrium under the uniform distribution. This may however not be the case for other admissible probability distributions.
Random Generation
In the previous sections, we have assumed that the ex-ante priors are prescribed by a probability measure over the set of all possible graphs. These ex-ante beliefs, however, are not unique in their ability to describe uncertainty over network topology. An alternative description, and one that has been widely employed in both theoretical (e.g. Dasaratha (2020)) and applied work (e.g. Zheng, Salganik, and Gelman (2006)), is random network generation. Formally, a random network model is a random matrix \(\mathbf{g}\) whose entries \(g_{ij}\) are distributed according to admissible densities \(f_{i\sim j}\) such that the realizations of \(\mathbf{g}\) are within some network class of interest. Intuitively, instead of having beliefs over specific network topologies \(\mathbf{g}\), agents may have beliefs over the process itself that generates \(\mathbf{g}\). In this section we argue that our approach to network games with incomplete information is robust to such ex-ante beliefs, and compute the corresponding Bayesian-Nash equilibria associated with a general class of generation models.
#### Generation Process
Our focus is on unweighted and undirected networks, and so we define the generation process via a collection of Bernoulli random variables. Consider the random variable \(g_{i\sim j}\) which takes the value \(1\) if there exists a link between \(i\) and \(j\), i.e., \(i\sim j\), and \(0\) otherwise. Since we do not consider self-loops, we set \(g_{i\sim i}=0,\text{ for all }i\in N\). Its distribution is given by:
\[f_{i\sim j}(g_{ij})=\mathbb{P}\left(g_{i\sim j}=g_{ij}\right)=\begin{cases} \pi_{ij}&\text{ if }g_{ij}=1\\ 1-\pi_{ij}&\text{ if }g_{ij}=0\end{cases}\]
Recall that an undirected graph \(\mathbf{g}\) is the collection of pairwise links between the players, or \(\mathbf{g}:=\{i\sim j\}\) such that \(i\sim j\in\mathbf{g}\) iff \(j\sim i\in\mathbf{g}\). Thus, the probability of a graph \(\mathbf{g}\) being realized is the joint probability of the existence of all the links in \(\mathbf{g}\) and the non-existence of all the links that are not in \(\mathbf{g}\). This is given by the joint distribution of \(\{g_{i\sim j}:i<j\}\) which we represent by \(f(.):\mathcal{G}_{n}\longrightarrow[0,1]\) where:
\[f(\mathbf{g})=\mathbb{P}\left(\{g_{i\sim j}=g_{ij}\in\{0,1\}:i<j\}\right)\]
Observe that if agents' type sets \(G_{i}\) and corresponding type space \(G\) are the same as
in section 2, then the Harsanyi transformation of the linear quadratic game remains the same as in this section as long as agents have common knowledge of \(f(\mathbf{g})\). Moreover, the preceding network generation process guarantees that Nature will produce some unweighted and undirected network with certainty, implying that consistency in beliefs (Remark 1) is also preserved. Hence, agent beliefs over generation processes themselves induce similar type contingent beliefs as those over network topologies, preserving the functional form of the system of best responses that characterize the BNE. Consequently, Theorem 2 still holds with the expected complementarity strength arising from walks of different lengths being determined by agents' updated beliefs over \(f(\mathbf{g})\).
**Erdos-Renyi and Homophilic Linkage**
In order to gain some insight into how beliefs over network generation processes translate to equilibrium play, we impose the assumption that links are formed independently. In this case, ex-ante priors assume the following form:
\[f(\mathbf{g})=\prod_{i<j}f_{i\sim j}(g_{ij})=\prod_{i<j}\pi_{ij}^{g_{ij}}(1-\pi _{ij})^{1-g_{ij}}\]
Given link independence, it follows that a player \(i\) who is of type \(\mathbf{g}_{i}\in G_{i}\) will assign a probability to its neighbor being of type \(\mathbf{g}_{j}\in G_{j}\) according to:
\[p(\mathbf{g}_{j}|\mathbf{g}_{i})=\prod_{k\neq i}\pi_{jk}^{g_{jk}}(1-\pi_{jk}) ^{1-g_{jk}} \tag{7}\]
Plugging equation (7) into the walk characterization of Theorem 2 gives the BNE of the game when agents have beliefs over a network generation process whose links are formed independently. In what follows we use these independent generation beliefs to characterize the Bayesian-Nash equilibria under a general class of generation models known as stochastic block models.
**Definition 5**.: Consider a partition of the agent set into \(m\geq 1\) groups \(A_{1},\,A_{2},..,\,A_{m}\subset N\) each consisting of \(n_{i}\) agents respectively such that \(\sum_{k}n_{k}=n\), \(A_{i}\cap A_{j}=\emptyset\), \(\forall i\neq j\) and \(\cup_{k}A_{k}=N\). The network generation process follows a _stochastic block_ model if
\[\pi_{ij}=\begin{cases}p_{k}&\text{if}\,i,j\in A_{k}\\ \epsilon&\text{otherwise}\end{cases} \tag{8}\]
where \(p_{k},\epsilon\in[0,1]\) and \(p_{k}\geq\epsilon\) for all \(k\in\{1,2,\ldots,m\}\).
Observe that in the trivial case \(m=1\), the stochastic block model reduces to the _Erdos-Renyi_ random network model where all links are formed with equal probability \(p\).10 When \(m>1\) stochastic block generation allows for community structure and homophily to appear in the network.11. The following proposition characterizes the BNE of the game for an arbitrary number of groups.
Footnote 10: Note that the stochastic block model also reduces to the Erdos-Renyi model when the economy consists of more that one group \(m>1\) but the linking probabilities are all equal i.e. \(p_{k}=\epsilon\) for all \(k\in\{1,2,..,m\}\)
**Proposition 3**.: _Suppose that the underlying network generation process follows a stochastic block model. Then, equilibrium actions of agents are given by:_
\[a_{k}(\boldsymbol{d})=1+\lambda\sum_{l=1}^{m}\gamma_{kl}d_{l}\]
_where \(a_{k}(\boldsymbol{d})\) is the action of an agent who belongs in group \(A_{k}\), and \(\boldsymbol{d}\equiv(d_{1},d_{2},\ldots,d_{m})\) is a vector of degrees in which \(d_{l}\) denotes the agent's degree with those agents group \(l\), i.e. \(d_{l}=\sum_{j\in A_{l}}g_{ij}\) where \(i\in A_{k}\). The values of \(\gamma_{kl}\) are given by the fixed point of the following system:_
\[\gamma_{kl}=1+\lambda\gamma_{lk}[(n_{k}-1)\epsilon+1]+\lambda \gamma_{ll}\left(n_{l}-1\right)p_{l}+\lambda\sum_{\begin{subarray}{c}s=1\\ s\neq k,l\end{subarray}}^{m}\gamma_{ls}n_{s}\ \ \forall k\in\{1,..,\,m\},\,\forall\,l\neq k\] \[\gamma_{kk}=1+\lambda\gamma_{kk}[(n_{k}-2)p_{k}+1]+\lambda\sum_{ \begin{subarray}{c}s=1\\ s\neq k\end{subarray}}^{m}\gamma_{ks}n_{s}\epsilon\ \ \forall k\in\{1,..,\,m\}\]
First, let us consider the single group case where underlying network generation process follows an Erdos-Renyi model with linking probability \(p\). In this case, equilibrium actions of agents reduce to:
\[a^{*}=1+\frac{\lambda d}{1-\lambda[(n-2)p+1]}\quad\forall d\in\{0,1,2,\ldots, (n-1)\}\]
where \(d\) is the degree of an agent in the realized network. This closed form characterization resembles the one in Proposition 2 where ex-ante beliefs are uniform over
the set of all networks. If the linking probability satisfies \(p=\frac{1}{2}\), then the two characterizations are identical. Similar to the intuition behind Proposition 2, when all links are formed with equal probability \(p\) every agent has the trivial belief that all others have a degree of \((n-2)p+1\).12 Therefore, each agent expects the complementarity strength of its action with any other agent due to a walk of length \(s\) to be equal to \(\lambda^{s}\,((n-2)p+1)^{s}\). As in the uniform case, these Erdos-Renyi beliefs produce in-equilibrium anonymity.
Footnote 12: This is because the agent conditions on the link with its neighbor, and other than the agent itself, its neighbors have at most \(n-2\) other neighbors each with probability \(p\). Therefore, the expected degree of any one of the agents’ neighbors is \((n-2)p+1\).
Next consider \(m>1\). In this case, unlike Proposition 2 and Erdos-Renyi generation, the equilibrium exhibits a _group identity_ property. In particular, while the degree of an agent is important in determining the total complementarity it expects to extract from the network, agents extract different complementarity strengths depending on the identity of the group in which their neighbors belong to. Specifically, an agent in group \(A_{k}\) extracts complementarities from its intra (\(d_{k}\)) and inter-group (\(d_{l}\)) neighbors according to the maginitutes of parameters \(\gamma_{kk}\) and \(\gamma_{kl}\). Each of these parameters represents the extent to which walks that are formed via group-specific neighbors are complementary.
The intuition behind this complementarity decomposition is best understood when \(m=2\). In this case, the linear system in Proposition 3 reduces to:
\[\gamma_{11} =1+\lambda\gamma_{11}[(n_{1}-2)p_{1}+1]+\lambda\gamma_{12}n_{2}\epsilon\] \[\gamma_{12} =1+\lambda\gamma_{21}[(n_{1}-1)\epsilon+1]+\lambda\gamma_{22}(n_ {2}-1)p_{2}\] \[\gamma_{21} =1+\lambda\gamma_{11}(n_{1}-1)p_{1}+\lambda\gamma_{12}[(n_{2}-1) \epsilon+1]\] \[\gamma_{22} =1+\lambda\gamma_{21}n_{1}\epsilon+\lambda\gamma_{22}[(n_{2}-2)p _{2}+1]\]
When the population consists of two groups, stochastic block generation gives rise to the interim belief that an agent in any given group can form four different types of walks. Fixing an agent \(i\in A_{1}\), these walks assume the following forms: (i) \(i\sim j\sim..\sim k\,\mbox{s.t}\;j,k\in A_{1}\), (ii) \(i\sim j\sim..\sim k\,\mbox{s.t}\;j\in A_{1}\), \(k\in A_{2}\), (iii) \(i\sim j\sim..\sim k\,\mbox{s.t}\;j\in A_{2}\), \(k\in A_{1}\), and (iv) \(i\sim j\sim..\sim k\,\mbox{s.t}\;j,k\in A_{2}\). Since links are formed independently, and since all links of a particular type are formed with the same probability, this implies that any two walks of the same type provide the same complementarity strength.
To see how these strengths are determined, consider an agent \(i\in A_{1}\). The agent knows that within its group, all links have a probability \(p_{1}\) being realized. Since links are independent, this implies that the agent expects that all others within its own group have a degree of \((n_{1}-2)p+1\). Hence, if \(\gamma_{11}\) represents the spillover strength extracted from each walk of type (i), then their total strength is \(\lambda\gamma_{11}[(n_{1}-2)p_{1}+1]\). Next, the agent also knows that its neighbors within the group are connected to others across the group with probability \(\epsilon\). Therefore, it also expects to have walks via its intra group neighbors to those agents in \(A_{2}\). Since the agent expects that its intra group neighbors have \(n_{2}\epsilon\) inter group neighbors, if \(\gamma_{12}\) represents the spillover strength extracted from each walk of type (ii), then their total strength is \(\lambda\gamma_{12}n_{2}\epsilon\). Summing the two terms, and multiplying by the agents intra group degree \(d_{1}\) gives the total complementarity the agent expects to extract from walks that start within its own group to all other agents (i.e. type (i) and type (ii) walks).
Next, consider inter group spillovers. By link independence, an agent \(i\in A_{1}\) expects any of its inter group neighbors have \((n_{1}-1)\epsilon+1\) inter groups neighbors and \((n_{2}-1)p_{2}\) intra group neighbors of their own. Therefore, a similar argument as above establishes that \(\lambda\gamma_{21}[(n_{1}-1)\epsilon+1]\) and \(\lambda\gamma_{22}(n_{2}-1)p_{2}\) are the total complementarity strength extracted from type (iii) and type (iv) walks respectively. Summing the two terms, and multiplying by the agents inter group degree \(d_{2}\) gives the total complementarity the agent expects to extract from walks that start across its group to all other agents (i.e., type (iii) and type (iv) walks).
In the special case where the intra-group linking probabilities are the same, the magnitudes of these complementarity strengths are completely characterized by group size.
**Lemma 3**.: _Suppose that \(m=2\), \(p_{1}=p_{2}\) and \(n_{1},\,n_{2}\geq 3\). If \(n_{1}\geq n_{2}+2\), then \(\gamma_{11}>\gamma_{12}\) and vice-versa. However, if \(n_{1}=n_{2}\) then \(\gamma_{12}>\gamma_{11}\)._
## 6 Conclusion
We study a linear quadratic network game of incomplete information in which agents' information is restricted only to the identity of their neighbors. We characterize Bayesian-Nash equilibria, demonstrating that agents make use of local information to
form beliefs about the number of walks they have in the network, and consequently the complementarity strength of their action with all other agents. Unlike other models in the literature, we show that local information captured by identity and network position play a crucial role in allowing agents to determine this complementarity. Even though equilibria for certain ex-ante prior beliefs exhibit in-equilibrium anonymity, this anonymity is a consequence of trivial information structures such as uniform priors or an Erdos-Renyi network generation process.
The proposed model is flexible enough that allows for the formal study of strategic behavior in networks under any form of ex-ante beliefs, regardless if these are over network topology or network generation. For any given prior, as long as agent information is restricted to their local neighborhood, the BNE of this game can be computed via the walk characterization developed above and can be directly compared to its complete information Nash equilibrium counterpart. In turn, this allows for the study of how rational agent behavior will deviate from complete information behavior within the multiplicity of network systems that have been modeled via the canonical linear quadratic game.
|
2308.03866 | Trusting Language Models in Education | Language Models are being widely used in Education. Even though modern deep
learning models achieve very good performance on question-answering tasks,
sometimes they make errors. To avoid misleading students by showing wrong
answers, it is important to calibrate the confidence - that is, the prediction
probability - of these models. In our work, we propose to use an XGBoost on top
of BERT to output the corrected probabilities, using features based on the
attention mechanism. Our hypothesis is that the level of uncertainty contained
in the flow of attention is related to the quality of the model's response
itself. | Jogi Suda Neto, Li Deng, Thejaswi Raya, Reza Shahbazi, Nick Liu, Adhitya Venkatesh, Miral Shah, Neeru Khosla, Rodrigo Capobianco Guido | 2023-08-07T18:27:54Z | http://arxiv.org/abs/2308.03866v1 | # Trusting Language Models in Education +
###### Abstract
Language Models are being widely used in Education. Even though modern deep learning models achieve very good performance on question-answering tasks, sometimes they make errors. To avoid misleading students by showing wrong answers, it is important to calibrate the confidence - that is, the prediction probability - of these models. In our work, we propose to use an XGBoost on top of BERT to output the corrected probabilities, using features based on the attention mechanism. Our hypothesis is that the level of uncertainty contained in the flow of attention is related to the quality of the model's response itself.
Confidence calibration Natural Language Processing Machine Reading Comprehension
## 1 Introduction and Background
The innovation that Deep Learning has brought in the era of Big Data is considered a breakthrough, since those models gave practitioners the ability to solve a wide collection of difficult problems that Classical Machine Learning approaches couldn't perform well [1, 2, 3, 4, 5]. For example, we have seen great improvements in the medical area using computer vision [6, 7], and also in Natural Language Processing (NLP) [8, 9], just to cite a few examples. This last area is going to be the focus of this paper.
Specifically, at CK-12, we have a Question & Answering (Q&A) that starting with an input query, goes through several stages of processing. After the final stage we arrive at a set of candidate paragraphs that are likely to contain an answer to the query. This final stage is a softmax that ranks the paragraphs according to how likely they are to contain the correct answer. we take the top-\(3\) highest probabilities generated from the softmax output and show them to the user.
The system is intended to receive all kinds of academic questions, and it should answer confidently when the question belongs to one of the domains the models were trained on, like biology, physics, math, etc. Some questions might be completely Out-of-Domain (see Fig. 1 (a), for some examples), or they might be slightly domain-shifted. In the latter case, we mean questions that have an intersection with the training topics, however a complete answer is not present at all in the corpus. For example, one may want to know about what is the Relativity Theory at a graduate level, and the model's predictions could be some very introductory answers at a high-school level of depth (Fig 1(b) illustrates some other examples to give a clear idea of domain-shifted questions). So, it's important to know when to abstain from answering a specific question, as this will mitigate the chances of misleading a student. In other words, a model's internal confidence should first be **reliable** for taking the decision to answer the question or not.
One major problem present in Deep Learning models is the confidence miscalibration. To be specific, let's consider a binary classification problem in the supervised learning setting. We know that, given an input instance represented by a finite feature-vector (in our case, it's a collection of tokens that are transformed into these vectors, also called word embeddings) of the form \(x=(x_{1},x_{2},x_{3},...,x_{n}),n\in\mathbb{N}\), after a series of nonlinear transformations over each layer, generally the Neural Network outputs a value of a sigmoid activation (or softmax activation, in case of a multiclass classification problem), which can be interpreted as the internal confidence the model has of its prediction. For instance,
given a sample of \(n=100\) queries, if the model outputs an internal score of \(80\%\) for each of them, it's expected that about 80 of the queries will have a correct answer from the model. The problem is that DL models usually suffer from highly miscalibrated scores, so a high confidence but wrong prediction (or a low confidence but right prediction) might happen; see Fig. 1 for instance. Note that a model's empirical accuracy might be high, but the reliability of the internal probability scores may still be uncalibrated. In a wide variety of applications, especially ones involving high-risks such as fraud detection and self-driving cars, to cite a few - a miscalibrated prediction can have a really high cost and therefore is not tolerable. In other words, the challenge is knowing when the model is unlikely to have the right answer. By having a calibrated score, we can set a threshold to decide when to refuse answering the question. The problem of miscalibration in DL models is explored in further details in [10] and also [11].
The approach we propose here in this paper to overcome this problem is to train an XGBoost on top of the final softmax stage, receiving as input features related to the preceding encoder system, and the tokens. This is a similar approach to [12]; our main improvement, however, comes from adding new attention-based features. First, we begin by interpreting the attention as a **flow**. Consider the attention from [CLS] to the [SEP] token, for instance; each layer can then be interpreted as a discrete-time step, where the attention is evolving as a flow. We consider that the attention flow is of important relevance as a feature for the calibrator (and confirm this in our experiments in section V), as it captures how the BERT-based model is relating the semantics (embeddings) of the answer to the question over time. As we see in section VI, our approach yields improved results over the previous work from [12].
## 2 Related works
Previous works in RC to improve confidence calibration have been done. In ([10]), several methods for confidence calibration of Deep Neural networks have been discussed. Their best method is the famous Temperature scaling, where the logits before the softmax activation are divided by some constant T, found via NLL optimization on the validation set; this has the effect of increasing the entropy of the model's output probability distribution. However, this doesn't change the AUC scores, as observed in section IV. For the sake of comparison, we also present the difference in results with respect to other approaches proposed in the paper, i.e.: Isotonic regression, Platt scaling and Temperature scaling.
In ([13]), a Gradient Boosting Machine is proposed to calibrate the confidences. This GBM is trained on several features, including query and answers token length, softmax probability of start and end of answer spans, and even features based on the Query Embedding Gradients of the model, among others. The problem is that our RC model doesn't work by generating answer spans. Instead, given a query and a paragraph, it outputs the probability that the latter contains a response to the query. Another problem is the increase in complexity for calculating gradients, which is crucial for CK-12 and a lot of systems.
In ([12]), the most similar to our approach, they use an XGBoost with multiple features: softmax scores, attention-based, and features related to the query and answer token lengths. Our improvements come from adding relevant attention features by interpreting them as a **flow**. From the attention flow, one can extract different informations, as presented in section IV. As a side note, in the paper, the authors put emphasis only on knowing when to refrain. We show, additionally, that not only the new calibrator has a better threshold for non-answerability, but the answerable questions also become better calibrated. This is shown in section V, in the reliability plots.
Now, we briefly explain the methods used in ([10]) that were also tested in this work.
### Platt Scaling
([14]). The main idea here is to use a parametric approach for calibration. Basically, the prediction labels are used to train a logistic regression model. This calibrator model is trained on the **validation** set to return calibrated probabilities. For Deep Learning models, this approach has the following output, after learning parameters \(a\) and \(b\)\(\epsilon\)\(\mathcal{R}\) from optimizing NLL loss:
\[\hat{q}_{i}=\sigma\left(az_{i}+b\right)\]
One important detail is that this is a post-processing calibration method, that is, the Neural Network's parameters are frozen during optimization of \(a\) and \(b\).
### Temperature Scaling
Here, we have a specific case of Platt Scaling, since what this method does is learning a scalar parameter \(T>0\) (also found via NLL optimization), and produces the following output:
\[\hat{q}_{i}=\max_{k}\sigma_{SM}\left(\frac{z_{i}}{T}\right)^{k}\]
Given \(n\) logit vectors \(z_{1},...,z_{n}\) and labels \(y_{1},...,y_{n}\), then there is a unique \(T\) that corresponds to the unique solution for the following entropy maximization problem:
\[\begin{array}{rl}\max_{T}&-\sum_{i=1}^{n}\sum_{k=1}^{K}T\left(\mathbf{z}_{i} \right)^{(k)}\log T\left(\mathbf{z}_{i}\right)^{(k)}\\ \text{subject to}&T\left(\mathbf{z}_{i}\right)^{(k)}\geq 0\quad\forall i,k\\ &\sum_{k=1}^{K}T\left(\mathbf{z}_{i}\right)^{(k)}=1\quad\forall i\\ &\sum_{i=1}^{n}z_{i}^{(y_{i})}=\sum_{i=1}^{n}\sum_{k=1}^{K}z_{i}^{(k)}T\left( \mathbf{z}_{i}\right)^{(k)}.\end{array} \tag{1}\]
For which a detailed proof is found in ([10]). Essentially, it's preventing the logits before the activation to be pushed into extreme boundary regions of the softmax, and hence it's raising the entropy of the model's output. One caveat is that, since Temperature Scaling doesn't change the softmax maximum value (that is, it preserves the order between the outputs), it doesn't change the accuracy of the model, and hence also doesn't improve AUC, as we observe in Section VI.
### Isotonic Regression
Unlike Platt Scaling, this is a nonparametric regression method for calibration ([15]), a piecewise constant function \(f\) is learned to transform the uncalibrated outputs of a model: \(\hat{q}_{i}=f(\hat{p}_{i})\). The optimization problem is formally written as:
\[\begin{array}{rl}\min_{\begin{subarray}{c}\theta_{1},...,\theta_{M}\\ a_{1},...,a_{M+1}\end{subarray}}&\sum_{m=1}^{M}\sum_{i=1}^{n}\mathbf{1}(a_{m} \leq\hat{p}_{i}<a_{m+1})(\theta_{m}-y_{i})^{2}\\ \text{subject to}&0=a_{1}\leq a_{2}\leq...\leq a_{M+1}=1\\ &\theta_{1}\leq\theta_{2}\leq...\leq\theta_{M}.\end{array} \tag{2}\]
Where \(M\) represents the number of intervals, each \(a_{1},...,a_{M+1}\) is an interval boundary, and \(\theta_{1},...,\theta_{M}\) are the function values. Basically, \(f\) is found such that it minimizes the square loss \(\sum_{i=1}^{n}(f(\hat{p}_{i})-\hat{y}_{i})^{2}\).
## 3 Q&A system
The CK-12 Q&A system is fundamentally based around variations of the BERT language model that have been optimized to perform well on the CK-12 corpus of academic content. The straightforward application of BERT in Q&A often follows the instructions provided by its authors in the form of fine-tuning the vanilla BERT on the SQUAD data set. For a given question, the output of this system would be the span of text from the available corpus that likely constitutes an answer to the query. While this might be suitable for trivia type questions (e.g. "What is the capital of Brazil?"), we found that for academic questions it leaves out important context which is a crucial part of the answer. For instance, in response to the question "How many different types of volcano are there?", a model fine-tuned on SQUAD might answer with "Four kinds". Which is indeed the correct answer, but incomplete. A better answer might look like "Four kinds: cinder cones, composite volcanoes, shield volcanoes, and lava domes".
To that end, we devised our Q&A system to answer with complete paragraphs so as to provide the students with enough context around the question. Starting with a query from the student, we employ multiple BERT language models to encode the question and try to find a matching paragraph for it. Specifically, for a given query \(q\), the system outputs a conditional distribution over the available paragraphs, given \(q\): \(P(p_{j}|q)\) for \(p_{j}\) the \(j^{th}\) paragraph. From this distribution, we pick the maximum (or top \(N\) maximum) likelihood paragraphs. Therefore, effectively the answer comes in the form of
\[\arg\max_{j}P(q_{j}|p)\]
The straightforward application of above will always pick an answer for any query, and fails to properly navigate the various situations where the question cannot be answered (e.g. an out of domain question). Therefore it is necessary to have additional measures in place to sort out a maximum likelihood candidate paragraph that truly answers the question, from one that does not.
### Vaswani's original formulation
An important mechanism present today in the vast majority of the BERT-based models is the concept of **attention**. It is a mathematical formulation in the model's architecture that allows it to capture how itself is paying attention (and hence the name) to different tokens in the text. It is based on the idea that each word should have an importance to the meaning of the text. It can be described by mapping a query and a pair of key-value to some output, where all of these are vectors. The output is written as a weighted sum of the values, where the weights are given by a compatibility function between the query and the respective key. For Transformers, specifically, the model uses a similar concept called **Scaled Dot-Product Attention**, that was originally proposed by [16], and its input consists of queries and keys of dimension \(d_{k}\), and values of dimension \(d_{v}\). A dot-product between \(Q\) with all keys \(K\) is computed; then, each result is divided by \(\sqrt{d_{k}}\), and a softmax is applied to get the weights of the values.
In practice, a set of queries is assembled into a single matrix \(Q\), and the same is done for the set of keys and values, represented in the end by matrices \(K\) and \(V\). Then, the output (which is also a matrix, then), is mathematically described as:
\[Attention(Q,K,V)=softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{3}\]
The authors argue that the reason for the scaling factor in the formula \(\frac{1}{\sqrt{d_{k}}}\) is to prevent vanishing gradients, since the dot-products grow too large in magnitude as \(d_{k}\) increases, thus pushing the softmax function to regions where its gradient becomes small.
Another important detail is that in BERT-based models, there are many attention functions running in parallel per each layer, thus projecting the queries, key and values to different learned linear projections with dimensions \(d_{k},d_{k},d_{v}\). There is some evidence that each instance of this function - also called an attention head - contributes to understanding different parts of the semantic [17; 18]. In our model, specifically, we have 12 heads. See 2 for a visual explanation.
## 4 Design of confidence measure and its features
Given the needed background, in this work, our improvements come from interpreting the attention as a flow. Consider the attention from [CLS] to the [SEP] token in the first layer, for the first head, for instance; this gives us a scalar value.
Figure 1: BERT’s multihead attention mechanism illustrated. [16]
Each layer can then be interpreted as a discrete-time step, where the attention is evolving as a flow. We consider that the **attention flow** is of important relevance as a feature for the calibrator (and confirm this in our experiments in section V), as it captures how the BERT-based model is relating the semantics (embeddings) of the answer to the question over time.
### Shannon's Entropy
The idea of entropy has been proposed in multiple fields through history, like in statistical mechanics [19], in Ergodic Theory [20], and also in Information Theory [21]. Although all three definitions have connections, here we are interested in the latter. Given some random variable X and its distribution \(p(x)\), Shannon's entropy [21] is defined as:
\[H(p)=-\sum p(x)\ log\ p(x)\]
Shannon's entropy allows one to understand the notion of information as the unpredictability of \(p\), or the number of bits needed to describe the distribution. So, \(p\) in our case is the vector that represents the **attention flow**. Then, it is straightforward to see that measuring the entropy of \(p\) is measuring the unpredictability, or the **information** (in the sense of Information Theory) contained in the **flow**. We hypothesize here that this should be a valuable feature for the calibrator, and we'll see in section VI that the experiments confirm it is among the most important features.
### Delta scores
Another way of incorporating the flow information ("information" now being used in its usual meaning) is by calculating the _delta scores_. Given the vector of attention flow \(A=\{A_{1},A_{2},A_{3},...,A_{N}\}\), being \(N\) the number of layers, we define the delta of the flow as:
\[\delta_{A}=\{A_{i+1}-A_{i}:i=1,2,3,...,N-1\}\]
We also tested the idea of delta scores for the model's top-3 probabilities, which ended up having a strong feature importance, as shown in section VI.
### Confidence measure
As a way of comparing how well a model's internal probability is aligned with the true confidence measure, one useful metric that captures the notion of miscalibration is the **Average Calibration Error** (ACE), that works by partitioning the confidence and probability intervals (0-1) into \(M\) equally-spaced bins, and then takes the average weighted sum of each bins' accuracy/confidence difference. Mathematically speaking, we have:
\[ACE=\sum_{m=1}^{M}\frac{|B_{m}|}{n}|acc(B_{m})-conf(B_{m})|\]
Where \(n\) is the number of samples. To emphasize the worst calibration error observed among the bins, we also measured the **Maximum Calibration Error** (MCE). Formally, we have:
\[MCE=\max_{m\in\{1,2,...,M\}}|acc(B_{m})-conf(B_{m})|\]
It's particularly useful in high-risk applications, where one wants the worst-case error to be as minimum as possible. Ideally, for a perfectly calibrated model, both ACE and MCE is 0.
## 5 The New calibrator model
Our proposed model is an XGBoost, which is an ensemble model, and for which its objective function is a logistic regression for binary classification; that is, it outputs a probability. The loss function is formally given as:
\[J(\theta)=\frac{1}{n}\left[\sum_{i=1}^{n}-y^{(i)}\log\left(p_{\theta}\left(x^ {(i)}\right)\right)+\left(1-y^{(i)}\right)\log\left(1-p_{\theta}\left(x^{(i)} \right)\right)\right]\]
Where \(n\) is the total number of samples, and \(p_{\theta}(x^{i})\) is the probability the XGBoost model yields to the \(i\)-th data instance. In this work, we consider all the features used in [12], and also add the attention flow-based ones to get even better AUC and calibration. In total, we have the following features for the new calibrator:
* Length of query and top-3 answer tokens.
* Top-3 softmax scores.
* Variance of top-3 softmax scores.
* Deltas of top-3 softmax scores.
* Attention-based feature from the base calibrator (see [12] for more details).
* Entropy and deltas of attention flow.
## 6 Experiment and results
### Dataset
For training our calibrator, we considered in our experiments three types of data: in-domain, domain-shifted data, and completely out-of-domain data. In practice, the Calibrator's labeling receives 1 when there was at least one right answer among the top-3 most probable paragraphs, and 0 otherwise. By doing so, we have a simple binary classification problem, and yet we can identify when the model probably has at least one right prediction.
### Results
Given the sufficient theoretical background, we present the results obtained at CK-12 real questions. First, we present the ACE, MCE and AUC scores for all the calibration methods tested:
From the table, we can see that the only methods that greatly diminish both ACE and MCE, while increasing AUC substantially, are the Platt scaling, the Base calibrator and the New calibrator. From those 3, the New calibrator outperforms all. Also, although Temperature scaling diminished ACE, it increased MCE, and didn't change AUC. We also present the reliability plots and the ROC curves below.
Figure 2: Reliability plots
The top 5 feature importances from the XGBoost (New calibrator) are shown below:
From this, we see that attention flow features and the raw softmax probabilities make the most important features for the calibrator. We also present the results for the top 20 most important features, for the sake of comparison:
Figure 3: ROC curves with respectives AUCs
### Domain examples
To illustrate the difference between in-domain (ID), domain-shifted (DS) and out-of-domain (OOD) questions, we present below some query examples, and the top 3 answers the model yields in each situation.
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline ID questions & finetuned CK12 BERT predictions \\ \hline & 1. could he be an aging rock star? he’s not a famous musician, but he’s just as famous as many celebrities. his name is albert einstein, and he's arguably the most important scientist of the 20th century. einstein really shook up science with his discoveries in the early 1900s. that may sound like a long time ago, but in terms of the history of science, it's as though it was only yesterday. \\ & 2. albert einstein (1879-1955) deductive reasoning has helped us determine that albert einstein is a mortal being. \\ & 3. einstein's equation is possibly the best-known equation of all time. \\ \hline & 1. Water is a compound that consists of the elements hydrogen (h) and oxygen(o). like other compounds, the smallest particles of water are called molecules. each molecule of water (h2o) contains two atoms of hydrogen and one atom of oxygen. \\ What is water made of? & 2. Water is probably one of the simplest compunds that you know. a water molecule is made of two hydrogen atoms and one oxygen atom (figure below). all water molecules have the same ratio: two hydrogen ions to one oxygen ion. \\ & 3. Water is a binary compound composed of hydrogen and oxygen. the hydrogen and oxygen gases produced in the reaction are both diatomic molecules. \\ \hline \end{tabular}
Figure 4: Feature importances for the New calibrator.
From this, we can see different regimes. In the first situation, the user prompts a question to which the right answer is present in the corpus of possible responses. We call these queries in-domain questions. In another regime, the user might ask a question that contains only a partially right or related answer in the corpus. For example, the user might want to know about the General Theory of Relativity, and the corpus might contain an introductory high-school level of answer to this, but a complete dissertation is not present in the corpus at all. In the last situation, we consider completely out-of-domain questions, where no right answer at all is present in the corpus. These are questions like "Are you a bot?", or "When is your birthday?".
## 7 Conclusion
In this paper, we saw the importance of confidence calibration for Education Q&A systems. By having a calibrated confidence, we mitigate the chances of misleading a student, because it allows one to not only have a precise estimation of how good an answer is, but also when the model should refrain from answering a question; these are crucial components in Education Q&A systems. By adding the attention-flow-based features, we saw good improvements in
our models, both in AUC by increase in 4 points, and also in ACE/MCE by reduction in 4.46% and 6.32%, respectively, all compared to the best previous approach in [12].
## Declarations
### Ethical Approval
This work was supported by CK-12 Foundation.
### Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this paper.
### Data availability
Data availability is not applicable for this article.
## Acknowledgements
R.C.Guido gratefully acknowledges the grants provided by the Brazilian agencies "National Council for Scientific and Technological Development (CNPq)", Brazil and "The State of Sao Paulo Research Foundation (FAPESP)", Brazil, respectively through the processes 306808/2018-8 and 2021/12407-4, in support of this research.
|
2303.08499 | Dynamical evolution of basaltic asteroids outside the Vesta family in
the inner main belt | Basaltic V-type asteroids are leftovers from the formation and evolution of
differentiated planetesimals. They are thought to originate from mantles and
crusts of multiple different parent bodies. Identifying the links between
individual V-type asteroids and multiple planetesimals is challenging,
especially in the inner part of the main asteroid belt, where the majority of
V-type asteroids are expected to have originated from a single planetesimal,
namely, (4) Vesta.
In this work, we aim to trace the origin of a number of individual V-type
asteroids from the inner part of the main asteroid belt. The main goal is to
identify asteroids that may not be traced back to (4) Vesta and may therefore
originate from other differentiated planetesimals.
We performed a 2 Gy backward numerical integration of the orbits of the
selected V-type asteroids. For each asteroid, we used 1001 clones to map the
effect of orbital uncertainties. In the integration, we use information on
physical properties of the considered V-type asteroids such as pole
orientation, rotational period, and thermal parameters.
The majority of V-types in the inner main belt outside the Vesta family are
clearly Vesta fugitives. Two objects, namely, (3307) Athabasca and (17028) 1999
FJ$_{5}$, show no clear dynamical link to (4) Vesta. Together with (809) Lundia
(from our previous work), these objects could represent the parent bodies of
anomalous HED meteorites such as the Banbura Rockhole. Furthermore, some
objects of the low-inclination population cannot be traced back to (4) Vesta
within the 2 Gy integration. | Volodymyr Troianskyi, Pawel Kankiewicz, Dagmara Oszkiewicz | 2023-03-15T10:19:32Z | http://arxiv.org/abs/2303.08499v1 | # Dynamical evolution of basaltic asteroids
###### Abstract
Context:Basaltic V-type asteroids are leftovers from the formation and evolution of differentiated planetesimals. They are thought to originate from mantles and crusts of multiple different parent bodies. Identifying the links between individual V-type asteroids and multiple planetesimals is challenging, especially in the inner part of the main asteroid belt, where the majority of V-type asteroids are expected to have originated from a single planetesimal, namely, (4) Vesta.
Aims:In this work, we aim to trace the origin of a number of individual V-type asteroids from the inner part of the main asteroid belt. The main goal is to identify asteroids that may not be traced back to (4) Vesta and may therefore originate from other differentiated planetesimals.
Methods:We performed a 2 Gy backward numerical integration of the orbits of the selected V-type asteroids. For each asteroid, we used 1001 clones to map the effect of orbital uncertainties. In the integration, we use information on physical properties of the considered V-type asteroids such as pole orientation, rotational period, and thermal parameters.
Results:The majority of the studied objects can be traced back to the Vesta family within 2 Gy of integration. The number of objects of the low-inclination V-types did not reach the boundary of the Vesta family during the integration time. Two asteroids, namely, (3307) Athanabsca and (17028) 1999 FJ\({}_{3}\), do not show a dynamic link to (4) Vesta. Increasing the integration time for these objects leads to further separation from (4) Vesta.
Conclusions:The majority of V-types in the inner main belt outside the Vesta family are clearly Vesta fugitives. Two objects, namely, (3307) Athanabsca and (17028) 1999 FJ\({}_{5}\), show no clear dynamical link to (4) Vesta. Together with (809) Lundia (from our previous work), these objects could represent the parent bodies of anomalous HED meteorites such as the Banbura Rockhole. Furthermore, some objects of the low-inclination population cannot be traced back to (4) Vesta within the 2 Gy integration.
## 1 Introduction
Generally, V-type asteroids are known to trace the history of differentiated planetesimals in the Solar System (Nesvorny et al., 2008). These bodies were the precursors of terrestrial planets and thus hold clues to planetary formation and the more general evolution of the Solar System. Specifically, the number of differentiated planetesimals and their distribution map the evolution of our planetary system (Burbine et al., 2002; Scott et al., 2015). The planetesimal formation and evolution theory by Bottke et al. (2006b); Bottke (2014) suggest that these bodies formed close to the Sun in the terrestrial planet region. Other authors have suggested wider formation reaction ranges (1.3 au - 7.5 au) (Lichtenberg et al., 2021). These bodies were then collisionally disrupted and scattered into the main asteroid belt. Some of the fragments were then recovered on Earth as meteorites. In particular, iron meteorites, which are believed to originate from the iron cores of differentiated planetesimals, suggest that there were 100-150 such bodies in the early Solar System (Burbine et al., 2002). Fragments of those disrupted bodies should still be plentiful in the Main Asteroid Belt, especially in the inner section (a \(<\)2.5 au), which is dynamically easier to evolve from the terrestrial planet region and from which most meteorites come (Bottke et al., 2006b).
However, up-to-date spectral observations do not show a large number of distinct V-types (parts of mantles of the differentiated bodies) across the Solar System. Most V-types reside in the inner main belt in the vicinity of the fossil planetesimal (4) Vesta and are therefore considered related (Binzel & Xu, 1993; Bus & Binzel, 2002; DeMeo et al., 2009; Moskovitz et al., 2008b, 2010; Popescu et al., 2018; Oszkiewicz et al., 2019, 2020). Most are parts of the Vesta family or considered Vesta fugitives, that is, objects that escaped the borders of the family through the combination of the Yarkovsky effect and dynamical resonances (Nesvorny, 2015; Nesvorny et al., 2008). Objects genetically related to (4) Vesta are commonly named Vestoids and those that cannot be linked to (4) Vesta are known as non-Vestoids. There is strong evidence linking (4) Vesta, Vestoids, and Howardite-Lucrite-Diogine meteorites (HEDs), including those delivered by the NASA Dawn mission. This link between HEDs, Vestoids, and (4) Vesta was first identified in 70 ties through spectral observations (McCord et al., 1970). Subsequent observations of other V-types that extended across the 3:1 and \(\nu_{6}\) Jovian resonances have provided a plausible Earth-delivery scenario (Binzel & Xu, 1993; Burbine et al., 2001). More de
tailed comparative petrologic and geochemical measurements of the Vesta's surface and HEDs further strengthened that link (McSween Jr et al., 2013). The two largest craters on the surface of (4) Vesta are thought to be the main source of most Vestoids and HEDs (Thomas et al., 1997; Marchi et al., 2012; Schenk et al., 2012). The age of those craters (Rhesaikvia and Veneneia) are thought to be 1Gy and 2Gy, respectively, and correspond to the age of the Vesta family (Schenk et al., 2022; Spoto et al., 2015). Taking into account the above considerations described in the references cited above, we chose an integration time of 2 Gy for our dynamic simulation described below, as best comparable to the age of the Vesta family.
Some unique V-types were identified beyond the 3:1 and 5:2 mean motion resonances with Jupiter. The first of those was (1459) Magnya, which was recognized as basaltic in the early 2000s Lazzaro et al. (2000). Based on dynamical investigations, Michtchenko et al. (2002) suggested that this \(\sim\)30 km asteroid located beyond 2.8 au from the Sun is most probably a part of other (than (4) Vesta) planetesimal that existed in the outer parts of the main belt. Hardersen et al. (2004) further substantiated this claim by finding the discordant pyroxene chemistry of (1459) Magnya compared to that of (4) Vesta. Currently, a dozen other V-type asteroids have been identified in the middle and outer parts of the main belt (Duffard & Roig, 2009; Hammergren et al., 2006; Ieva et al., 2016, 2018). Roig et al. (2008) has shown that large asteroids (\(>5\) km) in the middle of the main belt region have a low probability \(\sim\)1% to have evolved from (4) Vesta through a combination of the Yarkovsky effect and dynamical resonances. Ieva et al. (2016) and Leith et al. (2017) analyzed their spectroscopic and mineralogical properties and found surface compositions that are not compatible with that of (4) Vesta. Ieva et al. (2018) suggested that the Eos asteroid family in the outer main belt could be the source of some of the V-types in this region. The dynamical evolution of V-type candidates also suggests that the parent bodies of the Eunomia and Mexia / Agnia families could also be a potential source of V-types in the middle and outer main belt (Carrupa et al., 2014). Huaman et al. (2014) identified three possible source regions of the V-types in the outer main belt, associated with the parent bodies of (1459) Magnya, (349) Debowska, and (221) Eos. Objects in the middle and outer main belt could also be delivered to its current locations from the inner main belt through the so-called "jumping Jupiter" mechanism (Brasil et al., 2017; Migliorini et al., 2021).
Theoretically, the number of V-type fragments originating from distinct planetesimals should be even greater in the inner main belt (Bottke et al., 2006b). However, most of the V-type asteroids in the inner belt of the main belt are parts of the Vesta family or are considered fugitives (Nesvorny et al., 2008), which are objects that evolved away from the family and are now beyond recognition as family members with traditional clustering methods (Nesvorny et al., 2015). Early studies showed that some of the V-types in the inner main belt present deeper 1.0 \(\mu\)m absorption bands than (4) Vesta (Florczak et al., 2002). However, these findings could not be confirmed by Ieva et al. (2016, 2018). Recently, Oszkiewicz et al. (2023a) showed that most V-type asteroids in the inner main belt have spectral properties that overlap with that of the Vesta family. However, few V-type asteroids were found with an unusually deep 0.9 \(\mu\)m band depth, which is more consistent with those of (1459) Magnya (Lazzaro et al., 2000); thus, deeper mineralogical analysis is required.
There is limited evidence that in addition to a large population of Vestoids, there might be some non-Vestoids present in the inner main belt. Bland et al. (2009) and Spurny et al. (2012) observed a fall of anomalous HED meteorite (V-type material not related to (4) Vesta) and estimated there was a very high probability (of 98%) that it had originated from the innermost main belt. Oszkiewicz et al. (2015) showed that due to its observationally constrained prograde rotation (809), Lundia is unlikely to be a former Vesta family member. Earlier dynamical work on (809) Lundia suggested a link to (4) Vesta, but did not consider the prograde rotation of the object (Carruba et al., 2005).
In this work, we investigate a number of V-type asteroids in the inner main belt with known spin axis coordinates. We performed a numerical integration on the 2 Gy scale (estimated age of the Vesta family (Spoto et al., 2015; Schenk et al., 2012)) to investigate their possible origin and verify the hypothetical presence of non-Vestoids in the inner main belt. In Sect. 2, we report the objects studied. In Sect. 3, we describe the dynamical model used in this study. The results are presented in Sect. 4, along with a discussion of our results in Sect. 5 and conclusions in Sect. 6.
## 2 Target selection
We selected V-type asteroids (identified spectrally or by multi-filter photometry) outside the dynamical Vesta family and for which spin and shape are determined. The family membership was extracted from Nesvorny (2015). The selected objects are listed in Table 1. Detailed knowledge of the sidereal period and the sense of rotation (along with the size and other parameters) determines the direction and scale of the Yarkovsky drift and is incorporated into into the dynamical integration.
We further divided the objects into categories based on orbital elements (see Table 1 and Fig. 1): Fugitives - objects outside the dynamical Vesta family having \(2.1\,\mathrm{au}<a\leq 2.3\,\mathrm{au},5\ ^{\circ}<i<8\ ^{\circ}\), and \(0.035<e<0.162\); Low-\(i\) - objects outside the dynamical Vesta family having \(2.3\,\mathrm{au}<a\leq 2.5\,\mathrm{au}\) and \(i<6\ ^{\circ}\); Inner other - remaining objects in the inner main belt outside the dynamical Vesta family.
The division into fugitives, low-\(i\), and inner other is consistent with previous spectral and dynamical studies of V-types (Ieva et al., 2016, 2018; Nesvorny et al., 2008; Oszkiewicz et al., 2023a). Fugitives are objects with a smaller semimajor axis than Vesta family members and comparable orbital inclination and eccentricity. In principle, those objects should be easily explained by migration from the Vesta family. Low-inclination objects (with orbital inclinations smaller than typical members of the Vesta family) could not be previously fully explained (in a dynamical sense) based on the assumption of an impact on (4) Vesta 2 Gy ago. Furthermore, lithological differences between Vesta family members and low-\(i\) asteroids were reported by Mansour et al. (2020). Nesvorny et al. (2008) suggest that these objects originated in an earlier impact on (4) Vesta, plausibly 3.9 Gy ago, or originated from a different differentiated parent body. The last population are the remaining V-types in the inner main belt.
## 3 Dynamical model
In order to apply an appropriate dynamical model to the long-term numerical integration, we adopted a very similar approach as in our previous work on the asteroid (2579) Spartacus (Oszkiewicz et al., 2019) and asteroids from the Flora region (Oszkiewicz et al., 2015). The main tool was the swift_rmvsy software developed by Broz (2006), which is a modification of the swift_rmvs method from the Swift package (Duncan et al., 1998).
The initial elements of the asteroids and their errors (uncertainities) with planetary data were taken from the JPL Horizons
and related SBDB database1. The simulation starting point was unified to JD 2459200.5 (AD 2020, December 17). The rmvsy method (Broz, 2006) was used for integration, applying the type of regularization used in the original rmvs algorithm (Levison & Duncan, 1994; Duncan et al., 1998). This approach involves reducing the integration step when close encounters occur between massless test particles and massive bodies at distances smaller than 3 Hill radii. In practice, small bodies spend most of their integration time outside this distance range, and then the calculations are performed with a fixed-step symplectic MVS integrator, using Wisdom-Holman scheme (Wisdom & Holman, 1991). This procedure optimizes the computation time. In our simulations, the basic integration step was set to 10 days. A control dump of the data occurred every 1,000 years, but for presentation purposes, the data were subsequently filtered, giving the equivalent of an output of orbital elements every 100,000 years.
Footnote 1: [https://ssd.jpl.nasa.gov/horizons/](https://ssd.jpl.nasa.gov/horizons/)
We used the concept of virtual clones (virtual massless test particles). To reproduce the real distribution of observational errors, we proposed the following solution: for each rotation model, we used 1001 clones of each asteroid, distributed along the variation of orbital elements. This was achieved by making the error distribution of the elements of the clones identical to that of the actual observational errors as a multidimensional Gaussian distribution. The variation of each orbital element corresponds to the original orbit determination errors (1-\(\sigma\)), so the dispersion of the clones in the six-dimensional space of orbital elements corresponds to the spread of the original observations. As a result, the initial conditions generated a scattered cloud in the six-element space according to the given Gaussian distribution. In the next stage, the integrator used Cartesian positions and velocities. Finally, for presentation purposes, we used the averaged proper elements that we derived from the functions and routines of the swift_rmvsy package (Broz, 2006). This tool contains an internal set of filtering routines, known as the 'proper-elements filter', which partially eliminates the effects of short-period perturbations related to planet-specific precession frequencies. The whole filtering procedure allowed for a better visualization of the potential impact of non-gravitational effects on the migration of selected asteroids. The cases in which clones were ejected from the system during backward integration were marginal (a few objects per thousand throughout the integration), since most of the 27 orbits studied are generally considered dynamically stable on the time scale we used.
The cloud of clones was integrated backward for 2 billion years, with different models assuming various perturbations. This interval of time is comparable to the age of the Vesta family or slightly longer. The estimated ages of the Vesta family are related to two cratering events 1 and 2 Gy ago (Spoto et al., 2015). In the simplest model (grav. model) we assumed only gravitational forces. This model includes the Sun and eight perturbing planets, with the possibility of more subtle perturbations from the largest asteroids tested in advance. As we conducted for the (2579) Spartacus object (Oszkiewicz et al., 2019) previously, here we performed a preliminary test of the extended dynamical model with the asteroids (1) Ceres, (2) Pallas, (4) Vesta, and (10) Hygiea (CPVH) on a limited time scale of \(10^{8}\) years before the main simulation. The aim was to test whether close approaches to these asteroids could potentially affect our final results. In addition to the previously studied (2579) Spartacus, in this way, we checked four asteroids whose elements evolve most closely to the centroid of the Vesta family: (2432) Soomana, (7899) Joya, (3536) Schleicher, and (18641) 1998 EG\({}_{10}\). All four asteroids have had close approaches recorded to: (1) Ceres, (2) Pallas and (4) Vesta, while only (2432) Soomana and (very occasionally) (7899) Joya approach (10) Hygiea. The cumulative effect of these approaches on the elements is detectable, but as an effect of marginal importance, we considered it negligibly small in the studied group of asteroids. The maximum differences on the main semi-major axis were on the order of \(10^{-4}\) to \(10^{-3}\) au on a time scale of \(10^{8}\) years. Consequently, we found it reasonable to use a dynamic model consisting of the Sun and eight planets throughout the whole simulation.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline Asteroid No. & H & \(a\) & \(e\) & sin (\(i\)) & Sense of & Notes \\ and name & & [iii] & [iv] & rotation & & \\ \hline (956) Elisa & 12.13 & 2.298 & 0.158 & 0.1119 & ret.\({}^{17,26}\) & Fugitive \\ \hline (914) Harteboerodum & 12.09 & 2.406 & 0.139 & 0.0848 & pros.\({}^{20,27}\) & Low-i \\ \hline (1946) Wal Walvares & 12.13 & 2.294 & 0.190 & 0.1304 & ret.\({}^{20,34}\) & Fugitive \\ \hline (242) Socorana & 12.76 & 2.352 & 0.129 & 0.13158 & pro.\({}^{17,12}\) & IOs \\ \hline (2566) Kirshizia & 12.48 & 2.450 & 0.104 & 0.0771 & ret.\({}^{17,12}\) & Low-i \\ \hline (2579) Spartacus & 13.59 & 2.210 & 0.082 & 0.1054 & ret.\({}^{15}\) & Fugitive \\ \hline (2658) Pinciupis & 12.22 & 2.444 & 0.114 & 0.0888 & pro.\({}^{17,46}\) & Low-i \\ \hline (2704) Julian Lowe & 12.81 & 2.385 & 0.117 & 0.0889 & ret.\({}^{17,26}\) & Low-i \\ \hline (2763) Jeans & 12.47 & 2.404 & 0.179 & 0.0756 & ret.\({}^{20,27}\) & Low-i \\ \hline (2851) Harbin & 12.32 & 2.478 & 0.123 & 0.1348 & pro.\({}^{20,27}\) & IOs \\ \hline (2912) Laplima & 12.76 & 2.289 & 0.118 & 0.1186 & ret.\({}^{20,19}\) & Fugitive \\ \hline (3307) Athabuca & 14.12 & 2.259 & 0.100 & 0.1212 & pro.\({}^{17}\) & Fugitive \\ \hline (3363) Schleicher & 14.02 & 2.433 & 0.077 & 0.1156 & ret.\({}^{17,12}\) & IOs \\ \hline (3849) Indeidentia & 13.09 & 2.474 & 0.065 & 0.0949 & pros.\({}^{20,27}\) & Low-i \\ \hline (4796) Lewis & 13.65 & 2.355 & 0.141 & 0.0538 & ret.\({}^{20,27}\) & Low-i \\ \hline (5150) Follini & 13.32 & 2.477 & 0.138 & 0.1076 & pro.\({}^{17,12}\) & IOs \\ \hline (5524) Leechaux & 13.01 & 2.366 & 0.059 & 0.1194 & ret.\({}^{17,12}\) & IOs \\ \hline (5525) 1991 TA\({}_{3}\) & 13.27 & 2.221 & 0.081 & 0.1279 & ret.\({}^{17,46}\) & Fugitive \\ \hline (5754) 1992 FR\({}_{2}\) & 12.96 & 2.267 & 0.091 & 0.0843 & ret.\({}^{20,4,7,8}\) & Fugitive \\ \hline (5952) Davancont & 13.66 & 2.270 & 0.141 & 0.0796 & ret.\({}^{17,10,10}\) & Fugitive \\ \hline (6406) Mikieura & 13.59 & 2.276 & 0.124 & 0.1336 & ret.\({}^{20,35,6}\) & Fugitive \\ \hline (7589) Yavlov & 13.58 & 2.290 & 0.110 & 0.0904 & ret.\({}^{20}\) & Fugitive \\ \hline (7899) Joya & 13.74 & 2.343 & 0.114 & 0.0937 & ret.\({}^{20,36}\) & Low-i \\ \hline (10208) 1993 FD\({}_{3}\) & 15.20 & 2.226 & 0.152 & 0.0758 & pros.\({}^{20,36}\) & Fugitive \\ \hline (10841) 1998 EG\({}_{10}\) & 14.01 & 2.357 & 0.097 & 0.1344 & pro.\({}^{20,44}\) & IOs \\ \hline (25327) 1999 BB\({}_{63}\) & 14.03 & 2.434 & 0.186 & 0.2187 & ret.\({}^{17}\) & IOs \\ \hline (25542) Grabobian & 14.98 & 2.443 & 0.116 & 0.1224 & pro.\({}^{20,36}\) & IOs \\ \hline \hline \end{tabular} 1
\end{table}
Table 1: Asteroids covered in this study.
Figure 1: Orbital distribution of asteroids considered in this work.
In the next step, we used a more complex model with the Yarkovsky effect. Details of the thermal parameters of the asteroids used in this model (Fenucci & Novakovic, 2021): bulk density 3000 kg m\({}^{-3}\); surface density 1500 kg m\({}^{-3}\); surface emissivity 0.95; thermal conductivity 0.001 W K\({}^{-1}\) m\({}^{-1}\); thermal capacity of 680 J kg\({}^{-1}\) K\({}^{-1}\) for all objects and data from Table 2.
The thermal parameters described in Table 2 are obviously only more or less accurate assumptions that we had to make in order to adopt the model for the purpose of numerical simulations. However, crucial information, such as the asteroid radius, rotation period, and direction, as well as the axis orientation, is reasonably well determined. The approximate rate and direction (positive or negative, respectively) of the Yarkovsky drift in the semi-major axis are most essential here. Therefore, our new observational results (Oszkiewicz et al., 2023b) have the most fundamental value.
## 4 Results
We performed numerical integrations for a total of 27 asteroids. For each object, we investigated two symmetric rotational pole solutions from Oszkiewicz et al. (2023b), then used the known diameters and geometric albedos, and assumed the remaining thermal parameters as described in Sec. 3. Thus, each asteroid was integrated twice with each pole solution separately. This allowed us to estimate the maximum and average values of the Yarkovsky drift that could potentially occur depending on the chosen spin model. These approximate values give a general idea of the strength of this effect and are presented in Table 3. Many of these average da/dt rates are on the order of magnitude of Yarkovsky drift that can be nowadays detected with high precision astrometry, such as that arriving from the Gaia space mission (Dziadura et al., 2022).
In Figs. 2 and 3, we show the evolution of proper elements for all the studied asteroids within 2 Gy of backward integration. The large red points denote the current location of the investigated objects, the black points trace the evolution backward in time, and the large blue points denote their location 2 Gy ago. The current locations of the Vesta family members are marked in grey. Figure 2 shows the first spin orientation and Fig. 3 shows the second spin solution. The general evolutionary trends are consistent between the two solutions for all objects. We discuss each population separately in the following.
even further away from the Vesta family 2 Gy ago. Furthermore, extending the integration time will lead to the additional decrease of a proper semi-major axis, thus placing them even further away from the Vesta family. In our earlier work, we also found that a prograde rotator (809) Lundia is also a highly unlikely Vesta family member (Oszkiewicz et al., 2015). Asteroids (3307) Athabasca and (17028) 1999 FJ\({}_{5}\) thus cannot be linked to (4) Vesta and may represent material left over from other differentiated planetesimals. These objects could represent the parent bodies of the Banbura Rockhole meteorite for which the origin the of the inner main belt is the most probable (Spurny et al., 2012; Bland et al., 2009).
The two remaining asteroids (5952) Davemonet and (6406) Mikejura did not clearly overlap with the dynamical Vesta family during the 2 Gy integration. However, due to their location in space and passage of the orbital elements through multiple resonances around 2.3 au and 2.35 au, we do not claim that there is no link to (4) Vesta.
### Low inclination
A number of asteroids from the low-inclination region, for instance, (1914) Hartbeespoortdam, (2653) Principia, and (2763) Jeans, (4796) Lewis did not reach the edge of the Vesta family within the 2 Gy integration. Figures 2 and 3 show four such objects; the four remaining objects are traced back to the border of the family. Interestingly Nesvorny et al. (2008) in their forward integration model could not reproduce the observed fraction of low-inclination V-types with sufficient efficiency within their 2 Gy integration. The authors argued that these objects could be fragments of crusts of other than (4) Vesta differentiated parent bodies. Alternatively, these bodies could have been freed from the surface of (4) Vesta during the late heavy bombardment at \(\sim\) 3.9 Gy ago (Nesvorny et al., 2008; Scott & Bottke, 2011).
Our work also indicates that these objects might need a longer integration time to trace their evolution back to the Vesta family. Origin in a different differentiated parent body cannot also be excluded. Intriguingly lithological differences between the vestiods and the low-\(i\) asteroids were recently reported by Mansour et al. (2020) and differences in the median values of the 0.9\(\mu\)m band depth by Oszkiewicz et al. (2023a). Variations in mineralogy could be explained by different depths of excavation within the surface of the (4) Vesta or by the fact that the mineralogy of different bodies can be roughly the same. Additional spectral and dynamical analysis for (1914) Hartbeespoortdam, (2653) Principia, (2763) Jeans, and (4796) Lewis asteroids may help answer this question.
### Inner other
Many of the V-types from the inner other population cannot be traced back to (4) Vesta ((2432) Soomana, (3536) Schleicher, (5150) Fellini, (5524) Lecacheux, (18641) 1998 EG\({}_{10}\), (25542) Garbedian). Two objects, namely, (2851) Harbin and (25327) 1999 JB\({}_{63}\), do not overlap with members of the Vesta family within the integration of 2 Gy in the proper-element space. The most intriguing example is asteroid (25327) 1999 JB\({}_{63}\), which drifts towards a larger semimajor axis and towards the 3:1 resonance when integrated backward in time.
For some of the objects studied here, the behaviour of eccentricity and inclination during long-term past evolution is similar to the changes accompanying the Kozai resonance. More precisely, there is a kind of periodic increase in eccentricity with a decrease in orbit inclination and vice versa in the case of (25542) Garabedian, which can be seen in Fig. 3. This asteroid has one of the largest Yarkovsky drifts (see Tab. 3). After passing the 4-2-1 MMR with Jupiter and Saturn at 2.4 au resonance, we observed periodic changes in the eccentricity and inclination of the orbit for this object. However, the time scale of these changes is inadequately large, so the presence of Kozai resonance, in this case, should be excluded.
Similar changes are accompanied by the eccentricity of the asteroid (5150) Fellini, but are not related to changes in inclination. Considering the influence of additional accelerations (drift in the semimajor axis) from the Yarkovsky effect, it is difficult to clearly separate the studied effects. In particular, when they are virtually absent for the studied asteroids in the simplest, gravitational model of forces. However, it cannot be excluded that due to nongravitational effects, they can migrate to regions where they pass through a specific kind of resonances.
Overall, among all studied asteroids, the fraction of objects that are not linked to (4) Vesta and are not explained by previous impacts is very small, indicating a small number of plausible non-Vestoids (two clear cases: (3307) Athabasca and (17028) 1999 FJ\({}_{5}\) in 27 studied asteroids). This is consistent with the low fraction of anomalous HEDs in the meteorite collections (Zhang et al., 2019). The first of those objects was classified as V-type by Bus & Binzel (2002) during the course of the Small Mainbelt Asteroid Spectroscopic Survey (SMASS). Asteroid (3307) Athabasca also has a Gaia DR3 spectrum showing band depth (ratio of reflectances at 0.75\(\mu\)m and 0.9\(\mu\)m) of 1.55- a value that is more than 1\(\sigma\) away from typical Vesta family members (Oszkiewicz et al., 2023a). However, spectral follow-up observations and deeper mineralogical analysis is needed. Asteroid (17028) 1999 FJ\({}_{5}\) is not present in the Gaia DR3 catalogue. However, its colours, derived from the Sloan Digital Sky Survey (SDSS), are consistent with V-type object (Carvano et al., 2010).
## 5 Discussion
A potential additional source of uncertainty in this study is the modification of the pole orientation of the studied objects through random collisions and the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect (Rubincam, 2000; Bottke et al., 2006a). The YORP effect is caused by a thermal torque and acts on small (d \(<\) 40 km) asteroids that are well within the range of the objects studied in this work. The effect modifies the spin rates and spin-axis orientation of small bodies. So far, only the change in spin rates, that is, the so-called spin-up of a few asteroids, has been detected (Lowry et al., 2007; Durech et al., 2008). In the work of Oszkiewicz et al. (2017), we have already attempted to estimate the maximum impact of the YORP effect for some asteroids, also appearing in the current study: (2704) Julian Loewe, (4796) Lewis, (5150) Fellini, (5525) 1991 TS\({}_{4}\), (5754) 1992 FR\({}_{2}\), and (18641) 1998 EG\({}_{10}\). In general, these considerations indicated that the axis reorientation time scales are on the same order, or longer, than the presumed lifetime of the Vesta family. Furthermore, other previously estimated obliquity rates and YORP timescales for asteroids in this work do not appear to have values large enough to significantly affect the results (Golubov et al., 2021). Therefore, we consider the Yarkovsky effect to be the most important non-gravitational force that is applicable.
It is also worth mentioning that at this stage, the YORP effect, although present, is hardly verifiable against the more precise and predictable Yarkovsky effect (based on a model created
from our new observational data). The YORP effect contains both a static and a stochastic part, for instance, the reorientation of the axes due to collisions occurs very rarely and is supposed to have a random character. For this reason, in order to maintain better computational integrity, in the sample group, we have limited the analysis only to the simulation of the Yarkovsky effect.
It has been hypothesised before that some of the low-i V-types may have originated in collisions 3.9 Gy ago. However, such long integration times are questionable. For example, Dybczynski et al. (2022) found multiple close stellar passages near the Sun. One of those at a very close distance of around 0.014 pc (\(\sim\) 3000 au) around 2.5 Mys ago. Effects of such close stellar flybys are not included in our numerical integration but could potentially introduce significant perturbations. We note that Nesvorny et al. (2008) extrapolated their simulation to 3.5 Gy which also did not lead to a substantial increase of the number density of V-type fugitives in this population.
Two asteroids from the fugitive population have a prograde sense of rotation and drift in the opposite direction from the Vesta family: (3307) Athabasca and (17028) 1999 FJ\({}_{5}\). They show a drift to the region of the inner Solar System (2.0 au - 2.2 au). Another basaltic asteroid (908) Lundia also shows a non-Vesta drift (cannot be directly traced back to (4) Vesta) (Oskievirez et al., 2015). Zhang et al. (2019) based on measurements of HED meteorites (which show oxygen isotopic anomalies) (Bland et al., 2009; Spurny et al., 2012)) suggest that there were at least five basaltic parent bodies in the past. These parent bodies may be related to objects such as (908) Lundia, (3307) Athabasca, and (17028) 1999 FJ\({}_{5}\), which are unlikely dynamical Vestoids. To determine the origin of these objects asteroids, further studies of their dynamical and physical properties are needed.
A number of authors, for example Moskovitz et al. (2008a), Roig et al. (2008), Hammergren et al. (2011), Popescu et al. (2017), Migliorini et al. (2017, 2021), and Oskievirez et al. (2023a) confirmed a number of basaltic asteroids in the middle main belt. Interestingly, asteroids (2566) Kirghizia and (25327) 1999 JB\({}_{63}\) show a possible drift from the 3:1 resonance at around 2.5 au. These asteroids could be V-type fragments originating from the middle main belt.
## 6 Conclusions
We have investigated the dynamical evolution of 27 V-type asteroids outside the Vesta dynamical family. For the long-term dynamical integration, we used swift_rmvsy software, which includes the regularized mixed-variable symplectic integration method and Yarkovsky effect. The 2 Gy backward dynamical integration took into account the physical properties of the objects, such as spin orientation, size, and thermal parameters.
Most asteroids can be explained by migration from (4) Vesta. A small fraction (\(<\)7%) cannot be directly linked to (4) Vesta in our simulation. This is consistent with the low fraction of anomalous HEDs in the meteorite collections (Zhang et al., 2019). Asteroids (3307) Athabasca and (17028) 1999 FJ\({}_{5}\) show a drift from the direction of the inner Solar System. Thus, these objects are not likely to have formed in the Vesta family. Together with a prograde rotator (908) Lundia, reported in our earlier work (Oszkiewicz et al., 2015), we have three asteroids that cannot be directly traced back to (4) Vesta. These objects could be a potential source of anomalous HED meteorites, such as the Banburra Rockhole meteorite (Spurny et al., 2012; Bland et al., 2009).
## 7 Acknowledgments
This work has been supported by Grant No. 2017/26/D/ST9/00240 from the National Science Center, Poland.
During the preparation of this work, the resources of the Center of Computing and Computer Modeling of the Faculty of Natural Sciences of Jan Kochanowski University in Kielce were used.
The authors thank Tomasz Kwiatkowski, Iryna Durbalova, and Antoine Choukroun for the helpful comments on our paper.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline Asteroid No. & \(D\) & Total drift & \(\left<\frac{\mathrm{d}\eta}{\mathrm{d}}\right>\) & Pole coordinates \\ and same & & \(\Delta a\) & (mean drift rate) & \(\lambda,\beta\) \\ \hline \multirow{3}{*}{(956) Elisa} & 10,474 & 2.62e-02 & 1.31e-06 & 62 e - \(4.54\pm 6\) \\ & \(\pm\) 0.208 & 1.67e-02 & 8.35e-06 & 25 e - \(4.55\pm 3\) \\ \hline \multirow{3}{*}{(1914) Harfbeen-spordant} & 9.561 & 2.28e-02 & 1.15e-05 & 37 e - \(33.9\pm 20\) \\ & \(\pm\) 0.186 & 2.92e-02 & 4.15e-05 & 17 e - \(4.52\pm 6\) \\ \hline \multirow{3}{*}{(1946) Walieyn} & 9.205 & 7.80e-02 & 3.90e-05 & 259 e - \(30.80\pm 20\) \\ & \(\pm\) 0.109 & 3.36e-01 & 1.68e-04 & 80 e - \(5.90\pm 20\) \\ \hline \multirow{3}{*}{(2432) Scomma} & 7.387 & 4.21e-02 & 2.10e-05 & \(<6.8\pm 3\) \\ & \(\pm\) 0.083 & & & \\ \hline \multirow{3}{*}{(2566) Kirghizia} & 7.816 & 4.30e-02 & 2.15e-05 & \(98\pm 1.1\) - \(50\pm 2\) \\ & \(\pm\) 0.172 & 6.25e-02 & 3.12e-05 & \(262\pm 1.1\) - \(60\pm 4\) \\ \hline \multirow{3}{*}{(2579) Spartosa} & 4.604 & 6.36e-01 & 3.29e-04 & 312 e - \(5.37\pm 5\) \\ & \(\pm\) 0.369 & 6.55e-01 & 3.28e-04 & 11 e - \(5.00\pm 5\) \\ \hline \multirow{3}{*}{(2653) Principia} & 9.882 & 1.86e-02 & 9.30e-06 & 67 e - \(2.29\pm 2\) \\ & \(\pm\) 0.984 & 2.14e-02 & 1.06e-05 & 241 e - \(2.37\pm 4\) \\ \hline \multirow{3}{*}{(2704) Julian Loewew} & 5.199 & 4.90e-02 & 2.46e-05 & 37 e - \(4.08\pm 23\) \\ & \(\pm\) 0.291 & 4.32e-02 & 2.16e-05 & 250 e - \(50.64\pm 11\) \\ \hline \multirow{3}{*}{(2763) Jeans} & 5.714 & 1.45e-01 & 7.24e-05 & \(<85.6\pm 5\) \\ & \(\pm\) 0.157 & 1.06e-02 & 5.27e-02 & 255 e \(8.48\pm 8\) \\ \hline \multirow{3}{*}{(2851) Harbin} & 8.838 & 5.72e-03 & 2.38e-06 & 99 e - \(3.10\pm 5\) \\ & \(\pm\) 0.236 & 1.44e-00 & 5.68e-04 & \(282\pm 1.5\pm 5\) \\ \hline \multirow{3}{*}{(2912) Lapalma} & 6.519 & 4.90e-02 & 2.24e-05 & 205 e \(15\) - \(75\pm 6\) \\ & \(\pm\) 0.289 & & & \\ \hline \multirow{3}{*}{(3307) Athabasca} & 3.628 & 5.67e-02 & 2.84e-05 & \(46\pm 3.39\pm 8\) \\ & \(\pm\) 0.167 & 4.50e-02 & 2.25e-04 & 218 e - \(3.30\pm 6\) \\ \hline \multirow{3}{*}{(3363) Schleicher} & 3.145 & 7.49e-02 & 3.75e-05 & \(78\pm 3.34\pm 6\) \\ & \(\pm\) 0.161 & 1.19e-01 & 5.97e-05 & \(260\pm 3.51\pm 5\) \\ \hline \multirow{3}{*}{(3849) Inclentia} & 5.798 & 2.70e-02 & 1.35e-05 & \(163\pm 3.09\pm 6\) \\ & \(\pm\) 0.125 & 2.80e-02 & 9.45e-05 & \(31\pm 13\) & \(13\pm 13\) & \(9\pm 6\) \\ \hline \multirow{3}{*}{(4796) Lewis} & 5.36 & 8.20e-02 & 4.15e-05 & 260 e - \(57\pm 3.05\) \\ & \(\pm\) 0.239 & 5.40e-02 & 328 e - \(81\pm 9\) \\ \cline{1-1} & 7.76e-02 & 3.61e-02 & 3.60e-05 & \(7\pm 3.86\pm 5\) \\ \hline \multirow{3}{*}{(5150) Fellini} & 5.401 & 5.92e-02 & 2.96e-05 & \(7\pm 3.37\pm 5\) \\ \cline{1-1} & \(\pm\) 0.229 & 5.20e-02 & 2.61e-05 & 257 e - \(3.73\pm 5\) \\ \hline \multirow{3}{*}{(5524) Lachenux} & 19.902 & 1.79e-02 & 8.93e-0 |
2305.12997 | Evaluating Privacy Leakage in Split Learning | Privacy-Preserving machine learning (PPML) can help us train and deploy
models that utilize private information. In particular, on-device machine
learning allows us to avoid sharing raw data with a third-party server during
inference. On-device models are typically less accurate when compared to their
server counterparts due to the fact that (1) they typically only rely on a
small set of on-device features and (2) they need to be small enough to run
efficiently on end-user devices. Split Learning (SL) is a promising approach
that can overcome these limitations. In SL, a large machine learning model is
divided into two parts, with the bigger part residing on the server side and a
smaller part executing on-device, aiming to incorporate the private features.
However, end-to-end training of such models requires exchanging gradients at
the cut layer, which might encode private features or labels. In this paper, we
provide insights into potential privacy risks associated with SL. Furthermore,
we also investigate the effectiveness of various mitigation strategies. Our
results indicate that the gradients significantly improve the attackers'
effectiveness in all tested datasets reaching almost perfect reconstruction
accuracy for some features. However, a small amount of differential privacy
(DP) can effectively mitigate this risk without causing significant training
degradation. | Xinchi Qiu, Ilias Leontiadis, Luca Melis, Alex Sablayrolles, Pierre Stock | 2023-05-22T13:00:07Z | http://arxiv.org/abs/2305.12997v3 | # EXACT: Extensive Attack for Split Learning
###### Abstract
Privacy-Preserving machine learning (PPML) can help us train and deploy models that utilize private information. In particular, on-device Machine Learning allows us to completely avoid sharing information with a third-party server during inference. However, on-device models are typically less accurate when compared to the server counterparts due to the fact that (1) they typically only rely on a small set of on-device features and (2) they need to be small enough to run efficiently on end-user devices. Split Learning (SL) is a promising approach that can overcome these limitations. In SL, a large machine learning model is divided into two parts, with the bigger part residing on the server-side and a smaller part executing on-device, aiming to incorporate the private features. However, end-to-end training of such models requires exchanging gradients at the cut layer, which might encode private features or labels. In this paper, we provide insights into potential privacy risks associated with SL and introduce a novel attack method, _EXACT_, to reconstruct private information. Furthermore, we also investigate the effectiveness of various mitigation strategies. Our results indicate that the gradients significantly improve the attacker's effectiveness in all three datasets reaching almost 100% reconstruction accuracy for some features. However, a small amount of differential privacy (DP) is quite effective in mitigating this risk without causing significant training degradation.
## 1 Introduction
On-device machine learning involves training and/or deploying models directly on the device, without relying on cloud-based computing. This approach brings several benefits to the table, including increased privacy, reduced latency, and access to fine-grained real-time data. Such models have been deployed for a variety of machine learning tasks, such as smart keyboard [3], personalized assistant services [17], computer vision [26], healthcare [32], and ranking [19; 18; 30].
At the same time, there are certain limitations that hinder the wide adoption of on-device AI. Firstly, the limited computational and memory resources of client devices also restrict the size of the deployed models. As a result, the learning capacity and accuracy can be significantly worse than the equivalent server-based models. Secondly, end-user devices might have limited access to large datasets or the capacity to store and process features that require large embedding tables.
While on-device AI helps us ensure privacy, a key observation is that not all features might actually be sensitive, user-specific or generated on-device. Examples include e-commerce item embeddings in a recommendation system, word embeddings of a large language model, ads-related features, etc. As such, training a small model entirely on-device might not be the most optimal policy.
One promising approach to overcome these limitations is Split Learning (SL) [16; 37]. Typically, a large machine learning model is divided into two parts: the bigger part resides on the server-side (typically hosted by the model owner) and a small part can be executed on-device (typically hosted by the end-users). Larger models can then be collaboratively trained on both private (client) and non-private (server) features while limiting the information exchange between the involved parties.
During inference, the server initiates the forward pass utilizing all the server-side features. Large embedding tables and model architectures can be utilized at this stage. Only the activations of the _cut layer_ are then shared with the end-devices, typically a small vector. Each device continues the execution on its own sub-model, combining it with its own private features. Due to the limited capabilities, the client-side model is small, only utilizing numerical features or categorical features with small cardinality. In this paper we consider the worse-case scenario where the label is also private (e.g., the label represents a user purchase or a conversion after seeing an ad).
While split learning only considers two parties (e.g, ads publishers and advertisers), Federated Split Learning (FSL) allows us to train such models between a central party and million of client devices [36]. In both cases, training the server-side model requires exchanging the gradients at the cut layer for each sample. Consequently, the returning cut-layer gradients might encode information that can reveal either private features and/or labels.
In this paper, we provide insights into the potential risks of private data leakage during split model training. To achieve this, we introduce a novel attack method that aims to reconstruct private information - features or labels - by exploiting diverse information sources, such as the model parameters, the activations and gradients at the cut layer. Through our study, we aim to highlight the potential privacy risks associated with split learning by studying in which way, features might be more sensitive to leakage. Finally, a significant part of the paper is devoted to studying how different strategies can help us mitigate these risks.
To sum up, in this work:
* We introduce a novel attack method that exploits diverse information sources, such as the model parameters, activations and gradients at the cut layer, to reconstruct private information, including features or labels.
* We study how different mitigation strategies such as label and gradient differential privacy can help us protect such private features.
* Our results indicate that the gradients significantly improve the attacker's effectiveness when compared to the baselines. In all three datasets, an attacker is able to perfectly reconstruct labels and most features. However, adding a small amount of noise on the gradients at the cut layer (e.g., \(\sigma=0.01\)) is quite effective in mitigating this risk with a mere 0.01 drop in the model's AUC.
## 2 Background and Related Work
**Split Learning (SL)** enables collaborative training of deep learning models among multiple parties without the need to share raw data. While Federated Learning [27] can be utilized for such models, it may not always be practical. For instance, e-commerce and ad-ranking models often involve numerous sparse (categorical) and numerical features, requiring large machine learning models that can reach sizes of hundreds of gigabytes. Such models are typically too large to be trained on mobile devices whereas user-side features and labels (e.g., past purchases) might be too sensitive to collect on the server-side.
In split learning, the overall model is horizontally divided into two parts. The server handles the forward/backward passes of the first and larger portion of the model, while keeping the last few layers to be trained with sensitive user-side data on the device. This division point in the model architecture is referred to as the _cut layer_. The server uses first-party features to perform a forward pass until the cut layer, and then forwards the intermediate representations to the respective clients. At each client, a smaller architecture processes the private features, which are then combined with the server-side representations. An overarching architecture is employed to make the final prediction. An example illustrating this process is depicted on the left side of Figure 1. During back-propagation, gradients are calculated from the last layer to the cut layer in a similar manner. The corresponding gradients are then sent back to the server to complete the server-side back-propagation. While no raw data
are exchanged, these gradients might still encode private information that can reveal either features and/or the device labels.
Membership inferenceattacks aim at inferring whether a given sample was part of a model's training set. Originally proposed in [20], it was popularized by the shadow models approach of Shokri et al. [35]. Follow-up work has shown that the membership signal can be derived from simple quantities, such a low loss [34; 33]. Recent research has improved the performance of such attacks by comparing the loss to a calibrating term [39] or computing statistics on the distribution of the loss [5].
Reconstruction attacks and attribute inferenceaim at reconstructing points from the training set given access to a trained model [6], and to partial information about the sample in the case of attribute inference [11; 40]. Carlini et al. [7] show that given access to a trained language model, an attacker is able to reconstruct verbatim samples with high precision (but low recall).
**Attacks on Federated Learning.** Recovering private features from gradients has gained growing interest in the privacy-preserving machine learning area. A popular method called Deep Leakage from Gradients (DLG) [43] has been developed to extract training data by using the shared model gradients. An improved version of DLG, iDLG [42], resulted in a more reliable approach to extracting accurate data and perfectly reconstructing the labels. However, these methods lack generalization on model architecture and weight distribution initialization [38]. In [12], an analytical approach has been developed to derive the inputs before a fully connected (FC) layer. [10] claimed that a convolutional layer can always be converted to an FC layer, but the gradients of the original convolutional layer are still different from the gradients of the converted FC layer, which impedes the data reconstruction. In [41], the authors developed GradInversion to recover images from noise based on given gradients. All related work assumes access to the gradient of all the _weights_; in this paper, we consider attacks given access only to gradients of the _activations_ at the cut layer.
**Differential Privacy (DP)**[9] One of the methods to mitigate the effectiveness of these attacks is DP. In the paper, we experiment with both DP-SGD [1] and Label DP [13; 14]. Since SL requires the device and the server to exchange activations and gradients, it can potentially leak private label information [19; 29]. Differential privacy[2] constitutes a strong standard for privacy guarantees for algorithms on aggregate databases.
**Definition 2.1**.: A randomized mechanism \(\mathcal{M}:\mathcal{D}\rightarrow\mathcal{R}\) with domain \(\mathcal{D}\) and range \(\mathcal{R}\) satisfies \((\varepsilon,\delta)\)-differential privacy if for any two adjacent inputs \(d,d^{\prime}\in\mathcal{D}\) and for any subset of outputs \(S\subseteq\mathcal{R}\) it holds that:
\[Pr[\mathcal{M}(d)\in S]\leq e^{\varepsilon}Pr[\mathcal{M}(d^{\prime})\in S]+\delta.\]
One of the most widely adopted methods to ensure DP is through DP-SGD [2], via norm clipping and adding noise (\(\sigma\)) to the gradients. In our case, since only the gradient of the activations is shared directly from the client to the server, we only consider clipping and adding the noise to this part of the whole gradient. In addition, the \(\varepsilon\) can be estimated through DP-accountant in various packages.
Figure 1: Illustration of split learning (left) and our attack (right). During training the server performs a forward pass until the cut later and then sends the intermediate representations to each client. This information, together with private features, is used on-device to resume the computation. During the backward pass the partial gradients returned might encode private client features or labels. Our attack uses these gradients to reconstruct the private features.
There is a trade-off between model performance and model security, and carefully tuning the noise level of DP is required to ensure that the trained model is effectively protected while maintaining a reasonable model performance[21].
On the other hand, Label DP is also based on a randomized response algorithm to improve the robustness of the training model, and it is implemented when only labels need to be protected. It operates by randomly flipping the label based on the flipping probability (\(p\)) while training. The corresponding privacy budget \(\varepsilon\) can be estimated through the formula: \(p=\frac{1}{e^{\varepsilon}+1}\).
## 3 Privacy and Split Learning
In this section, we present our novel attack method for SL. We assume that clients have client-side private features that they would not want to share with any third party. Also, we consider that the ground-truth labels can also be private (i.e., only known to the clients), to study label leakage too. We consider the attack scenario where an honest-but-curious server follows the regular SL protocol but intends to recover clients' private data and the ground-truth label based on the gradients of the cut layer. Our method is termed _EXACT: Exhaustive Attack for Split Learning_. We use tabular datasets throughout the paper and experiments and will discuss the extension and future research later.
We consider a \(C\) class classification problem defined over a server feature space \(\mathcal{X}_{\mathrm{server}}\), a client feature space \(\mathcal{X}_{\mathrm{client}}\) and a label space \(\mathcal{Y}=[C]\), where \([C]=\{1,...,C\}\). We define \(F_{\mathrm{server}}\) to be the server-side function, such that \(F_{\mathrm{server}}:\mathcal{X}_{\mathrm{server}}\rightarrow\mathbb{R}^{d}\), which outputs the server-side activations \(a_{c}\). We also define the client-side \(F_{\mathrm{client}}:\mathcal{X}_{\mathrm{client}}\times\mathbb{R}^{d} \rightarrow\mathcal{S}\), which maps the client feature space and the server's output to the probability simplex \(\mathcal{S}\), \(\mathcal{S}=\{\mathbf{z}|\sum_{i=1}^{L}z_{i}=1,z_{i}\geq 0,\forall i\in[C]\}\). Both \(F_{\mathrm{server}}\) and \(F_{\mathrm{client}}\) are parameterized over the hypothesis class \(w=(w_{server},w_{client})\), which is the weight of the neural network. \(\mathcal{L}(\mathbf{w})\) is the loss function, and we assume the widely used cross-entropy loss.
In this way, the server's output (\(a_{c}\)) is the activation transmitted from the server to the client at the cut layer, and the weight of the cut layer is \(w_{c}\). Also, the gradient transmitted from the client to the server is the gradient of the cut layer activations: \(\partial\mathcal{L}/\partial a_{c}\). On the other hand, the gradient of the cut layer weights \(\partial\mathcal{L}/\partial w_{c}\) stays on the client side to finish the back-propagation and allows the weights (weights on the clients-side, including the cut layer) to be updated, as shown on the left side of Figure 1.
**Threat Model:** We assume a strong attacker who has access to the client-side model parameters during training. While we want to study cases where the attacker has fine-grained information, the attack model can be relaxed in cases where secure aggregation is used and only the final model parameters are known after training is finished. In our scenario, we assume that an honest-but-curious server has knowledge of the server-side features, the server-side and client-side models, which is a realistic assumption given the distributed setting of both SL and FSL. As a result, we assume that an adversary can compute the server-side outputs \(a_{c}\) for any server-side feature values, and with the input of client side feature, the adversary can then obtain the corresponding cut-layer gradients (\(\partial\mathcal{L}/\partial a_{c}\)) from client-side. As \(\partial\mathcal{L}/\partial a_{c}\) depends on the private features, the client-side architecture and the output of the server \(a_{c}\), we want to use the available information to reconstruct the private features (Figure 1).
### Attack Method
_EXACT_ assumes that the private features on the client side are either categorical or can be binned/clustered into a finite number of categories. We then build a list \(L\) that contains all the possible combinations of features and labels. For a given sample, the adversary can then calculate \(a_{c}\) and the gradient \(\partial\mathcal{L}_{i}/\partial a_{c}\) for every possible private configuration \(i\). We then try to match the gradient \(\partial\mathcal{L}_{i}/\partial a_{c}\) by choosing the configuration \(i\) that minimizes the distance to the true gradient \(\partial\mathcal{L}/\partial a_{c}\) returned by the client. Here, we choose to use the L2 distance as the distance metric to compare the gradient. The details of the algorithm can be found in Algorithm 1.
Noting that in case that the search space is growing with the number of features or categories to be attacked (i.e., many private features with thousands of categorical values), heuristic approach or smart search methods can be used to speed up convergence to a given configuration. However, it is also worth mentioning that for attack method, our priory concern is the attack performance rather than speed. As a reference, in the datasets used here, we could successfully reconstruct features
of a given sample within 16.8 seconds. Also, similar to many existing attack methods, such as DLG [43], iDLG [42], and GradInversion [41], _EXACT_ reconstruct the private data via gradient matching. Unlike the previous methods, _EXACT_ does not rely on optimization steps, which often involves second derivatives computations or carefully tuned regularization terms. By going through all possible possibilities, _EXACT_ guarantee to reconstruct the most relevant private features without having convergence issues in the optimization steps.
## 4 Evaluating Privacy Leaks and Mitigation Strategies
### Experimental Setup
We conducted extensive experiments on three different datasets, and the details of implementation can be found below. We conducted training in both SL and FSL ways. For FSL, we simulate the federated environment by randomly allocating 16 samples for each client.
**Datasets:** Experiments are conducted over three datasets: _Adult Income dataset_[23], _Bank Marketing dataset_[28], and _Taobao ad-display/click dataset_[24]. The Adult Income dataset is a classification dataset aiming to predict whether the income exceeds 50K a year based on census data. It contains \(48,842\) and \(14\) columns. The Bank Marketing dataset is related to the direct marketing campaigns of a Portuguese banking institution. It contains \(45,211\) rows and \(18\) columns ordered by date. We also conduct our experiment over a production scale ad-display/click dataset of Taobao [24]. The dataset contains \(26\) million interactions (click/non-click when an Ad was shown) and \(847\) thousand items across an 8-day period. We use \(90\%\) of the dataset as the training set and leave \(10\%\) as the testing set.
For the Adult Income and the Bank dataset, we randomly partition the features into server features and private client features. For the Taobao dataset, we keep the user-related features as private client features. It is worth noting that since the datasets are not pre-partitioned, our methods can work with any feature partitioned, which is shown separately in Section 4.4. We attack all the private features, and the particular private features for each dataset can be found in Table 32 and 4.
**Model Architecture** We deploy the state-of-the-art model, DeepFM, as the classification model [24; 15]. We use a learning rate of \(0.01\) with Adagrad, and binary cross entropy as the loss function. The default number of neurons for the DNN layer for the DeepFM is chosen to be (256,128). For the attack, we train the models in different scenarios in the SL and FSL fashion with the training set and then attack the private client features using the testing set that is not seen by the model before.
**Baseline:** We also implement two baselines to compare the attack performance. The baselines serve as guidance to show the extra information the gradient is leaking compared to our prior ability to reconstruct these private features using only server-side information. The first baseline is to use the server's features to reconstruct the client's features. The second baseline is to use the server's output \(a_{c}\) to reconstruct the client's features. For both baselines, we use the K-nearest-neighbors algorithm (KNN), which is a non-parametric method like our method. Since our method already choose the features combination that matches the closest to the original gradient, there is no extra benefit of comparing with existing parametric or optimization based methods.
**DP:** We implement both Label DP and DP as explained in Section 2 as mitigation and defense strategy for our attack. For Label DP, we implement flipping probability \(p\) of both \(0.1(\epsilon=2.2)\) and \(0.01(\epsilon=4.6)\). For DP, we implemented DP-SGD using clip norm C as half the gradient norm according to [4] and noise multiplier \(0.01\).
### Results
First of all, Table 1 shows the model performance for all three datasets in various setups and scenarios. As all three datasets are unbalanced in terms of classes, the table reports the AUC with or without DP noise. SL and FSL columns report the unmitigated training without DP. We also consider both Label DP, DP, and a combination of Label DP and DP on the SL training. Both methods are explained in Section 2. SL and FSL reports almost the same performance, which is reasonable, as both training are following per mini-batch step. For Label DP, the table reports two different flipping probabilities. As we expected, with higher probability, the performance dropped for all datasets. For DP, we set the clip norm to be half the size of gradients of the cut layer and set the \(\sigma\) to be \(0.01\) for all datasets. Since the noise added to the gradient is small, the degradation of the performance compared to the normal split training is also minimal. It is worth noting that the model performance (AUC), as shown in Table 1 for Taobao, is much lower than the other datasets, due to the fact that Taobao is a more difficult dataset to train with. Then, we demonstrate the attack performance in detail in Table 2, 3 and 4. Since all features are not balanced, we choose to report the F1 score for our attack performance, instead of accuracy.
The first thing to notice is that with both SL and FSL training for all datasets, the label can be reconstructed perfectly, which shows the importance of applying techniques such as DP. Also for both FL and FSL, the attack performance on the private features for both adult income and bank marketing datasets are all above \(0.95\), implying accurate reconstructed performance for the attack in the unmitigated setting. As for the production scale dataset Taobao, some performance dropped to be
\begin{table}
\begin{tabular}{c|c c|c c|c|c} \hline \hline
**Data** & **SL** & **FSL** & \multicolumn{2}{c|}{**Label DP**} & **DP** & **Label DP \& DP** \\ & & & \(p=0.1\) & \(p=0.01\) & \(\sigma=0.01\) & \(p,\sigma=0.01\) \\ \hline Bank & 0.88\(\pm\)0.0024 & 0.88\(\pm\)0.0023 & 0.87\(\pm\)0.0039 & 0.88\(\pm\)0.0025 & 0.87\(\pm\)0.0003 & 0.87\(\pm\)0.0017 \\ Adult & 0.89\(\pm\)0.0033 & 0.89\(\pm\)0.0030 & 0.89\(\pm\)0.0037 & 0.89\(\pm\)0.0036 & 0.89\(\pm\)0.0034 & 0.89\(\pm\)0.0024 \\ Taobao & 0.66\(\pm\)0.0001 & 0.66\(\pm\)0.0001 & 0.62\(\pm\)0.0007 & 0.654\(\pm\)0.0002 & 0.654\(\pm\)0.0012 & 0.65\(\pm\)0.0013 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results (AUC) of model for each dataset on test-set in different scenarios.
\begin{table}
\begin{tabular}{c|c c|c c|c|c c|c c} \hline \hline
**Features** & **SL** & **FSL** & \multicolumn{2}{c|}{**Label DP**} & **DP** & **Comb.** & \multicolumn{2}{c}{**Baseline**} \\ (Num) & & & \(p=0.1\) & \(p=0.01\) & & & features & output \\ \hline Gender(2) & 0.9977 & 0.9990 & 0.9996 & 0.9996 & 0.3652 & 0.3430 & 0.7909 & 0.7729 \\ Race(5) & 0.9878 & 0.9888 & 0.9777 & 0.9711 & 0.1119 & 0.0808 & 0.3344 & 0.2789 \\ Relationship (6) & 0.9952 & 0.9957 & 0.9860 & 0.9974 & 0.0828 & 0.0756 & 0.1998 & 0.2917 \\ Marital(7) & 0.9912 & 0.9526 & 0.9794 & 0.9903 & 0.1424 & 0.1241 & 0.1736 & 0.2570 \\ \hline Label(2) & 1 & 1 & 0.800(0.90) & 0.98(0.99) & 0.5497 & 0.5558 & 0.3850 & 0.5234 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results (F1 scores) of the feature reconstruction attack on test-set, compared with the baselines on the Adult Income dataset. The number of categories for each feature is shown in the bracket next to each feature name. For Label in Label DP, the accuracy is reported in the bracket
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c} \hline \hline
**Features** & **SL** & **FSL** & \multicolumn{2}{c|}{**Label DP**} & **DP** & **Comb.** & \multicolumn{2}{c}{**Baseline**} \\ (Num) & & & \(p=0.1\) & \(p=0.01\) & & & features & output \\ \hline Martial(3) & 0.9578 & 0.9712 & 0.9877 & 0.9800 & 0.2157 & 0.3037 & 0.3229 & 0.4343 \\ Job(12) & 0.9490 & 0.9515 & 0.9780 & 0.9632 & 0.0182 & 0.0188 & 0.0966 & 0.1697 \\ Education(4) & 0.9499 & 0.9622 & 0.9782 & 0.9795 & 0.1898 & 0.1280 & 0.2499 & 0.2845 \\ Housing(2) & 0.9835 & 0.9808 & 0.9941 & 0.9911 & 0.5975 & 0.5670 & 0.7112 & 0.7584 \\ Loan(2) & 0.9332 & 0.9418 & 0.9737 & 0.9621 & 0.2656 & 0.2666 & 0.0909 & 0.1310 \\ Contact(3) & 0.9770 & 0.9716 & 0.9886 & 0.9841 & 0.2683 & 0.3859 & 0.5406 & 0.6102 \\ \hline Label(2) & 1 & 1 & 0.6893 (0.90) & 0.9587(0.99) & 0.3929 & 0.3682 & 0.3504 & 0.4275 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results (F1 scores) of the feature reconstruction attack on test-set, compared with the baselines on the Bank Marketing dataset. The number of categories for each feature is shown in the bracket next to each feature name. For Label in Label DP, the accuracy is reported in the bracket
around \(0.80\), but still shows good attack performance, which might due to the fact the that the model performance for Taobao is lower than other two datasets.
As for Label DP, the attack accuracy is reported next to the F1 score in all three tables, which shows that the accuracy is exactly the same as the flipping probability. The attack performance for private features for the Label DP case is quite similar to the performance for the standard split training in all three datasets, showing that flipping label does not provide enough protection for the private features.
DP, on the other hand, has much more impact on the attack performance compared to Label DP. As we can see from Table 2, 3 and 4, the F1 scores significantly decrease for all private features and the label, with some of the F1 scores near 0, even with very small noise. For example, the attack performance for the Label for the Taobao dataset is 0 in Table 4, because the attack reconstructed all the labels to be \(0\). It might due to the fact that the dataset is extremely unbalanced, with only \(5\%\) of the total sample labeled as positive. As for the Age features in Table 4, the F1 score is extremely low, since the attack accuracy is extremely low.
In addition, the tables also demonstrate the attack performance if we combine both Label DP and DP. As we expected, the performance for the combination will be more similar to the DP as DP has a significant impact on the attack performance, and all results show similar performance for the combined setup and the DP-only setup.
Lastly, for the baselines, the reconstructed performance for the baselines varies depending on the dataset. The baselines provide guidance to show the lower-bound information leakage from the server side, as we might see from the tables that the attack performance for the normal split training and training with Label DP outperforms the baselines for all datasets. However, there is no clear way to distinguish between the DP case and the baselines.
### Studying how the model architecture affects the reconstruction
We also conduct experiments to show that our results are not dependent on the particular model size as shown in Table 5. We vary the DNN layers on the client side, as the gradient of the activation at the cut layer only depends on the model architecture on the client side, to see if model architecture can impact the attack performance. As we can see from the table, all SL training without mitigation demonstrates consistent performance across all private features, and the attack performances for the label are perfect. Similarly, for the DP performance, it all degrades to a similar level across all different model architectures indicating the effectiveness of DP on the attack.
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c} \hline \hline
**Features** & **SL** & **FSL** & \multicolumn{2}{c|}{**Label DP**} & **DP** & **Comb.** & \multicolumn{2}{c}{**Baseline**} \\ (Num) & & & \(p=0.1\) & \(p=0.01\) & & & features & output \\ \hline Age(7) & 0.8284 & 0.8135 & 0.8470 & 0.7852 & 0.0464 & 0.0001 & 0.2643 & 0.1660 \\ P-value(3) & 0.8880 & 0.8499 & 0.9178 & 0.8607 & 0.0256 & 0.0256 & 0.4022 & 0.3280 \\ Shopping(3) & 0.9034 & 0.8582 & 0.9321 & 0.8583 & 0.3065 & 0.3062 & 0.4503 & 0.3281 \\ Occupation(2) & 0.8036 & 0.8562 & 0.8931 & 0.6523 & 0.1031 & 0.0975 & 0.1151 & 0.0228 \\ \hline Label(2) & 1 & 1 & 0.4832(0.90) & 0.9076(0.99) & 0 & 0 & 0.0066 & 0.0326 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results (F1 scores) of the feature reconstruction attack on test-set, compared with the baselines on the Taobao dataset. The number of categories for each feature is shown in the bracket next to each feature name. For Label in Label DP, the accuracy is reported in the brackets. 0 F1 scores indicates that the attacker reconstruct all the label as \(0\) in all cases.
\begin{table}
\begin{tabular}{c|c c|c c c} \hline \hline
**Features** & \multicolumn{2}{c|}{**SL**} & \multicolumn{2}{c}{**DP**} \\ (Num) & (256,128) & (64,32) & (32,16) & (256,128) & (64,32) & (32,16) \\ \hline Gender(2) & 0.9977 & 0.9979 & 0.9987 & 0.3652 & 0.3330 & 0.1631 \\ Race(5) & 0.9878 & 0.9692 & 0.9830 & 0.1119 & 0.2010 & 0.0671 \\ Relationship6) & 0.9952 & 0.9894 & 0.9896 & 0.0828 & 0.0831 & 0.0674 \\ Marital(7) & 0.9912 & 0.9436 & 0.9789 & 0.1424 & 0.1233 & 0.1054 \\ \hline Label(2) & 1 & 1 & 1 & 0.5497 & 0.4855 & 0.4554 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results (F1 scores) of the feature reconstruction attack on test-set on the Adult Income dataset with various architecture on the client-side model.
### Studying how the number and type of Features affects the attack
First, we conduct experiment to show that our results are not dependent on the selection of private features. The results are shown as in Table 6. It demonstrate that out method can reconstructed the private features with unmitigated SL regardless the partition of private features and the number of categories for each features. If we only attack on 1 private feature (Gender with 2 categories or Education with 16 categories), the reconstruction performance can be \(100\%\) regardless the number of categories of the features. It is worth noticing that the attack performance drop below \(0.5\) if we consider the extreme case when we attack all categorical features, but it can still perfectly attack the true label. Similar as before, if we incorporate DP, the attack performance degrade significantly. Also, if we attack all 7 features with DP, the label attack returns everything as label \(0\), which output the \(0\) F1 score, meaning that the gradient of the cut layer yields no useful information.
In addition, we also conduct experiment to investigate the attack effectiveness if we include a non-relevant private features on the client-side. 'cms group' is the private feature that does not contribute to the model performance. With or without 'cms group', testing AUC for the model are both \(0.66\), but as we can see from Tabel 7, adding the non-relevant features has the impact on the attack performance, especially for the unmitigated SL case.
### Using the Majority Vote
Furthermore, we also add a variation to our attack method. Instead of choosing the feature of derives the most closest distance between the reconstructed gradient and the original gradient, like mention in Line 10 in Algorithm 1, we return the \(k\) most closest combination of reconstructed features, and take the majority vote for each feature as the final reconstructed feature. Table 8 shows the results on the Adult dataset, with different value of \(k\). As we can see from the results that, as \(k\) increase, the attack performance drop for both unmitigated SL training and training with DP, which shows that the attack performance is highly sensitive to the variation of gradient, and it has to match exactly to generate good reconstruction performance.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline
**Features** & **SL** & **DP** & **SL** & **DP** & **SL** & **DP** & **SL** & **DP** & **SL** & **DP** & **SL** & **DP** & **SL** & **DP** & **SL** & **DP** \\ \hline Gender(2) & 1 & 0.36 & 1 & 0.33 & 1.00 & 0.26 & 1.00 & 0.36 & 0.97 & 0.24 & 0.90 & 0.21 & 0.81 & 0.51 & - & - & - \\ Race(7) & - & - & 1 & 0.33 & 1 & 0.08 & 1.00 & 0.12 & 0.77 & 0.10 & 0.46 & 0.09 & 0.25 & 0.15 & - & - & - \\ Marista(7) & - & - & - & - & 1.00 & 0.14 & 1.00 & 0.09 & 0.74 & 0.12 & 0.50 & 0.09 & 0.25 & 0.07 & - & - & - \\ Relationship & - & - & - & - & - & - & 0.99 & 0.14 & 0.92 & 0.07 & 0.70 & 0.09 & 0.52 & 0.04 & - & - & - \\ Occu(15) & - & - & - & - & - & - & - & 0.83 & 0.05 & 0.62 & 0.04 & 0.39 & 0.08 & - & 1.00 & 0.06 \\ Work(9) & - & - & - & - & - & - & - & - & - & - & 0.52 & 0.05 & 0.26 & 0.10 & - & - & - \\ Edu.(16) & - & - & - & - & - & - & - & - & - & - & 0.32 & 0.03 & 1 & 0.08 & 0.99 & 0.04 \\ \hline Label(2) & 1 & 0.62 & 1 & 0.77 & 1 & 0.55 & 1 & 0.55 & 1 & 0.39 & 1 & 0.32 & 1 & 0 & 1 & 0.62 & 1 & 0.66 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results (F1 scores) of the feature reconstruction attack on test-set on the Adult Income dataset with various set of private features. Feature ’Occu’ is short for ’Occupation’, ’Work’ for ’Workclass’, ’Edu’ for ’Education’. For the F1 score, \(1\) means that it is perfectly equal to \(1\) and \(1.00\) means it is rounded up to \(1.00\). ’-’ means that the feature is not considered as private feature, so it is not attacked.
\begin{table}
\begin{tabular}{c|c c|c} \hline \hline
**Features** & **SL** & **DP** & **SL** & **DP** \\ (Num) & & \(\sigma=0.01\) & & \(\sigma=0.01\) \\ \hline CMS Group (12) & 0.0584 & 0.0001 & - & - \\ Age(7) & 0.6884 & 0.0119 & 0.8284 & 0.0464 \\ P-value(3) & 0.5792 & 0.0246 & 0.8880 & 0.0256 \\ Shopping(3) & 0.6635 & 0.3076 & 0.9034 & 0.3065 \\ Occupation(2) & 0.3890 & 0.1032 & 0.8562 & 0.1031 \\ \hline Label(2) & 1 & 0 & 1 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Results (F1 scores) of the feature reconstruction attack on test-set on the Taobao dataset with private features with or without ‘cms group’. 0 F1 scores indicates that the attacker reconstruct all the label as \(0\) in all cases.
## 5 Discussion
**Feature reconstruction:** Our experiments indicate that the gradients required for SL can be used to reconstruct private features and labels. Compared to our baseline ability to predict the private features from the public features, the gradients significantly improve the attacker's effectiveness in all three datasets that we tried.
**Effectiveness of DP noise:** We have studied adding a wide range of DP noise and we observe that even a small amount is enough to mitigate this attack while allowing the model to reach similar training performance. This is because we only need to add noise on the cut-layer returned gradients. Note that in the Federated Split Learning setting Global DP is also added on the client-side models too before releasing outside the secure aggregator to ensure that the client-side weights do not encode any private information.
**Studying different architectures:** While we mostly focused on a typical (fully-connected) DNN architecture, as future work, we want to further examine how different architectures (e.g., CNNs, RNNs, etc) can affect the ability to reconstruct private features and the sensitivity to these DP mitigation strategies.
**Speeding up the attack:** In our attack, we need to compare the returned gradient with every possible gradient that can result from different configurations of private features. Obviously, the wall clock time of the search depends largely on the search space, and it will significantly increase if the number of categories to search over is big. For example, for our dataset, on average, we require 20 milliseconds to finish one forward and backward pass. In this case, for the Adult dataset, for each reconstruction, it goes over 840 times, which amounts to 16.8 seconds for each sample reconstruction. There are multiple heuristic-based grid-search techniques that can help us accelerate this search, including subset searching, gradient descent, Bayesian search [8], and so on.
**Split learning and Federated Learning:** Our initial experiments indicate that these findings also hold in Federated Split Learning (FSL) where there are more than one clients participating in training. However, in our scenarios, we randomly allocate samples to each client, which represents the Independent and identically data distribution (IID) scenario. We plan to study how non-IID can cause extra difficulties either in attacking FSL [25; 22; 31].
**Extending to other attacks:** In this work, we focused on reconstruction attack on categorical features. However, _EXACT_ can easily be extended to membership inference attacks if we binned/clustered private features into finite number of categories. In this case, our method would be able to infer if the the particular feature is from the particular cluster (membership).
## 6 Conclusion
In this paper, we study the potential leakage of private features and labels during split model training. We introduce a novel feature reconstruction method and apply it on various datasets and DNN architectures. Our results indicate that the exchanged gradients do encode private information, allowing the adversary to perfectly reconstruct labels, and reconstruct the features with excellent performance. We then examine how mitigation strategies such as DP-SGD and Label DP can be used to successfully mitigate these risks without affecting the training quality. As in this work, we focused on tabular data, we would like to expand on other tasks, such as image, text, and audio processing, and other model architectures in future works.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline
**Features** & \multicolumn{3}{c|}{**SL**} & \multicolumn{3}{c}{**DP**} \\ (Num) & \(k=1\) & \(k=5\) & \(k=10\) & \(k=1\) & \(k=5\) & \(k=10\) \\ \hline Gender(2) & 0.9977 & 0.8231 & 0.7760 & 0.3652 & 0.3269 & 0.2761 \\ Race(5) & 0.9878 & 0.2359 & 0.0543 & 0.1119 & 0.0786 & 0.0626 \\ Relationship(6) & 0.9952 & 0.5723 & 0.3777 & 0.0828 & 0.0759 & 0.0760 \\ Marital(7) & 0.9912 & 0.4956 & 0.5153 & 0.1424 & 0.1348 & 0.1375 \\ \hline Label(2) & 1 & 1 & 0.9995 & 0.5479 & 0.5351 & 0.5029 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Results (F1 scores) of the feature reconstruction attack on test-set on the Adult Income dataset with various k value for the majority vote. |
2305.03642 | Jointly Extracting Interventions, Outcomes, and Findings from RCT
Reports with LLMs | Results from Randomized Controlled Trials (RCTs) establish the comparative
effectiveness of interventions, and are in turn critical inputs for
evidence-based care. However, results from RCTs are presented in (often
unstructured) natural language articles describing the design, execution, and
outcomes of trials; clinicians must manually extract findings pertaining to
interventions and outcomes of interest from such articles. This onerous manual
process has motivated work on (semi-)automating extraction of structured
evidence from trial reports. In this work we propose and evaluate a
text-to-text model built on instruction-tuned Large Language Models (LLMs) to
jointly extract Interventions, Outcomes, and Comparators (ICO elements) from
clinical abstracts, and infer the associated results reported. Manual (expert)
and automated evaluations indicate that framing evidence extraction as a
conditional generation task and fine-tuning LLMs for this purpose realizes
considerable ($\sim$20 point absolute F1 score) gains over the previous SOTA.
We perform ablations and error analyses to assess aspects that contribute to
model performance, and to highlight potential directions for further
improvements. We apply our model to a collection of published RCTs through
mid-2022, and release a searchable database of structured findings:
http://ico-relations.ebm-nlp.com | Somin Wadhwa, Jay DeYoung, Benjamin Nye, Silvio Amir, Byron C. Wallace | 2023-05-05T16:02:06Z | http://arxiv.org/abs/2305.03642v3 | # Jointly Extracting Interventions, Outcomes, and Findings from RCT Reports with LLMs
###### Abstract
Results from Randomized Controlled Trials (RCTs) establish the comparative effectiveness of interventions, and are in turn critical inputs for evidence-based care. However, results from RCTs are presented in (often unstructured) natural language articles describing the design, execution, and outcomes of trials; clinicians must manually extract findings pertaining to interventions and outcomes of interest from such articles. This onerous manual process has motivated work on (semi-)automating extraction of structured evidence from trial reports. In this work we propose and evaluate a text-to-text model built on instruction-tuned Large Language Models (LLMs) to jointly extract _Interventions_, _Outcomes_, and _Comparators_ (ICO elements) from clinical abstracts, and infer the associated results reported. Manual (expert) and automated evaluations indicate that framing evidence extraction as a conditional generation task and fine-tuning LLMs for this purpose realizes considerable (\(\sim\)20 point absolute F1 score) gains over the previous SOTA. We perform ablations and error analyses to assess aspects that contribute to model performance, and to highlight potential directions for further improvements. We apply our model to a collection of published RCTs through mid-2022, and release a searchable database of structured findings: [http://ico-relations.ebm-nlp.com](http://ico-relations.ebm-nlp.com).
## 1 Introduction
Robust medical evidence concerning the comparative effectiveness of treatments is primarily disseminated in published free-text articles that report outcomes from randomized controlled trials (RCTs). Such trial results are critical inputs for practicing _Evidence-based medicine_ (EBM; Sackett, 1997), which seeks to inform patient care using the totality of relevant findings. Trial results are also potentially important for augmenting clinical predictions (Naik et al., 2022), and for calibrating trust in treatment suggestions offered by AI support systems (Yang et al., 2023), which ought to agree with the established evidence.
A challenge to making use of all available evidence is that findings from trials are disseminated via unstructured published articles. Researchers and healthcare providers must trawl through these to extract findings relevant to their clinical question(s). This problem has been exacerbated by the rapid production of new evidence: A now outdated estimate suggests that 75 trial reports are published _every single day_(Bastian et al., 2010); more recent estimates put this number at \(\sim\)140 trial reports per day (Marshall et al., 2020).
To allow practitioners to draw upon newly published evidence as it accumulates, we need tools that make navigating findings more efficient. This has motivated work on Natural Language Processing (NLP) methods to semi-automate aspects of data extraction from clinical trial reports (Kang et al., 2021; Kiritchenko et al., 2010; Wallace et al., 2016; Nye et al., 2022, _inter alia_). In this work we capitalize on and extend recent advances in NLP, specifically _instruction-tuned_ LLM capabilities (Chung et al., 2022), to perform end-to-end structured evidence extraction from free-text (Figure 1). We achieve state-of-the-art (SOTA) performance on this challenging task: The model we introduce yields a \(\sim\)20 point absolute gain in F1 score over the prior SOTA approach. We ablate model components to assess their contributions. We also release model weights, and a database of structured findings inferred by our model over a comprehensive dataset of articles describing RCTs.
Generalizable Insights about Machine Learning in the Context of Healthcare
With respect to _healthcare_, this work makes significant progress on the important practical problem of structured evidence extraction from published articles describing RCTs. The outputs of this system may aid evidence synthesis, and might also serve as inputs to other
Figure 1: We fine-tune a Large Language Model (LLM) to map from free-text descriptions of clinical trials to structured representations of findings.
machine learning models in healthcare which could benefit from conditioning on robust evidence. Beyond this, the need for data extraction from free-text (e.g., clinical notes) is widespread in healthcare: Improved extraction methods have the potential to ultimately allow clinicians to focus on providing patient care instead of navigating unstructured data.
In terms of _machine learning_, we introduce and evaluate a method for training LLMs to perform a complex instance of _relation extraction_, a long-standing problem in ML (Ireson et al., 2005). To our knowledge, this is one of the first efforts to evaluate LLMs for medical relation extraction; we find that they outperform existing systems for this task by a large margin. As an additional contribution which may be of interest to the broader machine learning community, our ablations indicate that including _evidence spans_ in extraction targets is an an important design decision--this complements recent developments inducing LLMs to provide free-text "rationales" for their outputs (Wei et al., 2022), and may have implications for those working with LLMs for relation extraction going forward.
## 2 Related Work
In this work we develop and evaluate methods using LLMs to extract results from clinical trial reports. Information and Relation Extraction (RE), generally, are well established sub-fields within NLP (Cowie and Lehnert, 1996), and we do not attempt to provide a general survey here. Instead, we contextualize our work by reviewing closely related efforts that focus on: (i) Information extraction from biomedical/clinical texts (Section 2.1); (ii) Models for jointly identifying entities and inferring relations between them (Section 2.2); and (iii) Recent approaches that treat RE as a _text-to-text_ problem, a strategy that we adopt here (Section 2.3).
### Information Extraction from Biomedical Literature and Clinical Text
A line of prior work in NLP attempts to extract relevant _Populations, Interventions, Comparators_ and _Outcomes_ (PICO elements) from clinical texts (Kim et al., 2011). Nye et al. (2018) collected a corpus of 5,000 annotated RCT abstracts and introduced novel NLP tasks aiding evidence-based medicine. Lee and Sun (2019) highlighted important aspects of PICO human-annotations to refine datasets by adopting a relaxed agreement schemes for human annotations of PICO. Jin and Szolovits (2018) introduced baselines in detecting PICO elements at the sentence level using LSTMs. Schmidt et al. (2020) proposed framing PICO extraction as a question-answering task and subsequently using transformer models, including SciBERT (Beltagy et al., 2019) -- a masked language model pretrained on large-scale scientific data. These efforts either pre-dated Transformers, or used small encoder backbones, i.e., BERT (Devlin et al., 2018), rather than the generative models we use here.
Elsewhere, Lehman et al. (2019) introduced the _evidence inference_ dataset which entailed inferring which medical treatments work with respect to a _given_ICO-set of interest. Using this dataset as a starting point, Nye et al. (2022) considered the end-to-end task of extracting PICO elements _and_ inferring results (as opposed to performing inference for a given ICO triplet). They proposed an _extractive_ entity extraction-linking-inference (ELI) sequential approach for this challenging task, and showed that it yielded results superior to standard joint architectures for relation extraction (Wadden et al., 2019). We improve upon
these earlier efforts by introducing an end-to-end _generative_ model for the task of medical evidence inference.
### Jointly Extracting Entities and their Relations
Early work in RE used pipeline approaches comprising separate models to, first, extract entities from a span of text, and then infer relations between those entities (if any). More recently, researchers have introduced joint extraction models since they tend to reduce error propagation and can capitalize on the connections between entities and their relations (Wang and Lu, 2020). Traditionally, such joint extraction methods principally worked by predicting "BILOU" tags (Beginning, Inside, Last, Outside, and Unit) for tokens in the input (Bekoulis et al., 2018, 2018, 2019; Miwa and Bansal, 2016; Zheng et al., 2017; Verga et al., 2018). Span-based approaches extend these methods by constructing spans of tokens and then labeling these with respect to specific entity types, which enables processing of overlapping entities (Eberts and Ulges, 2019; Wadden et al., 2019).
### Generative Relation Extraction
Most earlier methods for identifying entities and extracting relations in free text trained models with a joint objective (Eberts and Ulges, 2021; Wang and Lu, 2020). The recent rise in (_very_) large language models (LLMs) (Brown et al., 2020; Chung et al., 2022) has motivated research into using these models for structured prediction tasks such as named entity recognition and RE (Nayak and Ng, 2019; Paolini et al., 2021; Huguet Cabot and Navigli, 2021). This usually entails _linearizing_--that is, encoding into strings--the structured information and then tasking models with generating linearized target relations conditioned on corresponding inputs.
Building on these efforts, we propose to train and evaluate models to conditionally _generate_ ICO spans, findings regarding the reported comparative effectiveness of the corresponding intervention compared to the comparator for the outcome in question, _and supporting textual evidence_. Specifically, we fine-tune an LLM to generate sets of linearized outputs (tuples) containing all the entities, relations, and supporting evidence from a given input RCT abstract (Figure 3).
## 3 Methods
### End-to-End Evidence Inference
The task of _clinical evidence inference_ comprises two sub-tasks: (i) Extraction of sets of relevant medical elements, i.e. ICO triplets; and (ii) Inference regarding the effect of the primary intervention on the outcome (i.e., _significant increase, significant decrease, no significant effect_), given the available evidence. These two subtasks can be seen as specialized instances of entity tagging and relation extraction, respectively. Recent work on clinical evidence inference has adopted a sequential (pipeline) approach in which ICO extraction is treated as a sequence tagging step, and then a separate inference module processes the tagged entities (Nye et al., 2022). This specialized approach outperformed model variants that attempted to jointly perform the task. However, prior methods for joint extraction and inference pre-dated the modern LLMs which are the current dominant paradigm in
NLP. Here we adopt such models, and treat the task of end-to-end evidence inference as a conditional language generation task (Figure 2).
Our targets are linearized strings comprising _multiple_ tuples, each containing the elements (_Intervention_, _Comparator_, _Outcome_, _Evidence_, _Inference label_), extracted directly from an input abstract describing a RCT. Formally, given a RCT abstract \(\mathcal{C}\), we model the probability of generating a linearized string \(y\) of length \(T\) containing \(N\) tuples (separated by special tokens in the linearized forms), conditioned on \(\mathcal{C}\):
\[p_{\text{LM}}(y|\mathcal{C})=\prod_{t=1}^{T}p(y_{t}|\mathcal{C},y_{<t})\]
This is the standard (conditional) language modeling objective, and we optimize for per token cross-entropy loss. During training, we "teacher force", i.e., condition production of target token \(y_{t}\) on the reference sequence \(y_{<t}\) and \(\mathcal{C}\). At test time, the model iteratively conditions on its own outputs (we use greedy decoding).
The number of tuples associated with inputs is variable; language model flexibly models this by allowing the model to produce a special EOS token after enumerating all tuples. Note, however, that the model is unconstrained, and so can--and sometimes does, as we discussion in Section 4.2--produce invalid outputs (i.e., which do not conform to the linearized structured we assume).
Figure 3 provides an illustrative example where the abstract comprises two unique reference tuples:
(zinc sulfate capsules, placebo, warts, _warts resolved in \(68\%\) of the patients in treatment group and \(64\%\) of the patients in placebo group_, no significant difference)
(zinc sulfate capsules, placebo, recurrence of warts, _three patients in treatment group and six patients in placebo group had a recurrence of warts_ (p=19), no significant difference)
Figure 2: We propose instructional fine-tuning a large language model (top) using standard supervision to elicit evidence within generated ICO tuples. This approach yields substantial improvements over existing joint extraction approaches (bottom) where the entire task is decomposed into different _independent_ phases.
#### Background
Cutaneous warts are caused by a small group of specific types of human papillomavirus. Cychyrev is a highly effective treatment for patients with viral warts, however, it is a parallel method and usually requires several treatment sessions. _Zmc_ is a trace element with many proven effects on the immune system.
#### OBJECT
Our aim was to assess the efficacy and safety of oral zinc sulfate in the treatment and recurrence rate of common warts.
#### METHODS
Eightly-three patients with common warts participated in this double-blind, randomized, placebo-controlled trial, in both groups, three sessions of biological hydrogen cychyrev performed over ten to 10 months with 3-weeks intervals. The treatment group (in \(\approx\)45) enclosed _realite sensitive capacities in a dose of 10 mg/kg per day_ up to 600 mg. The control group (in \(\approx\)38) was provided with placebo of similar appearance. Treatment confirmed for 2 months and the follow-up period united to 6 months.
#### RESULTS
Watts completed completed 20.5 g/kg in the treatment group (\(\approx\)81.5 g/kg) and 23 patients in the placebo group (\(\approx\)63.9%, \(\approx\)48). The remaining three in the placebo group did not report complete resolutions but subturative improvements in confidence. Time requirements (\(\approx\)76%) in the treatment group and six patients (\(\approx\)16.6%) in the placebo group has a requirement of the warts (\(\approx\)1.1%).
#### CONCLUSION
According to our study, the addition of zinc to cychyrev was not beneficial in the treatment of patients with common warts nor did it prevent recurrences.
### Data
We derived the data we use for training from the Evidence Inference dataset (Lehman et al., 2019; DeYoung et al., 2020). This comprises articles describing RCTs annotated by medical doctors.1 An instance in this dataset comprises an abstract annotated with five elements: An ICO triplet, a _label_ that indicates the directionality of a reported effect of the intervention for the given outcome relative to the comparator (i.e., categorizing that the intervention yielded _statistically significant increase, decrease, no effect_ with respect to the outcome), and an _evidence snippet_. The latter is an excerpt from the abstract providing support for a particular label. This may be viewed as an explanation or "rationale". Together, these five elements form our targets. Table 1 provides basic data statistics for our training, validation, and test sets.
Footnote 1: Although the full dataset contains full-text RCT reports, here use use an abstract-only subset.
Evaluation DataTo get an accurate assessment of model performance, Nye et al. (2022) also collected _exhaustive_ manual annotations from medical experts for 160 RCT abstracts. Owing to the inherent noise in distantly-supervised training lables, we observed that human annotators often identify substantially more tuples per abstract -- 4.97 tuples per abstract in the _validation_ set, and 4.01 in the _test_ set, as opposed to 2.76 in the (non-exhaustive) _training_ set (Table 1). We provide more detailed examples of this phenomenon in our error analysis in Section 4.2.
Figure 3: An illustration of the full evidence inference task. An end-to-end model is expected to extract all ICOs for which results were reported (highlighted here in pink, green, and orange) in an abstract describing an RCT, and infer a label (_significant increase_, _significant decrease_, _no significant difference_) based on the relevant evidence snippets which are also to be output (underlined here).
### Experimental Setup
We performed all of our experiments on a single NVIDIA Quadro RTX 8000 GPU. We used the Huggingface library (v4.26.1; Wolf et al., 2020) and publicly available checkpoints.2 of the language models we used in our experiments Our best performing model was trained for 8 epochs with a learning rate of \(1e-6\), batch size of 2 (for both training and evaluation), with a maximum input length of 1024, and maximum output length of 512. For hyperparameter tuning, we only varied the learning rate, and max epochs. The remaining hyperparameters were left to their default values. We used the Adam optimizer without gradient accumulation or gradient checkpointing.
Footnote 2: [https://huggingface.co/docs/transformers/model_doc/flam-t5](https://huggingface.co/docs/transformers/model_doc/flam-t5)
## 4 Results
We perform both an end-to-end evaluation (Table 2) and ablate performance over ICO-triplet extractions only (Table 3), maintaining comparability to existing work (Nye et al., 2022). Section 4.1 contains details of our manual evaluation, and Section 4.2 a detailed error analysis of model performance.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{Train} & \multicolumn{2}{c}{Dev} & \multicolumn{2}{c}{Test} \\ \hline Abstracts & 1,964 & (1.00) & 46 & (1.00) & 89 & (1.00) \\ Total ICO Tuples & 5,430 & (2.76) & 229 & (4.97) & 357 & (4.01) \\ Unique ICO Triplets & 4,951 & (2.52) & 224 & (4.86) & 351 & (3.94) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset statistics. We report the number of abstracts and the number of relations per abstract (denoted parenthetically). Development and test set statistics differ from their source (Nye et al., 2022) as we omit documents with no annotated relations.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Full Inference End to End** & Precision & Recall & F-1 \\ \hline BRAN (Verga et al., 2018) & 0.05 & 0.41 & 0.08 \\ DyGIE++ (Wadden et al., 2019) & 0.24 & 0.13 & 0.17 \\ ELI (Nye et al., 2022) & 0.33 & 0.31 & 0.32 \\ \hline \multicolumn{4}{c}{_(end-to-end generation of ICO triplets with labels and supporting evidence)_} \\ BART (Lewis et al., 2020) & 0.38 & 0.33 & 0.35 \\ T5-_base_(Raffel et al., 2020) & 0.56 & 0.35 & 0.43 \\ Flan-T5-_base_(Chung et al., 2022) & 0.69 & 0.43 & 0.53 \\
**Flan-T5-_large_** & **0.75** & **0.48** & **0.59** \\ Flan-T5-_large_ (without evidence span extraction) & 0.49 & 0.36 & 0.41 \\ \hline \hline \end{tabular}
\end{table}
Table 2: End-to-end relation extraction results, compare to Nye et al. (2022) Table 2a
### Evaluation
Open-ended free text generation poses challenges to the evaluation of model outputs. Past work in the area, especially prior to LLMs, tended to perform a "strict" evaluation (Taille et al., 2020) requiring exact matches of entities and their corresponding relations to reference targets. This was appropriate because the models were effectively annotating input tokens, and references are assumed to be extractive. By contrast, because they are abstractive, LLMs can produce a variety of outputs that convey the desired semantic content--i.e., aligned with the reference target--without matching words exactly.
This motivates manual evaluation of RE outputs. Specifically, we recruited three medical doctors (domain experts) via the Upwork platform.3 We asked these experts to individually evaluate each reference (to measure precision) _and_ generated tuple (to measure recall) from our exhaustive test set. For each reference tuple we asked experts to indicate: (1) Whether the reference ICO triplet appears in the set of generated tuples for that given abstract; and (2) Whether the target tuple as a whole could be derived from the set of generated tuples for that given abstract. Similarly, for each generated tuple we asked annotators to indicate: (1) Whether the ICO triplet appears in the abstract; and (2) Whether the tuple as a whole is correct (i.e., if it also gets the relevant supporting evidence and reported directionality). We provide examples of each category in the Appendix A. Human evaluators achieved strong annotation agreement; Fleiss kappa, \(\kappa=0.77\). All three evaluators chose the same relevance label \(\sim\)92.4% of the time. We derived final (consensus) labels by simple majority vote.
Footnote 3: [https://upwork.com](https://upwork.com). We paid these experts §30/hour to evaluate generated tuples.
### Error Analysis
We now describe, and provide examples of, some of the recurring error types from our best performing model (Flan-T5-large) on the validation data, and a set of abstracts from approximately 660,000 RCTs from the Trialstreamer database.4
Footnote 4: [https://trialstreamer.ieai.robotreviewer.net/](https://trialstreamer.ieai.robotreviewer.net/)
Incorrectly structured outputsThe model sometimes generated incorrectly formatted outputs which cannot be evaluated because they do not conform to the expected structure. (Recall that the model is not explicitly constrained to yield outputs that follow the desired linearization scheme.) These include generations where: (1) there are missing elements in the (partial) ICO triplets; (2) outputs have an invalid syntactic structure (and are thus
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**ICO-Triplet Extraction** & Precision & Recall & F-1 \\ \hline DyGIE++ (Wadden et al., 2019) & 0.45 & 0.47 & 0.46 \\ ELI (Nye et al., 2022) & 0.46 & 0.69 & 0.55 \\ \hline \multicolumn{4}{c}{_(end-to-end generation of ICO-triplets)_} \\ T5-base (Raffel et al., 2020) & 0.68 & 0.62 & 0.65 \\ Flan-T5-_base_(Chung et al., 2022) & 0.78 & 0.68 & 0.73 \\
**Flan-T5-_large_** & **0.85** & **0.74** & **0.79** \\ \hline \hline \end{tabular}
\end{table}
Table 3: ICO-Triplet Ablation, compare to Nye et al. (2022) Table 2(b) (entity extraction)
unparseable by any downstream tools); (3) some elements are duplicated; (4) the output contains irrelevant or unrelated tokens. The following is an example of one such instance:
**Generated**: [none, score, no, none, score was not significantly different between the two groups., no significant difference]
Here the instance has an incorrect number of tuple elements (6 instead of 5), multiple elements are invalid, and while it does produce a valid label ("no significant difference"), there are no primary intervention and outcome spans associated with the label. This behavior occurs in only a small fraction (\(\sim\)0.53%) of the RCT abstracts from Trialstreamer we ran through our model.
**Opposite inference labels for same ICOs** Approximately 12.3% of generated tuples had ICO-triplet matches in the reference set (i.e., the ICO triplet was correctly extracted), but the inferred label regarding the reported findings concerning these was incorrect (e.g., significant _increase_ instead of significant _decrease_). On inspection we found that such tuples belonged to two categories: (1) The primary intervention and comparator were swapped (leading to a flipped, albeit still correct, inference label with the same extracted evidence span); (2) Minor differences in generated _outcomes_ which resulted in a change in the label. The following is an example of the latter from our development set (PMID: 24227660:5)
Footnote 5: [https://pubmed.ncbi.nlm.nih.gov/24227660/](https://pubmed.ncbi.nlm.nih.gov/24227660/)
**Abstract snippet**: Canagliflozin increased urinary glucose excretion in a dose-dependent manner and produced statistically significant reductions in body weight compared with placebo (least squares mean percent changes from baseline of -2.2%, -2.9%, -2.7%, and -1.3% with canagliflozin 50, 100, and 300 mg and placebo; P \(<\) 0.05 for all comparisons). Overall adverse event (AE) rates were similar across groups. Canagliflozin was associated with higher rates of genital mycotic infections in women, which were generally mild and led to few study discontinuations. Osmotic diuresis-related AE rates were low and similar across groups.
**Reference**: [canagliflozin, body weight, placebo, Canagliflozin increased urinary glucose excretion in a dose-dependent manner and produced statistically significant reductions in body weight compared with placebo., canagliflozin [LABEL] significantly decreased [OUT] body weight [COMP] placebo]
**Generated**: [canagliflozin, body weight reduction, placebo, Canagliflozin increased urinary glucose excretion in a dose-dependent manner and produced statistically significant reductions in body weight compared with placebo., canagliflozin [LABEL] significantly increased [OUT] body weight reduction [COMP] placebo]
An _increase_ in _body weight reduction_ is functionally the same as a _decrease_ in _body weight_, and this explains the label flip.
Combining multiple tuplesOn average, our best performing model generates 3.49 ICO tuples per instance, as opposed to 4.01 per instance in the reference test set (Table 1). This difference appears to be due to the model _combining_ multiple interventions and/or outcomes into one in cases where the inference label is preserved, in turn reducing the number of generated tuples. Consider the following example6 from our dev set where this behavior can be observed (PMID: 279810247):
Footnote 6: Example simplified for brevity.
**Reference**: [memory game with fruit, banana intake, no fruit game, evidence, significant increase], [memory game with fruit, mandarin intake, no fruit game, evidence, significant increase]
**Generated**: [fruit version of memory game, intake of mandarins and bananas, no fruit game, evidence, significant increase]
Here we can observe that the generated tuple has combined banana and mandarin intake, yielding a single output instead of the two in the reference.
**Correctly generated but without any corresponding reference** This type of "error" is limited to non-exhaustive reference sets, and occurs when there is no corresponding reference tuple for a correctly generated ICO output (because the reference set is non-exhaustive). While this is rare, instances featuring this type of error highlight the utility of the retrieved evidence snippets, which can be used to verify the output. The following is one such example taken from an abstract that was **not** exhaustively annotated (PMID: 282110208):
Footnote 7: [https://pubmed.ncbi.nlm.nih.gov/27981024/](https://pubmed.ncbi.nlm.nih.gov/27981024/)
**Abstract snippet**:9 High-risk patients undergoing brain surgery were randomly assigned to a usual care group (control group) or a GDFR group. In the GDFR group, (1) fluid maintenance was restricted to 3 ml/kg/h of a crystalloid solution and (2) colloid boluses were allowed only in case of hypotension associated with a low cardiac index and a high stroke volume variation. The primary outcome variable was ICU length of stay, and secondary outcomes were lactates at the end of surgery, postoperative complications, hospital length of stay, mortality at day 30, and costs. ICU length of stay was shorter (3 days [1-5] vs. 6 days [3-11], p = 0.001) and ICU costs were lower in the GDFR group. The total number of complications (46 vs. 99, p = 0.043) and the proportion of patients who developed one or more complications (19.2 vs. 34.7%, p = 0.034) were smaller in the GDFR group. Hospital length of stay and costs, as well as mortality at 30 day, were not significantly reduced.
Footnote 9: Shortened for brevity.
**Reference**: [goal-directed fluid restriction (GDFR) strategy, median hospital length of stay, usual care, _Hospital length of stay and costs, as well as mortality at 30-day, were not significantly reduced_, no significant difference]
**Generated**: [goal-directed fluid restriction (GDFR), ICU length of stay, usual care group (control group), _ICU length of stay was shorter (3 days [16] vs. 6 days [3{11], p = 0.001) and ICU costs were lower in the GDFR group.,_ significantly decreased]
Here, on inspecting only the ICO-triplet and the inference label, one might assume that the generated tuple is incorrect with respect to the reference (due to a flipped label). However, examining the extracted evidence in support of the label and the full abstract reveals that the study does indeed report _median length of hospital stay_ and _ICU-length of stay_ as separate outcomes with different (opposite) labels.
## 5 A Prototype for Browsing Structured Evidence
To further demonstrate the (potential) utility of structured evidence extraction over the published evidence base, we make available a demonstration web application.10 This permits free-text search, which retrieves relevant structured evidence extracted from papers (we also link back to the original PubMed articles).
Footnote 10: Hosted at [http://ico-relations.ebm-nlp.com](http://ico-relations.ebm-nlp.com).
We processed all Randomized Control Trials indexed by Trialstreamer (Marshall et al., 2020) as of June 2022, yielding 657,698 total studies and a total of 1,204,027 extracted relations. Relation extraction required 584 GPU (32GB NVIDIA V100) hours. Of the 770,356 unique Trialstreamer documents, approximately 50k instances were missing a full abstract. When processed via FLAN, 74k (about 10%) had an unparseable output; lacking (or possessing an extra) a syntatic element (e.g. missing a bracket or having an extra one, or other terminator symbol). Another 5k had an output with an incorrect number of fields. 82 had a malformed label. When parsing misclassified RCTs (erroneously included in Trialstreamer), the model would hallucinate ICOs and findings not present in the data.
The prototype implements a BM25 search (Robertson et al., 1994) backed by SQLite (Hipp, 2020), allowing for search over multiple fields.11 The website allows for downloading search results (by search or by list of PMIDs/PMCIDs); our hope is that this may be of interest to researchers. We will make the entire raw database of inferred relations available upon publication.
Footnote 11: We experimented with embedding based methods but were ultimately disappointed with results
## 6 Discussion
We have introduced and evaluated a state-of-the-art approach to end-to-end structured evidence extraction from natural language articles describing the conduct and results of clinical trials. Specifically, we treat this problem as a conditional generation task and fine-tune Flan-T5 (Chung et al., 2022)--a modestly sized instruction-tuned sequence-to-sequence model--to consume unstructured texts and yield structured tuples composed of interventions, comparators, outcomes and the results reported regarding these. The latter comprises a discrete prediction encoding the direction of the reported finding, and a snippet of evidence supporting this determination. Ablations indicate the importance of jointly extracting evidence spans to support the inference task; this may have implications for work on relation extraction via conditional generative models more broadly.
Figure 4: A screenshot of our prototype search interface over structured evidence. (A) User inputs a search query and select the fields (B) to be searched over via an SQL search (C; Hipp 2020, e.g., entire abstract, only ICOs). Search results can either be downloaded as a structured CSV (D) or the user can browse through individual results (E). We retrieve up to 100 documents per search query with 10 documents per page (F). The interface allows the users to read expanded abstracts, view structured findings _(shown above)_, and expand structured markup for a tabular view of findings.
Structured evidence extraction is an important task for realizing the promise of evidence-based medicine (EBM; Sackett 1997), which aspires to inform treatment decisions on the basis of all available relevant evidence. The vast (unstructured) evidence base and rapid accumulation of new findings render practicing EBM challenging. The proposed approach to evidence extraction achieves substantially better performance than the prior state-of-the-art (Nye et al., 2022), and this brings us closer to being able to synthesize all evidence relevant to a given query, in real-time.
To illustrate the potential utility of this model, we have also made available a prototype interface that permits search directly over structured evidence tuples automatically extracted from a comprehensive database of randomized controlled trial reports. Our hope is that this demonstrates the precision of model outputs, and suggests how such extracted evidence might help researchers and healthcare providers navigate the evidence base more efficiently than is currently possible. We also anticipate that the resultant database (comprising tuples from all RCTs in humans) may be a useful resource for researchers in machine learning for healthcare broadly, as one might draw upon such trial results to inform and/or justify ML predictions (Yang et al., 2023; Naik et al., 2022).
LimitationsThis work has several important limitations. First, while we have reported promising empirical results, the model we have trained here still makes errors (e.g., provides inexhaustive extractions from an inputs; see Section 4.2). Any downstream use of the structured evidence outputs need to take this into account.
A methodological limitation is that we did not investigate the capabilities of even larger LLMs like GPT-3.5/4 Brown et al. (2020) for this task. One could, in principle, use OpenAI's API to fine-tune such models for this task, and given their size it is likely that this would yield (probably moderately) improved results. We opted not to pursue this primarily because we prefer to use open-source models, to ensure scientific transparency and so that we can release model weights. Furthermore, the main contribution here is the framing of the task as a language modeling problem; the particular choice of underlying LLM is a secondary consideration.
Finally, while we think structured evidence in the format that we have extracted--providing explicit sets of interventions, comparators, outcomes and evidence concerning these--will provide meaningful downstream utility for those interested in navigating and making sense of the published evidence base, it is currently an intermediate output. The actual utility of this sort of model for downstream tasks which ultimately might affect care will require conducting further research.
This work was supported by the National Institutes of Health (NIH) under award R01LM012086, and by the National Science Foundation (NSF) award 1750978.
|
2310.09867 | A remark on Ado's Theorem for principal ideal domains | Ado's Theorem had been extended to principal ideal domains independently by
Churkin and Weigel. They demonstrated that if $R$ is a principal ideal domain
of characteristic zero and $\mathfrak{L}$ is a Lie algebra over $R$ which is
also a free $R$-module of finite rank, then $\mathfrak{L}$ admits a finite
faithful Lie algebra representation over $R$.
We present a quantitative proof of this result, providing explicit bounds on
the degree of the Lie algebra representations in terms of the rank of the free
module. To achieve it, we generalise an established embedding theorem for
complex Lie algebras: any Lie algebra as above embeds within a larger Lie
algebra that decomposes as the direct sum of its nilpotent radical and another
subalgebra. | Andoni Zozaya | 2023-10-15T15:47:36Z | http://arxiv.org/abs/2310.09867v1 | # A remark on Ado's theorem for principal ideal domains
###### Abstract.
Ado's Theorem had been extended to principal ideal domains independently by Churkin and Weigel. They demonstrated that if \(R\) is a principal ideal domain of characteristic zero and \(\mathfrak{L}\) is a Lie algebra over \(R\) which is also a free \(R\)-module of finite rank, then \(\mathfrak{L}\) admits a finite faithful Lie algebra representation over \(R\).
Key words and phrases:Ado's Theorem, Lie algebras, representations, principal ideal domains 2020 Mathematics Subject Classification: 17B10, 17B30, 17B35 The author acknowledges support by the Basque Government, project IT483-22, and the Spanish Government, project PID2020-117281GB-I00, partly with ERDF funds.
Iwasawa [8] extended Ado's Theorem to Lie algebras over fields of positive characteristic, and there are further generalizations in the base ring. Following the terminology of [13], for a general ring \(R\) we denote by \(R\)-Lie lattice an \(R\)-Lie algebra that is a free \(R\)-module of finite rank as well. Actually, any \(R\)-Lie algebra that admits a finite matricial representation is indeed an \(R\)-Lie lattice. Conversely, suppose that \(R\) is a principal ideal domain (PID) of characteristic zero or a general ring of positive characteristic, Churkin [5] and Weigel [13] proved that every \(R\)-Lie lattice \(\mathfrak{L}\) admits a finite faithful \(R\)-Lie algebra representation \(\Phi\colon\mathfrak{L}\hookrightarrow\operatorname{End}_{R}(V)\), where \(V\) is a free \(R\)-module of finite rank. Like for fields, the degree of the preceding representation \(\Phi\) is defined to be \(\operatorname{rk}_{R}V,\) the rank of \(V\) as a free \(R\)-module, and the degree of an \(R\)-Lie lattice is defined exactly as in (1.1).
Suppose that \(R\) is a PID of characteristic zero. Both [5] and [13] follow Jacobson's proof of the Theorem of Ado (see [9, Chapter VI]) --which in its turn, is based on a proof due by Harish-Chandra [7]--, but, unlike for fields, it cannot be directly affirmed that \(\deg\mathfrak{L}\) depends uniquely on \(\operatorname{rk}_{R}\mathfrak{L}.\) In fact, in [13, Proposition 3.4], the degree-to-be is finite because \(R\) is a Noetherian ring, and so a particular ascending chain of ideals must be stationary. However, the length of the chain --which eventually will be the degree of the representation--, might not be bounded in terms of \(\operatorname{rk}_{R}\mathfrak{L}.\)
In this note, we collect several existing proofs of Ado's Theorem, and by adapting them to PIDs we prove the following:
**Theorem 1.1**.: _Let \(R\) be a PID of characteristic zero and let \(\mathfrak{L}\) be an \(R\)-Lie lattice of rank \(r.\) Then, \(\deg\mathfrak{L}\leq r+\eta\frac{2^{r}}{\sqrt{r}},\) where \(\eta\sim 2.763.\)_
In particular, we recover for PIDs the best bound yet known over fields of characteristic zero.
More concretely, in Subsections 3.1 and 3.2 we reproduce quantitative results about the representability of nilpotent and splittable \(R\)-Lie lattices, and in Subsection 3.3 (see Theorem 3.5) we prove the following:
**Theorem 1.2**.: _Let \(R\) be a PID of characteristic zero and let \(\mathfrak{L}\) be an \(R\)-Lie lattice. There exists an \(R\)-Lie lattice of the form \(\bar{\mathfrak{L}}=R_{n}(\bar{\mathfrak{L}})\rtimes\mathfrak{S}\) extending \(\mathfrak{L},\) where \(R_{n}(\bar{\mathfrak{L}})\) is the nilpotent radical of \(\bar{\mathfrak{L}}.\)_
This result is based on the analogue for complex Lie algebras proved by Neretin [11], and previously discussed in [10, 12]. Lastly, Theorem 1.1 is proved in Subsection 3.4 using Theorem 1.2 and the arguments of the previous subsections.
Finally, we must note that for rings of positive characteristic, the generalisation is proved reproducing word-by-word the original demonstration of Iwasawa, and therefore we obtain the same bound we had for these fields, namely
\[\deg\mathfrak{L}\leq n^{\operatorname{rk}^{3}\mathfrak{L}},\]
where \(n=\operatorname{char}R\) (compare with [2, Section 6.24]).
**Notation** Hereinafter \(R\) will always be a PID of characteristic zero, and we will use \(K\) to denote fields. For an \(R\)-Lie lattice \(\mathfrak{L}\), \(R_{n}(\mathfrak{L})\) and \(R_{s}(\mathfrak{L})\) refer to the nilpotent and solvable radicals of \(\mathfrak{L}.\) We denote by \(\dim_{K}\) the \(K\)-vector space dimension, by \(\operatorname{rk}_{R}\) (\(\operatorname{rk}\) when \(R\) is clear from the context) the rank of a free \(R\)-module, by \(\langle X\rangle_{R}\) the \(R\)-module generated by a set \(X,\) and \(\mathfrak{I}\leq\mathfrak{L}\) and \(\mathfrak{I}\trianglelefteq\mathfrak{L}\) represent respectively that \(\mathfrak{I}\) is a subalgebra
and an ideal of \(\mathfrak{L}.\) We will use the abbreviation \([\mathfrak{I}_{1},\ldots,\mathfrak{I}_{n}]=[[\mathfrak{I}_{1},\ldots,\mathfrak{I}_ {n-1}],\mathfrak{I}_{n}]\) for iterated Lie brackets, and throughout the manuscript "\(:=\)" is used to mean _defined to be_ in contrast with "\(=\)", which is used to denote _equal to_.
Finally, we recall that an \(R\)-submodule \(\mathfrak{I}\leq\mathfrak{L}\) is isolated if whenever \(rx\in\mathfrak{I}\) for some \(r\in R\) and \(x\in\mathfrak{L},\) then \(x\in\mathfrak{I},\) that is, the quotient \(R\)-module \(\nicefrac{{x}}{{\mathfrak{I}}}\) is torsion-free, and thus free.
## 2. Preliminaries: adjoint and regular representations
There are two natural Lie algebra representations in any \(R\)-Lie lattice \(\mathfrak{L}.\) On the one hand, by virtue of Jacobi's identity the adjoint representation \(\operatorname{Ad}\colon\mathfrak{L}\to\operatorname{End}_{R}(\mathfrak{L}),\)\(x\mapsto\operatorname{ad}_{x},\) where \(\operatorname{ad}_{x}\colon\mathfrak{L}\to\mathfrak{L},\)\(y\mapsto[x,y],\) is a finite Lie algebra representation. However, this representation is not faithful as its kernel is the centre of \(\mathfrak{L},\) namely
\[Z(\mathfrak{L})=\{x\in\mathfrak{L}\mid[x,y]=0\ \forall y\in\mathfrak{L}\}.\]
In particular, when \(\mathfrak{L}\) is a semisimple \(R\)-Lie lattice, i.e. \(\mathfrak{L}\) has no abelian ideal, then \(\deg\mathfrak{L}\leq\operatorname{rk}_{R}\mathfrak{L}.\)
Typically, Ado's Theorem is proved by constructing a _finite_ representation \(\Phi\colon\mathfrak{L}\to\operatorname{End}_{R}(W)\) that is faithful in \(Z(\mathfrak{L}),\) and then taking the finite faithful representation \(\operatorname{Ad}\oplus\Phi.\)
On the other hand, \(\mathfrak{L}\) acts on its universal enveloping algebra \(\mathcal{U}_{R}(\mathfrak{L}).\) Indeed, the tensor algebra of \(\mathfrak{L}\) is
\[\mathbf{T}_{R}(\mathfrak{L})=R\oplus\mathfrak{L}_{1}\oplus\mathfrak{L}_{2} \oplus\cdots\oplus\mathfrak{L}_{i}\oplus\ldots,\]
where \(\mathfrak{L}_{i}:=\mathfrak{L}\otimes\stackrel{{(i)}}{{\ldots}} \otimes\mathfrak{L}\) is an \(R\)-module with the natural \(R\)-module structure of the tensor product, and the multiplication in \(\mathbf{T}_{R}(\mathfrak{L})\) is defined extending by linearity the rule
\[(x_{1}\otimes\cdots\otimes x_{i})\otimes(y_{1}\otimes\cdots\otimes y_{j})=x_{ 1}\otimes\cdots\otimes x_{i}\otimes y_{1}\otimes\cdots\otimes y_{j}.\]
Then, the universal enveloping algebra of \(\mathfrak{L}\) is
\[\mathcal{U}_{R}(\mathfrak{L}):=\frac{\mathbf{T}_{R}(\mathfrak{L})}{\mathfrak{ R}},\]
where \(\mathfrak{R}\) is the ideal generated by the elements
\[[x,y]-(x\otimes y-y\otimes x),\ \forall x,y\in\mathfrak{L}. \tag{2.1}\]
The image of \(x_{i_{1}}\otimes\cdots\otimes x_{i_{t}}\) in \(\mathcal{U}_{R}(\mathfrak{L})\) will be simply denoted by the monomial \(x_{i_{1}}\ldots x_{i_{t}}.\)
Since \(R\) is a PID and \(\mathfrak{L}\) is finitely generated, the natural inclusion \(\iota\colon\mathfrak{L}\cong\mathfrak{L}_{1}\hookrightarrow\mathcal{U}_{R}( \mathfrak{L})\) is a monomorphism (see [13, Theorem 3.2]), and therefore, we can assume that \(\mathfrak{L}\subseteq U_{R}(\mathfrak{L}).\) The universal enveloping algebra is characterised by the Poincare-Birkhoff-Witt Theorem:
**Theorem 2.1** (cf. [13, Theorem 3.2]).: _Let \(\mathfrak{L}\) be an \(R\)-Lie lattice with basis \(\{x_{1},\ldots,x_{r}\}.\) Then \(\mathcal{U}_{R}(\mathfrak{L})\) is a free \(R\)-module with basis_
\[\left\{x_{1}^{\alpha_{1}}\ldots x_{r}^{\alpha_{r}}\mid\alpha_{i}\in\mathbb{N}_ {0}\right\}, \tag{2.2}\]
_where \(x_{1}^{0}\ldots x_{r}^{0}=1\) is the identity of \(R.\)_
The idea is that given two monomials, their product can expressed as a suitable linear combination of elements of the form (2.2) by successively applying the identity \(x_{j}x_{i}=x_{i}x_{j}-[x_{i},x_{j}]\) to reorder the indeterminates.
As we have said, \(\mathfrak{L}\) acts on \(\mathcal{U}_{R}(\mathfrak{L})\) by multiplication and this gives rise to the (left) regular representation \(\mathcal{R}\colon\mathfrak{L}\hookrightarrow\operatorname{End}_{R}\left( \mathcal{U}_{R}(\mathfrak{L})\right),\) where \(\mathcal{R}(x)\) is the left multiplication map \(\ell_{x}\colon\mathcal{U}_{R}(\mathfrak{L})\to\mathcal{U}_{R}(\mathfrak{L}),\)\(y\mapsto xy.\)
By virtue of (2.1), \(\mathcal{R}\) is a Lie algebra representation, and it is faithful as
\[\ell_{x}(1)=x\neq y=\ell_{y}(1)\text{ for all }x,y\in\mathfrak{L}.\]
Although \(\mathcal{R}\) is not finite, by virtue of the universal property of the enveloping algebra (see [13, Proposition 3.1]), every finite Lie algebra representation of \(\mathfrak{L}\) factors through \(\mathcal{U}_{R}(\mathfrak{L}).\) Hence, \(\mathfrak{L}\) admits a finite faithfull Lie algebra representation if and only if there exists an isolated ideal \(\mathfrak{X}\unlhd\mathcal{U}_{R}(\mathfrak{L})\) such that \(\mathfrak{X}\cap\mathfrak{L}=\{0\}.\) Actually, for the _if_ it is enough to consider the induced action on the free \(R\)-module \(\nicefrac{{\mathcal{U}_{R}(\mathfrak{L})}}{{\mathfrak{X}}}.\)
## 3. Main result
Let \(\mathfrak{L}\) be an \(R\)-Lie lattice and let \(K\) be a field extending \(R\) -e.g. the fraction field of \(R\)-, the tensorial \(K\)-Lie algebra \(\mathfrak{L}_{K}:=\mathfrak{L}\otimes_{R}K\) will be useful in the following subsections. Note in passing that even though \(\mathfrak{L}_{K}\) admits a matricial representation \(\Phi\colon\mathfrak{L}_{K}\hookrightarrow\operatorname{M}_{n}(K),\)\(\Phi|_{\mathfrak{L}}\) might not be a matricial representation over \(R.\)
### Nilpotent Lie lattices
For nilpotent \(R\)-Lie lattices the construction of Birkhoff [3] is still valid over PIDs.
Suppose that \(\mathfrak{L}\) is a nilpotent \(R\)-Lie lattice of nilpotency class \(c,\) and let \(K\) be the fraction field of \(R.\) Since the Lie bracket is bilinear, \(\mathfrak{L}_{K}=\mathfrak{L}\otimes_{R}K\) is also a nilpotent Lie algebra of nilpotency class \(c.\)
For each \(i\in\{1,\ldots,c\},\) define the isolated ideal \(\mathfrak{L}_{i}=[\mathfrak{L}_{K},\nicefrac{{(i)}}{{\cdot}},\mathfrak{L}_{ K}]\cap\mathfrak{L}\unlhd\mathfrak{L},\) and choose a basis \(\{x_{1},\ldots,x_{r}\}\) for \(\mathfrak{L}\) as free \(R\)-module in such way that the first elements \(x_{1},\ldots,x_{r_{1}}\) are an \(R\)-basis for \(\mathfrak{L}_{c},\) the first elements \(x_{1},\ldots,x_{r_{2}}\) (\(r_{2}>r_{1}\)) are an \(R\)-basis for \(\mathfrak{L}_{c-1}\) and so forth. In view of Theorem 2.1, the elements of \(\mathcal{U}_{R}(\mathfrak{L})\) are of the form \(\sum_{\alpha\in\mathbb{N}_{0}^{(r)}}c_{\alpha}\mathbf{x}^{\alpha},\) where \(\mathbf{x}^{\alpha}\) stands for \(x_{1}^{\alpha_{1}}\ldots x_{r}^{\alpha_{r}}.\) Accordingly, define a weight function \(\omega\colon\mathcal{U}_{R}(\mathfrak{L})\to\mathbb{N}_{0}\cup\{\infty\}\) in the following fashion:
\[\omega(x_{i})=\max\left\{m\mid x_{i}\in\mathfrak{L}_{m}\right\}, \omega(\mathbf{x}^{\alpha})=\sum_{i=1}^{r}\alpha_{i}\omega(x_{i}),\] \[\omega\left(\sum_{\alpha}c_{\alpha}\mathbf{x}^{\alpha}\right)= \min\left\{\omega(\mathbf{x}^{\alpha})\mid c_{\alpha}\neq 0\right\}, \omega(0)=\infty.\]
Observe that \(\omega([x_{i},x_{j}])\geq\omega(x_{i})+\omega(x_{j})\) for all \(i,j\in\{1,\ldots,r\},\) and so
\[\omega(uv)\geq\omega(u)+\omega(v)\ \forall u,v\in\mathcal{U}_{R}(\mathfrak{L}). \tag{3.1}\]
For each \(m\in\mathbb{N}_{0},\) consider the isolated \(R\)-modules
\[\mathfrak{U}^{m}(\mathfrak{L}):=\{u\in\mathcal{U}_{R}(\mathfrak{L})\mid\omega( u)>m\}\]
-or simply \(\mathfrak{U}^{m}\) when the lattice is clear from the context-. By (3.1), \(\mathfrak{U}^{m}(\mathfrak{L})\) is an ideal and thus for every \(x\in\mathfrak{L}\) we have that \(\ell_{x}(\mathfrak{U}^{m})\subseteq\mathfrak{U}^{m},\) so for any \(m\in\mathbb{N}\) the regular representation induces the finite representation
\[\mathcal{R}_{m}\colon\mathfrak{L}\to\operatorname{End}_{R}\left(\frac{ \mathcal{U}_{R}(\mathfrak{L})}{\mathfrak{U}^{m}(\mathfrak{L})}\right),\ x\mapsto\ell_{x},\]
whose kernel is \(\mathfrak{L}\cap\mathfrak{U}^{m}(\mathfrak{L})\) -with an abuse of notation, whenever \(f\in\operatorname{End}_{R}(\mathcal{U}_{R}(\mathfrak{L}))\) satisfies \(f(\mathfrak{X})\subseteq\mathfrak{X}\) for some ideal \(\mathfrak{X}\unlhd\mathcal{U}_{R}(\mathfrak{L})\), we will keep \(f\) to denote the endomorphism of \(\operatorname{End}_{R}\left(\nicefrac{{\mathcal{U}_{R}(\mathfrak{L})}}{{ \mathfrak{X}}}\right)\) defined as \(x+\mathfrak{X}\mapsto f(x)+\mathfrak{X}\).
Since \(\mathfrak{L}\cap\mathfrak{U}^{c}(\mathfrak{L})=\{0\}\), \(\mathcal{R}_{c}\) is a finite faithful representation and its degree is
\[\left|\left\{\mathbf{x}^{\alpha}\mid\omega(\mathbf{x}^{\alpha})\leq c\right\} \right|,\]
as these monomials are a basis for \(\nicefrac{{\mathcal{U}_{R}(\mathfrak{L})}}{{\mathfrak{U}^{c}}}\). Finally, this number was bounded by Burde (see [4, Lemma 5(3) and Proposition 6]):
\[\deg\mathcal{R}_{c}=\operatorname{rk}_{R}\left(\frac{\mathcal{U}_{R}( \mathfrak{L})}{\mathfrak{U}^{c}(\mathfrak{L})}\right)=\left|\left\{\mathbf{x }^{\alpha}\mid\omega(\mathbf{x}^{\alpha})\leq c\right\}\right|\leq\eta\frac{2^ {r}}{\sqrt{r}}, \tag{3.2}\]
where \(\eta=\sqrt{\frac{2}{\pi}}\prod_{l=1}^{\infty}\frac{2^{l}}{2^{l}-1}\sim 2.763\).
### Splittable Lie lattices
We say that \(\mathfrak{L}\) is splittable if the short exact sequence
\[0\to R_{n}(\mathfrak{L})\to\nicefrac{{\mathfrak{L}}}{{R_{n}(\mathfrak{L})}}\to 0\]
splits in the category of \(R\)-Lie algebras, that is, if there exists an \(R\)-Lie subalgebra \(\mathfrak{S}\leq\mathfrak{L}\) such that \(\mathfrak{L}=R_{n}(\mathfrak{L})\rtimes\mathfrak{S}\).
In the splittable case we can blend the preceding regular representation for \(R_{n}(\mathfrak{L})\) and representations induced from derivations, namely endomorphisms \(D\in\operatorname{End}_{R}(\mathfrak{L})\) that satisfy Leibniz identity, i.e.
\[D([x,y])=[x,D(y)]+[D(x),y]\ \forall x,y\in\mathfrak{L}.\]
The collection of all derivations of \(\mathfrak{L}\) is denoted by \(\operatorname{Der}_{R}(\mathfrak{L})\). For example, in view of Jacobi's identity, \(\operatorname{ad}_{x}\) is a derivation for every \(x\in\mathfrak{L}.\) Starting from \(D\in\operatorname{Der}_{R}(\mathfrak{L})\) we can define a derivation \(D^{*}\) of \(\mathcal{U}_{R}(\mathfrak{L})\) by imposing Jacobi's identity, that is, by taking the linear extension of the rule
\[D^{*}(x_{i_{1}}\ldots x_{i_{t}})=\sum_{j}x_{i_{1}}\ldots x_{i_{j-1}}D(x_{i_{j} })x_{i_{j+1}}\ldots x_{i_{t}},\]
together with \(D^{*}(1)=0\) as it must happen for every derivation of an algebra with identity.
In keeping with the notation of the previous subsection, we have:
**Lemma 3.1**.: _Let \(\mathfrak{L}\) be a nilpotent \(R\)-Lie lattice and \(D\in\operatorname{Der}_{R}(\mathfrak{L}).\) Then \(D^{*}(\mathfrak{U}^{m}(\mathfrak{L}))\subseteq\mathfrak{U}^{m}(\mathfrak{L})\) for every \(m\in\mathbb{N}.\)_
Proof.: Let \(c\) be the nilpotency class of \(\mathfrak{L}\), and let \(\{x_{1},\ldots,x_{r}\}\) be the basis of \(\mathfrak{L}\) with respect to which the weight function \(\omega\) has been defined. Since \(D\) is a derivation, \(D(\mathfrak{L}_{i})\subseteq\mathfrak{L}_{i}\) for all \(i\in\{1,\ldots,c\}\), so \(\omega(D(x_{i}))\geq\omega(x_{i})\) for all \(i\in\{1,\ldots,r\}.\) Hence, if \(x_{i_{1}}\ldots x_{i_{t}}\in\mathfrak{U}^{m}(\mathfrak{L})\), by (3.1),
\[\omega\left(D^{*}(x_{i_{1}}\ldots x_{i_{t}})\right)\geq\min_{j=1,\ldots,t} \left\{\omega(x_{i_{1}}\ldots D(x_{i_{j}})\ldots x_{i_{t}})\right\}\geq\omega( x_{i_{1}}\ldots x_{i_{t}})>m.\qed\]
**Proposition 3.2** (Zassenhaus extension, cf. [9, Chapter VI.2, Theorem 1]).: _Let \(\mathfrak{L}\) be a splittable \(R\)-Lie lattice and let \(c\) be the nilpotency class of \(R_{n}(\mathfrak{L})\). Then, there exists a finite \(R\)-Lie algebra representation_
\[\Phi\colon\mathfrak{L}\to\operatorname{End}_{R}\left(\frac{\mathcal{U}_{R}(R_ {n}(\mathfrak{L}))}{\mathfrak{U}^{c}(R_{n}(\mathfrak{L}))}\right)\]
_that is injective in \(R_{n}(\mathfrak{L})\) and such that_
\[\deg\Phi\leq\eta\frac{2^{\operatorname{rk}R_{n}(\mathfrak{L})}}{\sqrt{ \operatorname{rk}R_{n}(\mathfrak{L})}}, \tag{3.3}\]
_where \(\eta\sim 2.763.\)_
Proof.: Denote \(R_{n}(\mathfrak{L})\) by \(\mathfrak{N},\) then \(\mathfrak{L}=\mathfrak{N}\rtimes\mathfrak{S}\) for some \(R\)-Lie subalgebra \(\mathfrak{S}\leq\mathfrak{L}\). By Lemma 3.1, \(\operatorname{ad}_{x}^{*}\left(\mathfrak{U}^{c}(\mathfrak{N})\right)\subseteq \mathfrak{U}^{c}(\mathfrak{N})\) for all \(x\in\mathfrak{L},\) so we can define the map
\[\Phi\colon\mathfrak{L}=\mathfrak{N}\oplus\mathfrak{S}\to\operatorname{End}_{ R}\left(\nicefrac{{\mathfrak{U}_{R}(\mathfrak{N})}}{{\mathfrak{U}^{c}( \mathfrak{N})}}\right),\ n+s\mapsto\ell_{n}+\operatorname{ad}_{s}^{*}.\]
In order to show that it is an \(R\)-Lie algebra homomorphism, it suffices to confirm that
\[\Phi\left([s,n]\right)=[\Phi(s),\Phi(n)]=[\operatorname{ad}_{s}^{*},\ell_{n}]\]
for all \(n\in\mathfrak{N}\) and \(s\in\mathfrak{S}.\) Indeed, for any \(n\in\mathfrak{N}\) and \(D\in\operatorname{Der}_{R}\left(\mathcal{U}_{R}(\mathfrak{N})\right):\)
\[[D,\ell_{n}](u)=D\circ\ell_{n}(u)-\ell_{n}\circ D(u)=D(n)u=\ell_{D(n)}(u)\ \forall u\in\mathfrak{N},\]
and, since \(\mathfrak{N}\) is an ideal, \([s,n]\in\mathfrak{N},\) so
\[\Phi\left([s,n]\right)=\ell_{[s,n]}=\ell_{\operatorname{ad}_{s}^{*}(n)}=[ \operatorname{ad}_{s}^{*},\ell_{n}]=[\Phi(s),\Phi(n)]\,.\]
Consequently, \(\Phi\) is a finite \(R\)-Lie algebra representation. In addition, \(\Phi|_{\mathfrak{N}}\) is nothing but the faithful representation \(\mathcal{R}_{c}\) of \(\mathfrak{N}.\) Finally, (3.3) follows from (3.2).
### Embedding Theorem
In [9, Chapter IV.2], and the succeeding works following it, Levi's Theorem is crucial; namely, if \(K\) is a field of characteristic zero there exists a semisimple Lie algebra \(\mathfrak{S}\leq\mathfrak{L}\) such that \(\mathfrak{L}=R_{s}(\mathfrak{L})\rtimes\mathfrak{S}\) (see [9, Chapter III.9]). However, this results is not longer true for PIDs. For instance, the \(\mathbb{Z}\)-Lie algebra \(\mathfrak{sl}_{2}(2\mathbb{Z})\oplus\mathfrak{t}_{2}(2\mathbb{Z})\) --the direct sum of \(2\times 2\) matrices of trace zero and \(2\times 2\) upper triangular matrices with coefficients in \(2\mathbb{Z}\)-- does not admit such a decomposition (see [5, Example in pg. 838]).
Nevertheless, every Lie lattice embeds in a splittable (in the sense of Subsection 3.2) \(R\)-Lie lattice. In effect, over algebraically closed fields this result was first proved for solvable Lie algebras by Mal'cev [10] and Reed [12], and using similar ideas Neretin [11] proved the following (albeit [11] is about complex Lie algebras, the proof is still valid, with small remarks, for any field of characteristic zero):
**Theorem 3.3** (cf. [11, Lemma 1]).: _Let \(K\) be a field of characteristic zero and \(\mathfrak{L}\) a finite dimensional \(K\)-Lie algebra. There exists a splittable \(K\)-Lie algebra \(\bar{\mathfrak{L}}=R_{n}(\bar{\mathfrak{L}})\rtimes\mathfrak{S},\) where \(\mathfrak{S}\) is reductive, extending \(\mathfrak{L}.\)_
The above theorem is proved by successively applying elementary expansions. Indeed, suppose that we have a \(K\)-Lie algebra \(\mathfrak{K}=\mathfrak{N}\rtimes\mathfrak{S}\) extending \(\mathfrak{L}\) such that \(\mathfrak{N}\) is a solvable ideal containing \(R_{n}(\mathfrak{K})\) and \(\mathfrak{S}\) is a reductive -direct sum of a semisimple and an abelian algebra- subalgebra that acts fully irreducibly on \(\mathfrak{N}.\) We shall construct another Lie algebra \(\mathfrak{K}^{\prime}\) extending \(\mathfrak{K}\) that satisfies those same conditions.
By [9, Chapter III.7, Theorem 13], \([\mathfrak{N},\mathfrak{K}]\leq R_{n}(\mathfrak{K}).\) Thus, unless \(\mathfrak{N}\) is nilpotent, there exists an ideal \(\mathfrak{I}\unlhd\mathfrak{N}\) of codimension one containing \(R_{n}(\mathfrak{K})\). Since the action of \(\mathfrak{S}\) is fully irreducible, there exists an element \(y\in\mathfrak{N}\setminus R_{n}(\mathfrak{K})\) such that \(\mathfrak{N}=\mathfrak{I}\oplus Ky\) as \(\mathfrak{S}\)-modules, in particular, \([y,\mathfrak{S}]\subseteq R_{n}(\mathfrak{K})\cap Ky=\{0\}.\) Moreover, according to the Jordan-Chevalley decomposition (see [9, Chapter III.11, Theorem 16]), the derivation \(\operatorname{ad}_{y}\in\operatorname{Der}_{R}(\mathfrak{K})\) decomposes as \(d_{s,y}+d_{n,y}\) where \(d_{s,y}\) and \(d_{n,y}\) are respectively a semisimple and a nilpotent \(K\)-linear endomorphism.
_Remark 3.4_.: Both \(d_{s,y}\) and \(d_{n,y}\) are in \(\operatorname{Der}_{K}(\mathfrak{K})\). Indeed, when \(K\) is algebraically closed it was proved in [12, Proposition 3], as \(d_{s,n}(v)=\alpha v\) provided that \(v\) belongs to the generalised \(\alpha\)-eigenspace of \(\operatorname{ad}_{y}.\)
In general, let us write \(S=d_{s,y}\) and \(N=d_{n,y}\) and let \(\bar{K}\) be the algebraic closure of \(K.\) Suppose that \(\bar{S}+\bar{N}\) is the Jordan-Chevalley decomposition of \(\operatorname{ad}_{y}\) in \(\mathfrak{K}_{\bar{K}}=\mathfrak{K}\otimes_{K}\bar{K}.\) Then
\[S\otimes\bar{K}+N\otimes\bar{K}=\operatorname{ad}_{y}=\bar{S}+\bar{N}\]
are two decompositions of \(\operatorname{ad}_{y}\in\operatorname{End}_{\bar{K}}(\mathfrak{K}_{\bar{K}}),\) so by the uniqueness \(\bar{S}=S\otimes\bar{K}\) and \(\bar{N}=N\otimes\bar{K}.\) Finally, since \(\bar{S}\) and \(\bar{N}\) satisfy Leibniz identity, so do \(S\) and \(N.\)
Thus, we can construct a so-called elementary expansion, namely the \(K\)-Lie algebra
\[\mathfrak{K}^{\prime}=\mathfrak{I}\oplus\mathfrak{S}\oplus Kx^{\prime}\oplus Kz ^{\prime},\]
where \(x^{\prime}\) and \(z^{\prime}\) are formal symbols satisfying
\[[x^{\prime},u]=d_{n,y}(u),\ \ [z^{\prime},u]=d_{s,y}(u),\ \ [x^{\prime},z^{ \prime}]=0\]
for every \(u\in\mathfrak{I}\oplus\mathfrak{S},\) and where we keep the original Lie bracket for the elements of \(\mathfrak{I}\oplus\mathfrak{S}.\) Observe that \(\mathfrak{K}=\mathfrak{I}\oplus Ky\oplus\mathfrak{S}\) embeds as a Lie algebra in \(\mathfrak{K}^{\prime}\) with respect to \(y=x^{\prime}+z^{\prime}.\)
In addition, \(\ker\operatorname{ad}_{y}\subseteq\ker d_{s,y}\) (see Remark 3.4), so \([z^{\prime},\mathfrak{S}]=0\) and \(\mathfrak{S}^{\prime}:=\mathfrak{S}\oplus Kz^{\prime}\) is a reductive Lie algebra. Moreover, \(R_{n}(\mathfrak{K})\oplus Kx^{\prime}\) is the nilpotent radical of \(\mathfrak{K}^{\prime},\)\(\mathfrak{N}^{\prime}:=\mathfrak{I}\oplus Kx^{\prime}\) is solvable, and, since \(d_{s,y}\) is a semisimple operator, the action of \(\mathfrak{S}^{\prime}\) in \(\mathfrak{N}^{\prime}\) is fully reducible. In particular, \(\mathfrak{K}^{\prime}=\mathfrak{N}^{\prime}\rtimes\mathfrak{S}^{\prime}.\) In passing, note that
\[\dim_{K}\mathfrak{N}^{\prime}=\dim_{K}\mathfrak{N}\text{ and }\dim_{K}R_{n}( \mathfrak{K}^{\prime})=\dim_{K}R_{n}(\mathfrak{K})+1. \tag{3.4}\]
Levi's Theorem gives us the first of the step of the above-described procedure. Indeed, \(\mathfrak{L}=R_{s}(\mathfrak{L})\rtimes\mathfrak{S}\) for a semisimple subalgebra \(\mathfrak{S}\leq\mathfrak{L},\) and by virtue of Weyl's Theorem on complete reducibility (see [9, Chapter III.7, Theorem 8]), the action of \(\mathfrak{S}\) on \(R_{s}(\mathfrak{L})\) is fully reducible. Fix bases \(\{x_{1},\ldots,x_{s}\}\) of \(R_{n}(\mathfrak{L})\) and \(\{z_{1},\ldots,z_{t}\}\) of \(\mathfrak{S}.\) In view of (3.4), repeating the previous process eventually we obtain a \(K\)-Lie algebra
\[\bar{\mathfrak{L}}=\bar{\mathfrak{N}}\rtimes\bar{\mathfrak{S}}=\langle x_{1}, \ldots,x_{s},x_{1}^{\prime},\ldots,x_{r}^{\prime}\rangle_{K}\rtimes\langle z_{ 1},\ldots,z_{t},z_{1}^{\prime},\ldots,z_{r}^{\prime}\rangle_{K}, \tag{3.5}\]
where \(\bar{\mathfrak{N}}\) is a nilpotent ideal and \(\bar{\mathfrak{S}}\leq\bar{\mathfrak{L}}\) is a reductive subalgebra, and a \(K\)-basis \(\{x_{1},\ldots,x_{s},y_{1},\ldots,y_{r},z_{1},\ldots,z_{t}\}\) of \(\mathfrak{L}\) such that \(R_{s}(\mathfrak{L})=\langle x_{1},\ldots,x_{s},y_{1},\ldots,y_{r}\rangle_{K}\) and \(y_{i}=x_{i}^{\prime}+z_{i}^{\prime}.\) In particular, \(\bar{\mathfrak{L}}\) extends \(\mathfrak{L},\) and by construction:
1. for all \(i,j\in\{1,\ldots,r\}\) and \(k\in\{1,\ldots,t\}\) \[\left[z_{i}^{\prime},z_{j}^{\prime}\right]=\left[z_{i}^{\prime},z_{k}\right]=0,\] and therefore \[[x_{i}^{\prime},z_{k}]=[y_{i},z_{k}]\in R_{n}(\mathfrak{L})\] (see [9, Chapter II.7, Theorem 13]);
2. for all \(i\in\{1,\ldots,s\}\) and \(j,k\in\{1,\ldots,r\},\) by [9, Chapter III.6, Theorem 7], \(d_{n,y_{j}}(R_{s}(\mathfrak{L}))\subseteq R_{n}(\mathfrak{L}),\) so \[\left[x_{j}^{\prime},x_{i}\right]=d_{n,y_{j}}(x_{i})\in R_{n}(\mathfrak{L})\text { and }\left[x_{j}^{\prime},y_{k}\right]=d_{n,y_{j}}(y_{k})\in R_{n}(\mathfrak{L}).\] In particular, \(R_{n}(\mathfrak{L})\unlhd R_{n}(\bar{\mathfrak{L}});\)
3. \(\dim_{K}R_{n}(\bar{\mathfrak{L}})=s+r=\dim_{K}R_{s}(\mathfrak{L}).\)
Furthermore, in view of (N1)-(N2), we have that
\[\left[x_{i}^{\prime},\mathfrak{L}\right]:=\left\{\left[x_{i}^{\prime},u\right]\,| \,\,u\in\mathfrak{L}\right\}\subseteq R_{n}(\mathfrak{L})=\langle x_{1}, \ldots,x_{s}\rangle_{K} \tag{3.6}\]
for all \(i\in\{1,\ldots,r\}.\) As a consequence, we can prove the following strengthened version of Theorem 1.2:
**Theorem 3.5**.: _Let \(R\) be a PID of characteristic zero and let \(\mathfrak{L}\) be an \(R\)-Lie lattice. Then \(\mathfrak{L}\) embeds into a splittable \(R\)-Lie lattice \(\bar{\mathfrak{L}}\) such that_
* \(R_{n}(\mathfrak{L})\leq R_{n}(\bar{\mathfrak{L}})\) _and_
* \(\operatorname{rk}R_{n}(\bar{\mathfrak{L}})=\operatorname{rk}R_{s}(\mathfrak{ L}).\)__
Proof.: Let \(K\) be the fraction field of \(R\) and \(\mathfrak{L}_{K}:=\mathfrak{L}\otimes_{R}K.\) According to Theorem 3.3, there exists a finite dimensional splittable \(K\)-Lie algebra \(\mathfrak{L}_{K}^{\prime}=R_{n}(\mathfrak{L}_{K}^{\prime})\rtimes\mathfrak{ S}_{K}^{\prime}\) extending \(\mathfrak{L}_{K}\) and satisfying conditions (N1)-(N3). Denote for simplicity \(\mathfrak{N}_{K}^{\prime}:=R_{n}(\mathfrak{L}_{K}^{\prime})\).
Let \(\{x_{1},\ldots,x_{s}\}\) be a basis for \(R_{n}(\mathfrak{L})\) as free \(R\)-module, then \(R_{n}(\mathfrak{L}_{K})=\langle x_{1},\ldots,x_{s}\rangle_{K}\) and, by (3.5), there is a \(K\)-vector space basis of \(\mathfrak{N}_{K}^{\prime}\) of the form
\[\left\{x_{1},\ldots,x_{s},x_{1}^{\prime},\ldots,x_{r}^{\prime}\right\},\]
where \(s+r=\dim_{K}R_{s}(\mathfrak{L}_{K})=\operatorname{rk}_{R}R_{s}(\mathfrak{L})\) (compare with (N3)).
Furthermore, by (N2), there exists \(\mu\in R\setminus\{0\}\) such that
\[\mathfrak{N}:=\langle x_{1},\ldots,x_{s},\mu x_{1}^{\prime},\ldots,\mu x_{r} ^{\prime}\rangle_{R}\]
is a nilpotent \(R\)-Lie lattice of rank \(s+r\), \(R_{n}(\mathfrak{L})=\langle x_{1},\ldots,x_{s}\rangle_{R}\unlhd\mathfrak{N}\) and
\[[\mu x_{j}^{\prime},\mathfrak{L}]\subseteq\langle x_{1},\ldots,x_{s}\rangle_{ R},\]
for all \(j\in\{1,\ldots,r\}\) (using (3.6) and that \(\mathfrak{L}\) is finitely generated for the last condition). In particular,
\[[\mathfrak{N},\overset{(i)}{\dashv},\mathfrak{N},\mathfrak{L}]\leq[ \mathfrak{N},\overset{(i)}{\dashv},\mathfrak{N}]\,\,\forall i\in\mathbb{N}. \tag{3.7}\]
Let \(\bar{\mathfrak{S}}\) be the projection of \(\mathfrak{L}\) into \(\mathfrak{S}_{K}^{\prime}\), that is,
\[\bar{\mathfrak{S}}=\left\{\sigma\in\mathfrak{S}_{K}^{\prime}\,\,|\,\,\exists \,\,x\in\mathfrak{L},\,\exists\,\,n\in\mathfrak{N}_{K}^{\prime}\text{ such that }x=n+\sigma\right\}.\]
Then \(\bar{\mathfrak{S}}\) is an \(R\)-Lie algebra. Indeed, if \(x_{1}=n_{1}+\sigma_{1}\) and \(x_{2}=n_{2}+\sigma_{2}\in\mathfrak{L}\), where \(n_{i}\in\mathfrak{N}_{K}^{\prime}\) and \(\sigma_{i}\in\mathfrak{S}_{K}^{\prime}\) (\(i\in\{1,2\}\)), then
\[[x_{1},x_{2}]=[n_{1},x_{2}]+[\sigma_{1},n_{2}]+[\sigma_{1},\sigma_{2}],\]
\([x_{1},x_{2}]\in\mathfrak{L}\), \([n_{1},x_{2}]+[\sigma_{1},n_{2}]\in\mathfrak{N}_{K}^{\prime}\) and \([\sigma_{1},\sigma_{2}]\in\mathfrak{S}_{K}^{\prime}.\) In addition, since \(\mathfrak{L}\) is finitely generated, \(\bar{\mathfrak{S}}\) is a free \(R\)-module of finite rank.
Moreover, since \(\mathfrak{L}\) is a finitely generated \(R\)-module there exists \(\lambda\in R\setminus\{0\}\) such that
\[\mathfrak{L}=\frac{1}{\lambda}\mathfrak{N}\oplus\bar{\mathfrak{S}}. \tag{3.8}\]
Let \(c\) be the nilpotency class of \(\mathfrak{N}\), define \(\mathfrak{N}_{i}:=[\mathfrak{N},\overset{(i)}{\dashv},\mathfrak{N}]\), for each \(i\in\{1,\ldots,c\}\), and
\[\bar{\mathfrak{N}}:=\sum_{i=1}^{c}\frac{1}{\lambda^{i}}\mathfrak{N}_{i}\leq \mathfrak{N}_{K}^{\prime},\]
which is a free \(R\)-module of rank \(s+r=\operatorname{rk}R_{s}(\mathfrak{L})\).
On the one hand,
\[\left[\frac{1}{\lambda^{i}}\mathfrak{N}_{i},\frac{1}{\lambda^{j}}\mathfrak{N}_{j }\right]=\frac{1}{\lambda^{i+j}}\left[\mathfrak{N}_{i},\mathfrak{N}_{j}\right] \leq\frac{1}{\lambda^{i+j}}\mathfrak{N}_{i+j},\]
so \(\mathfrak{N}\) is a nilpotent \(R\)-Lie lattice.
On the other hand, by (3.8) and (3.7),
\[\left[\frac{1}{\lambda^{i}}\mathfrak{N}_{i},\bar{\mathfrak{S}}\right] \leq\left[\frac{1}{\lambda^{i}}\mathfrak{N}_{i},\mathfrak{L}+ \frac{1}{\lambda}\mathfrak{N}\right]\leq\frac{1}{\lambda^{i}}\left[\mathfrak{ N}_{i},\mathfrak{L}\right]+\frac{1}{\lambda^{i+1}}\left[\mathfrak{N}_{i}, \mathfrak{N}\right]\] \[\leq\frac{1}{\lambda^{i}}\mathfrak{N}_{i}+\frac{1}{\lambda^{i+1 }}\mathfrak{N}_{i+1}\leq\bar{\mathfrak{N}}\]
for every \(i\in\{1,\ldots,c\}\).
Hence, \(\bar{\mathfrak{L}}:=\bar{\mathfrak{N}}\rtimes\bar{\mathfrak{S}}\) is an \(R\)-Lie lattice that extends \(\mathfrak{L}\); by construction \(\bar{\mathfrak{L}}\) is splittable, \(R_{n}(\mathfrak{L})\leq\bar{\mathfrak{N}}=R_{n}(\bar{\mathfrak{L}})\) and \(\operatorname{rk}R_{n}(\bar{\mathfrak{L}})=\operatorname{rk}\bar{\mathfrak{N }}=s+r=\operatorname{rk}R_{s}(\mathfrak{L})\).
### Ado's Theorem for PIDs
Finally, we gather all the ingredients:
proof of Theorem 1.1.: Let \(\mathfrak{L}\) be an \(R\)-Lie lattice of rank \(r\). According to Theorem 3.5, there exists a splittable \(R\)-Lie lattice \(\bar{\mathfrak{L}}=R_{n}(\bar{\mathfrak{L}})\rtimes\mathfrak{S}\) extending \(\mathfrak{L}\) such that \(R_{n}(\mathfrak{L})\leq R_{n}(\bar{\mathfrak{L}})\) and \(\operatorname{rk}R_{n}(\bar{\mathfrak{L}})=\operatorname{rk}R_{s}(\mathfrak{ L}).\) By Proposition 3.2, there exists an \(R\)-Lie algebra representation \(\Phi\) of \(\bar{\mathfrak{L}}\) which is injective in \(R_{n}(\bar{\mathfrak{L}})\) and whose degree is bounded by \(f(\operatorname{rk}R_{n}(\bar{\mathfrak{L}}))\), for \(f\colon\mathbb{N}_{\geq 1}\to\mathbb{R}\), \(r\mapsto\eta\frac{2^{r}}{\sqrt{r}}\).
Therefore \(\tilde{\Phi}:=\Phi|_{\mathfrak{L}}\oplus\operatorname{Ad}\) is an \(R\)-Lie algebra representation of \(\mathfrak{L}\) that is faithful, as
\[\ker\tilde{\Phi}=\ker\Phi|_{\mathfrak{L}}\cap\ker\operatorname{Ad}\subseteq( \mathfrak{L}\setminus R_{n}(\mathfrak{L}))\cap Z(\mathfrak{L})=\{0\}.\]
Thus, since \(\operatorname{rk}R_{n}\left(\bar{\mathfrak{L}}\right)=\operatorname{rk}R_{s}( \mathfrak{L})\) and \(f\) is non-decreasing
\[\deg\mathfrak{L}\leq\deg\tilde{\Phi}=\deg\Phi+\deg\operatorname{Ad}\leq f( \operatorname{rk}R_{s}(\mathfrak{L}))+r\leq f(r)+r.\qed\]
_Remark 3.6_.: For a \(K\)-Lie algebra \(\mathfrak{L}\), with \(K\) being a field of characteristic zero, Harish-Chandra [7] improved the original result of Ado by constructing a finite faithful representation \(\Psi\colon\mathfrak{L}\hookrightarrow\operatorname{End}_{K}(V)\) with the additional property of been a so-called nil-representation, i.e. \(\Psi(x)\) is a nilpotent endomorphism for every \(x\in R_{n}(\mathfrak{L})\).
Note that the representation \(\tilde{\Phi}\) of the preceding proof is also a nil-representation, as both \(\Phi\) and \(\operatorname{Ad}\) are so.
|
2309.02689 | Effective Description of the Quantum Damped Harmonic Oscillator:
Revisiting the Bateman Dual System | In this work, we present a quantization scheme for the damped harmonic
oscillator (QDHO) using a framework known as momentous quantum mechanics. Our
method relies on a semiclassical dynamical system derived from an extended
classical Hamiltonian, where the phase-space variables are given by expectation
values of observables and quantum dispersions. The significance of our study
lies in its potential to serve as a foundational basis for the effective
description of open quantum systems (OQS), and the description of dissipation
in quantum mechanics. By employing the Bateman's dual model as the initial
classical framework, and undergoing quantization, we demonstrate that our
description aligns exceptionally well with the well-established Lindblad master
equation. Furthermore, our approach exhibits robustness and broad applicability
in the context of OQS, rendering it a versatile and powerful tool for studying
various phenomena. We intend to contribute to the advancement of quantum
physics by providing an effective means of quantizing the damped harmonic
oscillator and shedding light on the behavior of open quantum systems. | Carlos Raul Javier Valdez, Hector Hugo Hernandez Hernandez, Guillermo Chacón Acosta | 2023-09-06T03:53:09Z | http://arxiv.org/abs/2309.02689v1 | _Effective Description of the Quantum Damped Harmonic Oscillator: Revisiting the Bateman Dual System_
###### Abstract
In this work, we present a quantization scheme for the damped harmonic oscillator (QDHO) using a framework known as momentous quantum mechanics. Our method relies on a semiclassical dynamical system derived from an extended classical Hamiltonian, where the phase-space variables are given by expectation values of observables and quantum dispersions. The significance of our study lies in its potential to serve as a foundational basis for the effective description of open quantum systems (OQS), and the description of dissipation in quantum mechanics. By employing the Bateman's dual model as the initial classical framework, and undergoing quantization, we demonstrate that our description aligns exceptionally well with the well-established Lindblad master equation. Furthermore, our approach exhibits robustness and broad applicability in the context of OQS, rendering it a versatile and powerful tool for studying various phenomena. We intend to contribute to the advancement of quantum physics by providing an effective means of quantizing the damped harmonic oscillator and shedding light on the behavior of open quantum systems.
## 1 Introduction
Quantum mechanics (QM) enables us to investigate the dynamics and interactions of physical systems at atomic and subatomic scales. It has successfully addressed various phenomena that classical physics could not explain. Notable examples include the black body radiation, the double-slit, and the Stern-Gerlach experiments, among many others. The traditional approaches of quantum mechanics, namely the Schrodinger equation, or matrix mechanics, have primarily focused on
closed systems, where no interaction with the environment occurs. However, quantum systems are not isolated but constantly exchange energy and information with their surroundings: these are known as open quantum systems (OQS). The significance of such systems has led to the development of theoretical frameworks and computational techniques that account for the complexities of real-world quantum phenomena.
The lack of a direct description of dissipation in quantum mechanics has been a long-standing challenge, and usually simple systems are analyzed as they serve as fundamental scenarios for investigation and could potentially offer insights into more general methods. As such, the quantum damped harmonic oscillator (QDHO) has been studied under several approaches [1, 2, 3], that have, however, limitations to varying extents. For example, for the Bateman dual system, the energy spectrum fails to remain real-valued [4, 5]. Similarly, the time-dependent Caldirola-Kanai Hamiltonian exhibits an exponential decay in the quantum analog, affecting the evolution of the energy expectation value and position width [6, 7], violating Heisenberg's uncertainty principle. These problems arise due to the presence of non-Hermitian operators, responsible for dissipation, and their corresponding complex eigenvalues, lacking a physical interpretation [8].
There are alternative approaches to OQS using density matrix theory through master equations, such as the Lindblad equation (also known as Gorini-Kossakowski-Lindblad-Sudarshan equation) [9, 10], valuable schemes in studying OQS and their extensive applications in quantum optics [11, 12, 13, 14]. However, master equations require certain assumptions and approximations, such as the Born approximation, the Markov property, the rotating wave approximation, weak coupling, and an infinite number of degrees of freedom [15]. This implies that, when utilizing master equations for analyzing more complex quantum systems, a thorough examination of these assumptions and a comprehensive understanding of their implications are essential.
Frameworks like Bohmian mechanics and Everett's interpretation, presently without a clear treatment of open quantum systems, could benefit from incorporating a proper dissipation consideration. Both descriptions rely on the Schrodinger equation (SE), leading again to the involvement of non-Hermitian operators [6, 16, 17].
The utilization of effective theories for studying nontrivial quantum mechanical systems has emerged as a compelling alternative [18]. Particularly interesting are methods derived from geometric formulations, such as the momentous quantum mechanics [19], which offer valuable mathematical tools and intuitive insights for interpreting complex phenomena. Noteworthy applications of this method include the quantum pendulum, quantum tunneling, the double-slit experiment, and various quantum cosmological scenarios [20, 21, 22, 23, 24, 25]. In this work, we analyze the QDHO as a testing ground for applying momentous quantum mechanics to OQS. By revisiting Bateman's dual model, we derive a system of differential equations that yield the semi-classical dynamics of expectation values of observables for the oscillator. Comparing these equations with those arising from the Lindblad master equation, we discuss the conditions under which the systems coincide. Our analysis demonstrates that momentous quantum mechanics successfully overcomes the limitations encountered with the quantization methods for the Bateman Hamiltonian mentioned above. We intend to advance the understanding of dissipation in open quantum systems and emphasize the potential of momentous quantum mechanics as a powerful tool in quantum research. The versatility of this approach opens exciting prospects for exploring a wide range of quantum phenomena with enhanced accuracy and physical insight.
## 2 Approaches for the Quantum Damped Harmonic Oscillator
In this section, we review two approaches to the problem of dissipation for the QHO. It provides a starting point for our effective description. The Bateman Hamiltonian is historically one of the first attempts to study the quantization of the QDHO, and it is still under study [26, 27, 28, 29]. The Lindblad master equation is a widely used approach in quantum optics [11] and in quantum information [30, 31].
### Bateman's Dual System
The classical equation for the linearly damped harmonic oscillator is given by
\[\ddot{x}+2\lambda\dot{x}+\omega_{o}^{2}x=0 \tag{1}\]
where \(\lambda\) is the damping constant, and \(\omega_{0}\) is the natural frequency of the oscillator. This equation of motion can be obtained from the two-dimensional Bateman Lagrangian
\[L(x,\dot{x},y,\dot{y})=m\dot{x}\dot{y}+\lambda m(x\dot{y}-\dot{x}y)-kxy. \tag{2}\]
An equation for the auxiliary variable \(y(t)\) can also be obtained
\[\ddot{y}-2\lambda\dot{y}+\omega_{0}^{2}y=0 \tag{3}\]
which represents a mirror image, or time reversed oscillator. This coupled \(x-y\) system is conserved, because the energy dissipated by the \(x\)-oscillator is being absorbed by the \(y\)-oscillator [32].
For the quantum version we need the Hamiltonian. By using the Legendre transformation, we obtain the Bateman Hamiltonian
\[H=m\dot{x}\dot{y}+kxy=\frac{1}{m}p_{x}p_{y}+\lambda(yp_{y}-xp_{x})+\Omega^{2}xy \tag{4}\]
where \(\Omega^{2}=\omega_{0}^{2}-\lambda^{2}\). The canonical momenta read
\[p_{y}=m(\dot{x}+\lambda x),\quad p_{x}=m(\dot{y}-\lambda y), \tag{5}\]
and it can be seen that classical position and momenta are canonical
\[\{x,p_{x}\}=\{y,p_{y}\}=1. \tag{6}\]
The usual canonical momenta \(p_{i}=m\dot{x}_{i}\), are modified due to dissipation, even in the limit \(\lambda\to 0\).
To obtain the quantum dynamics, promoting classical phase space variables to operators, we need to remove the ambiguity in the Hamiltonian (4). To this end we use the Weyl ordering:
\[W(\hat{x}_{i}\cdot\hat{p}_{i})=\frac{1}{2}(\hat{x}_{i}\hat{p}_{i}+\hat{p}_{i} \hat{x}_{i}). \tag{7}\]
In this way we obtain the Hamiltonian operator for the Bateman model
\[\hat{H}=\frac{1}{m}\hat{p}_{x}\hat{p}_{y}-\frac{\lambda}{2}\Big{(}(\hat{x} \hat{p}_{x}+\hat{p}_{x}\hat{x})-(\hat{y}\hat{p}_{y}+\hat{p}_{y}\hat{y})\Big{)}+ \Omega^{2}\hat{x}\hat{y} \tag{8}\]
from which one obtains the quantum evolution of the QDHO. Following the quantization procedure of Feshbach and Tikochinsky [4, 32], or the approach used by Chruscinsk and Jurkowski [5], one obtains the following energy spectrum
\[\begin{split}\hat{H}|\psi^{\pm}_{j,m}\rangle&=E^{\pm}_ {j,m}|\psi^{\pm}_{j,m}\rangle\\ &=(2\hbar\Omega j\pm i\hbar\lambda(2m+1))\ket{\psi^{\pm}_{j,m}}, \quad m=|j|,|j|+1/2,|j|+1,...,\end{split} \tag{9}\]
\(j\in\mathbb{Z}\). Thus, complex eigenvalues are obtained. We can see that a unitary evolution, and a physical interpretation, is no longer possible. Moreover, as shown by Dekker in [1], the uncertainty relations decay to zero, violating Heisenberg's principle.
### The master equations approach: Lindblad Equation
The Lindblad master equation is formulated within the Density Matrix Theory (DMT). This framework offers a set of tools allowing the study of pure and mixed states by using the density operator
\[\rho:=\sum_{n}p_{n}|\psi_{n}\rangle\langle\psi_{n}|, \tag{10}\]
where \(|\psi_{n}\rangle\) is a normalized vector in the Hilbert space \(\mathcal{H}\). It also applies to composite quantum systems by using the reduced matrix1
Footnote 1: For a composite system given by \(\rho_{AB}=\rho\otimes\sigma\), where \(\rho\) and \(\sigma\) belong to Hilbert spaces \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\) respectively, the reduced density operator on the composite system \(\rho_{AB}\) is
\[\rho_{A}: =\mathrm{Tr}_{B}\{\rho_{AB}\}\] \[=\mathrm{Tr}_{B}\big{\{}\rho\otimes\sigma\}\] \[=\rho\mathrm{Tr}\{\sigma\}\]
. This basis idea to describe dissipation under DMT is by defining a total Hamiltonian \(\hat{H}_{T}\)
\[\hat{H}_{T}=\hat{H}_{S}+\hat{H}_{R}+\hat{H}_{SR}, \tag{11}\]
composed by the system of interest \(\hat{H}_{S}\), the environment or reservoir \(\hat{H}_{R}\), and the interaction between them \(\hat{H}_{SR}\), forming a closed system that can be analyzed by the von Neumann equation. Because, in general, \(\hat{H}_{T}\) describes an extremely complex system, the problem is put in a more tractable form by using the reduced matrix method, which allows the study of a subsystem of the composite system, thus limiting the analysis only to the system of interest \(\hat{H}_{S}\).
\(\hat{H}_{S}\), \(\hat{H}_{R}\) and the interaction \(\hat{H}_{SR}\) are defined as follows
\[\hat{H}_{S} =\hbar\omega\hat{a}^{\dagger}\hat{a},\] \[\hat{H}_{R} =\sum_{j}\hbar\omega_{j}\hat{r}^{\dagger}_{j}\hat{r}_{j},\] \[H_{SR} =\sum_{j}\hbar(k^{*}_{j}\hat{a}\hat{r}^{\dagger}_{j}+k_{j}\hat{a} ^{\dagger}\hat{r}_{j}). \tag{12}\]
The system of interest \(\hat{H}_{S}\) is the QHO, with \(\hat{a}^{\dagger}\) and \(\hat{a}\) are the creation and annihilation operators, respectively. The reservoir Hamiltonian \(\hat{H}_{R}\) is modeled by an infinite number of harmonic oscillators, where \(\hat{r}^{\dagger}_{j}\) and \(\hat{r}_{j}\) are the corresponding creation and annihilation operators of the \(jth\)
oscillator. Finally, \(\hat{H}_{SR}\) is the interaction between the system of interest and the reservoir, and \(k_{j}\) are coupling constants.
Following [11, 12, 14], where the Born, Markov, rotating wave, and weak coupling approximations are used, one arrives at the Lindblad master equation for the QDHO
\[\dot{\rho}=-i\omega_{o}^{\prime}[a^{\dagger}a,\rho]+\frac{\gamma}{2}(2a\rho a^ {\dagger}-a^{\dagger}a\rho-\rho a^{\dagger}a)+\gamma\bar{n}(a\rho a^{\dagger}+ a^{\dagger}\rho a-a^{\dagger}a\rho-\rho aa^{\dagger}), \tag{13}\]
where \(\omega_{o}^{\prime}\) is a frequency shift \(\omega_{o}+\Delta\), \(\gamma\) is a damping constant, and \(\bar{n}=\bar{n}(\omega,T)\) is the mean photon number for the reservoir oscillators in thermal equilibrium at temperature \(T\).
Instead of solving this equation for \(\rho(t)\), as was done in [34], one can work directly with the evolution of expectation values of observables as in [35]
\[\frac{d}{dt}\langle\hat{O}\rangle=Tr\{\hat{O}\dot{\rho}\}. \tag{14}\]
From it the mean energy evolution is obtained
\[\langle\hat{E}(t)\rangle=\left\langle\hat{n}(t)+\frac{1}{2}\right\rangle\hbar \omega=\left(\big{(}\langle\hat{n}(0)\rangle-\bar{n}\big{)}e^{-\gamma t}+\bar{ n}+\frac{1}{2}\right)\hbar\omega. \tag{15}\]
As \(t\to\infty\) the energy of the system decays to an excited state above the ground state \((\bar{n}+1/2)\hbar\omega\). Focusing on the evolution of the expectation value of position and momentum operators
\[\hat{x}=\sqrt{\frac{\hbar}{2m\omega}}\big{(}\hat{a}+\hat{a}^{\dagger}\big{)}, \quad\hat{p}=\frac{1}{i}\sqrt{\frac{\hbar m\omega}{2}}\big{(}\hat{a}-\hat{a}^{ \dagger}\big{)}, \tag{16}\]
equations of motion follow
\[\frac{d}{dt}\langle\hat{x}\rangle=\frac{\omega_{o}^{\prime}}{m\omega}\langle \hat{p}\rangle-\frac{\gamma}{2}\langle\hat{x}\rangle,\quad\frac{d}{dt}\langle \hat{p}\rangle=-m\omega\omega_{o}^{\prime}\langle\hat{x}\rangle-\frac{\gamma} {2}\langle\hat{p}\rangle, \tag{17}\]
which are the classical equations of the damped oscillator. Thus, in the classical limit, the Lindblad master equation recovers the classical dynamics. To complement the quantum dynamics, we need to investigate the evolution of the dispersions \(\langle\hat{x}^{2}\rangle\), \(\langle\hat{p}^{2}\rangle\), which are obtained from Eqs. (14) and (13)
\[\frac{d}{dt}\langle\hat{x}^{2}\rangle=-\gamma\langle\hat{x}^{2} \rangle+\frac{\omega_{o}^{\prime}}{m\omega}\langle\hat{x}\hat{p}+\hat{p}\hat{ x}\rangle+\frac{\gamma\hbar}{2m\omega}(2\bar{n}+1),\] \[\frac{d}{dt}\langle\hat{p}^{2}\rangle=-\gamma\langle\hat{p}^{2} \rangle-m\omega_{o}^{\prime}\omega\langle\hat{x}\hat{p}+\hat{p}\hat{x}\rangle+ \frac{\gamma\hbar m\omega}{2}(2\bar{n}+1),\] \[\frac{d}{dt}\langle\hat{x}\hat{p}+\hat{p}\hat{x}\rangle=-2m \omega_{o}^{\prime}\omega\langle\hat{x}^{2}\rangle+\frac{2\omega_{o}^{\prime} }{m\omega}\langle\hat{p}^{2}\rangle-\gamma\langle\hat{x}\hat{p}+\hat{p}\hat{ x}\rangle, \tag{18}\]
From Eqs. (17) and (18) we observe that the classical \((\langle\hat{x}\rangle,\langle\hat{p}\rangle)\) and quantum \((\langle\hat{x}^{2}\rangle,\langle\hat{p}^{2}\rangle,\langle\hat{x}\hat{p} \rangle,\langle\hat{p}\hat{x}\rangle)\) variables decouple, thus, the classical evolution does not get modified by the quantum back-reaction [19, 25].
## 3 Effective description
### Momentous Quantum Mechanics
Momentous quantum mechanics [19, 25], is a semiclassical setting that allows studying complex quantum systems by approximating their behavior through effective classical equations. Classical
and quantum evolution are related by means of the following prescription
\[\{\langle\hat{A}\rangle,\langle\hat{B}\rangle\}=\frac{1}{i\hbar}\langle[\hat{A}, \hat{B}]\rangle. \tag{19}\]
Although expectation values of observables give classical variables, \(x=\langle\hat{x}\rangle,p=\langle\hat{p}\rangle\), because in general \(\langle\hat{A}^{n}\rangle\neq\langle\hat{A}\rangle^{n}\), most expectation values of quantum operators cannot be associated with classical variables. In the momentous formalism, the expectation value of general quantum correlation operators are defined as
\[G^{a_{1},b_{1},...,a_{k},b_{k}}:=\!\!\left\langle(\hat{x}_{1}-\langle\hat{x}_{ 1}\rangle)^{a_{1}}(\hat{p}_{1}-\langle\hat{p}_{1}\rangle)^{b_{1}}\cdots(\hat{ x}_{k}-\langle\hat{x}_{k}\rangle)^{a_{k}}(\hat{p}_{k}-\langle\hat{p}_{k} \rangle)^{b_{k}}\right\rangle_{\text{Weyl}}, \tag{20}\]
for a system with \(k\) degrees of freedom. Once more, Weyl ordering is employed. From this definition, for one degree of freedom, we can recover the usual fluctuations \((\Delta x)^{2}=G^{2,0},(\Delta p)^{2}=G^{0,2}\), and covariance \(\Delta(xp)=G^{1,1}\). Heisenberg's uncertainty can be written in terms of these variables
\[G^{2,0}G^{0,2}-(G^{1,1})^{2}\geq\frac{\hbar^{2}}{4}. \tag{21}\]
Equipped with this structure, and classical and quantum variables, the evolution of the quantum system can be obtained from the effective Hamiltonian \(H_{Q}\)
\[\langle\hat{H}\rangle=H_{Q} =\langle H[x_{1}+(\hat{x}_{1}-x_{1}),p_{1}+(\hat{p}_{1}-p_{1}),...,x_{k}+(\hat{x}_{k}-x_{k}),p_{k}+(\hat{p}_{k}-p_{k})]\rangle_{\text{Weyl}}\] \[=\sum_{a_{1},b_{1},...,a_{k},b_{k}}^{\infty}\frac{1}{a_{1}!b_{1}! \cdots a_{k}!b_{k}!}\frac{\partial^{a_{1}+b_{1}+\cdots+a_{k}+b_{k}}H_{class}} {\partial x_{1}^{a_{1}}\partial p_{1}^{b_{1}}\cdots\partial x_{k}^{a_{k}} \partial p_{k}^{b_{k}}}G^{a_{1},b_{1},...,a_{k},b_{k}}\] \[=H_{class}+\sum_{a_{1}+b_{1}+\cdots+a_{k}+b_{k}\geq 2}^{\infty} \frac{1}{a_{1}!b_{1}!\cdots a_{k}!b_{k}!}\frac{\partial^{a_{1}+b_{1}+\cdots+ a_{k}+b_{k}}H_{class}}{\partial x_{1}^{a_{1}}\partial p_{1}^{b_{1}}\cdots \partial x_{k}^{a_{k}}\partial p_{k}^{b_{k}}}G^{a_{1},b_{1},...,a_{k},b_{k}} \tag{22}\]
with \(H_{class}=H(x_{1},p_{1},...,x_{k},p_{k})\). Equations of motion can be obtained by obtaining the Poisson bracket between variables and the Hamiltonian2
Footnote 2: For instance, the algebra of quantum variables up to second order, for one degree of freedom, is
\[\{G^{2,0},G^{1,1}\}=2G^{2,0},\quad\{G^{2,0},G^{0,2}\}=4G^{1,1},\quad\{G^{1,1},G^{0,2}\}=2G^{0,2}\]
\[\frac{d\langle\hat{f}\rangle}{dt}=\{\langle\hat{f}\rangle,H_{Q}\}, \tag{23}\]
considering that classical and quantum variables are simplectic orthogonal
\[\{x_{k},G^{a_{1},b_{1},...,a_{k},b_{k}}\}=\{p_{k},G^{a_{1},b_{1},...,a_{k},b_{ k}}\}=0. \tag{24}\]
The above Hamiltonian can be understood as a classical one augmented with quantum corrections, and the resulting equations of motion provide an effective evolution equivalent to the Schrodinger equation [19, 36]. In other words, the semiclassical description shows how quantum corrections modify the classical dynamics. We can always obtain the classical dynamics in the classical limit \(\hbar\to 0,\ G^{a_{1},b_{1},...,a_{k},b_{k}}\to 0\).
### Effective Bateman Model
We now study the quantum evolution of the Bateman model shown in section 2.1. Specifically, we use the Tikochinsky transformation for the Bateman Hamiltonian [37], and then analyze its similarities with the total Hamiltonian used in the Lindblad master equation approach.
The Bateman-Tikochinsky Hamiltonian (BTH) is
\[H=\left(\frac{p_{1}^{2}}{2m}+\frac{1}{2}m\Omega^{2}x_{1}^{2}\right)-\left( \frac{p_{2}^{2}}{2m}+\frac{1}{2}m\Omega^{2}x_{2}^{2}\right)-\lambda(x_{1}p_{2 }+x_{2}p_{1}). \tag{25}\]
Note the parallelism between both models as shown in section 2.2
\[\text{Bateman-Tikochinsky}\qquad\qquad\text{Lindblad }H_{T}\]
\[H_{S} \left(\frac{p_{1}^{2}}{2m}+\frac{1}{2}m\Omega^{2}x_{1}^{2}\right) \qquad\qquad\quad\hbar\omega a^{\dagger}a \tag{26}\] \[H_{R} \left(\frac{p_{2}^{2}}{2m}+\frac{1}{2}m\Omega^{2}x_{2}^{2}\right) \qquad\qquad\quad\sum_{j}\hbar\omega_{j}r_{j}^{\dagger}r_{j}\] (27) \[H_{SR} \lambda(x_{1}p_{2}+x_{2}p_{1}) \qquad\qquad\quad\sum_{j}\hbar(k_{j}^{*}ar_{j}^{\dagger}+k_{j}a^{ \dagger}r_{j}) \tag{28}\]
with the obvious differences regarding the number of degrees of freedom.
We can obtain the BTH by applying the following canonical transformations on Eq. (4) for position and momenta
\[x=\frac{1}{\sqrt{2}}\big{(}x_{1}+x_{2}\big{)}, y=\frac{1}{\sqrt{2}}\big{(}x_{1}-x_{2}\big{)},\] \[p_{y}=\frac{1}{\sqrt{2}}\big{(}p_{1}-p_{2}\big{)}, p_{x}=\frac{1}{\sqrt{2}}\big{(}p_{1}+p_{2}\big{)}. \tag{29}\]
As we showed in section 2.1, the corresponding canonical quantization generates an inconsistent physical evolution.
We have found, however, a canonical transformation for the classical variables Eq. (29)
\[x_{1}\longrightarrow\hat{x}_{1}=\frac{\sqrt{2}}{2}\big{(}\hat{x }+\hat{y}\big{)},\quad p_{1}\longrightarrow\hat{p}_{1}=\frac{\sqrt{2}}{2} \big{(}\hat{p}_{x}+\hat{p}_{y}\big{)},\] \[x_{2}\longrightarrow\hat{x}_{2}=\frac{\sqrt{2}}{2}\big{(}\hat{x }-\hat{y}\big{)},\quad p_{2}\longrightarrow\hat{p}_{2}=-\frac{\sqrt{2}}{2} \big{(}\hat{p}_{x}-\hat{p}_{y}\big{)}, \tag{30}\]
whose quantum operators obey canonical commutation relations
\[[\hat{x}_{1},\hat{p}_{1}]=[\hat{p}_{2},\hat{x}_{2}]=i\hbar,\quad\text{and} \quad[\hat{x}_{1},\hat{p}_{2}]=[\hat{x}_{2},\hat{p}_{1}]=0. \tag{31}\]
that indeed provides a physically correct evolution. The quantum dynamical variables for the BTH are given by
\[G_{1}^{a,b,c,d}:=\Big{\langle}\big{(}\hat{x}_{1}-\langle\hat{x}_{1}\rangle \big{)}^{a}\big{(}\hat{p}_{1}-\langle\hat{p}_{1}\rangle\big{)}^{b}\big{(}\hat {p}_{2}-\langle\hat{p}_{2}\rangle\big{)}^{c}\big{(}\hat{x}_{2}-\langle\hat{x}_ {2}\rangle\big{)}^{d}\Big{\rangle}_{\text{Weyl}} \tag{32}\]
and the corresponding quantum variables are as follows
\[G^{2,0,0,0} =\left\langle\big{(}\hat{x}-\langle\hat{x}\rangle\big{)}^{2}\right\rangle _{\text{Weyl}} G^{0,2,0,0} =\left\langle\big{(}\hat{p}_{x}-\langle\hat{p}_{x}\rangle\big{)}^{2} \right\rangle_{\text{Weyl}}\] \[=\frac{1}{2}\Big{\langle}\big{[}(\hat{x}_{1}-\langle\hat{x}_{1} \rangle)+(\hat{x}_{2}-\langle\hat{x}_{2}\rangle)\big{]}^{2}\Big{\rangle}_{ \text{Weyl}} =\frac{1}{2}\Big{\langle}\big{[}(\hat{p}_{1}-\langle\hat{p}_{1} \rangle)-(\hat{p}_{2}-\langle\hat{p}_{2}\rangle)\big{]}^{2}\Big{\rangle}_{ \text{Weyl}}\] \[=\frac{1}{2}\Big{[}G_{1}^{2,0,0,0}+G_{1}^{0,0,2}+2G_{1}^{1,0,0,1} \Big{]} =\frac{1}{2}\Big{[}G_{1}^{0,2,0,0}+G_{1}^{0,0,2,0}-2G_{1}^{0,1,1,0} \Big{]}\] \[G^{1,1,0,0} =\left\langle\big{(}\hat{x}-\langle x\rangle\big{)}\big{(}\hat{p }_{x}-\langle p_{x}\rangle\big{)}\right\rangle_{\text{Weyl}}\] \[=\frac{1}{2}\Big{\langle}\big{[}(\hat{x}_{1}-\langle x_{1} \rangle)+(\hat{x}_{2}-\langle x_{2}\rangle)\big{]}\big{[}(\hat{p}_{1}-\langle p _{1}\rangle)-(\hat{p}_{2}-\langle p_{2}\rangle)\big{]}\Big{\rangle}_{\text{ Weyl}}\] \[=\frac{1}{2}\Big{[}G_{1}^{1,1,0,0}-G_{1}^{0,0,1,1}-G_{1}^{1,0,1,0} +G_{1}^{0,1,0,1}\Big{]} \tag{33}\]
Classical variables (29), quantum variables (30), and the BTH Eq.(25), give the quantum corrected Hamiltonian \(\langle H_{Q}\rangle\), Eq. (22)
\[\langle H_{Q}\rangle =H_{classical}+H_{quantum}\] \[=\left(\frac{p_{1}^{2}}{2m}+\frac{1}{2}m\Omega^{2}x_{1}^{2} \right)-\left(\frac{p_{2}^{2}}{2m}+\frac{1}{2}m\Omega^{2}x_{2}^{2}\right)- \lambda(x_{1}p_{2}+x_{2}p_{1})-\lambda G_{1}^{1,0,1,0}\] \[+\frac{m\Omega^{2}}{2}G_{1}^{2,0,0,0}+\frac{1}{2m}G_{1}^{0,2,0,0} -\frac{m\Omega^{2}}{2}G_{1}^{0,0,0,2}-\frac{1}{2m}G_{1}^{0,0,2,0}-\lambda G_{ 1}^{0,1,0,1}. \tag{34}\]
Henceforth we will call this Hamiltonian, Eq. (34), the Semiclassical Bateman-Tikochinsky Hamiltonian (SBTH).
We are ready to study the effective evolution. Using Eqs. (19) and (23) we obtain the dynamical equations of motion
\[\dot{x}_{1} =\frac{p_{1}}{m}-\lambda x_{2},\] \[\dot{x}_{2} =-\frac{p_{2}}{m}-\lambda x_{1},\] \[\dot{p_{1}} =-m\Omega^{2}x_{1}+\lambda p_{2},\] \[\dot{p_{2}} =m\Omega^{2}x_{2}+\lambda p_{1},\] \[\dot{G}_{1}^{2,0,0,0} =-2\lambda G_{1}^{1,0,0,1}+\frac{2}{m}G_{1}^{1,1,0,0},\] \[\dot{G}_{1}^{0,2,0,0} =2\lambda G_{1}^{0,1,1,0}-2m\Omega^{2}G_{1}^{1,1,0,0},\] \[\dot{G}_{1}^{0,0,2,0} =-2\lambda G_{1}^{0,1,1,0}-2m\Omega^{2}G_{1}^{0,0,1,1},\] \[\dot{G}_{1}^{0,0,0,2} =2\lambda G_{1}^{1,0,0,1}+\frac{2}{m}G_{1}^{0,0,1,1},\] \[\dot{G}_{1}^{1,0,1,0} =-\lambda G_{1}^{1,1,0,0}-\lambda G_{1}^{0,0,1,1}+\frac{1}{m}G_{1} ^{0,1,1,0}-m\Omega^{2}G_{1}^{1,0,0,1},\] \[\dot{G}_{1}^{0,1,0,1} =\lambda G_{1}^{1,1,0,0}+\lambda G_{1}^{0,0,1,1}-\frac{1}{m}G_{1} ^{0,1,1,0}-m\Omega^{2}G_{1}^{1,0,0,1},\] \[\dot{G}_{1}^{1,0,0,1} =\lambda G_{1}^{2,0,0,0}-\lambda G_{1}^{0,0,0,2}+\frac{1}{m}G_{1} ^{0,1,0,1}+\frac{1}{m}G_{1}^{1,0,1,0},\]
\[\dot{G}_{1}^{0,1,1,0} = \lambda G_{1}^{0,0,2,0}-\lambda G_{1}^{0,2,0,0}-m\Omega^{2}G_{1}^{1, 0,1,0}-m\Omega^{2}G_{1}^{0,1,0,1},\] \[\dot{G}_{1}^{1,1,0,0} = \lambda G_{1}^{1,0,1,0}-\lambda G_{1}^{0,1,0,1}+\frac{1}{m}G_{1}^ {0,2,0,0}-m\Omega^{2}G_{1}^{2,0,0,0},\] \[\dot{G}_{1}^{0,0,1,1} = \lambda G_{1}^{1,0,1,0}-\lambda G_{1}^{0,1,0,1}+\frac{1}{m}G_{1}^ {0,0,2,0}-m\Omega^{2}G_{1}^{0,0,0,2}, \tag{35}\]
If we were to restore to original variables for the Bateman model, Eq. (1), we could rewrite Eq. (35), resulting in a very interesting SDE
\[\dot{x}=\frac{1}{m}p_{x}-\lambda x\] \[\dot{p}_{x}=-m\Omega^{2}x-\lambda p_{x}\] \[\dot{G}^{2,0,0,0} = -2\lambda G^{2,0,0,0}+\frac{2}{m}G^{1,1,0,0}+\frac{2}{m}\big{[}G _{1}^{0,0,1,1}+G_{1}^{1,0,1,0}\big{]}+2\lambda\big{[}G_{1}^{2,0,0,0}+G_{1}^{1,0,0,1}\big{]}\] \[\dot{G}^{0,2,0,0} = -2\lambda G^{0,2,0,0}-2m\Omega^{2}G^{1,1,0,0}+2\lambda\big{[}G_{ 1}^{0,2,0,0}-G_{1}^{0,1,1,0}\big{]}-2m\Omega^{2}\big{[}G_{1}^{0,1,1}-G_{1}^{0,1,0,1}\big{]}\] \[\dot{G}^{1,1,0,0} = -2\lambda G^{1,1,0,0}+\frac{1}{m}G^{0,2,0,0}-m\Omega^{2}G^{2,0,0, 0}+\lambda[2G_{1}^{1,1,0,0}+G_{1}^{0,1,0,1}-G_{1}^{1,0,1,0}] \tag{36}\] \[-\frac{1}{m}G_{1}^{0,0,2,0}+m\Omega^{2}[G_{1}^{0,0,0,2}+G_{1}^{1,0,0,1}],\]
where, the classical dynamics is decoupled from the quantum dynamics, and the latter is very similar to the one given in Eq. (18). Let us stress exactly this by comparing the classical effective dynamics for the Lindblad and the momentous description
Lindblad SBTH \[\frac{d}{dt}\langle\hat{x}\rangle = \frac{\omega_{o}^{\prime}}{m\omega}\langle\hat{p}\rangle-\frac{ \gamma}{2}\langle\hat{x}\rangle,\hskip 42.679134pt\dot{x}=\frac{1}{m}p_{x}-\lambda x\] \[\frac{d}{dt}\langle\hat{p}\rangle = -m\omega\omega_{o}^{\prime}\langle\hat{x}\rangle-\frac{\gamma}{2} \langle\hat{p}\rangle,\hskip 28.452756pt\dot{p}_{x}=-m\Omega^{2}x-\lambda p_{x}.\] (37)
Both descriptions are equivalent if we set \(\omega_{0}^{\prime}=\omega=\Omega\) and \(\gamma=2\lambda\). For the quantum counterpart, it is useful to rewrite the SDE for Lindblad in the following way
\[\frac{d}{dt}G_{L}^{2,0} = \frac{d}{dt}\langle\hat{x}^{2}\rangle-\frac{d}{dt}\langle\hat{x} \rangle^{2}\] \[= -\gamma\langle\hat{x}^{2}\rangle+\frac{\omega_{o}^{\prime}}{m \omega}\langle\hat{x}\hat{p}+\hat{p}\hat{x}\rangle+\frac{\gamma\hbar}{2m\omega }(2\bar{n}+1)-2\langle\hat{x}\rangle\frac{d}{dt}\langle\hat{x}\rangle\] \[= -\gamma\big{(}\langle\hat{x}^{2}\rangle-\langle\hat{x}\rangle^{2} \big{)}+2\frac{\omega_{o}^{\prime}}{m\omega}\Big{(}\frac{\langle\hat{x}\hat{p}+ \hat{p}\hat{x}\rangle}{2}-\langle\hat{x}\rangle\langle\hat{p}\rangle\Big{)}+ \frac{\gamma\hbar}{2m\omega}(2\bar{n}+1)\] \[\dot{G}_{L}^{2,0} = -\gamma G_{L}^{2,0}+\frac{2\omega_{o}^{\prime}}{m\omega}G_{L}^{1,1 }+\frac{\gamma\hbar}{2m\omega}(2\bar{n}+1), \tag{38}\]
where \(G_{L}\) is seen as a quantum variable in the momentous scheme. The full set of equations gives the Lindblad SDE, Eq. (18), in terms of quantum variables in the semiclassical approach
\[\dot{G}_{L}^{2,0} = -\gamma G_{L}^{2,0}+\frac{2\omega_{o}^{\prime}}{m\omega}G_{L}^{1,1 }+\frac{\gamma\hbar}{2m\omega}(2\bar{n}+1)\] \[\dot{G}_{L}^{0,2} = -\gamma G_{L}^{0,2}-2m\omega\omega_{o}^{\prime}G_{L}^{1,1}+\frac{ \gamma\hbar m\omega}{2}(2\bar{n}+1)\] \[\dot{G}_{L}^{1,1} = -m\omega\omega_{o}^{\prime}G_{L}^{2,0}+\frac{\omega_{o}^{\prime} }{m\omega}G_{L}^{0,2}-\gamma G_{L}^{1,1}. \tag{39}\]
If we call the constant terms in the above equations in the following way
\[D_{xx}=\frac{\gamma\hbar}{2m\omega}(2\bar{n}+1)\quad D_{pp}=\frac{\gamma\hbar m \omega}{2}(2\bar{n}+1)\quad D_{px}=0 \tag{40}\]
we can see that they obey the fundamental constrain for diffusion coefficients [15, 38, 39]
\[D_{xx}>0\quad D_{pp}>0\quad D_{xx}D_{pp}-D_{px}^{2}\geq\frac{\hbar^{2}\gamma^{ 2}}{4}. \tag{41}\]
Now, if in Eq. (36) we perform a similar identification
\[D_{Gxx}=2\lambda G_{1}^{2,0,0,0}\quad D_{Gpp}=2\lambda G_{1}^{0,2,0,0}\quad D_ {Gpx}=2\lambda G_{1}^{1,1,0,0}, \tag{42}\]
and taking into account the generalized uncertainty relation for quantum variables given in Eq. (21)
\[D_{Gxx}D_{Gpp}-D_{Gpx}^{2}\geq\lambda^{2}\hbar^{2}, \tag{43}\]
we get a complete agreement in both descriptions: for the SDE, Eq. (36) also stands as the fundamental constraint for diffusion coefficients.
## 4 QDHO effective evolution
### Initial Conditions
In order to study the evolution of the QDHO, Eq, (36), we need to establish the initial conditions. We assume that, initially, no correlation exist between the physical oscillator and the reservoir. We also propose a coherent state as our initial wave function, for which initial conditions can be computed.
The initial coherent state reads
\[|\bar{x}_{0}\rangle=e^{-\frac{i}{\hbar}\hat{p}x_{0}}|0\rangle, \tag{44}\]
yielding the following initial conditions for the quantum variables
\[(\Delta\hat{x})_{t=0}^{2}=\frac{\hbar}{2m\omega}, (\Delta\hat{p})_{t=0}^{2}=\frac{m\hbar\omega}{2},\] \[\langle\hat{x}^{2}\rangle_{t=0}=\frac{\hbar}{2m\omega}+x_{0}^{2}, \langle\hat{p}^{2}\rangle_{t=0}=\frac{m\hbar\omega}{2},\] \[\langle\hat{x}\hat{p}+\hat{p}\hat{x}\rangle_{t=0}=0, \langle\hat{x}\rangle_{t=0}=x_{0}=\sqrt{\frac{2n\hbar}{m\omega}}. \tag{45}\]
where \(\langle\bar{x}_{0}|\hat{H}|\bar{x}_{0}\rangle=\frac{1}{2m}\langle\bar{x}_{0}| \hat{p}^{2}|\bar{x}_{0}\rangle+\frac{1}{2}m\omega^{2}\langle\bar{x}_{0}|\hat{ x}^{2}|\bar{x}_{0}\rangle.\) Since the left hand side \(\langle\bar{x}_{0}|\hat{H}|\bar{x}_{0}\rangle\) is one of the quantized energy levels of the QHO, say \(\Big{(}n+\frac{1}{2}\Big{)}\hbar\omega\), we have
\[\Big{(}n+\frac{1}{2}\Big{)}\hbar\omega=\frac{1}{2}\hbar\omega+\frac{1}{2}m \omega^{2}x_{0}^{2}\quad\longrightarrow\quad x_{0}=\sqrt{\frac{2n\hbar}{m \omega}} \tag{46}\]
The corresponding conditions for the transformed variables in Eq. (35) read
\[x_{0}=\sqrt{\frac{2n\hbar}{m\omega}} y_{0}=\sqrt{\frac{2n\hbar}{m\omega}} \longrightarrow \langle\hat{x}_{1}\rangle_{t=0}=x_{10}=2\sqrt{\frac{n\hbar}{m \omega}} \langle\hat{x}_{2}\rangle_{t=0}=x_{20}=0,\] \[\langle\hat{p}_{x}\rangle_{t=0}=p_{x_{0}}=0 \langle\hat{p}_{y}\rangle_{t=0}=p_{y_{0}}=0 \longrightarrow \langle\hat{p}_{1}\rangle_{t=0}=p_{10}=0 \langle\hat{p}_{2}\rangle_{t=0}=p_{20}=0. \tag{47}\]
The evolution for the mirror system is obtained with the same initial conditions for \(x\) and \(y\) at \(t=0\) (they are mutually mirror imaged [40]).
We can obtain the initial conditions for the quantum variables \(G_{1}^{a,b,c,d}\) by applying an inverse transformation, and rewriting them in terms of quantum variables for which we can use the coherent states. For instance, for \(G_{1}^{2,0,0,0}\) we get
\[G_{1}^{2,0,0,0}=\langle(\hat{x}_{1}-\langle\hat{x}_{1}\rangle)^{2}\rangle_{ \text{Weyl}}=\frac{1}{2}\langle[(\hat{x}-\langle\hat{x}\rangle)+(\hat{y}- \langle\hat{y}\rangle)]^{2}\rangle_{\text{Weyl}}=\frac{1}{2}\Big{[}G^{2,0,0,0}+ G^{0,0,2,0}+2G^{1,0,1,0}\Big{]}. \tag{48}\]
Given that in the classical description there is an extra degree of freedom representing the mirror image of \(x\) and \(p_{x}\), we assume a similar situation in the quantum counterpart. This means equal initial conditions for the quantum variables \((y,p_{y})\) and \((x,p_{x})\). Explicitly, from Eq. (45)
\[G_{t=0}^{2,0,0,0}=\langle\hat{x}^{2}\rangle_{t=0}-\langle\hat{x} \rangle_{t=0}^{2}=\frac{\hbar}{2m\omega},\] \[G_{t=0}^{0,2,0,0}=\langle\hat{p}_{x}^{2}\rangle_{t=0}-\langle \hat{p}_{x}\rangle_{t=0}^{2}=\frac{m\hbar\omega}{2},\] \[G_{t=0}^{0,1,0,0}=\big{(}\langle\hat{x}\hat{p}_{x}\rangle_{\text {Weyl}}\big{)}_{t=0}-\langle\hat{x}\rangle\langle\hat{p}_{x}\rangle_{t=0}=0,\] \[G_{t=0}^{0,0,2,0}=\langle\hat{y}^{2}\rangle_{t=0}-\langle\hat{y} \rangle_{t=0}^{2}=\frac{\hbar}{2m\omega},\] \[G_{t=0}^{0,0,2}=\langle\hat{p}_{y}^{2}\rangle_{t=0}-\langle\hat{ p}_{y}\rangle_{t=0}^{2}=\frac{m\hbar\omega}{2},\] \[G_{t=0}^{0,0,1,1}=\big{(}\langle\hat{y}\hat{p}_{y}\rangle_{\text {Weyl}}\big{)}_{t=0}-\langle\hat{y}\rangle\langle\hat{p}_{y}\rangle_{t=0}=0, \tag{49}\]
and by using the inverse transform relations, we get the initial conditions for the SBTH
\[x_{10}=2\sqrt{\frac{n\hbar}{m\omega}} x_{20}=0 p_{10}=0 p_{20}=0\] \[G_{1_{t=0}}^{2,0,0,0}=\frac{\hbar}{2m\omega} G_{1_{t=0}}^{0,2,0,0}=\frac{m\hbar\omega}{2} G_{1_{t=0}}^{0,0,2,0}=\frac{m\hbar\omega}{2} G_{1_{t=0}}^{0,0,2,0}=\frac{\hbar}{2m\omega}\] \[G_{1_{t=0}}^{1,1,0,0}=0 G_{1_{t=0}}^{1,0,1,0}=0 G_{1_{t=0}}^{1,0,1,0,1}=0 G_{1_{t=0}}^{1,0,0,1}=0 G_{1_{t=0}}^{0,1,0,1}=0\] \[G_{1_{t=0}}^{0,0,1,1}=0 G_{1_{t=0}}^{0,1,1,0}=0 \tag{50}\]
### Dynamical Evolution
As we mentioned in section 3.2, the dynamics for our SBTH model, Eq. (36), and that of Lindblad's model, Eq. (39), are completely equivalent at the effective level; classical and quantum variables have similar equations of motion, Eqs. (36-38). However, it is important to remember that although the equations of motion are classical (obtained from a Hamiltonian), the whole system is quantum in nature, so its probabilistic behavior must be tracked. We show next how to do this.
We use the initial conditions proposed in the previous section to obtain the dynamical evolution of the SBTH and Lindblad's SDEs, and the following parameters
\[x_{0}=2\sqrt{\frac{n\hbar}{m\omega}},\quad p_{x_{0}}=0\quad\gamma=0.08,\quad \omega=\omega_{0}^{\prime}=1.5,\quad\bar{n}\in\{0,1,2\}\quad m=\hbar=1,\quad n =3. \tag{51}\]
As we mentioned above, the behavior of physical variables should display their probabilistic nature, hence, for a physically meaningful evolution, quantum dispersions should be taken into account, that is,
\[x(t)\pm\Delta x=x(t)\pm\sqrt{G^{2,0}(t)},\quad p(t)\pm\Delta p=p(t)\pm\sqrt{G^{0, 2}(t)}.\]
In Figs. 1 and 2, we show the evolution of position and momentum, alongside their uncertainty regions bounded by their corresponding dispersions, for both SBTH (purple shaded area) and Lindblad (golden shaded area), respectively. Evidently, the dispersions exhibit non-trivial dynamics within the effective approach, yet their evolution remains distinctly synchronized with the classical variables. Furthermore, one can observe that the amplitudes of the dispersions differ between the two descriptions due to the thermal values \(\bar{n}\) in Lindblad's approach. In contrast, in our description, the uncertainty regions remain consistently distanced from the semiclassical evolution.
Although both approaches yield similar results, our semiclassical method has a clear advantage over other schemes: it is possible to determine the quantum mechanical behavior of any physical variable in a direct, more intuitive way. In particular, we can analyze the energy of the QDHO: the classical total mechanical energy of the harmonic oscillator is given by
\[E_{mech} = K(t)+U(t) \tag{52}\] \[= \frac{\big{(}p_{x}(t)\big{)}^{2}}{2m}+\frac{1}{2}m\omega^{2}\big{(} x(t)\big{)}^{2}.\]
This mechanical energy corresponds to the Hamiltonian, and thus we extend the expression above to the quantum energy within the momentous formalism
\[\langle E(t)\rangle=\frac{1}{2m}\big{(}p_{x}(t)\big{)}^{2}+\frac{1}{2}m\omega ^{2}\big{(}x(t)\big{)}^{2}+\frac{1}{2m}G^{0,2}(t)+\frac{1}{2}m\omega^{2}G^{2,0 }(t). \tag{53}\]
Figure 1: Evolution of \(\langle\hat{x}(t)\rangle\) of the QDHO (red line), and its dispersion belts: Lindblad’s with \(\bar{n}=2\) (golden shaded area), and SBTH (purple shaded area).
We can also give an expression including the quantum uncertainty in the evolution as follows
\[\langle E_{\pm}\rangle=\frac{1}{2m}\Big{(}p_{x}(t)\pm\sqrt{G^{0,2}(t)}\Big{)}^{2} +\frac{1}{2}m\omega^{2}\Big{(}x(t)\pm\sqrt{G^{2,0}(t)}\Big{)}^{2}. \tag{54}\]
Eq. (53) represents the mean energy of the system within the momentous quantum mechanics scheme, being an effective alternative to Eq. (15) in the Lindblad approach.
In Fig. (3), we obtained QDHO's mean energy as described in Eq. (53). Note how the semiclassical trajectory (orange dashed line) and the one in the Lindblad approach (brown solid line), Eq. (15), coincide for \(\bar{n}=0\). Furthermore, one can see the decay of the QDHO's energy: initially, the QDHO's energy is in the third excited state \(n=3\), and as it evolves in time, this energy gradually converges towards the ground state.
## 5 Discussion and conclusions
In this study, we have derived an effective description of the quantum damped harmonic oscillator, wherein semiclassical Hamiltonian equations govern the evolution of expectation values for physical operators. We have shown how the challenging issue of energy dissipation in quantum mechanics can be implemented and analyzed, particularly for the harmonic oscillator. We hope our contributions will pave the way for research into complex phenomena and general open quantum systems. As we mentioned in the text, we contrasted our results with those obtained in the Lindblad formulation with master equations, demonstrating the robustness of our approach.
The effective evolution shown in the preceding section precisely illustrates this point, revealing remarkable similarities between the dynamics derived from the SETH and the Lindblad approach in the study of the QDHO. From this, we understand that the SBTH description not only overcomes the shortcomings and difficulties encountered in the canonical quantization of the BTH, but it also exhibits the same behavior of the QDHO obtained through master equations. The solution given
Figure 2: Evolution of \(\langle\hat{p}_{x}(t)\rangle\) of the QDHO (red line), and its dispersion belts: Lindblad’s with \(\bar{n}=2\) (golden shaded area), and SBTH (purple shaded area).
by employing a coherent state shows that the initial quantum state for the semiclassical model is preserved throughout its evolution. Explicitly, the effective dynamics of the QDHO is governed by Eq. (36), that, fed with initial conditions Eq. (50), provides the evolution of the system, in perfect agreement with Lindblad's Eq. (39) for \(\bar{n}=0\).
To the authors' knowledge, no prior explicit comparison between the Bateman model and the Lindblad master equation for the QDHO has been obtained. Remarkably, the success of our effective description was achieved mainly due to the introduction of the canonical transformations Eqs. (30)-(31), and the independent nature of classical and quantum variables Eq. (24).
Naive straightforward canonical quantization of the Bateman model results in an invalid evolution, which is corrected by using the canonical transformation mentioned above, which preserves the Heisenberg uncertainty and describes the same quantum dynamics as the one provided by Lindblad's, as demonstrated by the quantization procedure shown in section 3.2.
Furthermore, we can demonstrate that the presence of a ground state arises as a consequence of the generalized uncertainty constraint applied to the quantum variables (21). Analyzing the evolution of the quantum effective energy, Eq. (53), the contribution of \(x(t)\) and \(p(t)\) for large times is negligible, as classical variables should do. Thus we end up with
\[\langle E\rangle=\frac{1}{2m}G^{0,2}+\frac{1}{2}m\omega^{2}G^{2,0}. \tag{55}\]
We can find a more suitable expression by multiplying by \(G^{2,0}\) on both sides of this equation
\[G^{2,0}\langle E\rangle=\frac{1}{2m}G^{2,0}G^{0,2}+\frac{1}{2}m\omega^{2}(G^{ 2,0})^{2}. \tag{56}\]
Now, by using the uncertainty relation \(G^{2,0}G^{0,2}\geq\hbar^{2}/4\), we get
\[G^{2,0}\langle E\rangle\geq\frac{1}{2m}\frac{\hbar^{2}}{4}+\frac{1}{2}m\omega^ {2}(G^{2,0})^{2}, \tag{57}\]
Figure 3: Energy evolution of the QDHO for \(\bar{n}=0,1,2\) (brown, pink, and blue solid lines), and the QDHO’s semiclassical energy (orange dashed line), together with its dispersion energy belt (black solid lines).
or, in a more insightful way
\[\frac{1}{2}m\omega^{2}(G^{2,0})^{2}-G^{2,0}\langle E\rangle+\frac{1}{2m}\frac{ \hbar^{2}}{4}\leq 0. \tag{58}\]
Here we observe that \(G^{2,0}\) is equal or greater than the minimum value given by the solution of the quadratic equation
\[G^{2,0}_{\pm}=\frac{1}{m\omega^{2}}\Big{(}\langle E\rangle\pm\sqrt{\langle E \rangle^{2}-\hbar^{2}\omega^{2}/4}\Big{)}, \tag{59}\]
however, by definition, quantum variables satisfy the following conditions
\[\text{Im}[G^{2,0}]=0\quad\text{and}\quad G^{2,0}>0. \tag{60}\]
Finally we get
\[\langle E\rangle^{2}\geq\frac{\hbar^{2}\omega^{2}}{4}\ \Rightarrow\ \langle E \rangle\geq\frac{\hbar\omega}{2}, \tag{61}\]
thereby confirming the existence of a ground state.
We conclude by emphasizing that the momentous effective approach captures the essential information of quantum systems, which, as demonstrated in the present manuscript, can accommodate dissipative effects. It is also important to mention that the particular expression of the effective Hamiltonian (34) for the damped oscillator is, in general, much more involved, usually containing an infinite number of quantum corrections [20, 25]; the evolution in such cases can be analyzed order by order in the quantum variables, or by the implementation of an alternative description in terms of canonical variables [41]. Extension to more general open quantum systems will appear in future work.
**Appendix: Poisson Brackets between Quantum Dynamical Variables**
The Poisson bracket between quantum variables, for two degrees of freedom, \(\{G^{a,b,c,d},G^{e,f,g,h}\}\)3, can be computed by using [36, 42]
Footnote 3: We are considering the moments defined in Eq. (20)
\[\{\langle\hat{f}\rangle,\langle\hat{g}\rangle\}=\frac{1}{i\hbar}\langle[\hat{ f},\hat{g}]\rangle. \tag{62}\]
In the momentous approach, as mentioned in section 3.1, equations of motion can be obtained in the usual way, \(\dot{q}_{i}=\{q_{i},H_{Q}\}\). For the SBTH, the effctive Hamiltonian reads
\[H_{Q}= \left(\frac{p_{1}^{2}}{2m}+\frac{1}{2}m\Omega^{2}x_{1}^{2}\right) -\left(\frac{p_{2}^{2}}{2m}+\frac{1}{2}m\Omega^{2}x_{2}^{2}\right)-\lambda(x _{1}p_{2}+x_{2}p_{1})-\lambda G_{1}^{1,0,1,0}\] \[+\frac{m\Omega^{2}}{2}G_{1}^{2,0,0,0}+\frac{1}{2m}G_{1}^{0,2,0,0} -\frac{m\Omega^{2}}{2}G_{1}^{0,0,0,2}-\frac{1}{2m}G_{1}^{0,0,2,0}-\lambda G_ {1}^{0,1,0,1} \tag{63}\]
the equations of motion are given by
\[\dot{x}_{k}=\{x_{k},H_{Q}\}\quad\dot{p}_{k}=\{p_{k},H_{Q}\} \tag{64}\]
and
\[\dot{G}_{1}^{a,b,c,d}=\{G_{1}^{a,b,c,d},H_{Q}\} \tag{65}\]
where \(k=\{1,2\}\).
The non-trivial Poisson brackets between momenta \(\{G_{1}^{a,b,c,d},G_{1}^{e,f,g,h}\}\) are the following
\[\{G_{1}^{2,0,0,0},G_{1}^{1,0,1}\} =0 \{G_{1}^{1,0,0,1},G_{1}^{0,0,2,0}\} =-2G_{1}^{1,0,1,0}\] \[\{G_{1}^{2,0,0,0},G_{1}^{0,1,0,1}\} =2G_{1}^{1,0,0,1} \{G_{1}^{0,1,1,0},G_{1}^{1,0,1,0}\} =-G_{1}^{0,0,2,0}\] \[\{G_{1}^{2,0,0,0},G_{1}^{0,2,0,0}\} =4G_{1}^{1,1,0,0} \{G_{1}^{0,0,1,1},G_{1}^{0,0,0,2}\} =2G_{1}^{0,0,0,2}\] \[\{G_{1}^{0,2,0,0},G_{1}^{1,0,1,0}\} =-2G_{1}^{0,1,1,0} \{G_{1}^{0,1,1,0},G_{1}^{0,1,0,1}\} =G_{1}^{0,2,0,0}\] \[\{G_{1}^{0,0,2,0},G_{1}^{0,1,0,1}\} =2G_{1}^{0,1,1,0} \{G_{1}^{0,1,1,0},G_{1}^{2,0,0,0}\} =-2G_{1}^{1,0,1,0}\] \[\{G_{1}^{0,0,2,0},G_{1}^{0,0,0,2}\} =4G_{1}^{0,0,1,1} \{G_{1}^{0,1,1,0},G_{1}^{0,0,0,2}\} =2G_{1}^{0,1,0,1}\] \[\{G_{1}^{1,0,0,2,},G_{1}^{1,0,1,0}\} =-2G_{1}^{1,0,0,1} \{G_{1}^{1,1,0,0},G_{1}^{1,0,1,0}\} =-G_{1}^{1,0,1,0,1}\] \[\{G_{1}^{1,1,0,0,},G_{1}^{0,1,0,1}\} =G_{1}^{0,1,0,1} \{G_{1}^{1,1,0,0},G_{1}^{0,2,0,0}\} =2G_{1}^{0,2,0,0}\] \[\{G_{1}^{1,0,1,0,1},G_{1}^{2,0,0,0}\} =-2G_{1}^{1,0,0,1} \{G_{1}^{1,1,0,0},G_{1}^{2,0,0,0}\} =-2G_{1}^{2,0,0,0}\] \[\{G_{1}^{1,0,0,1},G_{1}^{1,0,1,0}\} =-G_{1}^{2,0,0,0} \{G_{1}^{0,0,1,1},G_{1}^{1,0,1,0}\} =-G_{1}^{1,0,1,0}\] \[\{G_{1}^{1,0,1,0,1},G_{1}^{0,1,0,1}\} =G_{1}^{0,0,0,2} \{G_{1}^{0,0,1,1},G_{1}^{0,1,0,1}\} =G_{1}^{0,1,0,1}\] \[\{G_{1}^{1,0,0,1},G_{1}^{0,2,0,0}\} =2G_{1}^{0,1,0,1} \{G_{1}^{0,0,1,1},G_{1}^{0,0,2,0}\} =-2G_{1}^{0,0,2,0}\] \[\{G_{1}^{1,0,0,1},G_{1}^{0,1,1,0}\} =G_{1}^{0,0,1,1}-G_{1}^{1,1,0,0} \{G_{1}^{1,0,1,0,1}\} =G_{1}^{1,1,0,0}+G_{1}^{0,0,1,1}.\]
The explicit calculation is, for example, as follows
\[\{G_{1}^{2,0,0,0},G_{1}^{0,2,0,0}\} =(i\hbar)^{-1}\langle[(\hat{x}_{1}-x_{1})^{2},(\hat{p}_{1}-p_{1}) ^{2}]\rangle\] \[=(i\hbar)^{-1}\langle(\hat{x}_{1}-x_{1})[\hat{x}_{1},\hat{p}_{1} ](\hat{p}_{1}-p_{1})+[\hat{x}_{1},\hat{p}_{1}](\hat{x}_{1}-x_{1})(\hat{p}_{1}-p_{ 1})\] \[+(\hat{p}_{1}-p_{1})(\hat{x}_{1}-x_{1})[\hat{x}_{1},\hat{p}_{1}]+( \hat{p}_{1}-p_{1})[\hat{x}_{1},\hat{p}_{1}](\hat{x}_{1}-x_{1})\rangle\] \[=2\langle(\hat{x}_{1}-x_{1})(\hat{p}_{1}-p_{1})+(\hat{p}_{1}-p_{ 1})(\hat{x}_{1}-x_{1})\rangle\] \[=4\langle(\hat{x}_{1}-x_{1})(\hat{p}_{1}-p_{1})\rangle_{\rm Weyl}\] \[=4G_{1}^{1,1,0,0}. \tag{66}\]
and similarly for the rest.
|
2310.17481 | Intermediate Field Coupling of Single Epitaxial Quantum Dots to
Plasmonic Waveguides | Key requirements for quantum plasmonic nanocircuits are reliable
single-photon sources, high coupling efficiency to the plasmonic structures and
low propagation losses. Self-assembled epitaxially grown GaAs quantum dots are
close to ideal stable, bright and narrowband single-photon emitters. Likewise,
wet-chemically grown monocrystalline silver nanowires are among the best
plasmonic waveguides. However, large propagation losses of surface plasmons on
the high-index GaAs substrate prevent their direct combination. Here, we show
by experiment and simulation that the best overall performance of the quantum
plasmonic nanocircuit based on these building blocks is achieved in the
intermediate field regime with an additional spacer layer between the quantum
dot and the plasmonic waveguide. High-resolution cathodoluminescence
measurements allow a precise determination of the coupling distance and support
a simple analytical model to explain the overall performance. The coupling
efficiency is increased up to four times by standing wave interference near the
end of the waveguide. | Michael Seidel, Yuhui Yang, Thorsten Schumacher, Yongheng Huo, Saimon Filipe Covre da Silva, Sven Rodt, Armando Rastelli, Stephan Reitzenstein, Markus Lippitz | 2023-10-26T15:36:14Z | http://arxiv.org/abs/2310.17481v1 | # Intermediate Field Coupling of Single Epitaxial Quantum Dots to Plasmonic Waveguides
###### Abstract
Key requirements for quantum plasmonic nanocircuits are reliable single-photon sources, high coupling efficiency to the plasmonic structures and low propagation losses. Self-assembled epitaxially grown GaAs quantum dots are close to ideal stable, bright and narrowband single-photon emitters. Likewise, wet-chemically grown monocrystalline silver nanowires are among the best plasmonic waveguides. However, large propagation losses of surface plasmons on the high-index GaAs substrate prevent their direct combination. Here, we show by experiment and simulation that the best overall performance of the quantum plasmonic nanocircuit based on these building blocks is achieved in the intermediate field regime with an additional spacer layer between the quantum dot and the plasmonic waveguide. High-resolution cathodoluminescence measurements allow a precise determination of the coupling distance and support a simple analytical model to explain the overall performance. The coupling efficiency is increased up to four times by standing wave interference near the end of the waveguide.
Quantum photonics has the potential to revolutionize our world with breakthrough technologies such as quantum computing and quantum communication, for instance using quantum dots (QDs) as single photon emitters [1]. Especially in terms of applications, scalability is indispensable, and integrated photonic networks are highly sought after [2]. Plasmonic nanocircuits are a promising platform since they not only dramatically reduce circuit size, but also allow light to be controlled and manipulated at a truly nanoscale level [3; 4; 5; 6; 7]. Even though many electrons are involved in the surface plasmon polariton (SPP), the quantum-optical nature is preserved [8].
In recent years, the coupling of various quantum emitters to plasmonic waveguides has been demonstrated (for a review see [9]). Sources of single plasmons have been reported at both room and liquid helium temperatures using various combinations of waveguides and emitters [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. However, for true quantum-optical operation of the circuit, high-quality sources of single photons are essential. Epitaxially grown self-assembled quantum dots possess close to ideal quantum properties as they are bright, non-blinking, and have narrow linewidths [21; 22; 23]. Nonetheless, the typical approach of bringing the waveguide close to the emitter does not work, as the high refractive index of the semiconductor host induces significant damping of SPP due to ohmic and radiative losses. Additionally, the optical properties of epitaxial quantum dots degrade with decreasing distance from the dot's surface [24]. These issues have previously been addressed through an indirect coupling method using dielectric-plasmonic mode conversion [16], requiring significant nanofabrication.
Here, we address conflicting requirements in a different and much simpler way: Instead of placing the plasmonic waveguide directly on the semiconductor substrate containing quantum dots, a planar dielectric layer with a lower refractive index is used as a spacer between the semiconductor host and plasmonic waveguide, as depicted in Fig. 1a. This allows us to balance the efficiency of coupling \(\eta_{in}\) and propagation \(\eta_{p}\): a thicker spacer layer enhances the plasmon propagation length, a thinner layer enhances the coupling efficiency between the quantum dot and the waveguide. In the following, we demonstrate that the nanocircuit performance is expected to be superior when the coupling between the emitter and waveguide occurs in the intermediate field (\(kr\approx 1\)), rather than the near field (\(kr\ll 1\)), where \(k\) and \(r\) denote the light wavevector and radial distance from the emitter,
respectively.
As sketched in Fig. 1a, we assume an infinitely extended waveguide in propagation direction. Mode profiles and corresponding effective mode indices are calculated as a function of the spacer thickness \(t\) with Comsol Multiphysics. For all simulations we use \(\lambda=790\,\mathrm{nm}\), the emission wavelength of our GaAs quantum dots [25]. The chemically grown monocrystalline silver nanowire [26] is modeled with a pentagonal cross-section (\(d=50\,\mathrm{nm}\)). The refractive indices for silver (\(n_{Ag}=0.035+5.49i\)) and AlGaAs (\(n_{AlGaAs}=3.44\)) are taken from literature [27; 28]. The dielectric spacer (spin-on-glass IC1-200 from Futurrex) is modeled with \(n_{spacer}=1.41\) according to the manufacturer.
To compute the coupling efficiency \(\eta_{in}\) for the plasmonic waveguide mode, we follow the framework of Ref. [29]. The decay rate of the emitter into the plasmonic mode is related to the dot product between its transition dipole moment \(\mathbf{\mu}\) and the modal field \(\mathbf{E}^{mode}\). We have to consider that the transition dipole moments of our quantum dots are given by two energetically almost degenerate exciton states oriented orthogonal to each other in the sample plane. For the sake of simplicity, we assume that one of the dipole moments is oriented parallel and the other one perpendicular to the nanowire axis. The coupling efficiency
\[\eta_{in}=\sum_{j}\left|\mathbf{\mu}_{j}\cdot\mathbf{E}^{mode}(x,z)\right|^{2} \tag{1}\]
is then obtained by incoherently adding up the two dipole moment contributions \(j=x,y\) and normalizing to the emission of a dipole in homogeneous AlGaAs (see Supporting Information S1).
The coupling efficiency \(\eta_{in}\) for a spacer thickness of \(t=130\,\mathrm{nm}\) is shown in Fig. 1b as a function of the emitter position in the \(xz\)-plane. Note that in the simulation we place the dipole not only within the AlGaAs matrix as in the experiment, but also inside the dielectric spacer and around the nanowire. Apart from the strongly confined hot spots directly at the nanowire, there is a less confined region in the semiconductor where the incoupling efficiency \(\eta_{in}\) does not vary much. At the quantum dot burial depth \(z_{b}\approx 30\,\mathrm{nm}\), \(\eta_{in}\) drops only by a factor of two when leaving the wire axis laterally by \(125\,\mathrm{nm}\). This stems from the rather loosely bound character of the waveguide mode and relaxes the required QD alignment accuracy, even though the absolute coupling efficiency is lower. Furthermore, the weak depth dependence suggests that the QD can be placed deeper in the AlGaAs without much change in coupling efficiency. This is particularly interesting when considering that the optical properties of the QD improve rapidly with increasing burial depths [24].
The effect of the spacer thickness on the waveguide coupling efficiency \(\eta_{in}\) and the propagation length \(L_{p}\) is shown in the top panel of Fig 1c: With increasing layer thickness, the amplitude of the waveguide mode at the
Figure 1: **Intermediate field coupling of a quantum dot to a plasmonic waveguide.** a) Sketch of the coupling scheme: A quantum dot embedded within AlGaAs barriers radiatively couples to a silver nanowire that is separated by the capping layer and an additional dielectric spacer. b) Spatial variation of the coupling efficiency \(\eta_{in}\) into the waveguide for an emitter located in the xz-plane and with dipole moment in the xy-plane of the sample. The circle enclosing \(kr=1\) illustrates the transition between near and far field. Note the different color scale bars for the lower and upper half spaces. c) Upper panel: Coupling efficiency \(\eta_{in}\) and propagation length \(L_{p}\), as a function of the spacer thickness \(t\). The coupling efficiency is evaluated for an emitter that is centrally located beneath the nanowire in the quantum dot layer and placed at a depth \(z_{b}=30\,\mathrm{nm}\) below the spacer/AlGaAs interface. Lower panel: waveguide efficiency \(\eta_{wg}\) as a function of spacer thickness \(t\) and waveguide length \(L\). Optimal performance is achieved in the intermediate field for \(kr\gtrsim 1\).
quantum dot position is reduced, and therefore the coupling efficiency decreases. Here we evaluate the coupling efficiency for an emitter that is centered with respect to the nanowire, at a depth of \(z_{b}=30\,\mathrm{nm}\). On the other hand, the propagation length \(L_{p}\) of the mode is strongly increased for thicker spacers. This is due to the diminishing influence of the high-index AlGaAs, resulting in a mode that more strongly bound to the waveguide and features less radiative losses.
For the experimental realization of our plasmonic coupling concept, we are interested in the waveguide efficiency \(\eta_{wg}=\eta_{in}\,\eta_{p}\), which also includes the propagation efficiency \(\eta_{p}=e^{-L/L_{p}}\) for a waveguide of finite length \(L\). Obviously, the optimal spacer thickness also depends on the waveguide length \(L\), as can be seen in the lower panel of Fig. 1c. Accordingly, the highest waveguide efficiency \(\eta_{wg}\) is achieved for short waveguides and rather thin spacers. For an experimentally meaningful nanocircuit, however, the waveguide should be longer than the spatial resolution of the optical microscope, i.e. \(L\gtrsim 1\,\mathrm{\SIUnitSymbolMicro m}\). For such waveguide lengths we find the optimum in the intermediate field regime at \(kr=1.6-2.6\) or \(t=70-160\,\mathrm{nm}\). Here, the spacer thickness \(t\) is rewritten in terms of \(kr=k_{0}(n_{AlGaAs}z_{b}+n_{spacer}t)\) with the vacuum wavevector \(k_{0}\). The transition from near to far field is also illustrated as a circle enclosing \(kr=1\) in Fig. 1b. Let us now turn to the experimental realization of such an intermediate field coupling. The sample is based on near-surface self-assembled GaAs quantum dots in AlGaAs barriers grown by molecular beam epitaxy on a GaAs substrate. For the dielectric spacer, the polysiloxane-based spin-on glass (IC1-200 Intermediate Coating, Futurrex) is spin-coated on top of the GaAs surface, resulting in a film with a thickness of \(t=(130\pm 15)\,\mathrm{nm}\). Chemically grown monocrystalline silver nanowires (PL-AgW100, PlasmaChem) with average widths of \(d=(50\pm 10)\,\mathrm{nm}\) and typical lengths of a few micrometers are dispersed on top of the IC1 film. A detailed description of the sample fabrication can be found in the Supporting Information S2. The random arrangement of dots and wires samples all relative orientations and coupling distances, requiring preselection of potentially coupled quantum dot - nanowire pairs. Therefore, we determine the spatial arrangement by high-resolution cathodoluminescence mapping and then measure the waveguide performance by optical microscopy. Low-temperature cathodoluminescence combines high-resolution electron microscopy with access to quantum dot emission, making it an excellent technique to specify the relative positions of quantum dots and nanowires. Our setup is described in detail in Ref. [30]. The sample is mounted on a liquid He-flow cryostat (\(20\,\mathrm{K}\)) and excited with a focused electron beam of \(20\,\mathrm{kV}\) acceleration voltage, which is scanned over the sample surface. As sketched in Fig. 2a, the cathodoluminescence emission of each excitation spot position is collected by a spectrometer, simultaneously providing a secondary electron image and cathodoluminescence spectrum mapping with the same coordinates.
An example of a data set can be found in the lower section of Fig. 2a. For comparison, a room temperature scanning electron micrograph is included as an inset. Although the diameter of the electron beam is only a few nanometers, the actual size of the cathodoluminescence spots mostly results from the effective diameter of the generation volume and charge carrier diffusion [31; 32]. Hence, a two-dimensional Gaussian profile is used to fit the cathodoluminescence emission spots, allowing for a precise determination of the relative lateral positions of quantum dots and nanowires with \(10-30\,\mathrm{nm}\) accuracy. For the depicted nanosystem, the cathodoluminescence image reveals the QD positions \(x_{QD}=(77\pm 12)\,\mathrm{nm}\) and \(y_{QD}=(685\pm 26)\,\mathrm{nm}\) with respect to the nanowire end. However, electron beam scanning is not suitable to distinguish between direct QD emission and remote plasmon emission at the nanowire end due to lacking spatial resolution in the detection path.
Consequently, we use an all-optical confocal microscope to demonstrate intermediate field coupling. A fast scan mirror moves the excitation laser focus (\(\mathrm{NA}=0.9\), \(635\,\mathrm{nm}\) wavelength) over the sample inside a closed-cycle cryostat (\(20\,\mathrm{K}\)). Different detection schemes are employed. To identify the preselected QD-nanowire system, we map the sample by photon counting and a combination of photoluminescence and reflection (Fig. 2b). In order to enhance the contrast in the reflection image, the direct laser reflection is suppressed with a polarizer. Slight sample drifts during the laser scans can be neglected since this measurement is only used for identification of the nanowires.
Intermediate field coupling is demonstrated in Fig. 2c by launching and detecting plasmons: Here, the excitation laser is stationary focused on the QD while the surrounding sample area's luminescence is imaged onto a CCD-camera. We observe clear emission from the SPP that is launched by the coupled quantum dot and scattered at the nanowire's end. We find identical photoluminescence spectra for the direct QD emission and the outcoupled photons of the plasmon (see Supporting Information S3). Emission from the short wire end is also expected but is experimentally hidden in the airy-patterned background of the direct QD emission (see Supporting Information S4).
We analyzed a total of nine QD-nanowire systems, which differ by up to a factor of 80 in the intensity ratio of the respective SPP emission \(I_{pl}\) and the direct QD emission \(I_{qd}\) (see Supporting Information Tab. S3). In the following, we extract the coupling efficiency for these nanosystems and explain this - on the first sight - large variation as an interference effect near a waveguide end.
All nine QD-nanowire systems are formed by QDs near (about \(1\,\mathrm{\SIUnitSymbolMicro m}\)) one end of the silver waveguide. Con
sidering typical propagation lengths in the range of a few micrometers and the small diameter of our silver nanowires, we expect substantial reflection of the SPP at the near wire end [11]. This results in an interference \(|E|^{2}=|E_{\mathrm{dir}}+E_{\mathrm{ref}}|^{2}\) of the direct surface plasmon \(E_{\mathrm{dir}}\) and the reflected surface plasmon \(E_{\mathrm{ref}}\) (see Fig. 3a) and consequently in a position-dependent coupling efficiency. In the worst case, both fields destructively interfere with each other, and no net coupling would be observable, although the emitter is close to the nanowire.
To model the position-dependent coupling efficiency in the \(xy\)-plane, we make use of reciprocity and assume a semi-infinite wire. The mode profile in the \(xz\)-plane is already shown in Fig. 1b. We keep the \(z\) coordinate constant at the burial depth \(z_{b}=30\,\mathrm{nm}\) to obtain the mode profile \(\mathbf{E}^{mode}(x)\). In propagation direction (\(y\)), we interfere the direct wave and the reflected wave, both propagating with an effective mode index \(\tilde{n}_{\mathrm{eff}}\). The reflection coefficient of the wire end is also complex-valued \(\tilde{r}=re^{i\phi_{r}}\) with the reflection amplitude \(r\) and reflection phase \(\phi_{r}\). Overall, we obtain the coupling efficiency in the \(xy\)-plane
\[\eta_{in,sim}(x,y)=\sum_{j}\left|E_{j}^{mode}(x)\left(1+\tilde{r}\,e^{2\,i\,k_ {0}\,\tilde{n}_{\mathrm{eff}}\,y}\right)\right|^{2} \tag{2}\]
by incoherently summing the dipole moment contributions in \(j=x,y\). The resulting coupling efficiency map (Fig. 3b) shows the expected oscillatory interference features that decay in interference contrast with distance to the near waveguide end, as the amplitudes of \(E_{\mathrm{dir}}\) and \(E_{\mathrm{ref}}\) separate. Although measured at different waveguides, we draw all nine investigated structures in this map by overlaying the nanowire ends, which already suggests strong fluctuations in their coupling efficiency.
In the experiment, the photon rate detected at the out-coupling end of the waveguide is given by the product of the partial efficiencies for incoupling, propagation, out-coupling and detection times the QD's bare emission rate. At the QD position, we detect this bare rate times the QD detection efficiency. Knowing all these factors from either numerical simulations or measurements (see Supporting Information S5) allows us to calculate back to the experimentally observed incoupling efficiency \(\eta_{in,exp}\) at the specific QD positions relative to the waveguide.
For comparison with the interference model, it is more convenient to compare a one-dimensional data set. Knowing the spatial mode profile, we shift the experimental QD positions to below the waveguide (\(x=0\,\mathrm{nm}\)) by
\[\eta_{in,exp,shift}=\eta_{in,exp}\,\frac{\sum_{j}|E_{j}^{mode}(0)|^{2}}{\sum_{ j}|E_{j}^{mode}(x_{QD})|^{2}} \tag{3}\]
for the offset \(x_{QD}\) of the respective QD via the mode profile \(E^{mode}(x)\). This is the incoupling efficiency that would have been measured if the dot had been centered below the waveguide. Fig. 3c compares these values
Figure 2: **Investigation of a coupled quantum dot - nanowire system by complementary methods.** a) Raster-scanning the electron beam while detecting the cathodoluminescence and scattered electrons. The obtained cathodoluminescence image with overlaid electron micrograph allows a precise distance measurement. The inset depicts a room temperature scanning electron micrograph of the same nanowire. b) Sketch of confocal laser scanning imaging with corresponding photoluminescence data, overlaid with the reflection image of the structure. c) Proof of intermediate field coupling by plasmon propagation imaging via a CCD-camera and stationary excitation of the quantum dot. In the recorded image, the area around the quantum dot is software-attenuated by a factor of 150 to increase the visibility.
with the model at \(x=0\,\)nm. We have fixed the reflection amplitude (\(r=0.65\)) and phase (\(\phi_{r}=-\pi/2\)), as well as the air-sided far-field collection efficiency ratio (\(\eta_{spp,ff}/\eta_{qd,ff}=17\)) based on our numerical simulations (see Supporting Information S5), and vary mode index, propagation length and an overall scaling parameter. The optimal fit is achieved with a mode index of \(n_{\text{eff}}=1.53\), a propagation length \(L_{p}=0.86\,\)um and a scaling factor of 1.23. These values are already used to plot Fig. 3b.
We find good agreement between our interference model and the corrected coupling efficiency, although not all individual variations of the nanosystems are taken into account. Minor differences in geometry parameters can shift the datapoints somewhat. In particular, imperfections such as slightly bent wires or small kinks can cause additional losses due to reflections or far field scattering but were not observed in the propagation images and are therefore neglected. Furthermore, the exact dipole moment orientations within each QD are unknown to us, which only affects QDs located far away from the nanowire axis (see Supporting Information S6). The resulting uncertainty can be quantified and is included in the error bars for the experimental coupling efficiency. In addition, the errorbars comprise the uncertainty arising from the the QD-SPP emission ratio extraction, and the uncertainty in the lateral QD position determination.
Nevertheless, the fit parameters consistently lie within a realistic range. For the mode index, we expect values ranging from \(n_{\text{eff}}=1.5-1.7\) from numerical simulations, depending on the details of the chosen geometry. The propagation length in the finite element simulation (see Fig. 1c) is approximately \(L_{p}=2-3\,\)um, around three times greater than the fit outcome, which is attributed to material imperfections and residual surfactant at the nanowire's surface. We experimentally extracted a propagation length of \(L_{p}\approx 1.0\,\)um from laser transmission experiments, which is consistent with the fit result but also subject to large variations (see Supporting Information S7). Additionally, we find an overall scaling parameter of 1.23, indicating that all major contributions are included in the model.
The interference model in Fig. 3c implies that the waveguide coupling rate is substantially larger toward the near end of the wire compared to the infinite length waveguide. The coupling rate can in principle be increased up to a factor of four for a reflection coefficient of \(r=1\). This would result in a coupling efficiency of \(5.5\,\)%. The overall efficiency of the device could be further optimized by impedance-matching the waveguide ends [33] and achieving constructive interference with substrate reflections [16].
In summary, we have demonstrated the coupling of single self-assembled GaAs quantum dots to silver nanowires in the intermediate field. This is achieved by balancing coupling and propagation efficiency, using a planar dielectric spacer of about \(130\,\)nm thickness. The relative positions of quantum dots and nanowires are determined with high accuracy by simultaneously imaging them through low-temperature cathodoluminescence. This enabled us to establish an interference model that explains the varying coupling efficiencies. The reflection of the propagating plasmon at the wire's near end boosts the efficiency by up to four times. Intermediate field coupling does not necessitate nanostructuring processes in the QD's dielectric surroundings, which often degrade the (quantum) optical characteristics of the QD. Furthermore, the intermediate field approach is not limited to QDs grown near the surface because of its weak depth dependence (see Fig. 1b). Taken together, this means that a Fourier-limited source of single plasmons is within reach.
Acknowledgements: This work was funded by the Ger
Figure 3: **Interference caused by reflection at the waveguide end explains the spatial variation of the incoming efficiency.** a) Schematic of the interfering SPPs, arising from non-zero reflectivity at the nanowire termination. b) Coupling efficiency map in the sample plane at the burial depth \(z_{b}=30\,\)nm. The lateral quantum dot positions are displayed by gray crosses, the size of which indicates the position uncertainty from the cathodoluminescence images. The dashed horizontal lines correspond to the nanowire width. c) Simulated coupling efficiency (black line) along the nanowire axis and experimental coupling efficiency (red circles) according to Eq. 3 and corrected for the offset \(x_{QD}\) from the nanowire axis. The dashed line represents the coupling efficiency \(\eta_{in,\text{inf}}\approx 1.4\,\)% for a quantum dot centered below an infinitely extended wire (see Fig. 1).
man Research Foundation (INST 131/795-1 320 FUGG, INST 91/310-1 FUGG), European Union's Horizon 2020 Research and innovation Programme under the Marie Sklodowska-Curie Grant Agreement No. 861097 (QUDOT-TECH), Einstein foundation via the Einstein Research Unit "Perspectives of a quantum digital transformation: Near-term quantum computational devices and quantum processors" and the Austrian Science Fund (FWF) via the Research Group FG5, I 4320, I 4380
|
2307.13228 | Variations of rigidity | We study possibilities for semantic and syntactic rigidity, i.e., the
rigidity with respect to automorphism group and with respect to definable
closure. Variations of rigidity and their degrees are studied in general case,
for special languages and for some natural operations with structures. | Sergey V. Sudoplatov | 2023-07-25T03:34:56Z | http://arxiv.org/abs/2307.13228v1 | # Variations of rigidity+
###### Abstract
We study possibilities for semantic and syntactic rigidity, i.e., the rigidity with respect to automorphism group and with respect to definable closure. Variations of rigidity and their degrees are studied in general case, for special languages and for some natural operations with structures.
**Key words:** definable closure, semantic rigidity, syntactic rigidity, degree of rigidity.
## 1 Introduction
We continue to study variations of algebraic closures [1] considering and describing semantic and syntactic possibilities for definable closures.
In Section 2, we introduce variations and degrees for semantic and syntactic rigidity of structures, describe properties, possibilities, and dynamics for these characteristics, in general and for theories of unary predicates. In Section 3, indexes of rigidity are introduced and their possibilities are described. In Sections 4 and 5, possibilities for degrees of rigidity and for indexes of rigidity are described for disjoint unions of structures and for compositions of structures are studied.
We use the standard model-theoretic terminology [2, 3, 4, 5, 6], notions and notations in [1].
## 2 Variations of rigidity and their characteristics
**Definition**.: For a set \(A\) in a structure \(\mathcal{M}\), \(\mathcal{M}\) is called _semantically \(A\)-rigid_ or _automorphically \(A\)-rigid_ if any \(A\)-automorphism \(f\in\operatorname{Aut}(\mathcal{M})\) is identical. The structure \(\mathcal{M}\) is called _syntactically \(A\)-rigid_ if \(M=\operatorname{dcl}(A)\).
A structure \(\mathcal{M}\) is called _\(\forall\)-semantically \(/\)\(\forall\)-syntactically \(n\)-rigid_ (respectively, _\(\exists\)-semantically \(/\)\(\exists\)-syntactically \(n\)-rigid_), for \(n\in\omega\), if \(\mathcal{M}\) is semantically \(/\) syntactically \(A\)-rigid for any (some) \(A\subseteq M\) with \(|A|=n\).
Clearly, as above, syntactical \(A\)-rigidity and \(n\)-rigidity imply semantical ones, and vice versa for finite structures, but not vice versa for some infinite ones. Besides, if \(\mathcal{M}\) is \(Q\)-semantically \(/\)\(Q\)-syntactically \(n\)-rigid, where \(Q\in\{\forall,\exists\}\), then \(\mathcal{M}\) is \(Q\)-semantically \(/\)\(Q\)-syntactically \(m\)-rigid for any \(m\geq n\).
The least \(n\) such that \(\mathcal{M}\) is \(Q\)-semantically \(/\)\(Q\)-syntactically \(n\)-rigid, where \(Q\in\{\forall,\exists\}\), is called the _\(Q\)-semantical \(/\)\(Q\)-syntactical degree of rigidity_, it is denoted by \(\deg_{\operatorname{rig}}^{Q\text{-}\text{semic}}(\mathcal{M})\)
and \(\deg_{\rm rig}^{Q\mbox{-}{\rm synt}}(\mathcal{M})\), respectively. Here if a set \(A\) produces the value of \(Q\)-semantical / \(Q\)-syntactical degree then we say that \(A\)_witnesses_ that degree. If such \(n\) does not exists we put \(\deg_{\rm rig}^{Q\mbox{-}{\rm sem}}(\mathcal{M})=\infty\) and \(\deg_{\rm rig}^{Q\mbox{-}{\rm synt}}(\mathcal{M})=\infty\), respectively.
Notice that all these characteristics have the upper bound \(|M|-1\) if the structure \(\mathcal{M}\) is finite. Moreover, if \(M\setminus{\rm dcl}(\emptyset)\) is finite then the cardinality \(|M\setminus{\rm dcl}(\emptyset)|-1\) is the upper bound for both \(\deg_{\rm rig}^{\exists\mbox{-}{\rm sem}}(\mathcal{M})\) and \(\deg_{\rm rig}^{\exists\mbox{-}{\rm synt}}(\mathcal{M})\).
We have the following obvious characterizations for finite values of degrees:
**Proposition 2.1**: \(\deg_{\rm rig}^{\forall\mbox{-}{\rm sem}}(\mathcal{M})=0\) _iff \(\deg_{\rm rig}^{\exists\mbox{-}{\rm sem}}(\mathcal{M})=0\), and iff the structure \(\mathcal{M}\) is semantically rigid._
2. \(\deg_{\rm rig}^{\forall\mbox{-}{\rm synt}}(\mathcal{M})=0\) _iff \(\deg_{\rm rig}^{\exists\mbox{-}{\rm synt}}(\mathcal{M})=0\), and iff the structure \(\mathcal{M}\) is syntactically rigid._
3. \(\deg_{\rm rig}^{\forall\mbox{-}{\rm sem}}(\mathcal{M})=n\in\omega\) _iff for any set \(A\subseteq M\) with \(|A|\geq n\) there is minimal \(B\subseteq A\), under inclusion, such that \(|B|=n\) and any automorphism \(f\in{\rm Aut}(\mathcal{M})\) fixing \(B\) pointwise fixes all elements in \(\mathcal{M}\), too, and there are no sets of cardinalities \(n^{\prime}<n\) with that property. Here \(B\subseteq A\) can be taken arbitrary with \(|B|=n\)._
4. \(\deg_{\rm rig}^{\exists\mbox{-}{\rm sem}}(\mathcal{M})=n\in\omega\) _iff for some set \(A\subseteq M\) with \(|A|\geq n\) there is minimal \(B\subseteq A\), under inclusion, such that \(|B|=n\) and any automorphism \(f\in{\rm Aut}(\mathcal{M})\) fixing \(B\) pointwise fixes all elements in \(\mathcal{M}\), too, and there are no sets of cardinalities \(n^{\prime}<n\) with that property._
5. \(\deg_{\rm rig}^{\forall\mbox{-}{\rm synt}}(\mathcal{M})=n\in\omega\) _iff for any set \(A\subseteq M\) with \(|A|\geq n\) there is minimal \(B\subseteq A\), under inclusion, such that \(|B|=n\) and \(M={\rm dcl}(B)\), and there are no sets of cardinalities \(n^{\prime}<n\) with that property. Here \(B\subseteq A\) can be taken arbitrary with \(|B|=n\)._
6. \(\deg_{\rm rig}^{\exists\mbox{-}{\rm synt}}(\mathcal{M})=n\in\omega\) _iff for some set \(A\subseteq M\) with \(|A|\geq n\) there is minimal \(B\subseteq A\), under inclusion, such that \(|B|=n\) and \(M={\rm dcl}(B)\), and there are no sets of cardinalities \(n^{\prime}<n\) with that property._
By the definition, we have the following _monotonicity property_: if \(\mathcal{M}\) is semantically / syntactically \(A\)-rigid and \(A\subseteq A^{\prime}\subseteq M\) then \(\mathcal{M}\) is semantically / syntactically \(A^{\prime}\)-rigid.
Using the definition and the monotonicity property, for any structure \(\mathcal{M}\) the following inequalities hold:
\[\deg_{\rm rig}^{\forall\mbox{-}{\rm sem}}(\mathcal{M})\leq\deg_{\rm rig}^{ \forall\mbox{-}{\rm synt}}(\mathcal{M}), \tag{1}\]
the equality in (1) means that either there are no finite sets \(A\) with identical \(A\)-automorphisms only, or minimal finite sets \(A\) with identical \(A\)-automorphisms only have unbounded cardinalities, or all finite \(A\subseteq M\) of some fixed cardinality \(n\) satisfy \(M={\rm dcl}(A)\) and some \(A\) with \(|A|=n\) does not have proper subsets \(A^{\prime}\) such that there are identical \(A^{\prime}\)-automorphisms only;
\[\deg_{\rm rig}^{\exists\mbox{-}{\rm sem}}(\mathcal{M})\leq\deg_{\rm rig}^{ \exists\mbox{-}{\rm synt}}(\mathcal{M}), \tag{2}\]
the equality in (2) means that either there are no finite sets \(A\) with identical \(A\)-automorphisms only, or there is finite \(A\subseteq M\) such that \(M={\rm dcl}(A)\), and there are no sets \(A^{\prime}\) with less cardinalities such that there are identical \(A^{\prime}\)-automorphisms only;
\[\deg_{\rm rig}^{\exists\mbox{-}{\rm sem}}(\mathcal{M})\leq\deg_{\rm rig}^{ \forall\mbox{-}{\rm sem}}(\mathcal{M}), \tag{3}\]
the equality in (3) means that either there are no finite sets \(A\) with identical \(A\)-automorphisms only, or there is finite \(A\subseteq M\) with identical \(A\)-automorphism only and each finite \(A^{\prime}\subseteq M\) with \(|A^{\prime}|\geq|A|\) has a minimal restriction \(A^{\prime\prime}\), under inclusion, with \(|A^{\prime\prime}|=|A|\) and with identical \(A^{\prime\prime}\)-automorphism only;
\[\deg_{\rm rig}^{\exists\mbox{-}{\rm synt}}(\mathcal{M})\leq\deg_{\rm rig}^{ \forall\mbox{-}{\rm synt}}(\mathcal{M}). \tag{4}\]
the equality in (4) means that either there are no finite sets \(A\) with \(\mathrm{dcl}(A)=M\), or there is finite \(A\subseteq M\) with \(\mathrm{dcl}(A)=M\) and each finite \(A^{\prime}\subseteq M\) with \(|A^{\prime}|\geq|A|\) has a minimal restriction \(A^{\prime\prime}\), under inclusion, with \(|A^{\prime\prime}|=|A|\) and with \(\mathrm{dcl}(A^{\prime\prime})=M\).
**Example 2.2**: The structure \(\mathcal{M}=\langle\omega,\leq\rangle\) is both semantically and syntactically rigid, therefore \(\mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{sem}}(\mathcal{M})= \mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{M})= \mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{synt}}(\mathcal{M})= \mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{M})=0\). We observe the same effect for arbitrary structures in which each element is marked by a constant.
**Example 2.3**: If \(\mathcal{M}\) has the empty language then
\[\mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{sem}}(\mathcal{M})= \mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{M})= \mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{synt}}(\mathcal{M})= \mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{M})=|M|-1\]
if \(\mathcal{M}\) is finite, and and these values equal \(\infty\) if \(\mathcal{M}\) is infinite.
**Example 2.4**: If \(\mathcal{V}\) is a vector space over a field \(F\) then we have the following criterion for the semantic/syntactic rigidity: \(\mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{sem}}(\mathcal{V})= \mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{V})= \mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{synt}}(\mathcal{V})= \mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{V})=0\) iff \(\dim(\mathcal{V})\leq 1\) and \(|F|=2\) for \(\dim(\mathcal{V})=1\). If \(\mathcal{V}\) is not rigid then \(\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{V})= \mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{V})= \mathrm{dim}(\mathcal{V})\) for finite \(\dim(\mathcal{V})\), and \(\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{V})= \mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{V})=\infty\), otherwise. Besides, \(\mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{sem}}(\mathcal{V})= \mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{synt}}(\mathcal{V})=\infty\) if \(\dim(\mathcal{V})\) is infinite, or \(\dim(\mathcal{V})\geq 1\) and \(F\) is infinite. Finally for \(\dim(\mathcal{V})=n\in\omega\setminus\{0\}\) and \(|F|=m\in\omega\setminus\{0\}\) with \((n,m)\neq(1,2)\), we have \(\mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{sem}}(\mathcal{V})= \mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{synt}}(\mathcal{V})=(n-1)m+1\), since we obtain the rigidity taking all vectors in a \((n-1)\)-dimensional subspace \(\mathcal{V}^{\prime}\), with \((n-1)m\) elements, and a vector in \(\mathcal{V}\setminus\mathcal{V}^{\prime}\).
**Example 2.5**: Let \(\mathcal{M}\) be a structure of disjoint infinite unary predicates \(P_{i}\), \(i\in I\), expanded by constants for all elements in \(\bigcup\limits_{i\in I}P_{i}\). Since \(\mathcal{M}\) is both semantically and syntactically rigid we have \(\mathrm{deg}_{\mathrm{rig}}^{Q\text{-}\mathrm{sem}}(\mathcal{M})=\mathrm{deg}_ {\mathrm{rig}}^{Q\text{-}\mathrm{synt}}(\mathcal{M})=0\) for \(Q\in\{\forall,\exists\}\). At the same time extending \(n\) predicates \(P_{i}\) by new elements \(a_{i}\) we obtain \(\mathcal{N}\succ\mathcal{M}\) with \(\mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{sem}}(\mathcal{N})= \mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{N})=0\), \(\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{N})=n\), \(\mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{synt}}(\mathcal{N})=\infty\). Moreover, if infinitely many \(P_{i}\) are extended by new elements \(a_{i}\) then the correspondent elementary extension \(\mathcal{N}^{\prime}\) of \(\mathcal{M}\) has the following characteristics: \(\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{N}^{\prime})=0\), \(\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{N}^{\prime})=n\) and \(\mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{sem}}(\mathcal{N}^{\prime})= \mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{synt}}(\mathcal{N}^{ \prime})=\infty\). Besides, if some extended \(P_{i}\) are again extended by \(m\) new elements in total then an appropriate elementary extension \(\mathcal{N}_{m,n}\) has the following characteristics: \(\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{N}_{m,n})=m\), \(\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{N}_{m,n})=m+n\), \(\mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{sem}}(\mathcal{N}_{m,n})= \mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{synt}}(\mathcal{N}_{m,n})=\infty\) including the possibility \(\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{N}_{\mu,n})= \mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{N}_{\mu,n})= \mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{sem}}(\mathcal{N}_{\mu,n})= \mathrm{deg}_{\mathrm{rig}}^{\forall\text{-}\mathrm{synt}}(\mathcal{N}_{\mu,n})= \infty\) if \(\mu\geq\omega\) new elements are added.
Thus by Example 2.5 the difference between \(\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{M})\) and \(\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{M})\) can be arbitrary large. In view of Proposition 2.1 and inequality 2 we obtain the following theorem on distributions for these characteristics:
**Theorem 2.6**: \(1.\) _The pairs \(\left(\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{M}), \mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{M})\right)\) belong to the set \(\mathrm{DEG}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem},\exists\text{-} \mathrm{synt}}=\{(\mu,\nu)\ |\ \mu,\nu\in\omega\cup\{\infty\},\mu\leq\nu\}\)._
\(2.\) _For each pair \((\mu,\nu)\in\mathrm{DEG}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem},\exists\text{-} \mathrm{synt}}\) there exists a structure \(\mathcal{M}_{\mu,\nu}\) such that_
\[\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{sem}}(\mathcal{M}_{\mu,\nu})=\mu,\,\mathrm{deg}_{\mathrm{rig}}^{\exists\text{-}\mathrm{synt}}(\mathcal{M}_{\mu,\nu})=\nu.\]
Example 2.5 shows that values in \(\mathrm{DEG}_{\mathrm{rig}}^{\exists\mathrm{-sem},\exists\mathrm{-synt}}\) in Theorem 2.6 are covered by structures in countable languages \(\Sigma_{1}\) of unary predicates. Now we describe possibilities for the pairs \(\left(\mathrm{deg}_{\mathrm{rig}}^{\forall\mathrm{-sem}}(\mathcal{M}),\mathrm{ deg}_{\mathrm{rig}}^{\forall\mathrm{-synt}}(\mathcal{M})\right)\) in these languages \(\Sigma_{1}\).
**Proposition 2.7**: _For any structure \(\mathcal{M}\) in a language \(\Sigma_{1}\) of unary predicates the pair_
\[\left(\mathrm{deg}_{\mathrm{rig}}^{\forall\mathrm{-sem}}(\mathcal{M}),\mathrm{ deg}_{\mathrm{rig}}^{\forall\mathrm{-synt}}(\mathcal{M})\right)\]
_has one of the following possibilities:_
1)_\((0,0)\), if \(\mathcal{M}\) is both semantically and syntactically rigid;_
2)_\((n,n)\), if \(\mathcal{M}\) is finite with \(n+1\) elements and it is not semantically rigid that is not syntactically rigid;_
3)_\((0,\infty)\), if \(\mathcal{M}\) is infinite, semantically rigid but not syntactically rigid;_
4)_\((\infty,\infty)\), if \(\mathcal{M}\) is infinite and both not semantically rigid and not syntactically rigid._
Proof. If \(\mathcal{M}\) is syntactically rigid then we have \(\left(\mathrm{deg}_{\mathrm{rig}}^{\forall\mathrm{-sem}}(\mathcal{M}),\mathrm{ deg}_{\mathrm{rig}}^{\forall\mathrm{-synt}}(\mathcal{M})\right)=(0,0)\) by the inequality (1). Now we assume that \(\mathcal{M}\) is not syntactically rigid and consider the following cases.
Case 1: \(\mathcal{M}\) is semantically rigid, i.e., \(\mathrm{deg}_{\mathrm{rig}}^{\forall\mathrm{-sem}}(\mathcal{M})=0\). In such a case \(\mathcal{M}\) is infinite since finite structures have isolated 1-types only and there are complete 1-types over empty set with at least two realizations that contradicts the semantic rigidity for the language \(\Sigma_{1}\). Again using the unary language \(\Sigma_{1}\) and the arguments of [7, Section 8.1] that all 1-types, over empty set, are forced by formulae of quantifier free diagrams and formulae describing estimations for cardinalities of their solutions, with independent actions of automorphisms in distinct sets of realizations of 1-types. Thus each 1-type has at most one realization in \(\mathcal{M}\). Since \(\mathcal{M}\) is not syntactically rigid, \(\mathcal{M}\) realizes at least one nonisolated 1-type \(p(x)\) by some unique element \(a\). Now for any \(n\in\omega\) we can take \(n\) realizations of other 1-types forming a set \(A\) such that \(a\notin\mathrm{dcl}(A)\). It implies \(\mathrm{deg}_{\mathrm{rig}}^{\forall\mathrm{-synt}}(\mathcal{M})=\infty\).
Case 2: \(\mathcal{M}\) is not semantically rigid and \(|M|=n+1\in\omega\). In such a case \(\mathcal{M}\) has a complete 1-type \(p(x)\) with at least two realizations \(a\) and \(b\). Since there is an \((M\setminus\{a,b\})\)-automorphism \(f\) with \(f(a)=b\), we obtain \(\mathrm{deg}_{\mathrm{rig}}^{\forall\mathrm{-sem}}(\mathcal{M})=n\) implying \(\mathrm{deg}_{\mathrm{rig}}^{\forall\mathrm{-synt}}(\mathcal{M})=n\) by the inequality (1) and the syntactic rigidity of \(\mathcal{M}\) over each \(n\)-element set.
Case 3: \(\mathcal{M}\) is not semantically rigid and it is infinite. In such a case \(\mathcal{M}\) has a complete 1-type \(p(x)\) with at least two realizations \(a\) and \(b\) and such that realizations of other 1-types allow to form arbitrarily large finite set \(A\) such that some \(A\)-automorphism transforms \(a\) in \(b\). It means that \(\mathrm{deg}_{\mathrm{rig}}^{\forall\mathrm{-sem}}(\mathcal{M})=\infty\) implying \(\mathrm{deg}_{\mathrm{rig}}^{\forall\mathrm{-synt}}(\mathcal{M})=\infty\) by the inequality (1). \(\square\)
Combining arguments for Theorems 2.6 and 2.7 we obtain the following possibilities for tetrads \(\mathrm{deg}_{4}(\mathcal{M})\rightleftharpoons\left(\mathrm{deg}_{\mathrm{rig }}^{\exists\mathrm{-sem}}(\mathcal{M}),\mathrm{deg}_{\mathrm{rig}}^{\exists \mathrm{-synt}}(\mathcal{M}),\mathrm{deg}_{\mathrm{rig}}^{\forall\mathrm{- sem}}(\mathcal{M}),\mathrm{deg}_{\mathrm{rig}}^{\forall\mathrm{-synt}}(\mathcal{M})\right)\) in a language of unary predicates:
**Corollary 2.8**: _For any structure \(\mathcal{M}\) in a language \(\Sigma_{1}\) of unary predicates the tetrad \(\mathrm{deg}_{4}(\mathcal{M})\) has one of the following possibilities:_
1)_\((0,0,0,0)\), if \(\mathcal{M}\) is both semantically and syntactically rigid;_
2)_\((m,m,n,n)\), if \(\mathcal{M}\) is finite with \(n+1\) elements and it is not semantically rigid that is not syntactically rigid with some minimal \(m\)-elements set \(A\subset M\), \(1\leq m\leq n\), producing \(\mathrm{dcl}(A)=M\);_
3) \((0,\nu,0,\infty)\)_, if \({\cal M}\) is infinite, semantically rigid but not syntactically rigid, with \(1\leq\nu\leq\infty\);_
4) \((\mu,\nu,\infty,\infty)\)_, if \({\cal M}\) is infinite and both not semantically rigid and not syntactically rigid, with \(1\leq\mu\leq\nu\leq\infty\)._
**Example 2.9**: Let \({\cal M}\) be a finitely generated algebra by a set \(X\). Then by the definition we have \(\deg_{\rm rig}^{\exists\mbox{\tiny-}{\rm synt}}({\cal M})\leq|X|\) which implies \(\deg_{\rm rig}^{\exists\mbox{\tiny-}{\rm sem}}({\cal M})\leq|X|\) by the inequality (2). Here, if additionally the generating set \(X\) admits substitutions by any \(Y\subseteq M\) with \(|Y|=|X|\) and these substitutions preserve the generating property then we have \(\deg_{\rm rig}^{\forall\mbox{\tiny-}{\rm synt}}({\cal M})\leq|X|\) which implies \(\deg_{\rm rig}^{\exists\mbox{\tiny-}{\rm sem}}({\cal M})\leq|X|\) by the inequality (1). For instance, if \({\cal M}\) is a directed graph forming a finite cycle of positive length then \(\deg_{4}({\cal M})=(1,1,1,1)\).
Since algebras, with constants and unary operations, can define arbitrary configurations of unary predicates, possibilities for characteristics \(\deg_{4}({\cal M})\) in Corollary 2.8 can be realized in the class of algebras, too.
**Example 2.10**: Let \({\rm pm}={\rm pm}(G_{1},G_{2},{\cal P})\) be a connected polygonometry of a group pair \((G_{1},G_{2})\) on an exact pseudoplane \({\cal P}\), and \({\cal M}={\cal M}({\rm pm})\) be a ternary structure for pm [8]. Since all points \(a\) in \({\cal M}\) are connected by automorphisms we have \({\rm acl}(\{a\})=\{a\}\). At the same time any two distinct points \(a,b\in M({\rm pm})\) (laying in a common line) define all points in \({\cal M}\) by line and angle parameters of broken lines. It implies \(M({\rm pm})={\rm dcl}(\{a,b\})\). If line and angle parameters of shortest broken lines connecting arbitrary distinct points \(a\) and \(b\) are defined uniquely then \(M({\rm pm})={\rm dcl}(\{a,b\})\) for these points, too. Hence, in such a case, \(\deg_{\rm rig}^{\exists\mbox{\tiny-}{\rm sem}}({\cal M})=\deg_{\rm rig}^{ \exists\mbox{\tiny-}{\rm synt}}({\cal M})=\deg_{\rm rig}^{\forall\mbox{ \tiny-}{\rm sem}}({\cal M})=\deg_{\rm rig}^{\forall\mbox{\tiny-}{\rm sem}}({ \cal M})\leq 2\). Moreover, these degree values equal 1 iff pm consists of unique line and with at least two points, i.e., \(|G_{1}|>1\) and \(|G_{2}|=1\). Finally, for a polygonometry pm, the degrees equal 0 iff pm consists of unique point.
If parameters of broken lines do not define these broken lines by endpoints then finite cardinalities of points in these lines can be unbounded. Indeed, taking opposite vertices \(a\) and \(b\) in an \(n\)-cube [8, 9] or in its polygonometry pm we obtain \(n\) adjacent vertices \(c_{1},\ldots,c_{n}\) for \(a\) and these vertices are connected by \(\{a,b\}\)-automorphisms. Moreover, in such a case, \(\deg_{\rm rig}^{\exists\mbox{\tiny-}{\rm sem}}({\cal M})=\deg_{\rm rig}^{ \exists\mbox{\tiny-}{\rm synt}}({\cal M})=n+1\) witnessed, for instance, by the set \(A=\{a,b,c_{1},\ldots,c_{n-1}\}\).
The value \(\deg_{4}({\cal M}_{2})=(2,2,2,2)\) for \({\cal M}_{2}={\cal M}({\rm pm})\) can be increased till \(\deg_{4}({\cal M}_{n})=(n,n,n,n)\), \(n\geq 3\), generalizing group trigonometries in the following way. We construct a \((n+1)\)-dimensional space consisting of points and \(n\)-dimensional hyperplanes. We introduce an incidence \(n\)-ary relation \(I_{n}\) for \(n\) distinct points to lay on a common hyperplane. Now fixing a hyperplane \(H\) and \(n-1\) pairwise distinct points \(a_{1},\ldots,a_{n-1}\in H\) we define an exact transitive action of a group \(G_{1}\) on \(H\setminus\{a_{1},\ldots,a_{n-1}\}\), i.e., on \(H\) with respect to \(a_{1},\ldots,a_{n-1}\), such that this action is transformed for any pairwise distinct points \(a^{\prime}_{1},\ldots,a^{\prime}_{n-1}\in H\). Since each \(H\) can be defined by its \(n-1\) distinct points with actions, we can fix \(a_{1},\ldots,a_{n-1}\) and move \(a_{n}\in H\setminus\{a_{1},\ldots,a_{n-1}\}\) into points \(a^{\prime}_{n}\) in other hyperplanes \(H^{\prime}\) containing \(a_{1},\ldots,a_{n-1}\). Collecting these movements we define an action of a group \(G_{2}\) on that bundle of hyperplanes containing \(a_{1},\ldots,a_{n-1}\). Then we spread actions of \(G_{1}\) and \(G_{2}\) for any hyperplanes and bundles of hyperplanes, respectively, such that all pairwise distinct \(a_{1},\ldots,a_{n-1}\) and \(a^{\prime}_{1},\ldots,a^{\prime}_{n-1}\) are connected by automorphisms with respect to these actions.
For instance, taking the set \(P\) of planes in \({\mathbb{R}}^{3}\), a plane \(\pi\in P\) and distinct points \(a_{1},a_{2}\in P\) the action of \(G_{1}\) can be defined as \({\mathbb{R}}\times A\) with the side group \({\mathbb{R}}\) and angle group \(A\) defining
both the directed distance \(d\in\mathbb{R}\) from \(a_{1}\) to a point \(a_{3}\in\pi\) and the angle value \(\alpha\) from the side \(a_{1}\hat{a}_{2}\) to the side \(a_{1}\hat{a}_{3}\). And \(G_{2}\) is the rotation group for the planes in \(P\) around the lines \(l(a_{1},a_{2})\).
Now we extend the language \(\{I_{n}\}\) by \((n+1)\)-ary predicates \(Q_{g_{1}}\), \(g_{1}\in G_{1}\), such that first \((n-1)\)-coordinates \(\overline{a}\) in \(\langle\overline{a},b,c\rangle\in Q_{g_{1}}\) are exhausted by \(a_{1},\ldots,a_{n-1}\) and \(c=bg_{1}\) with respect to \(a_{1},\ldots,a_{n-1}\). Simultaneously we define predicates \(R_{g_{2}}\), \(g_{2}\in G_{2}\), of arities \(n+1\) such that each \(R_{g_{2}}\) realizes a rotation of a hyperplane with respect to \(a_{1},\ldots,a_{n-1}\) by the element \(g_{2}\). We obtain a structure \(\mathcal{M}_{n}\) whose values \(\deg_{\mathrm{rig}}^{Q\mbox{-}\mathrm{sem}}(\mathcal{M}_{n})\) and \(\deg_{\mathrm{rig}}^{Q\mbox{-}\mathrm{synt}}(\mathcal{M}_{n})\), for \(Q\in\{\forall,\exists\}\) equal \(n\).
The construction above admits a generalization for polygonometries \(\mathrm{pm}(G_{1},G_{2},\mathcal{P})\) of group pairs transforming \((G_{1},G_{2})\) a pseudoplane \(\mathcal{P}\) to a pseudospace \(\mathcal{S}\) with hyperplanes \(H\) such that \(H=\mathrm{dcl}(\{a_{1},\ldots,a_{n}\})\) for any pairwise distinct points \(a_{1},\ldots,a_{n}\in H\) and with \(\mathrm{dcl}(\{b_{1},\ldots,b_{n-1}\})=\{b_{1},\ldots,b_{n-1}\}\) for any \(b_{1},\ldots,b_{n-1}\in\mathcal{S}\).
Comparing characteristics \(\deg_{\mathrm{rig}}^{\exists\mbox{-}\mathrm{sem}}(\mathcal{M})\) / \(\deg_{\mathrm{rig}}^{\exists\mbox{-}\mathrm{synt}}(\mathcal{M})\) and \(\deg_{\mathrm{rig}}^{\forall\mbox{-}\mathrm{sem}}(\mathcal{M})\) / \(\deg_{\mathrm{rig}}^{\forall\mbox{-}\mathrm{synt}}(\mathcal{M})\) we observe that the first ones produce cardinalities of "best", i.e., minimal sets generating the structure \(\mathcal{M}\) and the second ones give cardinalities of "worst" generating sets. It is natural to describe possibilities of "intermediate" generating sets. For this aim we define the degrees of rigidity with respect to a subset \(A\) of \(M\) as follows:
**Definition.** For a set \(A\) in \(\mathcal{M}\) and an expansion \(\mathcal{M}_{A}\) of \(\mathcal{M}\) by constants in \(A\), the least \(n\) such that \(\mathcal{M}_{A}\) is \(Q\)-semantically / \(Q\)-syntactically \(n\)-rigid, where \(Q\in\{\forall,\exists\}\), is called the \((Q,A)\)_-semantical / \((Q,A)\)-syntactical degree of rigidity_, it is denoted by \(\deg_{\mathrm{rig},A}^{Q\mbox{-}\mathrm{sem}}(\mathcal{M})\) and \(\deg_{\mathrm{rig},A}^{Q\mbox{-}\mathrm{synt}}(\mathcal{M})\), respectively. If such \(n\) does not exists we put \(\deg_{\mathrm{rig},A}^{Q\mbox{-}\mathrm{sem}}(\mathcal{M})=\infty\) and \(\deg_{\mathrm{rig},A}^{Q\mbox{-}\mathrm{synt}}(\mathcal{M})=\infty\), respectively.
Any expansion \(\mathcal{M}_{A}\) of \(\mathcal{M}\) with \(\deg_{\mathrm{rig}}^{\exists\mbox{-}s}(\mathcal{M}_{A})=0\), for \(s\in\{\mathrm{sem},\mathrm{synt}\}\), is called a _\(s\)-rigiditization_ or simply a _rigiditization_ of \(\mathcal{M}\).
We have the following properties for \((Q,A)\)-semantical and \((Q,A)\)-syntactical degrees of rigidity:
**Proposition 2.11**: _Let \(\mathcal{M}\) be a structure, \(A\subseteq M\), \(Q\in\{\forall,\exists\}\), \(s\in\{\mathrm{sem},\mathrm{synt}\}\). Then the following assertions hold:_
1. (Preservation of degrees of rigidity) _If \(A\subseteq\mathrm{dcl}(\emptyset)\) then \(\deg_{\mathrm{rig}}^{Q\mbox{-}s}(\mathcal{M})=\deg_{\mathrm{rig},A}^{Q\mbox{ -}s}(\mathcal{M})\)._
2. (Rigiditization) _If \(A\) contains a witnessing set for the finite value \(\deg_{\mathrm{rig}}^{\exists\mbox{-}s}(\mathcal{M})\) then \(\deg_{\mathrm{rig},A}^{\exists\mbox{-}s}(\mathcal{M})=0\)._
3. (Monotony) _If \(A\subseteq B\subseteq M\) then \(\deg_{\mathrm{rig},A}^{Q\mbox{-}s}(\mathcal{M})\geq\deg_{\mathrm{rig},B}^{Q \mbox{-}s}(\mathcal{M})\)._
4. (Additivity) _If \(A\) witnesses the finite value \(\deg_{\mathrm{rig}}^{\exists\mbox{-}s}(\mathcal{M})\) then for any \(A^{\prime}\subseteq A\),_
\[\deg_{\mathrm{rig}}^{\exists\mbox{-}s}(\mathcal{M})=\deg_{\mathrm{rig},A^{ \prime}}^{\exists\mbox{-}s}(\mathcal{M})+\deg_{\mathrm{rig},A\setminus A^{ \prime}}^{\exists\mbox{-}s}(\mathcal{M}).\]
5. (Cofinite character) _If \(A\) is cofinite in \(\mathcal{M}\) then \(\deg_{\mathrm{rig},A}^{\exists\mbox{-}\mathrm{sem}}(\mathcal{M})\) and \(\deg_{\mathrm{rig},A}^{\exists\mbox{-}\mathrm{synt}}(\mathcal{M})\) are natural._
6. (Finite rigiditization) _Any cofinite set \(A\) in \(\mathcal{M}\) has a minimal finite extension \(A^{\prime}\) such that \(\mathcal{M}_{A^{\prime}}\) is semantically / syntactically rigid._
Proof. 1. If \(A\subseteq\mathrm{dcl}(\emptyset)\) then \(\mathrm{Aut}(\mathcal{M})=\mathrm{Aut}(\mathcal{M}_{A})\) and therefore the equalities \(\mathrm{deg}_{\mathrm{rig}}^{Q\!-\!s}(\mathcal{M})=\mathrm{deg}_{\mathrm{rig},A}^ {Q\!-\!s}(\mathcal{M})\) hold for \(s=\mathrm{sem}\). For the case \(s=\mathrm{synt}\) the required equalities are satisfied in view of \(\mathrm{dcl}(B)=\mathrm{dcl}(A\cup B)\) for any \(B\subseteq M\).
2. If \(A\) contains a witnessing set for the finite value \(\mathrm{deg}_{\mathrm{rig}}^{\exists\!-\!\mathrm{sem}}(\mathcal{M})\) then there exists identical \(A\)-automorphism of \(\mathcal{M}\) only implying \(\mathrm{deg}_{\mathrm{rig},A}^{\exists\!-\!\mathrm{sem}}(\mathcal{M})=0\). Similarly if \(A\) contains a witnessing set for the finite value \(\mathrm{deg}_{\mathrm{rig}}^{\exists\!-\!\mathrm{synt}}(\mathcal{M})\) then \(\mathrm{dcl}(A)=M\) producing \(\mathrm{deg}_{\mathrm{rig},A}^{\exists\!-\!\mathrm{synt}}(\mathcal{M})=0\).
3. If \(A\subseteq B\subseteq M\) then \(\mathrm{Aut}(\mathcal{M}_{B})\leq\mathrm{Aut}(\mathcal{M}_{A})\) therefore the inequalities \(\mathrm{deg}_{\mathrm{rig},A}^{Q\!-\!s}(\mathcal{M})\geq\mathrm{deg}_{\mathrm{ rig},B}^{Q\!-\!s}(\mathcal{M})\) hold for \(s=\mathrm{sem}\). For the case \(s=\mathrm{synt}\) the required equalities are satisfied in view of \(\mathrm{dcl}(A\cup C)\subseteq\mathrm{dcl}(B\cup C)\) for any \(C\subseteq M\).
4. If \(A\) witnesses the finite value \(\mathrm{deg}_{\mathrm{rig}}^{\exists\!-\!s}(\mathcal{M})\) then we divide \(A\) into two disjoint parts \(A_{1}\) and \(A_{2}\) and by the definition of \(\mathrm{deg}_{\mathrm{rig}}^{\exists\!-\!s}(\mathcal{M})\), both \(A_{1}\) and \(A_{2}\) are extended till minimal \(A\) witnessing the semantic / syntactic rigidity. Thus \(A_{1}\) witnesses the value \(\mathrm{deg}_{\mathrm{rig}}^{\exists\!-\!\mathrm{sem}}(\mathcal{M}_{A_{2}})\) and \(A_{2}\) witnesses the value \(\mathrm{deg}_{\mathrm{rig}}^{\exists\!-\!\mathrm{sem}}(\mathcal{M}_{A_{1}})\) producing the required equation \(\mathrm{deg}_{\mathrm{rig}}^{\exists\!-\!s}(\mathcal{M})=\mathrm{deg}_{ \mathrm{rig},A^{\prime}}^{\exists\!-\!s}(\mathcal{M})+\mathrm{deg}_{\mathrm{rig },A\setminus A^{\prime}}^{\exists\!-\!s}(\mathcal{M})\).
5. If \(A\) is cofinite in \(\mathcal{M}\) then there are only finitely many elements, all in \(M\setminus A\), witnessing the values \(\mathrm{deg}_{\mathrm{rig},A}^{\exists\!-\!\mathrm{sem}}(\mathcal{M})\) and \(\mathrm{deg}_{\mathrm{rig},A}^{\exists\!-\!\mathrm{synt}}(\mathcal{M})\). Thus these values are natural.
6. It is immediately implied by Items 2 and 5. \(\Box\)
In view of Proposition fixing a subset in \(\mathcal{M}\) large enough we obtain its rigiditization. At the same time the following assertion clarifies that small subsets can produce the rigiditization for structures in bounded cardinalities only.
**Proposition 2.12**: 1. _If \(\mathrm{deg}_{\mathrm{rig}}^{\exists\!-\!\mathrm{synt}}(\mathcal{M})\) is finite then \(|M|\leq\max\{\Sigma(\mathcal{M}),\omega\}\)._
1. _If \(\mathcal{M}\) is homogeneous and \(\mathrm{deg}_{\mathrm{rig}}^{\exists\!-\!\mathrm{sem}}(\mathcal{M})\) is finite then \(|M|\leq 2^{\max\{\Sigma(\mathcal{M}),\omega\}}\)._
Proof. 1. If \(\mathrm{deg}_{\mathrm{rig}}^{\exists\!-\!\mathrm{synt}}(\mathcal{M})\) is finite then there is a finite set \(A\subseteq M\) witnessing that value, with \(M=\mathrm{dcl}(A)\). This equality is witnessed by at most by \(\max\{\Sigma(\mathcal{M}),\omega\}\) formulae such that each element in \(\mathcal{M}\) is defined by a formula in the language \(\Sigma(\mathcal{M}_{A})\). Since there are \(\max\{\Sigma(\mathcal{M}),\omega\}\)\(\Sigma(\mathcal{M}_{A})\)-formulae we obtain at most \(\max\{\Sigma(\mathcal{M}),\omega\}\) elements in \(\mathcal{M}\).
2. If a finite set \(A\subseteq M\) witnesses the finite value \(\mathrm{deg}_{\mathrm{rig}}^{\exists\!-\!\mathrm{sem}}(\mathcal{M})\) and \(\mathcal{M}\) is homogeneous possibilities for \(A\)-automorphisms fixing elements of \(\mathcal{M}\) are exhausted by single realizations of types in \(S^{1}(A)\). Since there are at most \(2^{\max\{\Sigma(\mathcal{M}),\omega\}}\) these types that value is the required upper bound for the cardinality of semantically rigid structure \(\mathcal{M}_{A}\). \(\Box\)
Proposition 2.12 immediately implies the following:
**Corollary 2.13**: 1. _If \(\mathrm{deg}_{\mathrm{rig},A}^{\exists\!-\!\mathrm{synt}}(\mathcal{M})\) is finite then \(|M|\leq\max\{\Sigma(\mathcal{M}),|A|,\omega\}\)._
1. _If \(\mathcal{M}\) is homogeneous and \(\mathrm{deg}_{\mathrm{rig},A}^{\exists\!-\!\mathrm{sem}}(\mathcal{M})\) is finite then \(|M|\leq 2^{\max\{\Sigma(\mathcal{M}),|A|,\omega\}}\)._
## 3 Indexes of rigidity
**Definition.** For a set \(A\) in a structure \(\mathcal{M}\) the _index of rigidity_ of \(\mathcal{M}\) over \(A\), denoted by \(\mathrm{ind}_{\mathrm{rig}}(\mathcal{M}/A)\) is the supremum of cardinalities for the set of solutions of algebraic types \(\mathrm{tp}(a/A)\) for \(a\in M\). We put \(\mathrm{ind}_{\mathrm{rig}}(\mathcal{M})=\mathrm{ind}_{\mathrm{rig}}(\mathcal{M}/\emptyset)\). Here we assume that \(\mathrm{ind}_{\mathrm{rig}}(\mathcal{M})=0\) if \(\mathcal{M}\) does not have algebraic types \(\mathrm{tp}(a)\) for \(a\in M\).
**Remark 3.1**: By the definition we have \(\mathrm{ind}_{\mathrm{rig}}(\mathcal{M}/A)\in\omega+1\).
**Example 3.2**: 1. If \({\cal M}\) is a structure of unary predicates \(P_{i}\), \(i\in I\), then \({\rm ind}_{\rm rig}({\cal M})=0\) iff there are no finite nonempty intersections \(P_{i_{1}}^{\delta_{1}}\cap\ldots\cap P_{i_{k}}^{\delta_{k}}\), \(\delta_{1},\ldots,\delta_{k}\in\{0,1\}\). We have \({\rm ind}_{\rm rig}({\cal M})=1\) iff \({\rm dcl}(\emptyset)\neq\emptyset\) and there are no maximal finite intersections \(P_{i_{1}}^{\delta_{1}}\cap\ldots\cap P_{i_{k}}^{\delta_{k}}\) with at least two elements. Besides, \({\rm ind}_{\rm rig}({\cal M})\in\omega\) iff these finite intersections have bounded cardinalities, and all natural possibilities \(n\) are realized by predicates with exactly \(n\) elements and infinite complements. Otherwise, i.e., for \({\rm ind}_{\rm rig}({\cal M})=\omega\), these finite intersections have unbounded cardinalities.
2. If \({\cal M}\) is a structure of an equivalence relation \(E\), then \({\rm ind}_{\rm rig}({\cal M})=0\) iff there are no finite \(E\)-classes. We have \({\rm ind}_{\rm rig}({\cal M})=1\) iff \({\rm dcl}(\emptyset)\neq\emptyset\) and there are no finite \(E\)-classes with at least two elements. Besides, \({\rm ind}_{\rm rig}({\cal M})\in\omega\) iff these \(E\)-classes have bounded cardinalities, and all natural possibilities \(n\) are realized by infinitely many \(E\)-classes with exactly \(n\) elements. Otherwise, i.e., for \({\rm ind}_{\rm rig}({\cal M})=\omega\), these \(E\)-classes have unbounded cardinalities.
3. If \({\cal M}={\cal M}({\rm pm})\) for a polygonometry pm then \({\rm ind}_{\rm rig}({\cal M})=0\) iff pm has infinitely many points. Otherwise, if pm has \(n\in\omega\) points then \({\rm ind}_{\rm rig}({\cal M})=n\).
More generally, we have the following possibilities for a model \({\cal M}\) of transitive theory \(T\), i.e., of a theory with \(|S^{1}(\emptyset)|=1\):
i) \({\rm ind}_{\rm rig}({\cal M})=0\), if \({\cal M}\) is infinite;
ii) \({\rm ind}_{\rm rig}({\cal M})=|{\cal M}|\), if \({\cal M}\) is finite.
In view of Remark 3.1 the following assertion describes possibilities of indexes of rigidity:
**Proposition 3.3**: _For any \(\lambda\in\omega+1\) there is a structure \({\cal M}_{\lambda}\) such that \({\rm ind}_{\rm rig}({\cal M}_{\lambda})=\lambda\)._
Proof follows by Example 3.2. \(\Box\)
## 4 Variations of rigidity for disjoint unions of structures
**Definition**[10]. The _disjoint union_\(\bigsqcup_{n\in\omega}{\cal M}_{n}\) of pairwise disjoint structures \({\cal M}_{n}\) for pairwise disjoint predicate languages \(\Sigma_{n}\), \(n\in\omega\), is the structure of language \(\bigcup\limits_{n\in\omega}\Sigma_{n}\cup\{P_{n}^{(1)}\mid n\in\omega\}\) with the universe \(\bigsqcup_{n\in\omega}M_{n}\), \(P_{n}=M_{n}\), and interpretations of predicate symbols in \(\Sigma_{n}\) coinciding with their interpretations in \({\cal M}_{n}\), \(n\in\omega\). The _disjoint union of theories_\(T_{n}\) for pairwise disjoint languages \(\Sigma_{n}\) accordingly, \(n\in\omega\), is the theory
\[\bigsqcup_{n\in\omega}T_{n}\rightleftharpoons{\rm Th}\left(\bigsqcup_{n\in \omega}{\cal M}_{n}\right),\]
where \({\cal M}_{n}\models T_{n}\), \(n\in\omega\).
**Theorem 4.1**: _For any disjoint predicate structures \({\cal M}_{1}\) and \({\cal M}_{2}\), and \(s\in\{{\rm sem},{\rm synt}\}\) the following conditions hold:_
1. \({\rm deg}_{\rm rig}^{\exists-s}({\cal M}_{1}\sqcup{\cal M}_{2})={\rm deg}_{\rm rig }^{\exists-s}({\cal M}_{1})+{\rm deg}_{\rm rig}^{\exists-s}({\cal M}_{2})\)_, in particular, \({\rm deg}_{\rm rig}^{\exists-s}({\cal M}_{1}\sqcup{\cal M}_{2})\) is finite iff \({\rm deg}_{\rm rig}^{\exists-s}({\cal M}_{1})\) and \({\rm deg}_{\rm rig}^{\exists-s}({\cal M}_{2})\) are finite._
2. \({\rm deg}_{\rm rig}^{\forall-s}({\cal M}_{1}\sqcup{\cal M}_{2})=0\) _iff \({\rm deg}_{\rm rig}^{\forall-s}({\cal M}_{1})=0\) and \({\rm deg}_{\rm rig}^{\forall-s}({\cal M}_{2})=0\)._
3. _If \({\rm deg}_{\rm rig}^{\forall-s}({\cal M}_{1}\sqcup{\cal M}_{2})>0\) then it is finite iff \({\rm deg}_{\rm rig}^{\forall-s}({\cal M}_{1})>0\) is finite and \({\cal M}_{2}\) is finite, or \({\rm deg}_{\rm rig}^{\forall-s}({\cal M}_{2})>0\) is finite and \({\cal M}_{1}\) is finite. Here,_
\[{\rm deg}_{\rm rig}^{\forall-s}({\cal M}_{1}\sqcup{\cal M}_{2})=\max\{|M_{1}| +{\rm deg}_{\rm rig}^{\forall-s}({\cal M}_{2}),|M_{2}|+{\rm deg}_{\rm rig}^{ \forall-s}({\cal M}_{1})\}.\]
Proof. 1. Let \(A_{i}\subset M_{i}\) be sets witnessing values \(\deg_{\rm rig}^{\exists\cdot s}({\cal M}_{i})\), \(i=1,2\). By the definition of \({\cal M}_{1}\sqcup{\cal M}_{2}\), \(A_{1}\) and \(A_{2}\) are disjoint and \(A_{1}\cup A_{2}\) witnesses the value \(\deg_{\rm rig}^{\exists\cdot s}({\cal M}_{1}\sqcup{\cal M}_{2})\). Thus \(\deg_{\rm rig}^{\exists\cdot s}({\cal M}_{1}\sqcup{\cal M}_{2})=\deg_{\rm rig }^{\exists\cdot s}({\cal M}_{1})+\deg_{\rm rig}^{\exists\cdot s}({\cal M}_{2})\).
2. If \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{1}\sqcup{\cal M}_{2})=0\) then the empty set witnesses that \({\cal M}_{1}\sqcup{\cal M}_{2}\), \({\cal M}_{1}\) and \({\cal M}_{2}\) are \(s\)-rigid, i.e., rigid with respect to \(s\), implying \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{1})=0\) and \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{2})=0\). Conversely, if \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{1})=0\) and \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{2})=0\) then the empty set witnesses that \({\cal M}_{1}\) and \({\cal M}_{2}\) are \(s\)-rigid. Now by the definition of \({\cal M}_{1}\sqcup{\cal M}_{2}\) we observe that \({\cal M}_{1}\sqcup{\cal M}_{2}\) is \(s\)-rigid, too, implying \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{1}\sqcup{\cal M}_{2})=0\).
3. Let \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{1}\sqcup{\cal M}_{2})>0\) be finite, then by Item 2, \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{1})>0\) or \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{2})>0\). Assuming that \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{i})>0\) we can not witness that value by subsets of \(M_{3-i}\), \(i=1,2\). Thus \(M_{3-i}\) should be finite. Conversely, let \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{1})>0\) be finite and \({\cal M}_{2}\) be finite, or \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{2})>0\) be finite and \({\cal M}_{1}\) be finite. Then we can take \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{1})\) elements of \(M_{1}\) and all elements of \(M_{2}\) obtaining the \(s\)-rigidity of \({\cal M}_{1}\sqcup{\cal M}_{2}\). Similarly we can take \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{2})\) elements of \(M_{2}\) and all elements of \(M_{1}\) obtaining the \(s\)-rigidity of \({\cal M}_{1}\sqcup{\cal M}_{2}\), too. Thus, the finite value \(\max\{|M_{1}|+\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{2}),|M_{2}|+\deg_{ \rm rig}^{\forall\cdot s}({\cal M}_{1})\}\) equals \(\deg_{\rm rig}^{\forall\cdot s}({\cal M}_{1}\sqcup{\cal M}_{2})\). \(\square\)
Theorem 4.1 and Corollary 2.8 immediately imply:
**Corollary 4.2**: _For any structures \({\cal M}_{1}\) and \({\cal M}_{2}\) in a language \(\Sigma_{1}\) of unary predicates the tetrad \(\deg_{4}({\cal M}_{1}\sqcup{\cal M}_{2})\) has one of the following possibilities:_
1)_\((0,0,0,0)\), if \({\cal M}_{1}\) and \({\cal M}_{2}\) are both semantically and syntactically rigid;_
2)_\((m,m,n,n)\), if \({\cal M}_{1}\) and \({\cal M}_{2}\) are finite with \(|M_{1}\mathbin{\dot{\cup}}M_{2}|=n+1\) elements and some \({\cal M}_{i}\) is not semantically rigid that is not syntactically rigid with some minimal \(m_{1}\)-elements set \(A_{1}\subset M_{1}\) producing \({\rm del}(A_{1})=M_{1}\) and some minimal \(m_{2}\)-elements set \(A_{2}\subset M_{2}\) producing \({\rm del}(A_{2})=M_{2}\), where \(m=m_{1}+m_{2}\leq n-1\);_
3)_\((0,\nu,0,\infty)\), if \({\cal M}_{1}\sqcup{\cal M}_{2}\) is infinite, \({\cal M}_{1}\) and \({\cal M}_{2}\) are semantically rigid but some of them is not syntactically rigid, with \(1\leq\nu\leq\infty\), \(\nu=\deg_{\rm rig}^{\exists\cdot{\rm synt}}({\cal M}_{1})+\deg_{\rm rig}^{ \exists\cdot{\rm synt}}({\cal M}_{2})\);_
4)_\((\mu,\nu,\infty,\infty)\), if \({\cal M}_{1}\sqcup{\cal M}_{2}\) is infinite, \({\cal M}_{1}\) or \({\cal M}_{2}\) is not semantically rigid, \({\cal M}_{1}\) or \({\cal M}_{2}\) is not syntactically rigid, with \(1\leq\mu\leq\nu\leq\infty\), \(\mu=\deg_{\rm rig}^{\exists\cdot{\rm sem}}({\cal M}_{1})+\deg_{\rm rig}^{ \exists\cdot{\rm sem}}({\cal M}_{2})\), \(\nu=\deg_{\rm rig}^{\exists\cdot{\rm synt}}({\cal M}_{1})+\deg_{\rm rig}^{ \exists\cdot{\rm synt}}({\cal M}_{2})\)._
**Theorem 4.3**: _For any disjoint predicate structures \({\cal M}_{1}\) and \({\cal M}_{2}\), and a set \(A\subseteq M_{1}\cup M_{2}\),_
\[{\rm ind}_{\rm rig}(({\cal M}_{1}\sqcup{\cal M}_{2})/A)=\max\{{\rm ind}_{\rm rig }({\cal M}_{1}/(M_{1}\cap A)),{\rm ind}_{\rm rig}({\cal M}_{2})/(M_{2}\cap A)\}.\]
Proof. By the definition of disjoint union types in \(S^{1}(A)\) are locally realized either in \({\cal M}_{1}\) or in \({\cal M}_{2}\). Moreover, they are forced by their restrictions to \(M_{1}\) or \(M_{2}\). So algebraic types \(p(x)\in S^{1}(A)\) are defined in \({\cal M}_{1}\) or in \({\cal M}_{2}\) by their restrictions to \(M_{1}\cap A\) and to \(M_{2}\cap A\). Now we collect possibilities for cardinalities of sets of realizations of algebraic types in \(S^{1}(M_{1}\cap A)\) and in \(S^{1}(M_{2}\cap A)\). We either choose a maximal natural cardinality obtaining natural \(n={\rm ind}_{\rm rig}(({\cal M}_{1}\sqcup{\cal M}_{2})/A)\) with \(n=\max\{{\rm ind}_{\rm rig}({\cal M}_{1}/(M_{1}\cap A)),{\rm ind}_{\rm rig}({ \cal M}_{2})/(M_{2}\cap A)\}\) or there are no maximal natural cardinality with both \({\rm ind}_{\rm rig}(({\cal M}_{1}\sqcup{\cal M}_{2})/A)=\omega\) and \(\max\{{\rm ind}_{\rm rig}({\cal M}_{1}/(M_{1}\cap A)),{\rm ind}_{\rm rig}({ \cal M}_{2})/(M_{2}\cap A)\}=\omega\). \(\square\)
## 5 Variations of rigidity for compositions of structures
Recall the notions of composition for structures and theories.
**Definition**[11]. Let \({\cal M}\) and \({\cal N}\) be structures of relational languages \(\Sigma_{\cal M}\) and \(\Sigma_{\cal N}\) respectively. We define the _composition_\({\cal M}[{\cal N}]\) of \({\cal M}\) and \({\cal N}\) satisfying the following conditions:
1) \(\Sigma_{{\cal M}[{\cal N}]}=\Sigma_{\cal M}\cup\Sigma_{\cal N}\);
2) \(M[N]=M\times N\), where \(M[N]\), \(M\), \(N\) are universes of \({\cal M}[{\cal N}]\), \({\cal M}\), and \({\cal N}\) respectively;
3) if \(R\in\Sigma_{\cal M}\setminus\Sigma_{\cal N}\), \(\mu(R)=n\), then \(((a_{1},b_{1}),\ldots,(a_{n},b_{n}))\in R_{{\cal M}[{\cal N}]}\) if and only if \((a_{1},\ldots,a_{n})\in R_{\cal M}\);
4) if \(R\in\Sigma_{\cal N}\setminus\Sigma_{\cal M}\), \(\mu(R)=n\), then \(((a_{1},b_{1}),\ldots,(a_{n},b_{n}))\in R_{{\cal M}[{\cal N}]}\) if and only if \(a_{1}=\ldots=a_{n}\) and \((b_{1},\ldots,b_{n})\in R_{{\cal N}}\);
5) if \(R\in\Sigma_{\cal M}\cap\Sigma_{\cal N}\), \(\mu(R)=n\), then \(((a_{1},b_{1}),\ldots,(a_{n},b_{n}))\in R_{{\cal M}[{\cal N}]}\) if and only if \((a_{1},\ldots,a_{n})\in R_{\cal M}\), or \(a_{1}=\ldots=a_{n}\) and \((b_{1},\ldots,b_{n})\in R_{\cal N}\).
The theory \(T={\rm Th}({\cal M}[{\cal N}])\) is called the _composition_\(T_{1}[T_{2}]\) of the theories \(T_{1}={\rm Th}({\cal M})\) and \(T_{2}={\rm Th}({\cal N})\).
By the definition, the composition \({\cal M}[{\cal N}]\) is obtained replacing each element of \({\cal M}\) by a copy of \({\cal N}\).
**Definition**[11]. The composition \({\cal M}[{\cal N}]\) is called \(E\)_-definable_ if \({\cal M}[{\cal N}]\) has an \(\emptyset\)-definable equivalence relation \(E\) whose \(E\)-classes are universes of the copies of \({\cal N}\) forming \({\cal M}[{\cal N}]\).
**Remark 5.1**: It is shown in [11] that \(E\)-definable compositions \({\cal M}[{\cal N}]\) uniquely define theories \({\rm Th}({\cal M}[{\cal N}])\) by theories \({\rm Th}({\cal M})\) and \({\rm Th}({\cal N})\) and types of elements in copies of \({\cal N}\) are defined by types in these copies and types for connections between these copies.
**Proposition 5.2**: _For \(E\)-definable compositions \({\cal M}[{\cal N}]\) the automorphism group \({\rm Aut}({\cal M}[{\cal N}])\) is isomorphic to the wreath product of \({\rm Aut}({\cal M})\) and \({\rm Aut}({\cal N})\):_
\[{\rm Aut}({\cal M}[{\cal N}])\simeq{\rm Aut}({\cal M})\setminus{\rm Aut}({\cal N }).\]
Proof. Since all copies of \({\cal N}\) are isomorphic in \({\cal M}[{\cal N}]\) and form definable \(E\)-classes each automorphism \(f\in{\rm Aut}({\cal M}[{\cal N}])\) is defined both by the action on the set of \(E\)-classes, which corresponds to an automorphism \(g\in{\rm Aut}({\cal M})\), and by the the actions on the \(E\)-classes, which corresponds to an automorphism \(h\) for copies of \({\cal N}\). Therefore \(f\) is situated in the one-to-one correspondence with the pair \((g,h)\) producing a correspondent element of \({\rm Aut}({\cal M})\wr{\rm Aut}({\cal N})\). \(\Box\)
In view of Remark 5.1 and Proposition 5.2 we have the following:
**Theorem 5.3**: _For any \(E\)-definable composition \({\cal M}[{\cal N}]\) the following conditions hold:_
\[{\rm deg}^{\exists\mbox{-}{\rm sem}}_{\rm rig}({\cal M}[{\cal N}])={\rm deg}^{ \exists\mbox{-}{\rm sem}}_{\rm rig}({\cal M}),\]
_if \({\cal N}\) is semantically rigid, and_
\[{\rm deg}^{\exists\mbox{-}{\rm sem}}_{\rm rig}({\cal M}[{\cal N}])=|M|\cdot{ \rm deg}^{\exists\mbox{-}{\rm sem}}_{\rm rig}({\cal N}),\]
_if \({\cal N}\) is not semantically rigid. In particular, \({\rm deg}^{\exists\mbox{-}{\rm sem}}_{\rm rig}({\cal M}[{\cal N}])\) is finite iff \({\rm deg}^{\exists\mbox{-}{\rm sem}}_{\rm rig}({\cal M})\) and \({\cal N}\) are finite, if \({\cal N}\) is semantically rigid, and \({\rm deg}^{\exists\mbox{-}{\rm sem}}_{\rm rig}({\cal N})\) and \({\cal M}\) are finite, if \({\cal N}\) is not semantically rigid._
Proof. If \({\cal N}\) is semantically rigid then it suffices to find possibilities for automorphisms of \({\cal M}\) since in such a case the semantical rigidity of an inessential expansion of \({\cal M}\) implies the semantical rigidity of correspondent inessential expansion of \({\cal M}[{\cal N}]\). Thus, here \(\deg_{\rm rig}^{\exists\mbox{\scriptsize-sem}}({\cal M}[{\cal N}])=\deg_{\rm rig }^{\exists\mbox{\scriptsize-sem}}({\cal M})\). If \({\cal N}\) is not semantically rigid then copies of \({\cal N}\) in \({\cal M}[{\cal N}]\) are automorphically independent, i.e., fixing automorphisms for \({\cal M}[{\cal N}]\) one have to fix all automorphisms for these copies. Since the smallest set fixing automorphisms for \({\cal N}\) contains \(\deg_{\rm rig}^{\exists\mbox{\scriptsize-sem}}({\cal N})\), we have at least and minimally at most \(|M|\cdot\deg_{\rm rig}^{\exists\mbox{\scriptsize-sem}}({\cal N})\) elements to fix automorphisms for \({\cal M}[{\cal N}]\) implying \(\deg_{\rm rig}^{\exists\mbox{\scriptsize-sem}}({\cal M}[{\cal N}])=|M|\cdot \deg_{\rm rig}^{\exists\mbox{\scriptsize-sem}}({\cal N})\). \(\Box\)
**Theorem 5.4**: _For any \(E\)-definable composition \({\cal M}[{\cal N}]\) the following conditions hold:_
\[\deg_{\rm rig}^{\exists\mbox{\scriptsize-synt}}({\cal M}[{\cal N}])=\deg_{ \rm rig}^{\exists\mbox{\scriptsize-synt}}({\cal M}),\]
_if \(N={\rm dcl}(\emptyset)\), and_
\[\deg_{\rm rig}^{\exists\mbox{\scriptsize-synt}}({\cal M}[{\cal N}])=|M|\cdot \deg_{\rm rig}^{\exists\mbox{\scriptsize-synt}}({\cal N}),\]
_if \(N\neq{\rm dcl}(\emptyset)\). In particular, \(\deg_{\rm rig}^{\exists\mbox{\scriptsize-synt}}({\cal M}[{\cal N}])\) is finite iff \(\deg_{\rm rig}^{\exists\mbox{\scriptsize-synt}}({\cal M})\) and \({\cal N}\) are finite, for \(N={\rm dcl}(\emptyset)\), and \(\deg_{\rm rig}^{\exists\mbox{\scriptsize-synt}}({\cal N})\) and \({\cal M}\) are finite, for \(N\neq{\rm dcl}(\emptyset)\)._
Proof repeats the proof of Theorem 5.3 replacing automorphism groups by definable closures. \(\Box\)
Proposition 2.1, (1), (2) and Theorems 5.3, 5.4 immediately imply:
**Corollary 5.5**: _For any \(E\)-definable composition \({\cal M}[{\cal N}]\) and \(s\in\{{\rm sem},{\rm synt}\}\) the following conditions are equivalent:_
(1)_\(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal M}[{\cal N}])=0\);_
(2)_\(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal M})=0\) and \(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal N})=0\)._
**Theorem 5.6**: _For any \(s\in\{{\rm sem},{\rm synt}\}\) and \(E\)-definable composition \({\cal M}[{\cal N}]\) with_
\[\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal M}[{\cal N}])>0\]
_the following conditions are equivalent:_
(1)_\(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal M}[{\cal N}])\) is finite;_
(2) _one of the following conditions hold:_
i)_\({\cal M}\) and \({\cal N}\) are finite, i.e. \({\cal M}[{\cal N}]\) is finite;_
ii)_\({\cal M}\) is infinite with \(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal M})=1\) and \(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal N})=0\);_
iii)_\({\cal M}\) is infinite and \({\cal N}\) is finite with \(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal M})\in\omega\setminus\{0,1\}\) and \(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal N})=0\);_
iv)_\({\cal M}\) is a singleton and \({\cal N}\) is infinite with \(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal N})\in\omega\setminus\{0\}\)._
_Here there are the following possibilities:_
a)_\(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal M}[{\cal N}])=(\deg_{\rm rig }^{\forall\mbox{\scriptsize-s}}({\cal M})-1)\cdot|N|+1\), if the case_ i) _or_ iii) _is satisfied with \(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal N})=0\);_
b)_\(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal M}[{\cal N}])=(|M|-1)\cdot|N|+ \deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal N}),\) if the case_ i) _is satisfied with \(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal N})>0\);_
c)_\(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal M}[{\cal N}])=1,\) if the case_ ii) _is satisfied;_
d)_\(\deg_{\rm rig}^{\forall\mbox{\scriptsize-s}}({\cal M}[{\cal N}])=\deg_{\rm rig }^{\forall\mbox{\scriptsize-s}}({\cal N}),\) if the case_ iv) _is satisfied._
Proof. At first we notice that \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M})>0\) or \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})>0\) in view of Corollary 5.5.
Now by the definition \(\mathcal{M}[\mathcal{N}]\) is finite iff \(\mathcal{M}\) and \(\mathcal{N}\) are finite. In such a case we have the following possibilities:
\(\bullet\)\(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M}[\mathcal{N}])=(\deg_{ \mathrm{rig}}^{\forall\text{-}s}(\mathcal{M})-1)\cdot|N|+1\), if \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})=0\), since the rigidity of \(\mathcal{M}[\mathcal{N}]\) can be achieved here taking all elements in \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M})-1\) copies of \(\mathcal{N}\) with one additional element witnessing the degree \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M})\) defining rigidly all \(E\)-classes for copies of \(\mathcal{N}\) which are rigid by \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})=0\); it corresponds the case i) with a);
\(\bullet\)\(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M}[\mathcal{N}])=(|M|-1)\cdot|N|+ \deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})\), if \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})>0\), since the rigidity of \(\mathcal{M}[\mathcal{N}]\) can be achieved here taking all elements in \((|M|-1)\) copies of \(\mathcal{N}\) with \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})\) additional elements in the last copy of \(\mathcal{N}\); it corresponds the case i) with b).
(1) \(\Rightarrow\) (2). Let \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M}[\mathcal{N}])>0\) is finite. We can assume that \(\mathcal{M}\) is infinite or \(\mathcal{N}\) is infinite. We have the following possibilities:
\(\bullet\)\(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M})=1\) and \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})=0\), that is any element of \(\mathcal{M}[\mathcal{N}]\) rigidly defines its \(E\)-class and all \(E\)-classes, too, by \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M})=1\), such that all copies of \(\mathcal{N}\) in these \(E\)-classes are rigid by \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})=0\); it corresponds the case ii) with c);
\(\bullet\)\(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M})\in\omega\setminus\{0,1\}\) and \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})=0\); here we require that \(\mathcal{N}\) is finite, since otherwise we can take arbitrary many elements in some \(E\)-classes which do not imply the rigidity in view of \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M})\geq 2\); here we have the case iii) with a).
\(\bullet\)\(\mathcal{M}\) is a singleton and \(\mathcal{N}\) is infinite with \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})\in\omega\setminus\{0\}\), here \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M})=0\), \(\mathcal{M}[\mathcal{N}]\simeq\mathcal{N}\) and therefore \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M}[\mathcal{N}])=\deg_{ \mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})\).
If \(\mathcal{N}\) is infinite with \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{N})\in\omega\setminus\{0\}\) and \(|\mathcal{M}|\geq 2\) then we can not obtain the rigidity for all \(E\)-classes taking arbitrary many elements in some \(E\)-classes that contradicts the condition \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M}[\mathcal{N}])\in\omega\).
(2) \(\Rightarrow\) (1). Since each finite structure has finite degrees of rigidity it suffices to show that \(\deg_{\mathrm{rig}}^{\forall\text{-}s}(\mathcal{M}[\mathcal{N}])\) is finite if \(\mathcal{M}\) is infinite or \(\mathcal{N}\) is infinite with the conditions ii), iii), iv). We observe that ii) implies c), iii) implies a), and iv) implies d) confirming a finite value of that degree. \(\Box\)
## 6 Conclusion
We studied possibilities for the degrees and indexes of rigidity, both for semantical and syntactical cases. Links of these characteristics and their possible values are described. We studied these values and dynamics for structures in some languages, for some natural operations including disjoint unions and compositions of structures. A series of examples illustrates possibilities of these characteristics. It would be interesting to continue this research describing possible values of degrees and indexes for natural classes of structures and their theories.
|
2305.15020 | An Efficient Multilingual Language Model Compression through Vocabulary
Trimming | Multilingual language model (LM) have become a powerful tool in NLP
especially for non-English languages. Nevertheless, model parameters of
multilingual LMs remain large due to the larger embedding matrix of the
vocabulary covering tokens in different languages. On the contrary, monolingual
LMs can be trained in a target language with the language-specific vocabulary
only, but this requires a large budget and availability of reliable corpora to
achieve a high-quality LM from scratch. In this paper, we propose
vocabulary-trimming (VT), a method to reduce a multilingual LM vocabulary to a
target language by deleting irrelevant tokens from its vocabulary. In theory,
VT can compress any existing multilingual LM to build monolingual LMs in any
language covered by the multilingual LM. In our experiments, we show that VT
can retain the original performance of the multilingual LM, while being smaller
in size (in general around 50% of the original vocabulary size is enough) than
the original multilingual LM. The evaluation is performed over four NLP tasks
(two generative and two classification tasks) among four widely used
multilingual LMs in seven languages. Finally, we show that this methodology can
keep the best of both monolingual and multilingual worlds by keeping a small
size as monolingual models without the need for specifically retraining them,
and even limiting potentially harmful social biases. | Asahi Ushio, Yi Zhou, Jose Camacho-Collados | 2023-05-24T11:00:33Z | http://arxiv.org/abs/2305.15020v3 | # An Efficient Multilingual Language Model Compression
###### Abstract
Multilingual language model (LM) have become a powerful tool in NLP especially for non-English languages. Nevertheless, model parameters of multilingual LMs remain large due to the larger embedding matrix of the vocabulary covering tokens in different languages. On the contrary, monolingual LMs can be trained in a target language with the language-specific vocabulary only, but this requires a large budget and availability of reliable corpora to achieve a high-quality LM from scratch. In this paper, we propose _vocabulary-trimming_ (VT), a method to reduce a multilingual LM vocabulary to a target language by deleting irrelevant tokens from its vocabulary. In theory, VT can compress any existing multilingual LM to build monolingual LMs in any language covered by the multilingual LM. In our experiments, we show that VT can retain the original performance of the multilingual LM, while being smaller in size (in general around 50% of the original vocabulary size is enough) than the original multilingual LM. The evaluation is performed over four NLP tasks (two generative and two classification tasks) among four widely used multilingual LMs in seven languages. Finally, we show that this methodology can keep the best of both monolingual and multilingual worlds by keeping a small size as monolingual models without the need for specifically retraining them, and even limiting potentially harmful social biases.
## 1 Introduction
Multilingual language model (LM) pre-training Devlin et al. (2019); Conneau et al. (2019); Liu et al. (2020); Xue et al. (2021) has been shown to be an efficient mechanism to store information from many languages into a single model, without the need for training multiple language-specific models. Moreover, is has been proven reliable to cross-lingual tasks Pires et al. (2019); Conneau and Lample (2019), and in most settings, it can provide a competitive performance, even similar to its monolingual counterparts Goyal et al. (2021), while being generally less affected by culturally-dependant biases Ahn and Oh (2021). As with the monolingual models, it can be used for few/zero-shot learning Scao et al. (2022) by increasing the model size at scale and, more frequently, can be specialized to different tasks by fine-tuning to specific data. In practice, there are a few practical issues when training multilingual LM such as the curse of multilinguality Conneau et al. (2019); Pfeiffer et al. (2022), a trade-off between the number of languages and an individual performance in a single language, or the multilingual vocabulary construction, where a careful design Chung et al. (2020); Zheng et al. (2021); Liang et al. (2023) can lead a better generalization.
Besides such generalization concerns, multilingual LMs usually consist of larger parameters than their monolingual counterparts due to the need for a large multilingual vocabulary covering multiple languages. This becomes an important issue in practice when the resources to host models are limited. For instance, while using the same configuration (i.e., same number of layers and hidden units), the parameter size of T5SMALL Raffel et al. (2020) and mT5SMALL Xue et al. (2021) are 140M and 300M, respectively. This is only due to their difference in vocabulary size, with T5 being 50k and mT5, 250k. In fact, the embedding matrix stem
Figure 1: The ratio of the embedding matrix to the number of entire model parameters for each of multilingual LMs and the embedding matrix after VT with top-60 vocabulary.
ming from the LM vocabulary can occupy a large portion of the parameter space. For instance, the ratio of the embedding matrix to the full model's parameter size in multilingual LMs can be higher than 80% in LMs such as BART Lewis et al. (2020) and T5.
In this paper, we propose a simple _vocabulary trimming (VT)_ method to remove tokens from the vocabulary of multilingual LMs that may be irrelevant to the target language. This is achieved by automatically identifying language-specific tokens from an underlying text corpus. Figure 1 shows a visual parameter breakdown of three multilingual LMs and the effect of VT in their overall embedding matrix and parameter size. We consider two VT strategies of pre-FT VT (VT before fine-tuning) and post-FT VT (VT after fine-tuning) and analyse them by varying the final vocabulary size. We conduct experiments on two generation tasks, question answering (QA) and question generation (QG), and two classification tasks, sentiment analysis and natural language inference (NLI), across seven different languages. The experimental results show that both pre and post fine-tuning VT can reduce the model size while retaining the original performance in generation tasks (QA and QG), and particularly in classification tasks (sentiment and NLI) where the results are close to being identical despite the significant reduction in vocabulary size. By limiting the number of vocabulary at VT in generation tasks, the original performance can be achieved with 35% of the full model parameters for all the languages, and some languages (English/French/Italian) can be reduced even to 16% of the full model parameters. Moreover, NLI and sentiment analysis models can be reduced to 39%/33% of the original model parameter respectively without a major decrease in the performance.
Finally, even though pre-trained LMs have reported impressive performance on various NLP downstream tasks Kenton and Toutanova (2019); Liu et al. (2019); Conneau et al. (2019), such LMs also demonstrate worrying levels of social biases in certain situations May et al. (2019); Kurita et al. (2019); Kaneko and Bollegala (2021). One natural question that arises is whether VT can have an influence on the bias level in multilingual LMs, including fine-tuned models. For this purpose, we evaluate social bias in multilingual LMs after applying VT with different settings and compare it against its monolingual counterpart. Experimental results show that the monolingual LM tends to contain more bias than its multilingual versions. Moreover, compared to the original multilingual LM, the bias level has no significant change after applying VT. These results suggest that a monolingual LM can be induced by applying VT to its corresponding multilingual LM, thereby obtaining a less biased monolingual LM compared to its original monolingual counterpart.
## 2 Related Work
Several studies have explored the possibility to modify or adapt the vocabulary of LMs. For instance, Artetxe et al. (2020) and Marchisio et al. (2022) adapted a mono-lingual LM into another language by learning the embedding matrix on the new language, while fixing the other weights. Similarly, Wang et al. (2019) augmented the vocabulary of a multilingual LM to new languages with multilingual word alignment Lample et al. (2018). Zheng et al. (2021) proposed to evaluate the ability of a vocabulary to represent a particular language, and Chung et al. (2020) proposed a multilingual vocabulary construction that balances the trade-off between optimizing for cross-lingual sub-word sharing and the need for robust representation of individual languages. XLM-V Liang et al. (2023) combines the idea of Zheng et al. (2021) and Chung et al. (2020) to efficiently enlarge the vocabulary size along with the model size scaling. Ostendorff and Rehm (2023) used a multi-stage fine-tuning to obtain a LM in the target language from other LM in the source language. These prior works modify
Figure 2: An illustration of vocabulary trimming for Korean and French.
existing mono/multi-lingual LMs to include new languages, i.e. augmenting the multilinguality of the LMs. In contrast, our study focuses on compressing multilingual LMs into the target language to effectively achieve smaller monolingual LMs, i.e. reducing the multilingual representation of the LMs while retaining the capability in a specific target language.
The work of Abdaoui et al. (2020) is the most relevant to our study as, to the best of our knowledge, they introduced the idea of VT for the first time. However, their analysis is limited to NLI with pre-fine-tuning VT with mBERT Devlin et al. (2019) only, as well as a fixed vocabulary size after VT. In contrast, our study compares two VT strategies, before and after fine-tuning, and show how this latter strategy, not considered in Abdaoui et al. (2020), can be a more effective compression technique in some settings. Furthermore, we extend the experiments to generation tasks as well as classification tasks with more recent LMs such as mBART and mT5, and provide an exhaustive analysis on the effect of VT.
## 3 Vocabulary Trimming
To perform vocabulary trimming (VT), we first need a multilingual LM as an input. The idea is to tailor model to a particular target language \(l\), which in principle belong to the same set of languages used to trained the input multilingual LM.1 For the target language \(l\), VT first identifies language-specific tokens on a language-specific corpus \(\mathcal{C}_{l}\), and remove all the tokens along with their embeddings except for those appeared in \(\mathcal{C}_{l}\) as described in Figure 2. In our analysis (SS 5), we also consider to keep the top-\(n\) most frequent tokens in \(\mathcal{C}_{l}\) to further reduce the model size by removing less frequent tokens. We consider two VT strategies:
Footnote 1: In theory, vocabulary trimming could be applied to any language model, even monolingual, but this analysis is out of the scope of this paper.
1. Vocabulary trimming before fine-tuning _(pre-FT VT)_.
2. Vocabulary trimming after fine-tuning _(post-FT VT)_.
The difference between these two strategies is whether to perform VT before or after fine-tuning, as shown in Figure 3. Both VTs have advantages and drawbacks: while pre-FT VT can reduce the time of fine-tuning as the trimmed LM is smaller than the original LM, post-FT VT only need a fine-tuned multilingual LM - this way, post-FT VT can be used as a postprocessing step and no additional language-specific training is required.
Finally, we release a simple LM vocabulary trimming starting package to apply our proposed technique to any input multilingual transformer-based LM, along with all the models and code needed to reproduce our experiments at [https://github.com/asahi417/lm-vocab-trimmer](https://github.com/asahi417/lm-vocab-trimmer).
## 4 Evaluation
In this section, we evaluate our VT methodology in a wide range of tasks, datasets and languages. In the following we describe our experiment setting and present our experimental results.
### Experimental Setting
Tasks and datasets.In order to test the efficacy of VT, we consider two generation tasks, question answering (QA) and question generation (QG), and two classification tasks, sentiment analysis and natural language inference (NLI). As the datasets for QA, we use SQuAD Rajpurkar et al. (2016) (English), Spanish SQuAD Casimiro Pio et al. (2019) (Spanish), FQuAD d'Hoffschmidt et al. (2020) (French), Italian SQuAD Croce et al. (2018) (Italian), JAQuAD So et al. (2022) (Japanese), KorquAD Lim et al. (2019) (Korean), and SberQuAd Efimov et al. (2020) (Russian). For QG, we use the same datasets adapted for QG via QG-Bench Ushio et al. (2022). For sentiment analysis, we use Twitter-based datasets for English Rosenthal et al. (2017), Arabic Rosenthal et al. (2017), French Benamara et al. (2017), Italian Barbieri et al. (2016), German Cieliebak et al. (2017), Portuguese Brum and Nunes (2017), and Spanish Diaz-Galiano et al. (2018) from UMSAB (Unified Multilingual Sentiment Analysis Benchmark) Barbieri et al. (2022). All the sentiment analysis datasets contain three labels: positive, neutral and negative. For NLI, we use XNLI Conneau et al. (2018), a multilingual
Figure 3: Comparisons of _Pre-FT_ vs _Post-FT_ VT in an example of fine-tuning on a task in French.
NLI dataset, including English, French, German, Spanish and Arabic, which are the languages included in the sentiment analysis experiment. We fine-tune LMs on the training sets of each language, which were translated automatically from English and released in the original paper.
Evaluation metrics.For the evaluation, we use the following standard metrics: answer span F1 score (Ans-F1) and exact match (EM) are used for QA; METEOR (MTR) and BERTScore (BS) for QG, which have been shown to be the most correlated metrics to human judgement Ushio et al. (2022); macro-F1 score for sentiment following Barbieri et al. (2022); and accuracy for NLI. As the language-specific corpus \(\mathcal{C}_{l}\) to extract vocabulary counts for VT, we use mC4 Xue et al. (2021), one of the largest public multilingual corpora.
Base language models.As the base LMs, given computational constraints we chose the smallest mT52 and mBART3 to fine-tune on QA and QG, and XLM-R4 and XLM-V Liang et al. (2023)5 for sentiment analysis and NLI. All these models have a vocabulary size of 250K, except for XLM-V that has vocabulary size of 901K subword tokens. For our experiments, we compare the results of pre/post-FT VT against vanilla LM fine-tuning without VT, which we refer as _No-Trim_.
Footnote 2: [https://huggingface.co/google/mt5-small](https://huggingface.co/google/mt5-small)
Footnote 3: [https://huggingface.co/facebook/mabert-large-cc25](https://huggingface.co/facebook/mabert-large-cc25)
Footnote 4: [https://huggingface.co/xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
Footnote 5: [https://huggingface.co/facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base)
Fine-tuning.For model fine-tuning, we use lmqg Ushio et al. (2023) for QA/QG, and Ray Tune6 for sentiment analysis. In both cases, we use the default search space for hyperparameter search. For NLI, we follow the same hyperparameters used in Liang et al. (2023). All the resulting models can be found at our project page7.
Footnote 6: [https://docs.ray.io/en/latest/tune/index.html](https://docs.ray.io/en/latest/tune/index.html)
Footnote 7: [https://github.com/asahi417/lm-vocab-trimmer/blob/main/model_card.md](https://github.com/asahi417/lm-vocab-trimmer/blob/main/model_card.md)
### Results
We report the experimental results of pre/post-FT VT for generation tasks (section 4.2.1), and classification tasks (section 4.2.2).
#### 4.2.1 Generation Tasks: QA & QG
Table 1 shows the overall results on QA and QG. The results confirm that both of pre/post-FT VT can maintain the original performance in most cases, while being smaller than the original models by significantly reducing the vocabulary size. First, post-FT VT achieves at least the same performance as the vanilla fine-tuning for all the languages for both LMs in QA and QG, except for a few cases such as mBART QA in Korean and mBART QG in Russian, although the decrease is no more than 0.5%. Meanwhile, pre-FT VT outperforms its vanilla fine-tuning model with a relatively important margin in some cases, such as mBART French QA and mT5 Spanish QA. In contrast, there are a few models
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multicolumn{2}{c}{\multirow{2}{*}{Lang.}} & \multirow{2}{*}{Vocabulary} & \multirow{2}{*}{Parameters} & \multicolumn{3}{c}{QA} & \multicolumn{3}{c}{QG} \\ & & & No-Trim & Post-FT & Pre-FT & No-Trim & Post-FT & Pre-FT \\ \hline \multirow{8}{*}{\begin{tabular}{c} EN \\ ES \\ \end{tabular} } & 209K (83.6\%) & 258M (86.1\%) & 70.1 / 55.5 & **70.2** / 55.5 & 70.1 / **56.4** & 23.8 / 90.0 & 23.8 / 90.0 & **24.0** / **90.1** \\ & ES & 131K (52.4\%) & 178M (59.4\%) & 55.9 / 34.7 & 55.9 / 34.7 & **57.8** / **37.5** & **22.7** / 84.1 & **22.7** / 84.1 & 22.3 / **84.2** \\ & FR & 131K (52.4\%) & 178M (59.4\%) & **50.0** / **30.9** & **50.0** / **30.9** & 48.6 / 29.4 & **17.5** / **80.7** & **17.5** / **80.7** & 16.1 / 79.2 \\ & IT & 111K (44.4\%) & 157M (52.6\%) & 53.2 / 37.6 & **53.4** / **37.8** & 51.5 / 36.0 & **17.6** / **80.8** & **17.6** / **80.8** & 17.5 / 80.6 \\ & JA & 125K (50.0\%) & 172M (57.6\%) & **65.7** / **65.7** / **65.7** & 63.0 / 63.0 & **29.0** / 80.9 & **29.0** / 80.9 & 28.6 / **81.0** \\ & KO & 73K (29.2\%) & 119M (39.7\%) & **77.1** / **70.6** & **77.1** / 70.5 & 74.5 / 67.3 & 27.5 / 82.9 & 27.5 / 83.0 & **28.0** / **83.7** \\ & RU & 147K (58.8\%) & 195M (65.1\%) & 73.7 / 51.4 & 73.8 / 51.4 & **74.8** / **53.4** & 26.4 / 84.3 & 26.4 / 84.3 & **28.9** / **86.4** \\ \hline \multirow{8}{*}{
\begin{tabular}{c} EN \\ ES \\ \end{tabular} } & 173K (69.2\%) & 532M (87.1\%) & 76.9 / 62.6 & 77.0 / 62.7 & **78.4** / **65.7** & **25.1** / **90.4** & **25.1** / **90.4** & 24.7 / 90.1 \\ & ES & 87K (34.8\%) & 443M (72.7\%) & 64.1 / 42.2 & **64.5** / 42.8 & 63.7 / **43.9** & **22.9** / 83.6 & 22.8 / 83.6 & 22.8 / **84.0** \\ \cline{1-1} & FR & 85K (34.0\%) & 442M (72.5\%) & 60.4 / 39.3 & 61.0 / 39.8 & **66.4** / **45.1** & **19.8** / 81.7** & **19.8** / 81.7** & 18.4 / 79.7 \\ \cline{1-1} & IT & 67K (26.8\%) & 424M (69.5\%) & 64.7 / 50.0 & 64.9 / **50.2** & **65.8** / 49.8 & 18.0 / 80.6 & 17.9 / 80.7 & **18.9** / **81.1** \\ \cline{1-1} & JA & 77K (30.8\%) & 434M (71.1\%) & 68.2 / 68.2 & 68.2 / 68.2 & **70.6** / **70.6** & **30.0** / **82.3** & 29.7 / 82.1 & 29.1 / 80.8 \\ \cline{1-1} & KO & 46K (18.4\%) & 402M (65.9\%) & 79.3 / 72.3 & 79.2 / 72.1 & **83.2** / **77.3** & 30.2 / 83.9 & **30.3** / **84.0** & 30.2 / 83.8 \\ \cline{1-1} & RU & 99K (39.6\%) & 456M (74.8\%) & 78.7 / 58.0 & **79.0** / **58.2** & 75.5 / 49.9 & **29.3** / **87.2** & 28.7 / 87.0 & 28.3 / 86.7 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on QA (Ans-F1/EM) and QG (MTR/BS), including both the vocabulary size and the number of parameters after VT with the ratio to the original model (%). The best results in each LM and language are in bold characters. Note that the parameter size of the original mT5 and mBART (No-Trim) is 300M and 611M, respectively, both with a vocabulary size of 250K.
where pre-FT VT degrades the performance of the original model such as mT5 QA in Korean (2.6% decrease in Ans-F1) or mBART QA in Russian (3.2% decrease in Ans-F1).
Since we keep all the vocabulary that appeared in the language-specific corpus \(\mathcal{C}_{l}\), the percentage of reduced parameter depends on the language, and generally VT can reduce the model size for Asian (Japanese/Korean) and European (Spanish/French/Italian) languages efficiently (50% for mT5 and 70% for mBART), but it remains high in other languages (English/Russian).
#### 4.2.2 Classification Tasks: Sentiment & NLI
Table 2 shows the results on sentiment analysis and NLI. In this case, post-FT VT can robustly preserve the original performance of the original No-Trim baseline in both tasks for XLM-R and XLM-V, while being no more than 40% and 60% in vocabulary and overall parameter size, respectively, of the original XLM-V and XLM-R models in all the non-English datasets. XLM-V PT sentiment model is the only post-FT VT where a slight decrease can be observed (0.1%). On the other hand, the accuracy of Pre-FT VT appears to be sensitive to the language and task, where it improves the performance in some languages such as Italian (XLM-R and XLM-V achieve 7.9% and 3.8% increase for sentiment analysis), but it decreases the performance with non-trivial margin in other languages such as Arabic, where XLM-R decreases 5% for sentiment analysis and 2% for XNLI. Since XLM-V has a larger vocabulary size, the percentage of reduced parameters at VT is more prominent in XLM-V, as seen in Arabic (20.2%) and Portuguese (28.9%) for example.
## 5 Vocabulary Size Analysis
In our main experimental results (SS 4.2), we keep all the unique tokens that appeared in the corpus, which results in a low compression ratio of some languages such as English and Russian. In this analysis, we constrain the number of vocabulary and choose the top-\(n\) vocabulary at VT in terms of the frequency in the corpus (see SS 3). For QA and QG, we compare mT5SMALL results with \(n\) from [5K, 10K, 15K, 30K, 60K, 90K, 120K], which correspond to an overall parameter size of [49M, 54M, 59M, 74M, 105M, 136M, 166M], respectively. For sentiment analysis and NLI, we experiment with XLM-RBASE with \(n\) from [5K, 10K, 15K, 30K, 60K], which correspond to [89M, 93M, 97M, 109M, 132M] of parameter size, respectively.
### Generation Tasks: QA & QG
Figure 4 shows the results of mT5 on QA and QG.8 Noticeably, post-FT VT can reduce the vocabulary size to 60K for both QA and QG in all the languages with a trivial gap (0.3% decrease of EM in Russian QA and 0.1% decrease of BS in French QG), and that is **35%** of the original mT5 in the parameter size. Furthermore, post-FT VT can further
\begin{table}
\begin{tabular}{l l l c c c c c c c} \hline \hline & Lang. & Vocabulary & Parameter & \multicolumn{4}{c}{Sentiment} & \multicolumn{2}{c}{NLI} \\ & & & & No-Trim & Post-FT & Pre-FT & No-Trim & Post-FT & Pre-FT \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & AR & 49K (19.6\%) & 124M (44.7\%) & **66.3** & **66.3** & 60.9 & **75.7** & **75.7** & 73.8 \\ & DE & 91K (36.4\%) & 156M (56.3\%) & 73.2 & 73.3 & **73.5** & **79.9** & **79.9** & 78.3 \\ & EN & 173K (69.2\%) & 219M (78.7\%) & 68.4 & **68.5** & 67.9 & **84.6** & **84.6** & 70.6 \\ & ES & 87K (34.8\%) & 153M (55.0\%) & **69.0** & **69.0** & 65.0 & **79.8** & **79.8** & 67.2 \\ & FR & 85K (34.0\%) & 151M (54.6\%) & 71.8 & 71.8 & **72.1** & **80.1** & **80.1** & 79.6 \\ & IT & 67K (26.8\%) & 138M (49.7\%) & 62.9 & 62.9 & **70.8** & - & - & - \\ & PT & 66K (26.4\%) & 137M (49.3\%) & 70.7 & **70.8** & 70.2 & - & - & - \\ \hline \multirow{8}{*}{
\begin{tabular}{} \end{tabular} } & AR & 92K (11.8\%) & 157M (20.2\%) & 59.8 & 59.8 & **64.7** & 75.5 & 75.6 & **76.1** \\ & DE & 239K (30.7\%) & 269M (34.7\%) & **73.5** & **73.5** & 73.0 & 78.9 & 78.9 & **79.0** \\ \cline{1-1} & EN & 484K (62.2\%) & 458M (58.9\%) & **63.9** & **63.9** & 61.3 & 84.4 & 84.4 & **84.5** \\ \cline{1-1} & ES & 243K (31.2\%) & 279M (35.1\%) & 60.7 & 60.7 & **66.6** & **80.7** & **80.7** & 80.6 \\ \cline{1-1} & FR & 218K (28.0\%) & 253M (32.6\%) & **68.8** & **68.8** & 59.5 & 78.6 & 78.6 & **79.0** \\ \cline{1-1} & IT & 184K (23.7\%) & 227M (29.3\%) & 70.2 & 70.2 & **74.2** & - & - & - \\ \cline{1-1} & PT & 181K (23.3\%) & 225M (28.9\%) & **66.6** & 66.5 & 52.8 & - & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of sentiment analysis (macro F1) and XNLI (accuracy) including both the vocabulary size and the number of parameters after VT with the ratio to the original model (%). The best results in each LM and language are in bold characters. Note that the overall parameter size of the original XLM-R and XLM-V (No-Trim) is 278M and 778M, respectively, with the vocabulary size being 250K and 901K vocabulary in each case.
reduce the vocabulary to 5K tokens with no more than 0.4% decrease in each metric for both QA and QG in English, French, and Italian, which is **16%** of the original mT5 in the parameter size. Meanwhile, pre-FT VT outperforms the No-Trim result in all the languages in QA, and the majority of the languages in QG (English, Italian, Korean, and Russian), but the result is sensitive to the choice of \(n\). For example, Japanese/Korean QA and Russian QG with pre-FT VT for top-5K (16% of the original mT5) outperforms No-Trim as well as post-FT VT, but Japanese QG with pre-FT VT is worse in any choice of \(n\) on contrary. This larger variation of results may also be due to the parameter size space, as the optimal parameters for the original multilingual LM (which is the one trained for post-FT VT) may differ. We leave this extended analysis for future work.
### Classification Tasks: Sentiment & NLI
Figure 5 and Figure 6 show the results of XLM-R on sentiment and NLI.9. In NLI, we can see that post/pre-FT VT both can reduce the vocabulary to 30K (**39%** of the original XLM-R in the parameter size) without any decrease except 0.3% of pre-FT VT for German, and there are no decrease more than 0.4% even with top-15K of post-FT VT. In sentiment analysis, pre-FT VT with top-10K (**33%** of the original XLM-R in the parameter size) can re
Figure 4: The results of QA (METEOR) and QA (Ans-F1) for mT5 with pre/post-FT VT for different vocabulary sizes compared to the No-Trim result.
Figure 5: The result of sentiment analysis (macro F1) of XLM-R with pre/post-FT VT for different vocabulary sizes compared to the No-Trim result.
Figure 6: The result of NLI (accuracy) of XLM-R with pre/post-FT VT for different vocabulary sizes compared to No-Trim result.
tain the accuracy of the No-Trim baseline in French and Italian. Moreover, post-FT VT with 30K can retain the original F1 score without a major drop in sentiment analysis, yet the decrease in F1 score is slightly more prominent than NLI (1.1% in Arabic sentiment analysis).
The sentiment analysis datasets are collected from Twitter, so one dataset in a single language can contain tokens from other languages (hashtags or named-entities, or even code-switching). In contrast, XNLI translates English NLI into other languages, so there is less chance for a dataset to contain tokens from the other languages. This can explain the effectiveness of top-\(n\) VT in NLI compared with sentiment analysis, as smaller values of \(n\) should result in a vocabulary with fewer tokens from the other languages, which limits the ability of the models to handle foreign tokens.
## 6 Monolingual vs. Multilingual LMs: The Case of Social Bias
There has been extensive literature in NLP comparing monolingual and multilingual LMs Muller et al. (2021); Goyal et al. (2021). As for the performance, there is no clear consensus on which type is better for certain languages, tasks or settings. However, there are other important factors that play a role in this comparison. First, monolingual models tend to have a smaller vocabulary size, which makes them more practical. In contrast, a single multilingual model can be used for a large number of languages. Moreover, multilingual LMs are less prone to capture and carry cultural- or language-dependant biases. This is due to the combination of languages and cultures into a single model, which makes it less biased to specific cultures Liang et al. (2020); Ahn and Oh (2021). Prior works have shown that different types of bias consistently appears in language-specific models Nadeem et al. (2021); Nangia et al. (2020); Blodgett et al. (2021); Dhamala et al. (2021); Kaneko et al. (2022); Zhou et al. (2022). While the comparison of monolingual and multilingual LMs is not the main focus of this paper, this analysis is certainly relevant. Trimming the vocabulary of a multilingual model essentially makes the model smaller, and therefore alleviates one of the main drawbacks of using multilingual language models on language-specific tasks, which is its larger size. On top of that, this strategy enables the usage of monolingual models with potentially less social bias. In the following, we present a comparison of monolingual and multilingual LMs (both trimmed and not trimmed) in terms of social bias and general performance.
### Experimental setting
Social bias datasets.To study the effect of VT on social bias existing in pre-trained LMs, we first conduct experiments on two commonly used social bias evaluation datasets for masked LMs: StereoSet (SS; Nadeem et al. (2021)10 and crowd-sourced stereotype pairs benchmark CP; Nangia et al. (2020)11. SS consists of associative contexts covering four types of social biases: race, gender, religion, and profession; while CP is crowd-sourced and annotated by workers in the United States, which contains nine types of social biases: race, gender, sexual orientation, religion, age, nationality, disability, physical appearance, and socioeconomic status. In order to further investigate the impact of pre/post-FT VT on LMs, we trim and fine-tune models on sentiment analysis with different orders and evaluate the social bias in such models on the Equity Evaluation Corpus (EEC; Kiritchenko and Mohammad (2018)12 considering two bias types: gender and race. The EEC dataset is proposed with the aim to examine social bias for sentiment analysis systems.
Footnote 10: [https://github.com/moinnadeem/StereoSet](https://github.com/moinnadeem/StereoSet)
Footnote 11: [https://github.com/nyu-mll/crows-pairs](https://github.com/nyu-mll/crows-pairs)
Footnote 12: [https://saifmohammad.com/WebPages/Biases-SA.html](https://saifmohammad.com/WebPages/Biases-SA.html)
Evaluation metrics.In order to study social bias in the aforementioned models, we compare the pseudo-likelihood scores returned by each model for stereotypical and anti-stereotypical sentences using AULA (All Unmasked Likelihood with Attention weights) Kaneko and Bollegala (2022)13. AULA has been shown to be robust against the frequency biases of the masked tokens and provides a more reliable assessment in contrast to alternative metrics when evaluating social biases in masked language models (MLMs). Given a sentence pair in the test dataset: "My _mom_ spent all day cooking for Thanksgiving" vs. "My _dad_ spent all day cooking for Thanksgiving", the first sentence is considered as stereotypical while the second one is anti-stereotypical. AULA computes the percentage of stereotypical sentences preferred by the MLM over anti-stereotypical ones as the bias
score. An MLM is considered to be unfairly biased if it returns higher pseudo-loglikelihood scores for stereotypical sentences than the corresponding anti-stereotypical sentences. The AULA score falls within the range of [0,100] and an unbiased model would return a bias score close to 50. On the other hand, a bias score greater than or less than 50 indicates the bias direction towards the stereotype or anti-stereotype, respectively. Since the original AULA is not fitted to evaluate fine-tuned models, we adapt AULA to the EEC dataset obtain the bias score for the LMs fine-tuned on sentiment analysis, and denote this metric as EEC-AULA. Specifically, given a model that predicts sentiment labels (e.g., positive, neutral, negative) to sentences, we consider the percentage of stereotypical test sentences with a more negative label over anti-stereotypical ones as the corresponding bias evaluation measure.
General performance.As a proxy to test the general performance, we use the general language understanding evaluation (GLUE; Wang et al., 2018)14 benchmark. We acknowledge the limitations of using this benchmark to draw reliable conclusions at large (Ethayarajh and Jurafsky, 2020) but it nevertheless provides a good proxy for understanding the overall performance of comparable models in standard NLP tasks. Moreover, these experiments are only aimed at analysing the effect of vocabulary trimming on general performance.
Footnote 14: [https://gluebenchmark.com/](https://gluebenchmark.com/); models are tested on the open development sets of each task.
Models.We compute the bias scores of RoBERTa (Liu et al., 2019) as base monolingual LM, and XLM-R (Conneau et al., 2019) as its multilingual counterpart (they have been trained with the same architecture and in an overlapping corpus. We explore two VT settings to be applied to XLM-R: XLM-R with the standard VT including the full English vocabulary (VT XLM-R) and XLM-R with VT for top-50K English vocabulary (top-50K VT XLM-R), which is the same vocabulary size as the monolingual RoBERTa model. Our experiments are based both on masked language models on AULA (in which the post-VT does not have any effect) and models fine-tuned on the sentiment analysis presented in SS 4.1 on EEC-AULA, as well as on the corresponding GLUE training sets.
### Results
Table 3 shows the performance of pre-FT VT and post-FT VT models against the original monolingual and multilingual LMs on social bias evaluation datasets and the GLUE benchmark. Both AULA and GLUE results are computed using the LMs without fine-tuning (i.e., RoBERTa, XLM-R, VT XLM-R, and top-50K VT XLM-R), whereas the EEC-AULA results are computed using the models applying VT and fine-tuning strategies. We observe that the monolingual model contains the highest level of social bias compared to the multilingual models with different settings. In particular, RoBERTa obtains the overall highest bias score on the EEC dataset after fine-tuning, with an alarmingly high 85.7 score on race.15 Overall, the models fine-tuned on sentiment analysis contain a higher
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Base Model} & \multirow{2}{*}{Vocab Trimming} & \multicolumn{2}{c}{Vocabulary} & \multicolumn{3}{c}{Social Bias} & \multicolumn{1}{c}{General Performance} \\ \cline{3-8} & & & \multicolumn{2}{c}{AULA} & \multicolumn{2}{c}{EEC-AULA} & \multirow{2}{*}{GLUE} \\ \cline{3-8} & & Pre-FT & Post-FT & & \multicolumn{1}{c}{CP} & \multicolumn{1}{c}{SS} & \multicolumn{1}{c}{Gender} & \multicolumn{1}{c}{Race} \\ \hline Monolingual (RoBERTa) & - & 50K & 50K & 58.1 & 58.8 & 64.8 & 85.7 & 78.0 \\ \hline \multirow{6}{*}{Multilingual (XLM-R)} & - & 250K & 250K & 49.5 & 54.9 & 44.3 & 62.0 & 77.9 \\ & Pre-FT VT (EN) & 173K & 173K & 49.5 & 54.9 & 42.5 & 56.9 & 78.0 \\ \cline{1-1} & Post-FT VT (EN) & 250K & 173K & 49.5 & 54.9 & 44.3 & 62.0 & 77.9 \\ \cline{1-1} & Pre-FT VT (50K) & 50K & 50K & 49.3 & 55.0 & 41.0 & 62.4 & 76.9 \\ \cline{1-1} & Post-FT VT (50K) & 250K & 50K & 49.5 & 54.9 & 44.3 & 62.0 & 77.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of pre/post-FT VT models compared with the original monolingual and multilingual models on two social bias analysis benchmarks (AULA for pre-trained masked language models and EEC-AULA for models fine-tuned on sentiment analysis) and the GLUE benchmark. The VT models are trimmed on English vocabulary with different vocabulary sizes: EN (full English vocabulary) and 50K (top 50K subword tokens). Note that for post-FT VT, the results on AULA are exactly the same as the original XLM-R. The green and red colours represent the social bias towards anti-stereotypical sentences (scores lower than 50) and stereotypical sentences (scores higher than 50), respectively. The lighter colour indicates less social bias observed in the LM.
bias level than the models without fine-tuning, resulting in the EEC-AULA scores substantially deviating from 50 compared to the AULA scores. On the other hand, there is no significant change in performance on social bias and GLUE evaluation tasks for pre-FT VT and post-FT VT models. This is important as we can apply the proposed VT method to a multilingual LM, which allows us to obtain a monolingual one with consistent performance on the GLUE benchmark and less social bias than the original monolingual model pre-trained in the target language, without using any ad-hoc debiasing methods.
## 7 Discussion
Vocabulary trimming before and after fine-tuning (pre/post-FT VT).According to the results, pre-FT VT appears to be generally more effective in classification tasks (see section 4.2.2). For generation tasks (see section 4.2.1), both pre/post-FT VT robustly retain the original performance while being able to considerably reduce the model size. As a guideline to choose the type of VT in such a case, post-FT VT should be more suitable if one already has a fine-tuned model, as no additional training is needed for this case. Moreover, post-FT is more robust as a compression mechanism as the performance is largely maintained with respect to that of the original multilingual LM. On the other hand, if one needs to fine-tune a model from scratch and the computation resources are limited, we recommend exploring pre-FT VT, as fine-tuning on a trimmed LM should be more efficient due to its smaller vocabulary and parameters and, in some cases, can lead to better overall results. However, this process has to be done carefully as the set of optimal parameters could differ from the original multilingual LM fine-tuning process.
Monolingual and Multilingual LM comparison.While in this paper we have not compared monolingual and multilingual models, the question would be whether we need vocabulary trimming strategies in a world where monolingual LMs exist. In this case, a monolingual model may perform similarly to a multilingual model Goyal et al. (2021). However, the multilingual model is often larger mainly due to larger vocabulary storage requirements. In contrast, our proposed VT technique does not require any extra LM training or computational resources. Indeed, only a multilingual LM is needed and we can induce multiple smaller language-specific monolingual models. This may reduce the carbon footprint overall and especially help with less-resource languages when a high-quality monolingual model does not exist. Finally, our social bias analysis see SS 6 shows how monolingual models exhibit larger social biases (especially racial) than a VT-induced multilingual LM. This is consistent with prior work suggesting that a multilingual LM has been trained with more languages, and hence more cultural variety, and these diverging viewpoints can compensate each other Ahn and Oh (2021).
## 8 Conclusion
In this paper, we proposed _vocabulary-trimming_ (VT), a method to reduce the large vocabulary of a multilingual LM to the small vocabulary specific to the target language. VT can induce a monolingual LM in the target language by leveraging an existing multilingual LM. The main advantage of this filtering step is the reduced size, as well as avoiding having to train monolingual LMs from scratch, which would be computationally demanding. Our experiments show how VT can retain the high performance of the original multilingual LM, while largely reducing the model size. For all languages evaluated, a 35% compression rate proves sufficient to keep the original performance of the larger mT5 multilingual LM in both QA and QG, with a similar 39% in NLI and 55% in sentiment analysis with XLM-R. Interestingly, in some cases, the compressed LM can even achieve better results than the original larger model when trimmed before fine-tuning. Since the main goal of the paper was to compress a multilingual LM while keeping its original performance, we leave the analysis of this behaviour for future work.
### Limitations
We have not tested our methodology in truly low-resource languages. Because of this, there could be a different behaviour when we apply VT to a language with lower resources or that is poorly representing in the underlying training corpus. The LMs we used in the paper limited their size up to 600M, and we have not considered larger models such as mT5XXL or BLOOM Scao et al. (2022), due to our limited computational resources. As the language-specific corpus to compute frequency, we employ mC4, which is one of the largest multilingual corpora. Nonetheless, this is used as a proxy
and having access to the full multilingual model could give potentially better results.
Similarly, we acknowledge the limitations of the analysis comparing multilingual and monolingual models in terms of social bias. Due to evaluation data available and existence of comparable monolingual and multilingual LMs, the evaluation is focused on English only and the results could differ for other languages. Moreover, there are other types of biases not covered in this evaluation.
## Ethics Statement
Pre-trained LMs are known to contain undesirable biases to generate toxic contents in some edge cases (Schick et al., 2021), so the resulting models could inherit such biases. While we have not analysed in detail the output of all models in the tasks evaluated, in this paper we have made an attempt to study this effect in terms of social biases for both base pre-trained LMs and fine-tuned LMs.
|
2305.01055 | An Augmented Lagrangian Approach for Problems With Random Matrix
Composite Structure | We consider the minimization of a sum of a smooth function with a nonsmooth
composite function, where the composition is applied on a random linear
mapping. This random composite model encompasses many problems, and can
especially capture realistic scenarios in which the data is sampled during the
optimization process. We propose and analyze a method that combines the
classical Augmented Lagrangian framework with a sampling mechanism and adaptive
update of the penalty parameter. We show that every accumulation point of the
sequence produced by our algorithm is almost surely a critical point. | Dan Greenstein, Nadav Hallak | 2023-05-01T19:39:45Z | http://arxiv.org/abs/2305.01055v1 | # An Augmented Lagrangian Approach for Problems With Random Matrix Composite Structure
###### Abstract
We consider the minimization of a sum of a smooth function with a nonsmooth composite function, where the composition is applied on a random linear mapping. This random composite model encompasses many problems, and can especially capture realistic scenarios in which the data is sampled during the optimization process. We propose and analyze a method that combines the classical Augmented Lagrangian framework with a sampling mechanism and adaptive update of the penalty parameter. We show that every accumulation point of the sequence produced by our algorithm is almost surely a critical point.
## 1 Introduction
### Problem formulation
This thesis addresses the nonconvex non-smooth composite model with a random linear operator \(M\in\mathbb{R}^{n\times n}\) given by
\[\min_{x\in\mathbb{R}^{n}}h(x)+P(\mathbb{E}[M]x), \tag{1.1}\]
where:
1. \(M:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is a random linear map with distribution \(\pi\).
2. \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a twice continuously differentiable function.
3. \(P:\mathbb{R}^{n}\rightarrow\mathbb{R}\cup\{\infty\}\) is proper, lower semi-continuous function.
Unfortunately, finding the global optimum in a general nonconvex problem is an NP hard problem. Therefore, we look for a point that satisfies a necessary optimality condition - criticality. The set of critical points of a function \(\psi:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is
\[\operatorname{Crit}\psi=\{x\in\mathbb{R}^{n}\mid 0\in\partial\psi(x)\}.\]
Throughout the paper, we will address problem (1.1) via the Augmented Lagrangian (AL) framework (e.g. Bertsekas [3], Chapter 3, section 2.1). To that end, we first restate problem (1.1) as follows:
\[\min_{x\in\mathbb{R}^{n}}h(x)+P(y) \tag{1.2}\] \[\mathbb{E}[M]x=y.\]
The Augmented Lagrangian of (1.2) is
\[\mathcal{L}_{\beta}(x,y,z)=h(x)+P(y)-\langle z,\mathbb{E}[M]x\rangle+\langle z,y\rangle+\frac{\beta}{2}\|\mathbb{E}[M]x-y\|^{2}. \tag{1.3}\]
Our consideration of the Augmented Lagrangian of (1.2) is motivated by the fact that \((x,y,z)\in\operatorname{Crit}\mathcal{L}_{\beta}\Rightarrow(x,y)\) is a critical point of (1.2) (see Bolte et al. [6], Proposition 3.1).
We use Algorithm 1 to seek a critical point of (1.3). The algorithm
1. Maintains a sequence of unbiased estimators of \(\mathbb{E}[M]\), with decreasing variance.
2. Performs alternating directions update.
3. Updates the penalty parameter as needed.
An implicit assumption of our model is that \(P\) is prox-tractable - meaning, that \(prox_{\tau P}\) is easy to compute. The proximal operator will be discussed in Section 2.1.
We state our blanket assumptions regarding the random linear operation \(M\) below.
**Assumption 1**.: _The random linear operator \(M\) satisfies that_
1. \(\mathbb{E}[M]\) _is a surjective linear map;_
2. \(Var[M]\) _is bounded._
Note that the second part in Assumption 1 is fundamental in the analysis of behavior of random variables.
### Motivation
The structure of model (1.1) is exceedingly versatile. Even when we restrict ourselves to the simplified case in which \(M\) is deterministic and \(M=I\) (\(I\) being the identity mapping), we retrieve the highly versatile composite optimization model. In the composite optimization model, the function \(h\) can encode the differentiable components of many loss functions, while \(P\) can encode non-smooth components, regularization, and sets of constraints that define a closed subspace. When, \(M\) is a random linear mapping, we can encode constraints and regularization which are random and are learned online, during the algorithm's runtime.
We consider some applications of the model (1.1):
1. Inverse Reinforcement Learning: The MDP model for Reinforcement Learning (RL) is a tuple \(\mathcal{M}=\langle S,A,T,R,\gamma\rangle\), where \(S\) is the state space, \(A\) is the action space, \(T:S\times A\to Prob(S)\) is a mapping defining the probability of transitioning to a state conditioned on the current state and the action taken in it, \(R\) is the reward function, and \(\gamma\in[0,1]\) is the discount factor. Assuming the first state is sampled randomly, RL attempts to find a policy \(\pi\) maximizing \[\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},\pi(s_{t}))].\] In many applications of RL, the true reward function is not known or hard to determine. Therefore, when a demonstration of the desirable policy or behavior is available, it might be desirable to learn the demonstrated policy instead. For an extensive survey of Inverse Reinforcement Learning (IRL), see Arora and Doshi [1]. We consider a deterministic setting, as in Leibovich et al. [20] - we have a set of desirable outcomes \(\{y_{1},\ldots,y_{M}\}\in\mathcal{Y}\) and a bijective mapping from the set of actions \(\mathcal{X}\) to the set of outcomes \(\mathcal{F}:\mathcal{X}\rightarrow\mathcal{Y}\). The mapping \(\mathcal{F}\) is not known in advance, but \(\mathcal{F}(x)\) can be observed at any \(x\in\mathcal{X}\). The inverse mapping \(\mathcal{F}^{-1}:\mathcal{Y}\rightarrow\mathcal{X}\) is the mapping from outcomes to initial actions - that is, action \(\mathcal{F}^{-1}(y)\) needs to be taken to reach state \(y\). We wish to minimize the distance between the true inverse mapping \(\mathcal{F}^{-1}\), and a parametric approximation \(\mathcal{G}_{\theta}\), where \(\theta\in\Theta\) is a parameter vector. Formally, \[\min_{\theta\in\Theta}\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(\mathcal{G}_{\theta }(y_{i}),\mathcal{F}^{-1}(y_{i})),\]
where \(\mathcal{L}:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\) is some loss function, such as the error between inputs in absolute value. Assume that for every \(y\in\{y_{1},\ldots,y_{N}\}\), the probability of drawing \(\mathcal{F}^{-1}\) from \(\mathcal{X}\) using uniform sampling is positive, and that \(\mathcal{G}_{\theta}\) is linear with respect to \(\theta\). Additionally, assume that \(\mathcal{L}(x,y)=\tilde{\mathcal{L}}(x-y)\) for some function \(\tilde{\mathcal{L}}\). Note that the last requirement is not very restrictive - any loss function that is based on the error term fulfills this requirement. We can generate i.i.d samples of \(\mathcal{F}^{-1}(y)\) for \(y\in\{y_{1},\ldots,y_{M}\}\) using uniform sampling from \(\mathcal{X}\) and rejection sampling - we throw away samples \(x\in\mathcal{X}\) where \(\mathcal{F}(x)\notin\{y_{1},\ldots,y_{N}\}\). We denote our approximation of \(\mathcal{F}^{-1}\) by \(\tilde{\mathcal{F}}^{-1}\). We can match the IRL problem to (1.1) by setting \[M=\begin{pmatrix}\operatorname{diag}\left(\mathcal{G}(y_{1}),\ldots,\mathcal{ G}_{y_{N}}\right)&\operatorname{diag}\left(\tilde{\mathcal{F}}^{-1}(y_{1}), \ldots,\tilde{\mathcal{F}}^{-1}(y_{N})\right)\\ 0&I\end{pmatrix},\] \(h\equiv 0\), \(\theta\) such that the dimension of \(\theta\) is \(2N\). Informally, we will want \(\theta_{N+1}=\theta_{N+2}=\ldots=\theta_{2N}=-1\). Formally, we will enforce it by introducing a \(\delta_{\mathcal{C}}\) component to \(P\). \(\delta_{\mathcal{C}}\) is defined as \[\delta_{\mathcal{C}}(u)=\begin{cases}\infty,&u\notin\mathcal{C}\\ 0,&\text{otherwise}\end{cases}.\] We define \(P\) as \[P(u)=\frac{1}{N}\sum_{i=1}^{N}\tilde{\mathcal{L}}(u_{i})+\delta_{u_{N+1}= \ldots=u_{2N}=-1}(u).\] Then, \[P(M\theta)=\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(\mathcal{G}_{\theta}(y_{i}), \tilde{\mathcal{F}}^{-1}(y_{i})).\]
2. Image Deconvolution: Image deconvolution is a fundamental problem in the field of image processing (see Jansson [18], Campisi and Egiazarian [12]). We observe an image from some source, such as a telescope, camera, microscope or ultrasound. The observation process is never perfect - the observed image is susceptible to blur, noise, and other sources of image degradation. The image deconvolution process aims at recovering the original image from the degraded observation. The discrete linear space invariant (LSI) model of the degradation for a 2-D image is given by \[g(x)=(f*h)(x)+n(x)=\sum_{s\in S_{h}}h(x-s)f(x)+n(x),\]
where the operator \((*)\) denotes convolution, \(x=(x_{1},x_{2})\) is the observed pixel location, \(g\) is the observed image, \(h\) is the blur operator, \(S_{h}\subset\mathbb{R}^{2}\) is the state space of the blur, \(f\) is the real image, and \(n\) is noise, usually modeled as white gaussian noise. This can also be written in a vector-matrix formulation as \[g=Hf+n.\] In applications in which the blur \(H\) is unknown, the problem is known as _blind image deconvolution_. In many applications of _blind image deconvolution_, such as microscopy, remote sensing, medical ultrasound, optical telescope systems or photography, the blur can be estimated from experiments (see Campisi and Egiazarian (12, Section 1.3, A priori blur identification methods) and Fergus et al. (15)). Then, non-blind methods can be used to estimate the original image. One such method is the Tikhonov filtering - least squares with Tikhonov regularization (see Tikhonov (25)) \[\hat{f}=\operatorname*{argmin}_{f}\|\mathbb{E}[H]f-g\|^{2}+\|\Gamma f\|^{2},\] (1.4) where \(\Gamma\) is a Tikhonov regularization matrix, usually chosen to be \(\lambda I\) for some \(\lambda>0\). We can capture (1.4) using (1.1) by choosing \(h(f)=\|\Gamma f\|^{2}\), \(M=H\), and \(P(u)=\|u-g\|^{2}\).
3. Maximum Singular Value of the Expectation of a Random Matrix: Finding the maximum singular value of a matrix is a fundamental subroutine in many algorithms, the best known of which are algorithms for the principal component analysis (PCA) problem. When the matrix is sampled, the retrieval the maximum singular value of the expectation of the matrix and its matching vector is often desired instead. Such subroutines are implicitly used in this paper, in Algorithm 3 and Algorithm 5 - we evaluate the minimum singular value of the positive semidefinite matrices \(\{(M^{t+1})^{T}M^{t+1}\}_{t\geq 0}\), which can be implemented by first finding the maximum eigenvalue \(\lambda_{max}((M^{t+1})^{T}M^{t+1})\), and then finding the maximum eigenvalue of \(\lambda_{max}((M^{t+1})^{T}M^{t+1})I-(M^{t+1})^{T}M^{t+1}\). The problem of finding the maximum singular value of the expectation of a random matrix is also considered on its own, for example in Garber et al. (16, Section 3), where the problem is considered in a stochastic bandits setting.
Given a sequence of i.i.d random matrices \(\{A_{t}\}_{t\geq 0}\), an eigenvector of the maximum eigenvalue of the expectation matrix is given by
\[\operatorname*{argmax}_{x\neq 0}\frac{\|\mathbb{E}[A]x\|^{2}}{\|x\|^{2}}.\]
Equivalently, since the \(\ln(\cdot)\) function is strictly monotonically increasing, we can find a vector matching the maximum singular value by
\[\operatorname*{argmax}_{x}\ln\left(\frac{\|\mathbb{E}[A]x\|^{2}}{\|x\|^{2}} \right).\]
Using the properties of \(\ln()\) and transitioning into a minimization problem, this is equivalent to
\[\operatorname*{argmin}_{x}\ln\left(\|x\|^{2}\right)-\ln\left(\|\mathbb{E}[A]x \|^{2}\right).\]
This problem can be modeled in by (1.1) by choosing
\[h(x)=\ln\left(\|x\|^{2}\right),\]
\[P(y)=-\ln(\|y\|^{2})\]
and
\[M=A.\]
### Literature
Previous research on composite methods can be divided into the two categories - the convex and the nonconvex case. The convex case is not the focus of this paper, and we refer the interested reader to a survey by Sabach and Teboulle [24].
Research on nonconvex composite minimization is sparse. All previous works that we have found assume that the mapping is deterministic - which is not assumed in this work. Additionally, all previous works assume that the gradient of the mapping is surjective (which, in the linear mapping case, means that \(MM^{T}\) is full rank) - in this work, the surjectivity assumption is only made on \(\mathbb{E}[M]\), while a specific sample \(M\) might not be surjective.
The work by Li and Pong [21] is the first reference we have found to the composite linear model in nonconvex optimization, and many proof techniques in this paper were inspired by it. The approach proposed by the authors hinges on the surjectivity of the linear mapping, and the assumption that the penalty parameter was chosen to be sufficiently large - we require similar conditions on the penalty parameter to reached eventually, but adaptively update the penalty parameter instead of assuming that it is
chosen to be sufficiently large. The authors prove subsequence convergence in the general case, and convergence of the whole sequence, as well as its boundedness, under additional assumptions. The model studied in Li and Pong [21] is deterministic, and therefore does not capture the model we study.
In Bot and Nguyen [8], the authors extend the work by Li and Pong [21]. Focusing on linear mappings, and using similar assumptions, they show a convergence rate to a critical point for the case in which the Kurdyka-Lojasiewicz (KL) assumption holds true.
Bolte et al. [6] studied a comprehensive class of problems which essentially can capture most known deterministic models. The key element, which is the source for the challenges of this problem, is the nonsmooth nonconvex composite function with a general operator, which in our case is replaced with a random linear operator. Their approach the authors used utilized two main tools to overcome the challenges originating from the composite structure:
1. They assumed the existence of an information zone, a subspace in which the Jacobian of the possibly non-linear operator is essentially surjective, note that this generalizes the surjectiveness of the linear operators in Bot and Nguyen [8] and Li and Pong [21].
2. They used an adaptive penalty to ensure that the algorithm reaches the information zone.
We borrow the idea of an adaptive penalty parameter in case it is not possible to predetermine it in Section3. The approach proposed by Bolte et al. [6] is mainly limited by the surjectivity assumption (termed in a more general manner as uniform regularity) on the Jacobian. This is expressed by the fact that the method uses the uniform regularity constant to control the penalty parameter although it is hard to determine it in non trivial cases. Moreover, the uniform regularity limits the variety of operators that can be used. The model studied in Bolte et al. [6] is fully deterministic, and therefore does not capture the model we study.
Cohen et al. [13] also focuses on nonlinear mappings. The authors require more restrictive on \(P\) (it is required to be continuously differentiable with a Lipschitz continuous gradient), and derive an algorithm that is much simpler than the one outlined in Bolte et al. [6].
### Contributions
Our contributions revolve the stochasticity of the linear mapping \(M\):
1. We establish convergence results for randomized mappings, under the assumption that \(Var[M]\) is bounded. To the best of our knowledge, this is the first time such results appear in this literature.
2. Using the additional assumption that each matrix entry \(M_{i,j}^{t}\) has Sub-Gaussian distribution, we improve the sample complexity of the algorithm considerably. The class of Sub-Gaussian distributions includes bounded random variables, among others.
3. Using the assumption that \(\nabla^{2}h(x)\) is lower bounded by some known constant, we simplify the adaptive update of the penalty parameter.
## 2 Mathematical Preliminaries
### Notation and Basic Definitions
Overall we use the common notation from linear Algebra, matrix analysis, and convex/nonconvex analysis. In particular, the proximal operator of \(\tau P\) for \(\tau>0\) is defined as:
\[u^{+}\in\operatorname*{argmin}_{y}\tau P(y)+\frac{1}{2}\|y-u\|^{2}.\]
In all our uses of the proximal operator, we will assume that it is well defined - that is, that the minimizer exists. Note, however, that there might be more than one minimizer - in that case, we will require that our proximal operator will return one minimizer in the set of minimizers.
The subdifferential operators are defined as in Rockafellar and Wets [22]. We denote \(z\xrightarrow{f}x\) if \(z\to x\) and \(f(z)\to f(x)\). The domain of a proper extended real valued function \(f:\mathbb{R}^{m}\rightarrow\mathbb{R}\cup\{\infty\}\) is defined as \(dom(f)\equiv\{x:f(x)<\infty\}\). Then the subdifferential set of \(f\) at point \(x\in dom(f)\), \(\partial f(x)\), is defined as
\[\partial f(x)\equiv\{v\in\mathbb{R}^{m}:\exists x^{t}\xrightarrow{f}x,v^{t} \to v,\limsup_{z\to x^{t}}\frac{f(z)-f(x^{t})-\langle v^{t},z-x^{t} \rangle}{\|z-x^{t}\|}\geq 0\ for\ each\ t\}.\]
For a twice continuously differentiable function \(\phi:\mathbb{R}^{n}\rightarrow\mathbb{R}\), the Bregman distance \(D_{\phi}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) is defined as:
\[D_{\phi}(x_{1},x_{2}):=\phi(x_{1})-\phi(x_{2})-\langle\nabla\phi(x_{2}),x_{1 }-x_{2}\rangle.\]
We denote the index range \(\{1,\ldots,n\}\) by \([n]\).
Unless stated otherwise, the notation \(\|x\|\) will refer to the \(\ell_{2}\) norm when applied to a vector \(x\in\mathbb{R}^{n}\), and \(\|X\|\) will refer to the \(\ell_{2}\) induced norm \(\|X\|_{2,2}\) when applied to \(X\in\mathbb{R}^{m\times n}\). The only vector norm that will be used in practice is \(\ell_{2}\). The results
regarding matrix norms will always be stated with regards to the induced norm \(\|X\|_{2,2}\), but the entry-wise \(\ell_{1}\) norm \(\|X\|_{1,1}\) will be used in the body of proofs.
### Properties of the linear map
One of the blanket assumptions of our model is that \(\mathbb{E}(M)\) is a surjective linear map, a property we state formally below.
**Definition 2.1**.: A map \(F\) is surjective if for every \(y\in\mathbb{R}^{m}\), there exists \(x\in\mathbb{R}^{n}\), such that \(F(x)=y\). Similarly, a map \(G:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) is injective if for every \(y_{1}\neq y_{2}\in\mathbb{R}^{m}\), \(G(y_{1})\neq G(y_{2})\). For \(m<n\), a linear map \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) can be surjective if its row span is \(m\), but cannot be injective (equivalently, \(\text{rank}(F)=m\)). Similarly, \(G:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) can be injective if its column span is \(m\), but cannot be surjective (equivalently, \(\text{rank}(G)=m\)).
For the sake of simplicity, we will assume that \(M\) is a bijective map from \(\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) such that \(\mathbb{E}(M)\) has full rank. This assumption actually does not incur any lose of generality compared to the case in which \(M\) is a surjective map from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{m}\) (\(m<n\)); We justify this statement below.
The key reason for the above is that one can always extend the surjective mapping \(M\) to a mapping \(M^{\prime}\) that is surjective and injective almost surely, to attain an equivalent problem, using the following procedure:
1. Generate \(n-m\) independent identically distributed random vectors \(\{v_{i}\}_{i=1}^{n-m}\), so that each vector is sampled uniformly from the unit sphere (\(\{v\in\mathbb{R}^{n}\mid\|v\|=1\}\)).
2. For each sampled matrix \(M^{t}\sim\pi\): 1. Concatenate \(\{v_{i}\}_{i=1}^{n-m}\) to the rows of \(M^{t}\) to generate a square matrix \(M^{\prime}_{t}\).
Since the additional vectors are only sampled once, and are attached to each random mapping \(M^{t}\), the extended expected mapping \(\mathbb{E}[M^{\prime}]\) will contain the added rows \(\{v_{i}\}_{i=1}^{n-m}\), too. We will now formally establish that \(\mathbb{E}[M^{\prime}]\) has full rank almost surely.
**Lemma 2.1**.: _Let \(\{M_{t}\}_{t\geq 0}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) be random linear maps sampled i.i.d from distribution \(\pi\), such that \(\mathbb{E}[M]\equiv\mathbb{E}[M_{0}]\) exists, and \(\mathbb{E}[M]\) is surjective. Let \(\{v_{i}\}_{i=1}^{n-m}\) be i.i.d random vectors sampled uniformly from the unit sphere \(\{v\in\mathbb{R}^{n}:\|v\|_{2}=1\}\). Define \(M^{{}^{\prime}}_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) as \(M_{i}\) with \(\{v_{j}\}_{j=1}^{i}\) concatenated as the last \(i\) rows._
_Then \(\mathbb{E}[M^{\prime}]:=\mathbb{E}[M^{{}^{\prime}}_{n-m}]\) exists, and \(\mathbb{E}[M^{\prime}]\) has full rank almost surely._
Proof.: We will see that \(dim(span(\mathbb{E}[M^{\prime}]))=n\) by induction. First, we note that
\[dim(span(\mathbb{E}[M]))=m.\]
Since \(span(\mathbb{E}[M])\) is an \(m\)-dimensional subspace of \(\mathbb{R}^{n}\), it's Lebesgue measure is \(0\); see Rudin [23, Theorem 2.20 (e) and Determinants 2.23]. Thus, for
\[v_{1}\sim Unif(\{v\in\mathbb{R}^{n}:\|v\|=1\}),\]
it follows that
\[Prob(v_{1}\in span(\mathbb{E}[M]))=0,\]
and therefore
\[Prob(dim(span(\mathbb{E}[M^{\prime}_{1}]))=m+1)=1.\]
Next, we assume that \(Prob(dim(span(\mathbb{E}[M^{\prime}_{i}]))=m+i)=1\) for \(i<n-m\), and show that
\(Prob(dim(span(\mathbb{E}[M^{\prime}_{i+1}]))=m+i+1)=1\).
Note that the event \(dim(span(\mathbb{E}[M^{\prime}_{i}]))=m+i\) is contained the event \(dim(span(\mathbb{E}[M^{\prime}_{i+1}]))=m+i+1\) - that is, it is impossible that the event \(dim(span(\mathbb{E}[M^{\prime}_{i+1}]))=m+i+1\) occurs if \(dim(span(\mathbb{E}[M^{\prime}_{i}]))=m+i\) does not. Using this insight we can deduce that
\[Prob(dim(span(\mathbb{E}[M^{\prime}_{i+1}]))=m+i+1)\] \[=Prob\left(\{dim(span(\mathbb{E}[M^{\prime}_{i+1}]))=m+i+1\} \cap\{dim(span(\mathbb{E}[M^{\prime}_{i}]))=m+i\}\right).\]
By the definition of conditional probability, it then follows that
\[Prob(\{dim(span(\mathbb{E}[M^{\prime}_{i+1}]))=m+i+1\}\cap\{dim( span(\mathbb{E}[M^{\prime}_{i}]))=m+i\})\] \[=Prob(dim(span(\mathbb{E}[M^{\prime}_{i+1}]))=m+i+1\mid dim( span(\mathbb{E}[M^{\prime}_{i}]))=m+i)\] \[\cdot Prob(dim(span(\mathbb{E}[M^{\prime}_{i}]))=m+i).\]
By the induction assumption, \(Prob(dim(span(\mathbb{E}[M^{\prime}_{i}]))=m+i)=1\). Therefore
\[Prob(dim(span(\mathbb{E}[M^{\prime}_{i+1}]))=m+i+1)\] \[=Prob(dim(span(\mathbb{E}[M^{\prime}_{i+1}]))=m+i+1\mid dim(span (\mathbb{E}[M^{\prime}_{i}]))=m+i).\]
As we have argued for the base case of the induction, the Lebesgue measure of the span of \(\mathbb{E}([M^{\prime}_{i}])\) is \(0\), and therefore, since
\[v_{i+1}\sim Unif(\{v\in\mathbb{R}^{n}:\|v\|=1\}),\]
it follows that
\[Prob(v_{i+1}\in span(\mathbb{E}[M^{\prime}_{i}]))=0.\]
Consequently,
\[Prob(dim(span(\mathbb{E}[M^{\prime}_{i+1}]))=m+i+1\mid dim(span(\mathbb{E}[M ^{\prime}_{i}]))=m+i)=1.\]
Hence, for \(M^{\prime}_{n-m}\), \(Prob(dim(span(\mathbb{E}[M^{\prime}_{n-m}]))=n)=1\). We denote \(M^{\prime}_{n-m}\) by \(M^{\prime}\). Since the dimension of the span of \(M^{\prime}\) is \(n\) almost surely, it follows that
\[Prob(rank(\mathbb{E}[M^{\prime}])=n)=1.\]
\(\Box\)
To utilize Lemma 2.1, we can extend \(P:\mathbb{R}^{m}\rightarrow\mathbb{R}\) to \(P^{\prime}:\mathbb{R}^{n}\rightarrow\mathbb{R}\), by ignoring the last \(n-m\) indices. That is
\[P^{\prime}(u_{1},\ldots,u_{m},u_{m+1},\ldots,u_{n})=P(u_{1},\ldots,u_{m}).\]
Note that the first \(m\) cells of \(\mathbb{E}[M^{\prime}]x\) are equal to \(\mathbb{E}[M]x\), independently of our choice of random vectors \(\{v_{i}\}_{i=1}^{n-m}\). Therefore, for every choice of vectors \(\{v_{i}\}_{i=1}^{n-m}\) and every \(x\in\mathbb{R}^{n}\),
\[h(x)+P(\mathbb{E}[M]x)=h(x)+P^{\prime}(\mathbb{E}[M^{\prime}]x).\]
Considering the above discussion, we make the following assumption throughout this paper without restating it.
**Assumption 2** (blanket assumption).: _The expectation \(\mathbb{E}[M]\) has full rank, meaning that there is a scalar \(\sigma>0\), such that \(\lambda_{min}(\mathbb{E}[M]^{T}\mathbb{E}[M])=\sigma\)._
We adopt the following notation with regards to linear mappings: given a symmetric linear mapping \(T\), we use \(\|\cdot\|_{T}^{2}\) to denote the induced quadratic form of \(T\), that is \(\|\cdot\|_{T}^{2}=\langle x,Tx\rangle\). Given a matrix \(T\in\mathbb{R}^{n\times n}\), we denote the matrix \(T\cdot T\) as \(T^{2}\).
Additionally, whenever we refer to a sequence \(\{M^{i}\}_{i>0}\) of matrices, we assume that these are independent and identically distributed matrices, where \(M^{i}\sim\pi\); note that \(M\sim\pi\) shares the same distribution as the matrices in the sequence.
### Lipschitz and Empirical Lipschitz Constants
The Lipschitz constant is a powerful tool in analysis of optimization algorithms, often used to determine the step size in gradient based algorithms, or the penalty parameter in AL based algorithms. However, the reliance on the Lipschitz constant has three significant downsides:
1. The Lipschitz constant does not always exist ;
2. Even if it exists, the Lipschitz constant is not always known ;
3. The global Lipschitz constant might be significantly larger than the actual differences in function values in a sequence generated by an optimization algorithm.
To address these disadvantages, we introduce the _empirical Lipschitz constant_.
Given a function \(F:\mathbb{R}^{k}\to\mathbb{R}^{l}\), and a sequence \(\{x_{t}\}_{t\geq 0}\), we define an _empirical Lipschitz constant (eLip)_ of \(F\) with respect to the sequence \(\{x_{t}\}_{t\geq 0}\) in the following manner.
**Definition 2.2** (empirical Lipschitz constant).: Let \(F:\mathbb{R}^{k}\to\mathbb{R}^{l}\), and \(\{x_{t}\}_{t\geq 0}\). The empirical Lipschitz constant (eLip) of \(F\) with respect to the sequence \(\{x_{t}\}_{t\geq 0}\) is defined by
\[L_{F}^{e}(\{x_{t}\}_{t\geq 0})=\begin{cases}\sup_{t\geq 0,x_{t+1}\neq x_{t}} \frac{\|F(x_{t+1})-F(x_{t})\|}{\|x_{t+1}-x_{t}\|},&\exists t\text{ such that }x_{t+1}\neq x_{t},\\ 0,&\text{otherwise}.\end{cases}.\]
Comparing Definition2.2 with the definition of the (global) Lipschitz constant of \(F\),
\[L_{F}=\sup_{y\neq x}\frac{\|F(x)-F(y)\|}{\|x-y\|},\]
immediately yields the following result.
**Lemma 2.2**.: _Let \(F:\mathbb{R}^{k}\to\mathbb{R}^{l}\). For every sequence \(\{x_{t}\}_{t\geq 0}\) it holds that_
\[L_{F}^{e}(\{x_{t}\}_{t\geq 0})\leq L_{F}.\]
Proof.: If for every \(t\), \(x_{t+1}=x_{t}\), then \(L_{F}^{e}(\{x_{t}\}_{t\geq 0})=0\). Since \(L_{F}\geq 0\), the result trivially holds true.
Otherwise, there exists \(t\) such that \(x_{t+1}\neq x_{t}\). Since for every \(x_{t+1}\neq x_{t}\),
\[\frac{\|F(x_{t+1})-F(x_{t})\|}{\|x_{t+1}-x_{t}\|}\leq\sup_{y\neq x}\frac{\|F (x)-F(y)\|}{\|x-y\|},\]
it follows that
\[L_{F}^{e}(\{x_{t}\}_{t\geq 0})=\sup_{t\geq 0,x_{t+1}\neq x_{t}}\frac{\|F(x_{t+1} )-F(x_{t})\|}{\|x_{t+1}-x_{t}\|}\leq\sup_{y\neq x}\frac{\|F(x)-F(y)\|}{\|x-y \|}=L_{F}.\]
In our analysis, the eLip constant will be used in conjunction with an adaptive penalty scheme to create a sufficient decrease in the updates of the primal variables.
## 3 Iterative Sampling Alternating Directions Method
We are now ready to describe the _Iterative Sampling Alternating Directions (ISAD)_ method for composite-Linear problems meta-algorithm, whose pseudo-code is given in Algorithm1. Roughly speaking, at each iteration the algorithm begins by sampling the matrix followed by an update of its average. The method then performs two alternating
minimization steps of the Augmented Lagrangian with respect to the matrix estimator - we first update the \(y\) variable, and then update the \(x\) variable. Afterwards, the method updates the dual variable \(z\) with respect to the new values of \(x\) and \(y\), the previous value of \(z\), and the matrix estimator. Finally, the penalty parameter is tuned using the subprocedure of the _Adaptive Penalty Oracle (APO)_.
The meta-algorithm provides two degrees of freedom to its implementation: the choice of the convex function \(\phi\), and the implementation of the APO. The required characteristics of the APO are introduced in Definition 3.1. We will consider two implementations of the meta-algorithm: (i) an implementation for the case in which \(\nabla^{2}h(x)\) has a known lower and upper bounds (cf. Section 5); (ii) an implementation for the general case (cf. Section 6). Section 4 will include probability bounds that will be used in Section 5, Section 6 and Section 7. The convergence of the meta algorithm will be proven in Section 7.
The meta algorithm uses several parameters. The parameters \(x_{0}\) and \(z_{0}\) are the initial values of the variables \(x\) and \(z\), respectively. The parameter \(\beta_{0}\) is the initial value of the penalty parameter \(\beta\). The sequence \(\{\theta_{t}\}_{t>0}\) determines the amount of samples of the matrix \(M\) that are collected by round \(t\), that is, at round \(t+1\), we sample \(\theta_{t+1}-\theta_{t}\) matrices. Finally, \(\phi\) is the basis for the Bregman distance factor \(D_{\phi}\) which is utilized in the update of \(x\).
```
Input:\(\mathbf{x}_{0}\in\mathbb{R}^{n}\), \(\mathbf{z}_{0}\in\mathbb{R}^{n}\), \(\beta_{0}>0,\{\theta_{t}\}_{t>0}\), \(\phi:\mathbb{R}^{n}\to\mathbb{R}\) where \(\phi\in C^{2}\) and convex.
1for\(t=0,1,2,\ldots\)do
2\(sample\ (\theta_{t+1}-\theta_{t})\ matrices\ \{M^{i}\}_{i=\theta_{t}+1}^{\theta_{t+1}}\);
3\(\bar{M}^{t+1}=\frac{\theta_{t}}{\theta_{t+1}}\bar{M}^{t}+\frac{1}{\theta_{t+ 1}}\sum\limits_{i=\theta_{t}+1}^{\theta_{t+1}}M^{i}\)
4\(y_{t+1}\in\underset{y}{\operatorname{argmin}}\{P(y)+\langle z_{t},y\rangle+ \frac{\beta_{t}}{2}\|\bar{M}^{t+1}x_{t}-y\|^{2}\}\) ;
5\(x_{t+1}\in\operatorname{Crit}_{x}\{h(x)-\langle z_{t},\bar{M}^{t}x\rangle+ \frac{\beta_{t}}{2}\|\bar{M}^{t+1}x-y_{t+1}\|^{2}+D_{\phi}(x,x_{t})\}\);
6\(z_{t+1}=z_{t}-\beta_{t}(\bar{M}^{t+1}x_{t+1}-y_{t+1})\)
7\(\beta_{t+1}\gets APO(\bar{M}^{t+1},\beta_{t})\) // Additional arguments may be provided, depending on the implementation
8 end for
```
**Algorithm 1**Meta-ISAAD
We provide the following guarantee for Algorithm 1, (stated briefly and informally):
**Theorem 3.1**.: _Let \(\theta_{t}\) be chosen according to the sampling regime in Definition 4.3, and let \(\{x_{t},y_{t},z_{t}\}_{t>0}\) be the sequence generated by Algorithm 1. Then for every cluster point \((x^{*},y^{*},z^{*})\) of \(\{x_{t},y_{t},z_{t}\}_{t>0}\), \(x^{*}\) is a critical point of (1.1) almost surely._
The formal statement of the theorem and its proof can be found in Theorem 7.2.
The following Assumption 3 on the sequence generated by any implementation of Algorithm 1 will be part of our blanket assumptions throughout our analysis of Algorithm 1. This assumption is common in the analysis of Augmented Lagrangian methods applied to nonconvex nonsmooth problems - see Bolte et al. [5], Bolte et al. [4], Bot and Nguyen [8], Cohen et al. [13] and Hallak and Teboulle [17]. In Li and Pong [21] and Yang et al. [27], the boundedness of the sequence is not assumed, but coerciveness that implies boundedness in their studied model is assumed instead.
**Assumption 3**.: _The sequence \(\{x_{t},y_{t},z_{t}\}_{t>0}\) generated by Algorithm 1 is bounded._
The rest of this section will focus on the properties that are required from our APO, which will be formally defined in Definition 3.1. Informally, we require that the number of updates to the value of the adaptive penalty \(\beta\) will be finite, and that after a finite number of iterations, the \(x\) update will induce decrease in function value.
We define two functions that will be used extensively during the analysis of the algorithm. The first is the Augmented Lagrangian with respect to a matrix \(A\), which we denote by \(\mathcal{L}_{\beta_{t}}(x,y,z;A)\):
\[\mathcal{L}_{\beta}(x,y,z;A):=h(x)+P(y)-\langle z,Ax-y\rangle+\frac{\beta}{2} \|Ax-y\|^{2}. \tag{3.1}\]
The second is a shorthand for the function whose critical points we are searching for during the \(x\) update:
\[g^{t+1}(x)=h(x)-\langle z_{t},\bar{M}^{t}x\rangle+\frac{\beta_{t}}{2}\|\bar{ M}^{t+1}x-y_{t+1}\|^{2}+D_{\phi}(x,x_{t}). \tag{3.2}\]
Note that \(g^{t+1}(x)\) is equal \(\mathcal{L}_{\beta_{t}}(x,y_{t+1},z_{t})+D_{\phi}(x,x_{t})\) up to constant factors that do not depend on \(x\).
**Remark 3.1**.: Note that \(\mathbb{E}(\mathcal{L}_{\beta}(x,y,z;\bar{M}))\neq\mathcal{L}_{\beta}(x,y,z; \mathbb{E}(M))\), and more specifically,
\[\mathbb{E}[\mathcal{L}_{\beta}(x,y,z;\bar{M})]-\mathcal{L}_{\beta}(x,y,z; \mathbb{E}[M])=\frac{\beta}{2}\mathbb{E}[\|\bar{M}x-\mathbb{E}[M]x\|^{2}]= \frac{\beta}{2}\mathbb{E}[\|\delta(\bar{M},M)x\|^{2}].\]
To see this:
\[\mathbb{E}[\mathcal{L}_{\beta}(x,y,z;\bar{M})] =\mathbb{E}[h(x)+P(y)-\langle z,\bar{M}x-y\rangle+\frac{\beta}{2} \|\bar{M}x-y\|^{2}]\] \[=h(x)+P(y)-\langle z,\mathbb{E}[M]x-y\rangle+\frac{\beta}{2} \mathbb{E}[\|\bar{M}x-y\|^{2}].\]
Moreover,
\[\mathbb{E}[\|\bar{M}x-y\|^{2}] =\mathbb{E}[\|\bar{M}x-\mathbb{E}[M]x+\mathbb{E}[M]x-y\|^{2}]\] \[=\mathbb{E}[\|\bar{M}x-\mathbb{E}[M]x\|^{2}+2\langle\bar{M}x- \mathbb{E}[M]x,\mathbb{E}[M]x-y\rangle+\|\mathbb{E}[M]x-y\|^{2}]\] \[=\mathbb{E}[\|\bar{M}x-\mathbb{E}[M]x\|^{2}]+2\mathbb{E}[\langle \bar{M}x-\mathbb{E}[M]x,\mathbb{E}[M]x-y\rangle]+\mathbb{E}[\|\mathbb{E}[M]x-y \|^{2}]\] \[=\mathbb{E}[\|\bar{M}x-\mathbb{E}[M]x\|^{2}]+\|\mathbb{E}[M]x-y \|^{2}.\]
Therefore,
\[\mathbb{E}[\mathcal{L}_{\beta}(x,y,z;\bar{M})] =h(x)+P(y)-\langle z,\mathbb{E}[M]x-y\rangle+\frac{\beta}{2}\mathbb{ E}[\|\bar{M}x-y\|^{2}]\] \[=h(x)+P(y)-\langle z,\mathbb{E}[M]x-y\rangle+\frac{\beta}{2}( \mathbb{E}[\|\bar{M}x-\mathbb{E}[M]x\|^{2}]+\|\mathbb{E}[M]x-y\|^{2}).\]
Since
\[\mathcal{L}_{\beta}(x,y,z;\mathbb{E}[M])=h(x)+P(y)-\langle z,\mathbb{E}[M]x-y \rangle+\frac{\beta}{2}\|\mathbb{E}[M]x-y\|^{2},\]
it follows that
\[\mathbb{E}[\mathcal{L}_{\beta}(x,y,z;\bar{M})]-\mathcal{L}_{\beta}(x,y,z; \mathbb{E}[M])=\frac{\beta}{2}\mathbb{E}[\|\bar{M}x-\mathbb{E}[M]x\|^{2}]= \frac{\beta}{2}\mathbb{E}[\|\delta(\bar{M},M)x\|^{2}].\]
With definition (3.2) at hand, we can formulate our requirements from the APO in a clear and compact manner.
**Definition 3.1** (Apo).: An APO is a function that returns a scalar penalty parameter \(\beta>0\), that satisfies, when invoked by Algorithm 1, that for any sequence \(\{x_{t},y_{t},z_{t}\}_{t>0}\) generated by Algorithm 1 with the corresponding sequence of functions \(\{g^{t}\}_{t\geq 1}\) defined by the formula in (3.2) that there exists an index \(K_{stable}>0\) such that:
1. **Stability:**\(\beta_{k}=\beta_{K_{stable}}\) for all \(k>K_{stable}\).
2. **Sufficient decrease:** There exists \(\rho>0\) such that for all \(k>K_{stable}\) \[g^{k+1}(x_{k+1})-g^{k+1}(x_{k})\leq-\frac{\rho}{2}\|x_{k+1}-x_{k}\|^{2},\] and \[\rho>\frac{8}{\beta_{k}\sigma}\left(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t \geq 0})+L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\right),\] where \(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})\) and \(L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\) are empirical Lipschitz constants defined in Definition 2.2.
**Remark 3.2**.: Note that the first part of Definition 3.1 does not mean that the last update occurs on the \(K_{stable}\) index - it might have occurred beforehand.
An APO that satisfies Definition 3.1 exists - Algorithm 5 implements such oracle for the general case, while Algorithm 3 implements the oracle under the assumption that the hessian of \(\nabla^{2}h\) is bounded, and \(\phi\) is chosen accordingly.
The Sampling Mechanism
In this section, we will discuss the properties of the sampling mechanism that will be used throughout the rest of this work. We introduce Assumption 4, which will not be mandatory, but will greatly improve the performance of our sampling mechanism. Next, we define the component-wise error of the matrix estimator, which will be used extensively throughout the remaining sections. Afterwards, we prove that the sum of the squared norms of the estimator errors is finite, a fact that will be used in Section 7 to prove the convergence of Algorithm 1. Finally, we will see that the lowest singular value of the matrix estimator and the lowest singular value of the real matrix are within a multiplicative error of each other almost surely given a sufficient amount of samples; this fact will be used in Section 5 and Section 6 to prove that the algorithms therein implement the meta algorithm Algorithm 1.
We begin with the definition and properties of Sub-Gaussian random variables; All the definitions, properties and theorems cited in this context are taken from Vershynin [26, Chapters 2.5-2.6]. Theorem 4.1, presented below, establishes equivalence between the five properties. A random variable matching any of these properties is called Sub-Gaussian. Since the properties are equivalent, the choice of the property for the definition of Sub-Gaussian random variables is arbitrary. Out of mathematical convenience, we will use the fourth property for the actual definition.
**Theorem 4.1** (Vershynin [26, Proposition 2.5.2]).: _Let \(X\) be a random variable. Then the following properties are equivalent; the positive parameters \(\{K_{i}\}_{i\in[5]}\) appearing in these properties differ from each other by at most an absolute constant factor._
1. _The tails of_ \(X\) _satisfy_ \[Prob\left(|X|\geq t\right)\leq\exp\left(-\frac{t^{2}}{K_{1}}\right)\,\forall t \geq 0.\]
2. _The moments of_ \(X\) _satisfy_ \[\|X\|_{\mathcal{L}_{p}}=\left(\mathbb{E}\left[|X|^{p}\right]\right)^{1/p}\leq K _{2}\sqrt{p}\ \ \forall p\geq 1.\]
3. _The Moment Generating Function (MGF) of_ \(X^{2}\) _satisfies_ \[\mathbb{E}\left[\exp\left(\lambda^{2}X^{2}\right)\right]\leq\exp\left(K_{3}^{2 }\lambda^{2}\right)\ \ \forall\lambda\ \ s.t.\ \left|\lambda\right|\leq\frac{1}{K_{3}}.\]
4. _The MGF of_ \(X^{2}\) _is bounded at some point, namely_ \[\mathbb{E}\left[\exp\left(\frac{X^{2}}{K_{4}}\right)\right]\leq 2.\]
_Moreover, if \(\mathbb{E}[X]=0\), then properties \(1-4\) are also equivalent to the following one._
1. _The MGF of_ \(X\) _satisfies_ \[\mathbb{E}\left[\exp\left(\lambda X\right)\right]\leq\exp\left(K_{5}^{2}\lambda^ {2}\right)\ \forall\lambda\in\mathbb{R}.\]
For mathematical convenience, we use the fourth characterization in Theorem 4.1 to define a Sub-Gaussian random variable.
**Definition 4.1** (Sub-Gaussian random variable).: Let \(X\) be a random variable. If there exists \(K>0\) such that the MGF of \(X\) satisfies
\[\mathbb{E}[\exp X^{2}/K]\leq 2.\]
Then \(X\) is called a Sub-Gaussian random variable, and its norm is given by
\[\|X\|_{\psi_{2}}=\inf\{t>0\ :\ \mathbb{E}[\exp X^{2}/t^{2}]\leq 2\},\]
where the notation \(\|\cdot\|_{\psi_{2}}\) refers to the fact that it is an Orlicz norm for the function
\[\psi_{2}(u)=\exp\{u^{2}\}-1;\]
see Vershynin [26, Section 2.7.1] for additional details.
Useful properties of Sub-Gaussian RVs are listed in Remark 4.1.
**Remark 4.1** (Sub-Gaussians useful properties).: The following properties holds true:
1. If \(Prob\left(X^{2}>0\right)>0\), then \(\|X\|_{\psi_{2}}>0\). On the other hand, if \(X=0\) almost surely, then \(\|X\|_{\psi_{2}}=0\).
2. For a random variable \(X\), \(\|X\|_{\psi_{2}}<\infty\) if and only if \(X\) is Sub-Gaussian.
3. If a random variable \(Y\) is bounded almost surely, i.e., there exists \(a\in\mathbb{R}_{+}\) such that \(Prob\left(|Y|\leq a\right)=1\), then there exists \(K>0\) such that \[\mathbb{E}[\exp Y^{2}/K]\leq 2.\] Therefore, it follows that \(Y\) is Sub-Gaussian.
4. If \(X\) is a Sub-Gaussian random variable, then \(X-\mathbb{E}[X]\) is also Sub-Gaussian.
With the definitions of Sub Gaussian random variables at hand, we can now state the optional assumption Assumption 4. While this assumption is not mandatory, it allows for a significant improvement in our sampling mechanism.
**Assumption 4**.: _For any \((i,j)\in[n]\) and any \(t\geq 0\), the random variable \(M_{i,j}^{t}\) sampled by Algorithm 1 is Sub-Gaussian._
It should be emphasized that Assumption 4 is very general. Any random variable whose tail distribution can be bounded by a Gaussian random variable (up to a constant) is Sub-Gaussian. This includes all bounded random variables, such as Bernoulli and Rademacher distributed random variables, and the Gaussian distribution.
Our analysis and results will be mostly given in terms of the _component-wise deviation matrix_ given by the difference between the estimator and the expected value of its corresponding random matrix variable. We call this matrix the _error of the matrix estimator_, and denote it by
\[\delta(\bar{M},M):=\bar{M}-\mathbb{E}[M], \tag{4.1}\]
where \(\bar{M}\) is the estimator of the random variable matrix \(M\). The definition in (4.1) will almost always be given with respect to a sequence sampled by Algorithm 1, and therefore, we will use the abbreviation
\[\delta_{t}:=\delta(\bar{M}^{t},M)=\bar{M}^{t}-\mathbb{E}[M], \tag{4.2}\]
where we assume that the couple \((\bar{M}^{t},M)\) is known from context; note that \(\delta_{t}\) is a random variable until realized.
To establish the essential properties of the random variable \(\delta_{t}\), we will utilize the notion of _infinitely often (i.o.)_ occurring event, which is defined below.
**Definition 4.2** (infinitely often (i.o.)).: Let \(A_{1},A_{2},...\) be a sequence of events in some probability space. Then the probability that infinitely many of them occur is denoted by
\[Prob\left(A_{n}\;i.o.\right)\equiv Prob\left(\limsup_{n\to\infty}A_{n}\right) \equiv Prob\left(\bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}A_{k}\right).\]
We use \(i.o.\) occuring events in our analysis to prove that
\[Prob\left(\sum_{t=1}^{\infty}\|\delta_{t}\|^{2}<\infty\right)=1.\]
To that end, we will use the fact that for every \(\epsilon>0\)
\[\sum_{t=1}^{\infty}\frac{1}{t^{1+0.5\epsilon}}<\infty.\]
Choose an arbitrary \(\epsilon>0\). Suppose that there are at most a finite amount of indices \(t\) such that \(\|\delta_{t}\|>\frac{1}{t^{0.5+0.25\epsilon}}\). Then there exists a final index \(K\), such that for all \(t\geq K+1\)
\(\|\delta_{t}\|\leq\frac{1}{t^{0.5+0.25\cdot\epsilon}}\). Since the image of \(\|\delta_{t}\|\) is contained in \(\mathbb{R}\), it follows that \(\sum\limits_{t=1}^{K}\|\delta_{t}\|^{2}<\infty\).
Since for every \(t\geq K+1\), \(\|\delta_{t}\|\leq\frac{1}{t^{0.5+0.25\cdot\epsilon}}\), it follows that
\[\sum\limits_{t=1}^{\infty}\|\delta_{t}\|^{2}=\sum\limits_{t=1}^{K}\|\delta_{t }\|^{2}+\sum\limits_{t=K+1}^{\infty}\|\delta_{t}\|^{2}<\infty.\]
Consider the event \(A_{t}\equiv\|\delta_{t}\|>O\left(\frac{1}{t^{0.5+0.25\cdot\epsilon}}\right)\). We can deduce from our discussion above that
\[Prob\left(\|\delta_{t}\|>O\left(\frac{1}{t^{0.5+0.25\cdot\epsilon}}\right)\ i.o. \right)=0\Rightarrow Prob\left(\sum\limits_{t=1}^{\infty}\|\delta_{t}\|^{2}< \infty\right)=1.\]
This suggests that we should pick \(\{\theta_{t}\}_{t>0}\) so that
\[Prob\left(\|\delta_{t}\|>O\left(\frac{1}{t^{0.5+0.25\cdot\epsilon}}\right);i. o.\right)=0\]
holds true. To that end, we first discuss some well known results in probability.
The first result is the General Hoeffding's inequality.
**Theorem 4.2** (Vershynin [26, Theorem 2.6.2, General Hoeffding's Inequality]).: _Let \(X_{1},X_{2},\ldots,X_{m}\) be independent, zero mean, Sub-Gaussian random variables. Then, for every \(k\geq 0\), we have_
\[Prob\left(\left|\sum\limits_{i=1}^{m}X_{i}\right|>k\right)\leq 2\exp{\left(- \frac{ck^{2}}{\sum\nolimits_{i=1}^{m}\|X_{i}\|_{\psi_{2}}^{2}}\right)}.\]
For completeness, we also present the better known case of Hoeffding's inequality for bounded random variables.
**Theorem 4.3** (Boucheron et al. [9, Theorem 2.8, Hoeffding's Inequality]).: _Let \(X_{1},X_{2},\ldots\) be a sequence of independent random variables, such that for all \(i\geq 1\), \(\infty<a\leq X_{i}\leq b<\infty\) almost surely. Then for every \(k>0\)_
\[Prob\left(\left|\frac{1}{m}\sum\limits_{i=1}^{m}X_{i}-\mathbb{E}[X_{i}]\right| >k\right)\leq 2\exp{\left(-\frac{2mk^{2}}{(b-a)^{2}}\right)}.\]
As we discussed above, bounded random variables are Sub-Gaussian, so the General Hoeffding's inequality is indeed a generalization.
The second result is the Borel-Cantelli Theorem.
**Theorem 4.4** (Borovkov [7, Theorem 11.1.1, Borel-Cantelli Theorem]).: _Let \(\{A_{i}\}_{i=1}^{\infty}\) be an infinite sequence of events defined over a given probability space, and denote by \(A=\bigcap\limits_{n=1}^{\infty}\bigcup\limits_{k=n}^{\infty}A_{k}\) the event that infinitely many events of the sequence \(\{A_{i}\}_{i=1}^{\infty}\) occur. If_
\[\sum\limits_{i=1}^{\infty}Prob\left(A_{i}\right)<\infty,\]
_then_
\[Prob\left(A\right)=0.\]
As we stated previously, the sampling rate regime, expressed by the sequence \(\{\theta_{t}\}_{t\geq 0}\), is chosen to guarantee that
\[Prob\left(\|\delta_{t}\|>O\left(\frac{1}{t^{0.5+0.25\cdot\epsilon}}\right)\ i.o. \right)=0.\]
In the general case, \(\theta_{t}\) should be proportional to \(t^{2+\epsilon}\), where \(\epsilon>0\) is arbitrarily small - this is described by Lemma4.1. When the elements of the sampled matrix are Sub-Gaussian, we derive a looser sampling rate proportional to \(t^{1+\epsilon}\) (once again, \(\epsilon>0\) is arbitrarily small) - as stated in Lemma4.2. Both lemmas are based on the Borel-Cantelli Theorem. Lemma4.1 additionally uses Chebyshev's inequality, while Lemma4.2 uses Theorem4.2.
**Lemma 4.1**.: _Let \(\{M^{t}\}_{t>0}\) be a sequence of i.i.d random matrices with distribution \(M^{i}\sim\pi\) sampled by Algorithm1 with \(\epsilon>0\) and \(\theta_{t}=t^{2+\epsilon}\). Then,_
\[Prob\left(\|\delta_{t}\|>O\left(\frac{1}{t^{0.5+0.25\epsilon}}\right)\ i.o. \right)=0.\]
Proof.: Recall that
\[\delta_{t}=\bar{M}^{t}-\mathbb{E}[M], \tag{4.3}\]
meaning that \(\mathbb{E}[\delta_{t}]=0\).
Hence, by definition of the variance and the unbiasedness of the estimator,
\[Var[[\delta_{t}]_{i,j}]=\mathbb{E}[[\delta_{t}]_{i,j}^{2}]-\mathbb{E}[[\delta _{t}]_{i,j}]^{2}=\mathbb{E}[[\delta_{t}]_{i,j}^{2}].\]
Consequently, using (4.3) and the fact that \(\mathbb{E}[[\delta_{t}]_{i,j}]=0\), we have that
\[Var[[\delta_{t}]_{i,j}]=\mathbb{E}[[\delta_{t}]_{i,j}^{2}]=\mathbb{E}[(\bar{ M}_{i,j}^{t}-\mathbb{E}[M]_{i,j})^{2}].\]
Thus, by the definition of \(\bar{M}^{t}\)
\[Var[[\delta_{t}]_{i,j}]=\mathbb{E}[(\bar{M}_{i,j}^{t}-\mathbb{E}[M]_{i,j})^{2 }]=\mathbb{E}\left[\left(\frac{1}{\theta_{t}}\sum_{k=1}^{\theta_{t}}M_{i,j}^{k }-[\mathbb{E}[M]]_{i,j}\right)^{2}\right].\]
Expanding the right hand term
\[Var[[\delta_{t}]_{i,j}]=\mathbb{E}\left[\frac{1}{\theta_{t}^{2}}\left(\sum_{k =1}^{\theta_{t}}\left(M_{i,j}^{k}-[\mathbb{E}[M]]_{i,j}\right)^{2}+2\sum_{k=1 }^{\theta_{t}}\sum_{l=1}^{k-1}\left(M_{i,j}^{k}-[\mathbb{E}[M]]_{i,j}\right) \cdot\left(M_{i,j}^{l}-[\mathbb{E}[M]]_{i,j}\right)\right)\right]. \tag{4.4}\]
By the assumption that the matrix sequence \(\{M^{t}\}_{t=1}^{\infty}\) is i.i.d, we have that for every \(k\neq l\)
\[\mathbb{E}\left[\left(M_{i,j}^{k}-[\mathbb{E}[M]]_{i,j}\right)\cdot\left(M_{i,j}^{l}-[\mathbb{E}[M]]_{i,j}\right)\right]=\mathbb{E}\left[M_{i,j}^{k}-[ \mathbb{E}[M]]_{i,j}\right]\cdot\mathbb{E}\left[M_{i,j}^{l}-[\mathbb{E}[M]]_{ i,j}\right].\]
Furthermore, noting that \(\mathbb{E}\left[M_{i,j}^{k}-[\mathbb{E}[M]]_{i,j}\right]=0\) for all \(k\), we can restate (4.4) as follows:
\[Var[[\delta_{t}]_{i,j}]=\mathbb{E}\left[\frac{1}{\theta_{t}^{2}}\sum_{k=1}^{ \theta_{t}}\left(M_{i,j}^{k}-[\mathbb{E}[M]]_{i,j}\right)^{2}\right]. \tag{4.5}\]
By the linearity of expectation, (4.5) can be reformulated as
\[Var[[\delta_{t}]_{i,j}]=\frac{1}{\theta_{t}^{2}}\sum_{k=1}^{\theta_{t}}\mathbb{ E}\left[\left(M_{i,j}^{k}-[\mathbb{E}[M]]_{i,j}\right)^{2}\right]. \tag{4.6}\]
By definition,
\[Var[M_{i,j}^{k}]=\mathbb{E}\left[\left(M_{i,j}^{k}-[\mathbb{E}[M]]_{i,j} \right)^{2}\right]. \tag{4.7}\]
Using (4.7), we can restate (4.6) as
\[Var[[\delta_{t}]_{i,j}]=\frac{1}{\theta_{t}^{2}}\sum_{k=1}^{\theta_{t}}Var[M_ {i,j}^{k}]. \tag{4.8}\]
The result in (4.8) can be further refined using the fact that the matrices \(\{M\}_{i=1}^{\infty}\) are sampled independently from the same distribution, and so for every \(k\geq 0\),
\[Var[M_{i,j}^{k}]=Var[M_{i,j}^{0}].\]
Denoting \(M=M^{0}\), we obtain
\[Var[[\delta_{t}]_{i,j}]=\frac{1}{\theta_{t}^{2}}\sum_{k=1}^{\theta_{t}}Var[M_ {i,j}]=\frac{1}{\theta_{t}}Var[M_{i,j}]. \tag{4.9}\]
Subsequently, using Chebyshev's inequality we have that
\[Prob(|[\delta_{t}]_{i,j}|>\eta)\leq\frac{Var[|[\delta_{t}]_{i,j}|]}{\eta^{2}} =\frac{Var[M_{i,j}]}{\theta_{t}\eta^{2}}.\]
Thus, choosing \(\theta_{t}=t^{2+\epsilon}\), \(\eta=\frac{1}{t^{0.5+0.25\epsilon}}\), and applying Chebyshev's inequality yields
\[Prob\left(|[\delta_{t}]_{i,j}|>\frac{1}{t^{0.5+0.25\epsilon}}\right)\leq \frac{Var[M_{i,j}]}{t^{1+0.5\epsilon}}. \tag{4.10}\]
Consider the _element-wise_ matrix norm \(\|\delta_{t}\|_{1,1}=\sum_{i=1}^{n}\sum_{j=1}^{n}|[\delta_{t}]_{i,j}|\). If \(\|\delta_{t}\|_{1,1}>\frac{n^{2}}{t^{0.5+0.25\epsilon}}\), then by the pigeonhole principle, at least one of the events \(|[\delta_{t}]_{i,j}|>\frac{1}{t^{0.5+0.25\epsilon}}\) occurred. Therefore,
\[Prob\left(\|\delta_{t}\|_{1,1}>\frac{n^{2}}{t^{0.5+0.25\epsilon}}\right)\leq Prob \left(\bigcup_{i,j}\left\{|[\delta_{t}]_{i,j}|>\frac{1}{t^{0.5+0.25\epsilon}} \right\}\right).\]
Consequently, using the union bound together with (4.10) implies that
\[Prob\left(\|\delta_{t}\|_{1,1}>\frac{n^{2}}{t^{0.5+0.25\epsilon}}\right)\leq \frac{1}{t^{1+0.5\epsilon}}\sum_{i=1}^{n}\sum_{j=1}^{n}Var[M_{i,j}].\]
By the equivalence between norms in \(\mathbb{R}^{n\times n}\), there exist \(a,A>0\), such that
\[a\cdot\|\delta_{t}\|_{1,1}\leq\|\delta_{t}\|_{2,2}\leq A\cdot\|\delta_{t}\|_{1,1},\]
where \(\|\delta_{t}\|_{2,2}\) is the induced \(\ell_{2}\) norm on the matrix \(\delta_{t}\); see Fabian et al. [14, Definition 1.30] for the definition of equivalence of norms, and Fabian et al. [14, Proposition 1.36] for the equivalence of norms in finite dimensional vector spaces. Since we use \(\|\delta_{t}\|=\|\delta_{t}\|_{2,2}\) by default (see the discussion of norm notation in Section 2.1), it follows that
\[Prob\left(\|\delta_{t}\|>\frac{a\cdot n^{2}}{t^{0.5+0.25\epsilon}}\right)\leq \frac{1}{t^{1+0.5\epsilon}}\sum_{i=1}^{n}\sum_{j=1}^{n}Var[M_{i,j}].\]
Finally, setting \(r=\sum_{i=1}^{n}\sum_{j=1}^{n}Var[M_{i,j}]\) and using the fact that \(\sum_{i=1}^{n}\sum_{j=1}^{n}Var[M_{i,j}]\) is finite (cf. Assumption 1), we conclude that
\[\sum_{t=1}^{\infty}Prob\left(\|\delta_{t}\|>\frac{a\cdot n^{2}}{t^{0.5+0.25 \epsilon}}\right)\leq\sum_{t=1}^{\infty}\frac{r}{t^{1+0.5\epsilon}}<\infty,\]
and subsequently, the lemma follows from Theorem 4.4. \(\Box\)
**Lemma 4.2**.: _Let \(\{M^{t}\}_{t>0}\) be a sequence of i.i.d random matrices with distribution \(M^{i}\sim\pi\) sampled during the execution of Algorithm 1 with \(\theta_{t}=t^{1+\epsilon}\) for some \(\epsilon>0\). Assume that for every \(i,j\in[n]\) and every \(t\geq 0\), \(M^{t}_{i,j}\) is Sub-Gaussian. Then,_
\[Prob\left(\|\delta_{t}\|>O\left(\frac{1}{t^{0.5+0.25\epsilon}}\right)\ i.o. \right)=0.\]
Proof.: Since \(M^{t}_{i,j}\) is Sub-Gaussian, \(M^{t}_{i,j}-\mathbb{E}[M]_{i,j}\) and \(\frac{1}{\theta_{t}}(M^{t}_{i,j}-\mathbb{E}[M]_{i,j})\) are also Sub-Gaussian (cf. Remark 4.1, Part 4). Recall that
\[\delta_{t}=\bar{M}^{t}-\mathbb{E}[M]=\sum_{l=1}^{\theta_{t}}\frac{1}{\theta_ {t}}(M^{l}-\mathbb{E}[M]),\]
where \(\bar{M}^{t}=\frac{1}{\theta_{t}}\sum\limits_{l=1}^{\theta_{t}}M^{l}\) is the estimator of \(M\). Hence,
\[Prob\left(|[\delta_{t}]_{i,j}|>\frac{1}{t^{0.5+0.25\epsilon}}\right) =Prob\left(|\bar{M}^{t}_{i,j}-\mathbb{E}[M]_{i,j}|>\frac{1}{t^{0.5 +0.25\epsilon}}\right)\] \[=Prob\left(\left|\sum_{l=1}^{\theta_{t}}\frac{1}{\theta_{t}}(M^{ l}_{i,j}-\mathbb{E}[M]_{i,j})\right|>\frac{1}{t^{0.5+0.25\epsilon}}\right).\]
If \(Prob\left([\delta_{t}]_{i,j}\neq 0\right)=0\), then the inequality
\[Prob\left(|[\delta_{t}]_{i,j}|>\frac{1}{t^{0.5+0.25\epsilon}}\right)\leq 2 \exp\left(-C\cdot t^{0.5\epsilon}\right) \tag{4.11}\]
holds trivially for any \(C>0\).
Otherwise, if \(Prob\left([\delta_{t}]_{i,j}\neq 0\right)>0\) then by Remark 4.1\(\|\frac{1}{\theta_{t}}(M_{i,j}^{l}-\mathbb{E}[M]_{i,j})\|_{\psi_{2}}>0\), and we have by the General Hoeffding's Inequality (cf. Theorem 4.2) with \(m=\theta_{t}\) and \(k=\frac{1}{t^{0.5+0.25\epsilon}}\) that
\[Prob\left(|[\delta_{t}]_{i,j}|>\frac{1}{t^{0.5+0.25\epsilon}}\right) =Prob\left(\left|\sum_{l=1}^{\theta_{t}}\frac{1}{\theta_{t}}(M_{ i,j}^{l}-\mathbb{E}[M]_{i,j})\right|>\frac{1}{t^{0.5+0.25\epsilon}}\right)\] \[\leq 2\exp\left(-\frac{c\cdot\frac{1}{t^{1+0.5\epsilon}}}{\sum_{l =1}^{\theta_{t}}\|\frac{1}{\theta_{t}}(M_{i,j}^{l}-\mathbb{E}[M]_{i,j})\|_{ \psi_{2}}^{2}}\right).\]
Using the fact that \(\{M^{l}\}_{l\geq 0}\) are i.i.d and the norm property \(\|\lambda\xi\|=|\lambda|\|\xi\|\), it follows that
\[Prob\left(|[\delta_{t}]_{i,j}|>\frac{1}{t^{0.5+0.25\epsilon}}\right) \leq 2\exp\left(-\frac{c\cdot\frac{1}{t^{1+0.5\epsilon}}}{\sum_{l =1}^{\theta_{t}}\|\frac{1}{\theta_{t}}(M_{i,j}^{l}-\mathbb{E}[M]_{i,j})\|_{ \psi_{2}}^{2}}\right)\] \[=2\exp\left(-\frac{c\cdot\frac{1}{t^{1+0.5\epsilon}}}{\theta_{t} \cdot\frac{1}{\theta_{t}^{2}}\|M_{i,j}^{0}-\mathbb{E}[M]_{i,j}\|_{\psi_{2}}^{ 2}}\right).\]
Denoting
\[C=\frac{c}{\|M_{i,j}^{0}-\mathbb{E}[M]_{i,j}\|_{\psi_{2}}^{2}}\]
and rearranging the elements we obtain that
\[Prob\left(|[\delta_{t}]_{i,j}|>\frac{1}{t^{0.5+0.25\epsilon}}\right)\leq 2\exp \left(-C\cdot\frac{\theta_{t}}{t^{1+0.5\epsilon}}\right).\]
By the choice of \(\theta_{t}\) we then have that
\[Prob\left(|[\delta_{t}]_{i,j}|>\frac{1}{t^{0.5+0.25\epsilon}}\right)\leq 2\exp \left(-C\cdot t^{0.5\epsilon}\right). \tag{4.12}\]
Thus, whether \(Prob\left([\delta_{t}]_{i,j}\neq 0\right)=0\) or \(Prob\left([\delta_{t}]_{i,j}\neq 0\right)>0\), there exists a constant \(C>0\), such that
\[Prob\left(\left|\bar{M}_{i,j}^{t}-\mathbb{E}[M]_{i,j}\right|>\frac{1}{t^{0.5+ 0.25\epsilon}}\right)\leq 2\exp\left(-C\cdot t^{0.5\epsilon}\right). \tag{4.13}\]
Consider the entry-wise \(\ell_{1}\) norm of \(\delta_{t}=\bar{M}^{t}-\mathbb{E}[M]\),
\[\|\delta_{t}\|_{1,1}=\sum_{i=1}^{n}\sum_{j=1}^{n}\left|\bar{M}_{i,j}^{t}- \mathbb{E}[M]_{i,j}\right|.\]
Note that for the event
\[\|\delta_{t}\|_{1,1}>\frac{n^{2}}{t^{0.5+0.25\cdot\epsilon}}\]
to occur, by the pigeonhole principle there has to be at least one pair of indices \(i,j\) such that \(\left|\bar{M}^{t}_{i,j}-\mathbb{E}[M]_{i,j}\right|>\frac{1}{t^{0.5+0.25\cdot \epsilon}}\). Therefore,
\[Prob\left(\|\delta_{t}\|_{1,1}>\frac{n^{2}}{t^{0.5+0.25\cdot\epsilon}}\right) \leq Prob\left(\bigcup_{i,j}\left\{|\bar{M}^{t}_{i,j}-\mathbb{E}[M]_{i,j}|> \frac{1}{t^{0.5+0.25\cdot\epsilon}}\right\}\right).\]
Using the union bound, it follows that
\[Prob\left(\|\delta_{t}\|_{1,1}>\frac{n^{2}}{t^{0.5+0.25\cdot\epsilon}}\right) \leq 2n^{2}\exp\left(-Ct^{0.5\epsilon}\right).\]
As stated in Lemma 4.1, by the equivalence between norms in \(\mathbb{R}^{n\times n}\), there exist \(a,A>0\), such that
\[a\cdot\|\delta_{t}\|_{1,1}\leq\|\delta_{t}\|_{2,2}\leq A\cdot\|\delta_{t}\|_{ 1,1}.\]
Since we use \(\|\delta_{t}\|=\|\delta_{t}\|_{2,2}\) by default (see the discussion of norm notation in Section 2.1), it follows that
\[Prob\left(\|\delta_{t}\|>\frac{a\cdot n^{2}}{t^{0.5+0.25\cdot\epsilon}}\right) \leq 2n^{2}\exp\left(-Ct^{0.5\epsilon}\right).\]
Summing over \(t\)
\[\sum_{t=1}^{\infty}Prob\left(\|\delta_{t}\|>\frac{a\cdot n^{2}}{t^{0.5+0.25 \cdot\epsilon}}\right)\leq\sum_{t=1}^{\infty}2n^{2}\cdot\exp\left(-Ct^{0.5 \epsilon}\right)<\infty,\]
and the result follows from Theorem 4.4. \(\Box\)
Note that in both lemmas, \(\epsilon\) can be chosen to be arbitrarily small. If the sampling mechanism generates matrices \(\{M^{i}\}_{i=1}^{\infty}\) of a Sub-Gaussian distribution as stated in the conditions of \(Lemma\)4.2, we can choose \(\theta_{t}=t^{1+\epsilon}\) so that \(\theta_{t}\) is arbitrarily close to being linear in \(t\).
The next claim states a technical result on the convergence of \(\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\) to \(\lambda_{min}(\mathbb{E}[M]^{T}\mathbb{E}[M])\). It will be used in Section 5 and Section 6.
**Lemma 4.3**.: _Let \(\{\bar{M}^{t}\}_{t>0}\) be a sequence of unbiased estimators of \(M\), such that \(\bar{M}^{t}\xrightarrow{t\to\infty}\mathbb{E}[M]\) almost surely. For any \(\epsilon\in(0,1)\), with probability 1, there exists \(K>0\) such that_
\[(1-\epsilon)\sigma\leq\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq(1+ \epsilon)\sigma,\qquad\forall k>K,\]
_and_
\[(1-\epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq\sigma\leq(1+ \epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k}),\qquad\forall k>K,\]
_where \(\sigma=\lambda_{min}(\mathbb{E}[M]^{T}\mathbb{E}[M])\), as defined in Assumption 2._
Proof.: We use the following identity for the smallest eigenvalue of \(A^{T}A\), where \(A\) is some arbitrary matrix:
\[\lambda_{min}(A^{T}A)=\min_{\|v\|=1}v^{T}A^{T}Av=\min_{\|v\|=1}\|Av\|^{2}.\]
Define the vector sequence \(\{u_{k}\}_{k=1}^{\infty}\) and the vector \(w\) as
\[u_{k}\in\operatorname*{argmin}_{\|v\|=1}\|\bar{M}^{k}v\|^{2} \tag{4.14}\]
and
\[w\in\operatorname*{argmin}_{\|v\|=1}\|\mathbb{E}[M]v\|^{2}. \tag{4.15}\]
By our choice of \(u_{k}\) in (4.14),
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}=\|\bar{M}^{k}u_{k}\|.\]
Applying the facts that \(u_{k}\) is the minimizer, and \(w\) is a feasible solution to (4.14), we obtain that
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}=\|\bar{M}^{k}u_{k}\|\leq\| \bar{M}^{k}w\|. \tag{4.16}\]
Next, we bound \(\|\bar{M}^{k}w\|\) in terms of \(\sigma\) and the matrix estimation error. Recall that by (4.1)
\[\delta(\bar{M}^{t},M):=\bar{M}^{t}-\mathbb{E}[M].\]
By the triangle inequality, \(\|a+b\|\geq\|a\|-\|b\|\), and \(\|a+b\|\leq\|a\|+\|b\|\). Therefore, using the estimation error \(\delta(\bar{M}^{k},M)\) for every \(q\in\mathbb{R}^{n}\)
\[\|\bar{M}^{k}q\|=\|\mathbb{E}[M]q+\delta(\bar{M}^{k},M)q\|\leq\|\mathbb{E}[M] q\|+\|\delta(\bar{M}^{k},M)q\|, \tag{4.17}\]
and
\[\|\bar{M}^{k}q\|=\|\mathbb{E}[M]q+\delta(\bar{M}^{k},M)q\|\geq\|\mathbb{E}[M] q\|-\|\delta(\bar{M}^{k},M)q\|. \tag{4.18}\]
Using \(\bar{M}^{t}=\mathbb{E}[M]+\delta(\bar{M}^{t},M)\), in addition to the inequalities we've established in (4.16), (4.17), we have
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}\leq\|\bar{M}^{k}w\|=\| \mathbb{E}[M]w+\delta(\bar{M}^{t},M)w\|\leq\|\mathbb{E}[M]w\|+\|\delta(\bar{M }^{t},M)w\|.\]
By our choice of \(w\) in (4.18), it holds that
\[\|\mathbb{E}[M]w\|=\sqrt{\sigma}.\]
Hence,
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}\leq\sqrt{\sigma}+\|\delta (\bar{M}^{t},M)w\|.\]
Next, we bound the term \(\|\delta(\bar{M}^{t},M)w\|\). Using the induced norm inequality,
\[\|\delta(\bar{M}^{t},M)w\|_{2}\leq\|\delta(\bar{M}^{t},M)\|_{2,2}\|w\|_{2}.\]
Because \(\|w\|_{2}=1\), it follows that
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}\leq\sqrt{\sigma}+\|\delta( \bar{M}^{t},M)\|_{2,2}.\]
To bound \(\|\delta(\bar{M}^{t},M)\|_{2,2}\), note that since \(\bar{M}^{t}\xrightarrow{t\to\infty}\mathbb{E}[M]\) almost surely, there exists with probability \(1\) an iteration \(K_{1}>0\), such that
\[\|\delta(\bar{M}^{k},M)\|_{2,2}\leq(\sqrt{1+\epsilon}-1)\sqrt{\sigma}\qquad \forall k>K_{1}.\]
Therefore, for all sufficiently large \(k\),
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}\leq\sqrt{\sigma}+\|\delta( \bar{M}^{t},M)\|_{2,2}\leq\sqrt{(1+\epsilon)\sigma}.\]
Since both sides of the inequality are non-negative, we can square them, and get
\[\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq(1+\epsilon)\sigma. \tag{4.19}\]
For the other direction, we once again use the fact that by (4.14)
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}=\|\bar{M}^{k}u_{k}\|. \tag{4.20}\]
Utilizing the equality \(\bar{M}^{t}=\mathbb{E}[M]+\delta(\bar{M}^{t},M)\) and the inequalities established at (4.20), (4.18), we derive
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}\geq\|\mathbb{E}[M]u_{k}\|- \|\delta(\bar{M}^{k},M)u_{k}\|.\]
Since \(u_{k}\) is a feasible solution to (4.15), and \(w\) is the minimizer,
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}\geq\|\mathbb{E}[M]u_{k}\|- \|\delta(\bar{M}^{k},M)u_{k}\|\geq\|\mathbb{E}[M]w\|-\|\delta(\bar{M}^{k},M)u_ {k}\|.\]
Utilizing the fact that by (4.15), \(\|\mathbb{E}[M]w\|=\sqrt{\sigma}\),
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}\geq\sqrt{\sigma}-\|\delta (\bar{M}^{k},M)u_{k}\|.\]
To further refine the lower bound of \(\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}\), we use the induced norm inequality, and the fact that \(\|u_{k}\|=1\), to derive that
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}\geq\sqrt{\sigma}-\|\delta (\bar{M}^{k},M)u_{k}\|\geq\sqrt{\sigma}-\|\delta(\bar{M}^{k},M)\|_{2,2}\|u_{k} \|_{2}=\sqrt{\sigma}-\|\delta(\bar{M}^{k},M)\|_{2,2}. \tag{4.21}\]
Since \(\bar{M}^{t}\xrightarrow{t\to\infty}\mathbb{E}[M]\) almost surely, there exists with probability \(1\) an iteration \(K_{2}>0\), such that
\[\|\delta(\bar{M}^{k},M)\|_{2,2}\leq(1-\sqrt{1-\epsilon})\sqrt{\sigma}\qquad \forall k>K_{2}.\]
Applying this bound to (4.21), for all sufficiently large \(k\),
\[\sqrt{\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})}\geq\sqrt{\sigma}-\|\delta (\bar{M}^{k},M)\|_{2,2}\geq\sqrt{(1-\epsilon)\sigma},\]
and therefore
\[\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\geq(1-\epsilon)\sigma.\]
Combining with (4.19), we conclude that for all \(k>\max\{K_{1},K_{2}\}\),
\[(1-\epsilon)\sigma\leq\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq(1+ \epsilon)\sigma, \tag{4.22}\]
which concludes the first part of the proof.
To see that
\[(1-\epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq\sigma\leq(1+ \epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k}),\]
for all sufficiently large \(k\), we choose \(0<\epsilon^{\prime}<1\) such that \(\frac{1}{1+\epsilon^{\prime}}>1-\epsilon\), and \(\frac{1}{1-\epsilon^{\prime}}<1+\epsilon\). By the first part of the proof, there exists \(K_{3}>0\) such that
\[(1-\epsilon^{\prime})\sigma\leq\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k}) \leq(1+\epsilon^{\prime})\sigma\qquad\forall k>K_{3}.\]
Combining with our choice of \(\epsilon^{\prime}\),
\[(1-\epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq\frac{1}{1+ \epsilon^{\prime}}\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq\sigma,\]
and
\[\sigma\leq\frac{1}{1-\epsilon^{\prime}}\lambda_{min}((\bar{M}^{k})^{T}\bar{M} ^{k})\leq(1+\epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k}).\]
We conclude that
\[(1-\epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq\sigma\leq(1+ \epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k}).\]
Combining with (4.22), we conclude that for all \(k>\max\{K_{1},K_{2},K_{3}\}\),
\[(1-\epsilon)\sigma\leq\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq(1+ \epsilon)\sigma\]
and
\[(1-\epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq\sigma\leq(1+ \epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\]
hold true, which concludes the proof. \(\Box\)
From here onwards, we will assume that \(\{\theta_{t}\}_{t\geq 0}\) is chosen according to the sampling regime that is defined below:
**Definition 4.3** (Sampling Regime).: Let \(\epsilon>0\) be an arbitrary positive constant. If Assumption 4 holds, we choose \(\theta_{t}=t^{1+\epsilon}\). Otherwise, we choose \(\theta_{t}=t^{2+\epsilon}\).
**Remark 4.2**.: The sampling regime can be changed by a constant nonnegative factor, that is, we may choose \(\theta_{t}=at^{1+\epsilon}\) or \(\theta_{t}=at^{2+\epsilon}\) depending on Assumption 4, where \(a>0\). For simplicity, we assume that \(a=1\).
To summarize the results in this section, we conclude that given \(\theta_{t}\) samples gathered until the \(t\)-th iteration, that is, at round \(t+1\) we sample \(\theta_{t+1}-\theta_{t}\) times,
\[Prob\left(\|\delta_{t}\|>O\left(\frac{1}{t^{0.5+0.25\epsilon}}\right)\;i.o \right)=0,\]
and therefore
\[Prob\left(\sum_{t=1}^{\infty}\|\delta_{t}\|^{2}<\infty\right)=1.\]
Furthermore, for every \(\epsilon>0\), there exists almost surely an iteration \(K>0\), such that for any \(k>K\) it holds that
\[(1-\epsilon)\sigma\leq\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq(1+ \epsilon)\sigma\]
and
\[(1-\epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\leq\sigma\leq(1+ \epsilon)\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k}).\]
## 5 ISAD for Problems with Bounded Hessian
The meta-algorithm we introduced in Algorithm 1 contains two components that can be chosen by the implementor - an APO with the requirements in Definition 3.1 and a convex function \(\phi\). In this section we show that for functions with a bounded hessian (cf. Assumption 5) we can use a simple pair of APO and function \(\phi\). The APO and our choice of \(\phi\) are described by Algorithm 3 and Equation (5.1).
Accordingly, in this section we make the following additional assumption.
**Assumption 5**.: _For every \(x\in\mathbb{R}^{n}\), it holds that_
\[-\gamma I\preceq\nabla^{2}h(x)\preceq\gamma I.\]
Assumption 5 is satisfied by a broad class of objective functions, such as the prototypical examples we list below.
1. Quadratic functions, \[h(x)=x^{t}Qx+b^{T}x+c,\] where \(Q\in\mathbb{R}^{n\times n}\), \(b\in\mathbb{R}^{n}\) and \(c\in\mathbb{R}\).
2. The distance function (see e.g., Beck [2, Example 5.5]) \[h(x)=\frac{1}{2}\cdot d_{C}(x)^{2},\] where \(C\) is a compact set, and \(d:\mathbb{R}^{n}\to\mathbb{R}_{+}\) is the Euclidean distance of the input vector from \(C\) defined as \[d_{C}(x)=\left\|x-\operatorname*{argmin}_{y\in C}\left\{\|y-x\|\right\}\right\|.\]
3. A binary logistic regression objective function. Given \(n\) tuples \(\{(y_{i},x_{i})\}_{i=1}^{n}\), where \(y_{i}\) is an independent binary observation, and \(x_{i}\) is the observation's feature vector. The goal is to maximize the likelihood of the observations, which can be cast as minimizing the minus log likelihood (see Boyd et al. (2010, Chapter 7)) given by \[l(\theta)=-\sum_{i=1}^{n}y_{i}\ln\left(\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp\{x _{i}^{T}\theta\}}\right)+(1-y_{i})\ln\left(\frac{1}{1+\exp\{x_{i}^{T}\theta\}} \right).\] We show in Lemma A.1, placed in the appendix, that the second derivative of \(l(\theta)\) is bounded with respect to \(\theta\).
We also assume that the meta-algorithm (cf. Algorithm 1) is implemented with the function
\[\phi(x)=\frac{\gamma}{2}\|x\|^{2}-h(x), \tag{5.1}\]
and with the APO given in Algorithm 3.
**Remark 5.1**.: Our choice for \(\phi\), given in (5.1), is convex, because
\[0\preceq\nabla^{2}\phi(x)=\gamma I-\nabla^{2}h(x)\preceq 2\gamma I.\]
The following lemma discusses the properties of the \(x\) update given that Assumption 5 holds true and \(\phi\) is chosen as in (5.1). The lemma will not affect the remainder of the analysis. Rather, it would provide motivation for the analysis of the simplified case, where Assumption 5 holds true.
**Lemma 5.1**.: _Suppose that Assumption 5 holds true, and that \(\phi\) is chosen as in (5.1). Then,_
1. \(g^{t+1}\) _is_ \(\frac{\gamma}{2}\)_-strongly convex and_ \(\frac{\gamma+\beta_{t}\lambda_{max}((\bar{M}^{t+1})^{T}\bar{M}^{t+1})}{2}\) _smooth._
2. _The_ \(x\) _update is equivalent to_ \[x_{t+1}=\operatorname*{argmin}_{x}g^{t+1}(x).\]
3. _The closed-form solution for the_ \(x\) _update is given by_ \[x_{t+1}=\left(\beta_{t+1}\left(\bar{M}^{t+1}\right)^{T}\bar{M}^{t+1}+\gamma I \right)^{-1}\left(\left(\bar{M}^{t}\right)^{T}z_{t}+\beta_{t+1}\left(\bar{M}^ {t+1}\right)^{T}y_{t+1}+\gamma x_{t}-\nabla h\left(x_{t}\right)\right).\]
Proof.: First, recall that the \(x\) update as defined in Algorithm 1 is
\[x_{t+1}\in\operatorname*{Crit}_{x}\{g^{t+1}(x)=h(x)-\langle z_{t},\bar{M}^{t} x\rangle+\frac{\beta_{t}}{2}\|\bar{M}^{t+1}x-y_{t+1}\|^{2}+D_{\phi}(x,x_{t})\}.\]
To prove all three parts, we show that \(g^{t+1}\) is a quadratic function
\[g^{t+1}(x)=\frac{1}{2}\langle x,Ax\rangle+\langle b,x\rangle+c, \tag{5.2}\]
such that \(A\) is positive definite.
We start by restating \(D_{\phi}(x,x_{t})\) for our particular choice of \(\phi\)
\[D_{\phi}(x,x_{t}) =\frac{\gamma}{2}\|x\|^{2}-h(x)-\frac{\gamma}{2}\|x_{t}\|^{2}+h(x_{ t})-\left\langle\gamma x_{t}-\nabla h(x_{t}),x-x_{t}\right\rangle\] \[=\frac{\gamma}{2}\|x\|^{2}-\gamma x^{T}x_{t}-\frac{\gamma}{2}\|x_ {t}\|^{2}+\gamma\|x_{t}\|^{2}-h(x)+h(x_{t})+\left\langle\nabla h(x_{t}),x-x_{ t}\right\rangle\] \[=\frac{\gamma}{2}\|x\|^{2}-\gamma x^{T}x_{t}+\frac{\gamma}{2}\|x_ {t}\|^{2}-h(x)+h(x_{t})+\left\langle\nabla h(x_{t}),x-x_{t}\right\rangle\] \[=\frac{\gamma}{2}\|x-x_{t}\|^{2}-h(x)+h(x_{t})+\left\langle \nabla h(x_{t}),x-x_{t}\right\rangle.\]
Applying to the definition of \(g^{t+1}(x)\),
\[g^{t+1}(x) =h(x)-\left\langle z_{t},\bar{M}^{t}x\right\rangle+\frac{\beta_{ t+1}}{2}\|\bar{M}^{t+1}x-y_{t+1}\|^{2}+D_{\phi}(x,x_{t})\] \[=h(x)-\left\langle z_{t},\bar{M}^{t}x\right\rangle+\frac{\beta_{ t}}{2}\|\bar{M}^{t+1}x-y_{t+1}\|^{2}+\frac{\gamma}{2}\|x-x_{t}\|^{2}-h(x)+h(x_{t}) +\left\langle\nabla h(x_{t}),x-x_{t}\right\rangle\] \[=-\langle z_{t},\bar{M}^{t}x\rangle+\frac{\beta_{t+1}}{2}\|\bar{ M}^{t+1}x-y_{t+1}\|^{2}+\frac{\gamma}{2}\|x-x_{t}\|^{2}+h(x_{t})+\left\langle \nabla h(x_{t}),x-x_{t}\right\rangle.\]
Rearranging,
\[g^{t+1}(x) =\frac{1}{2}\left\langle x,\left(\beta_{t+1}\left(\bar{M}^{t+1} \right)^{T}\bar{M}^{t+1}+\gamma I\right)x\right\rangle\] \[+\left\langle\nabla h(x_{t})-\beta_{t+1}\left(\bar{M}^{t+1} \right)^{T}y_{t+1}-\left(\bar{M}^{t+1}\right)^{T}z_{t}-\gamma x_{t},x\right\rangle\] \[+\frac{\beta_{t+1}}{2}\|y_{t+1}\|^{2}+\frac{\gamma}{2}\|x_{t}\|^ {2}+h(x_{t})-\left\langle\nabla h(x_{t}),x_{t}\right\rangle.\]
We denote \(A,b,c\) by
\[A=\beta_{t+1}\left(\bar{M}^{t+1}\right)^{T}\bar{M}^{t+1}+\gamma I,\]
\[b=\nabla h(x_{t})-\beta_{t+1}\left(\bar{M}^{t+1}\right)^{T}y_{t+1}-\left(\bar {M}^{t+1}\right)^{T}z_{t}-\gamma x_{t},\]
and
\[c=\frac{\beta_{t+1}}{2}\|y_{t+1}\|^{2}+\frac{\gamma}{2}\|x_{t}\|^{2}+h(x_{t}) -\left\langle\nabla h(x_{t}),x_{t}\right\rangle,\]
to derive the required form of \(g^{t+1}\) from (5.2). Note that \(A\) is indeed positive definite.
The first part follows from the fact that the
\[\gamma I\preceq\beta_{t+1}\left(\bar{M}^{t+1}\right)^{T}\bar{M}^{t+1}+\gamma I \preceq\left(\gamma+\beta_{t+1}\lambda_{max}\left(\left(\bar{M}^{t+1}\right)^ {T}\bar{M}^{t+1}\right)\right)I.\]
The second part of the lemma follows directly from the fact that \(g^{t+1}\) is a strongly convex function, and therefore its critical point set is a singleton.
The third of the proof follows from the closed form solution for minimization of strongly convex quadratic functions,
\[x^{*}=-A^{-1}b.\]
The results of Lemma 5.1 provide two methods to calculate the \(x\) updates. The first is to use a gradient based algorithms, which converge exponentially fast to the critical point, since \(g^{t+1}\) is both strongly convex and smooth (see Bubeck et al. (11, Theorem 3.10)). The second is to calculate the closed form solution directly, at the cost of inverting \(\beta_{t+1}\left(\bar{M}^{t+1}\right)^{T}\bar{M}^{t+1}+\gamma I\).
Algorithm 2 implements the meta algorthim (cf. Algorithm 1) for the case in which \(\nabla^{2}h(x)\) is bounded, and Algorithm 3 implements its corresponding APO.
The APO utilized in Algorithm 2 is denoted by \(\mathrm{APO_{B}}\) to mark that it is designated for functions with a bounded hessian.
```
Input:\(\mathbf{x}_{0}\in\mathbb{R}^{n}\), \(\mathbf{z}_{0}\in\mathbb{R}^{n}\), \(\beta_{0}>0,\{\theta_{t}\}_{t>0}\), \(\epsilon>0\)
1for\(t=0,1,2,\ldots\)do
2\(sample\)\((\theta_{t+1}-\theta_{t})\)\(matrices\)\(\{M^{i}\}_{i=\theta_{t}+1}^{\theta_{t+1}}\);
3\(\bar{M}^{t+1}=\frac{\theta_{t}}{\theta_{t+1}}\bar{M}^{t}+\frac{1}{\theta_{t+1 }}\sum\limits_{i=\theta_{t}+1}^{\theta_{t+1}}M^{i}\)
4\(y_{t+1}\in\underset{y}{\operatorname{argmin}}\{P(y)+\langle z_{t},y\rangle+ \frac{\beta_{t}}{2}\|\bar{M}^{t+1}x_{t}-y\|^{2}\}\) ;
5\(x_{t+1}\in\underset{x}{\operatorname{argmin}}\{-\langle z_{t},\bar{M}^{t}x \rangle+\frac{\beta_{t}}{2}\|\bar{M}^{t+1}x-y_{t+1}\|^{2}+\frac{\gamma}{2}\|x -x_{t}\|^{2}+\langle\nabla h(x_{t}),x-x_{t}\rangle\}\);
6\(z_{t+1}=z_{t}-\beta_{t}(\bar{M}^{t+1}x_{t+1}-y_{t+1})\);
7\(\beta_{t+1}\leftarrow\mathrm{APO_{B}}(\beta_{t},\bar{M}^{t+1},\gamma,\epsilon)\) ;
8
9 end for
```
**Algorithm 3**Bounded Hessian APO: \(\mathrm{APO_{B}}\)
Next, we prove that Algorithm 3 satisfies the requirements outlined in Definition 3.1, and therefore, that Algorithm 2 implements the meta algorithm Algorithm 1.
**Lemma 5.2**.: _Suppose that Assumption 5 holds true, and let \(\{x_{t},y_{t},z_{t}\}_{t\geq 0}\) be the sequence generated by Algorithm 2. Then,_
\[L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\leq 2\gamma\text{ and }L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0}) \leq\gamma,\]
_where \(L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\) and \(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})\) are the empirical Lipschitz constants of \(\nabla\phi\) and \(\nabla h+\nabla\phi\) with respect to the sequence \(\{x_{t}\}_{t\geq 0}\)._
Proof.: By the fact that
\[0\preceq\nabla^{2}\phi(x)=\gamma I-\nabla^{2}h(x)\preceq 2\gamma I,\]
The Lipschitzs constant of \(\nabla\phi\), denoted by \(L_{\nabla\phi}\), satisfies that \(L_{\nabla\phi}\leq 2\gamma\). Since
\[h(x)+\phi(x)=\frac{\gamma}{2}\|x\|^{2},\]
it follows that
\[\nabla^{2}(h+\phi)(x)=\gamma I,\]
and therefore
\[L_{\nabla h+\nabla\phi}\leq\gamma.\]
Thus, invoking Lemma 2.2, it follows that
\[L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\leq 2\gamma\text{ and }L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0}) \leq\gamma.\]
Using the previous lemma, we will prove that Algorithm 3 fulfills the APO requirements outlined in Definition 3.1.
**Theorem 5.1**.: _Suppose that Assumption 5 holds. Then, Algorithm 3 fulfills the APO requirements outlined in Definition 3.1._
Proof.: Our goal is to show that there exists an index \(K_{stable}\) such that
1. For all \(k>K_{stable}\), \(\beta_{k}=\beta_{K_{stable}}\);
2. Let \(\{x_{t}\}_{t\geq 0}\) be the sequence generated by Algorithm 3. There exists \(\rho>0\) such that for all \(k>K_{stable}\) \[g^{t+1}(x_{t+1})-g^{t+1}(x_{t})\leq-\frac{\rho}{2}\|x_{t+1}-x_{t}\|^{2},\] and \[\rho>\frac{8}{\beta_{k}\sigma}\left(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t \geq 0})+L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\right).\]
For the duration of this proof, we extend the notation in Algorithm 3 in the following manner: at the end of iteration \(k\), \(\tilde{\sigma}_{k}\) and \(\beta_{k}\) are the values of \(\tilde{\sigma}\) and \(\beta\) respectively; Note that \(\tilde{\sigma}_{k}=\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})\), and that by Assumption 2,
\[\lambda_{min}(\mathbb{E}[M]^{T}\mathbb{E}[M])=\sigma>0.\]
By Lemma 4.3, there exists with probability \(1\) an index \(J>0\), such that for all \(k>J\), with parameter value of \(0.5\), it holds that
\[0<0.5\sigma<\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})=\tilde{\sigma}_{k}.\]
Without loss of generality, we shall assume throughout the rest of the proof that \(J=0\), and therefore that the condition \(\tilde{\sigma}_{k}==0\) is always false.
From Lemma 4.3 we have that for every \(0<\epsilon^{\prime}<1\), there exists almost surely \(K_{1}>0\), such that for all \(k>K_{1}\):
\[(1-\epsilon^{\prime})\tilde{\sigma_{k}}\leq\sigma\leq(1+\epsilon^{\prime}) \tilde{\sigma_{k}}\]
and
\[(1-\epsilon^{\prime})\sigma\leq\tilde{\sigma}_{k}\leq(1+\epsilon^{\prime})\sigma.\]
Assume in contradiction that the number of updates of \(\beta\) is infinite, which translates, due to the update mechanism of Algorithm 3, to the assumption that there is an infinite number of iterations is which either \(\tilde{\sigma}\beta+\gamma\leq\dfrac{(1+0.5\epsilon)\cdot 24\gamma}{ \tilde{\sigma}\beta}\) or \(\tilde{\sigma}\beta+\gamma\geq\dfrac{(1+2\epsilon)\cdot 24\gamma}{ \tilde{\sigma}\beta}\).
Let \(\{t_{i}\}_{i\geq 0}\subseteq\{k\}_{k>K_{1}}\) be a sequence of indices at which \(\beta\) is updated after the \(K_{1}\) iteration. By the update rule of \(\beta\),
\[\tilde{\sigma}_{t_{i}}\beta_{t_{i}}+\gamma=\dfrac{(1+\epsilon)\cdot 24 \gamma}{\tilde{\sigma}_{t_{i}}\beta_{t_{i}}},\qquad\forall i\geq 0.\]
Consider the \(t_{i}+j\) iteration where \(i\geq 0\), and \(j\geq 1\) satisfies that
\[\beta_{t_{i}+j-1}=\beta_{t_{i}}; \tag{5.3}\]
note that \(j\) is well-defined since the condition \(\beta_{t_{i}+j-1}=\beta_{t_{i}}\) holds as a tautology for \(j=1\).
For this \(j\), by (5.3) and the fact that \(\beta\) is updated at the \(t_{i}\) iteration, \(\beta_{t_{i}+j-1}\) satisfies that
\[\tilde{\sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma=\dfrac{(1+\epsilon)\cdot 24 \gamma}{\tilde{\sigma}_{t_{i}}\beta_{t_{i}+j-1}}. \tag{5.4}\]
Since \(t_{i}>K_{1}\),
\[(1-\epsilon^{\prime})\sigma<\tilde{\sigma}_{t_{i}}<(1+\epsilon^{\prime})\sigma,\]
\[(1-\epsilon^{\prime})\sigma<\tilde{\sigma}_{t_{i}+j}<(1+\epsilon^{\prime})\sigma.\]
Therefore, it follows that
\[\frac{1-\epsilon^{\prime}}{1+\epsilon^{\prime}}<\frac{\tilde{\sigma}_{t_{i}}}{ \tilde{\sigma}_{t_{i}+j}}<\frac{1+\epsilon^{\prime}}{1-\epsilon^{\prime}}\]
and
\[\frac{1-\epsilon^{\prime}}{1+\epsilon^{\prime}}<\frac{\tilde{\sigma}_{t_{i}+j}} {\tilde{\sigma}_{t_{i}}}<\frac{1+\epsilon^{\prime}}{1-\epsilon^{\prime}}.\]
Subsequently, rewriting the leftmost side of the condition
\[\frac{(1+0.5\epsilon)\cdot 24\gamma}{\tilde{\sigma}\beta}<\tilde{\sigma} \beta+\gamma<\frac{(1+2\epsilon)\cdot 24\gamma}{\tilde{\sigma}\beta}\]
yields
\[\frac{(1+0.5\epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}+j} \beta_{t_{i}+j-1}} =\frac{\tilde{\sigma}_{t_{i}}}{\tilde{\sigma}_{t_{i}+j}}\cdot \frac{(1+0.5\epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}}\beta_{t_{i}+j-1}} \tag{5.5}\] \[<\frac{1+\epsilon^{\prime}}{1-\epsilon^{\prime}}\cdot\frac{(1+0.5\epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}}\beta_{t_{i}+j-1}}\] \[=\frac{1+\epsilon^{\prime}}{1-\epsilon^{\prime}}\cdot\frac{1+0. 5\epsilon}{1+\epsilon}\cdot\frac{(1+\epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}} \beta_{t_{i}+j-1}}\] \[=\frac{1+\epsilon^{\prime}}{1-\epsilon^{\prime}}\cdot\frac{1+0. 5\epsilon}{1+\epsilon}\cdot(\tilde{\sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma).\]
Additionally, by developing the expression \(\tilde{\sigma}_{t_{i}+j}\beta_{t_{i}+j-1}+\gamma\), we obtain that
\[\tilde{\sigma}_{t_{i}+j}\beta_{t_{i}+j-1}+\gamma =\frac{\tilde{\sigma}_{t_{i}+j}}{\tilde{\sigma}_{t_{i}}}\cdot \tilde{\sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma\] \[>\frac{1-\epsilon^{\prime}}{1+\epsilon^{\prime}}\cdot\tilde{ \sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma\] \[>\frac{1-\epsilon^{\prime}}{1+\epsilon^{\prime}}\cdot(\tilde{ \sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma)\]
and
\[\tilde{\sigma}_{t_{i}+j}\beta_{t_{i}+j-1}+\gamma =\frac{\tilde{\sigma}_{t_{i}+j}}{\tilde{\sigma}_{t_{i}}}\cdot \tilde{\sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma\] \[<\frac{1+\epsilon^{\prime}}{1-\epsilon^{\prime}}\cdot\tilde{ \sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma\] \[<\frac{1+\epsilon^{\prime}}{1-\epsilon^{\prime}}\cdot(\tilde{ \sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma).\]
Therefore,
\[\frac{1-\epsilon^{\prime}}{1+\epsilon^{\prime}}\cdot(\tilde{\sigma}_{t_{i}} \beta_{t_{i}+j-1}+\gamma)<\tilde{\sigma}_{t_{i}+j}\beta_{t_{i}+j-1}+\gamma< \frac{1+\epsilon^{\prime}}{1-\epsilon^{\prime}}\cdot(\tilde{\sigma}_{t_{i}} \beta_{t_{i}+j-1}+\gamma). \tag{5.6}\]
Rewriting the expression \(\frac{(1+2\epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}+j}\beta_{t_{i}+j-1}}\) in a similar manner, we have that
\[\begin{split}\frac{(1+2\epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}+j} \beta_{t_{i}+j-1}}&=\frac{\tilde{\sigma}_{t_{i}}}{\tilde{ \sigma}_{t_{i}+j}}\cdot\frac{(1+2\epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}} \beta_{t_{i}+j-1}}\\ &>\frac{1-\epsilon^{\prime}}{1+\epsilon^{\prime}}\cdot\frac{(1+2 \epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}}\beta_{t_{i}+j-1}}\\ &=\frac{1-\epsilon^{\prime}}{1+\epsilon^{\prime}}\cdot\frac{1+2 \epsilon}{1+\epsilon}\cdot\frac{(1+\epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}} \beta_{t_{i}+j-1}}\\ &=\frac{1-\epsilon^{\prime}}{1+\epsilon^{\prime}}\cdot\frac{1+2 \epsilon}{1+\epsilon}\cdot(\tilde{\sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma). \end{split} \tag{5.7}\]
Now, for a sufficiently small \(\epsilon^{\prime}\) the following relations hold true:
1. \(1<\frac{1+0.5\epsilon}{(1+\epsilon^{\prime})^{2}}<\frac{1+0.5 \epsilon}{1+\epsilon^{\prime}}\),
2. \(\frac{1+\epsilon^{\prime}}{1-\epsilon^{\prime}}(1+0.5 \epsilon)<\frac{1-\epsilon^{\prime}}{1+\epsilon^{\prime}}(1+\epsilon)\),
3. \(\frac{1+\epsilon^{\prime}}{1-\epsilon^{\prime}}(1+\epsilon)<\frac{1- \epsilon^{\prime}}{1+\epsilon^{\prime}}(1+2\epsilon)\).
Combining (5.5), (5.6), (5.7), and using the above relations, we obtain that
\[\begin{split}\frac{(1+0.5\epsilon)\cdot 24\gamma}{\tilde{ \sigma}_{t_{i}+j}\beta_{t_{i}+j-1}}&<\frac{1+\epsilon^{\prime}}{1 -\epsilon^{\prime}}\cdot\frac{1+0.5\epsilon}{1+\epsilon}\cdot(\tilde{\sigma}_{ t_{i}}\beta_{t_{i}+j-1}+\gamma)\\ &<\frac{1-\epsilon^{\prime}}{1+\epsilon^{\prime}}\cdot(\tilde{ \sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma)\\ &<\tilde{\sigma}_{t_{i}+j}\beta_{t_{i}+j-1}+\gamma\\ &<\frac{1+\epsilon^{\prime}}{1-\epsilon^{\prime}}\cdot(\tilde{ \sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma)\\ &<\frac{1-\epsilon^{\prime}}{1+\epsilon^{\prime}}\cdot\frac{1+2 \epsilon}{1+\epsilon}\cdot(\tilde{\sigma}_{t_{i}}\beta_{t_{i}+j-1}+\gamma)\\ &<\frac{(1+2\epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}+j} \beta_{t_{i}+j-1}},\end{split}\]
which boils down to
\[\frac{(1+0.5\epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}+j}\beta_{t_{i}+j-1}} <\tilde{\sigma}_{t_{i}+j}\beta_{t_{i}+j-1}+\gamma<\frac{(1+2 \epsilon)\cdot 24\gamma}{\tilde{\sigma}_{t_{i}+j}\beta_{t_{i}+j-1}}. \tag{5.8}\]
Hence, by the update criteria of \(\beta\), the penalty \(\beta\) does not update at the \(t_{i}+j\) iteration. In particular for \(i=0\), since \(j=1\) satisfies (5.3), we can deduce by induction that for all \(j\geq 1\) there is no \(\beta\) update in the \(t_{0}+j\) iteration. Therefore, \(\beta\) does not update after the \(t_{0}>K_{1}\) iteration thus contradicting the assumption that \(\beta\) is updated infinitely many times and proving the first part of the lemma.
Denote the index of the last \(\beta\) update, whose existence is established in the first part, by \(K_{2}\). To prove the second part, first recall that by Lemma 5.2 it holds that
\[L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})+L^{e}_{\nabla\phi}(\{x_{t}\}_{t \geq 0})\leq 3\gamma,\]
and that \(g^{t+1}\) is strongly convex, where we denote its strong convexity constant by \(\rho_{t+1}\). By the strong convexity inequality for \(g^{t+1}(x)\),
\[g^{t+1}(x_{t+1})-g^{t+1}(x_{t})\leq\nabla g^{t+1}(x_{t+1})^{T}(x_{t+1}-x_{t})- \frac{\rho_{t+1}}{2}\|x_{t+1}-x_{t}\|^{2}.\]
Since \(x_{t+1}\) minimizes \(g^{t+1}(x)\), \(\nabla g^{t+1}(x_{t+1})=0\), and hence
\[g^{t+1}(x_{t+1})-g^{t+1}(x_{t})\leq-\frac{\rho_{t+1}}{2}\|x_{t+1}-x_{t}\|^{2}.\]
Considering the above, it is sufficient to show that for \(K_{stable}=\max\{K_{1},K_{2}\}\),
\[\rho_{k}>\frac{24\gamma}{\sigma\beta_{k}}\qquad\forall k>K_{stable}.\]
By the definition of \(g^{k}\), we have that
\[\nabla^{2}g^{k}(x)=\beta_{k}(\bar{M}^{k})^{T}\bar{M}^{k}+\gamma I,\]
and therefore, \(g^{k}\) is \(\beta_{k}\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})+\gamma=\beta_{k}\tilde{ \sigma}_{k}+\gamma\) strongly convex.
Since \(K_{stable}\geq K_{1}\), for all \(k>K_{stable}\) we have that
\[(1-\epsilon^{\prime})\tilde{\sigma_{k}}\leq\sigma\leq(1+\epsilon^{\prime}) \tilde{\sigma_{k}}\]
and
\[(1-\epsilon^{\prime})\sigma\leq\tilde{\sigma}_{k}\leq(1+\epsilon^{\prime})\sigma.\]
Furthermore, let us choose again an \(\epsilon^{\prime}>0\) sufficiently small such that
\[1<\frac{1+0.5\epsilon}{1+\epsilon^{\prime}}.\]
For all \(k>K_{stable}\),
\[\frac{8}{\beta_{k}\sigma}\left(L^{\epsilon}_{\nabla h+\nabla\phi }(\{x_{t}\}_{t\geq 0})+L^{\epsilon}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\right) \leq\frac{24\gamma}{\sigma\beta_{k}}\] \[<\frac{(1+0.5\epsilon)\cdot 24\gamma}{(1+\epsilon^{\prime}) \sigma\beta_{k}}\] \[<\frac{(1+0.5\epsilon)\cdot 24\gamma}{\tilde{\sigma}_{k}\beta_{k}}\] \[<\tilde{\sigma}_{k}\beta_{k}+\gamma,\]
where the first inequality follows from \(L^{\epsilon}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})+L^{\epsilon}_{\nabla\phi} (\{x_{t}\}_{t\geq 0})\leq 3\gamma\), the second from \(1<\frac{1+0.5\epsilon}{1+\epsilon^{\prime}}\), the third from \(\tilde{\sigma}_{k}\leq(1+\epsilon^{\prime})\sigma\), and the fourth since the inequality in line 4 of Algorithm 3 holds for all \(k>K_{stable}\geq K_{2}\). Since \(\tilde{\sigma}_{k}\beta_{k}+\gamma\) is the strong convexity constant of \(g^{k}\), this concludes the proof. \(\Box\)
Algorithm Implementation For the General Case
Section 5 provides an explicit algorithmic infrastructure in which the meta-algorithm sub-procedure's requirements are satisfied, under the standard assumption that the hessian of the function \(h\) is bounded. In this section, we introduce a more universal implementation of the meta-algorithm, described in Algorithm 4, and its sub-procedures, that do not require the boundedness of the hessian. This generality comes at the price of the technical complexity of the schematics of the adaptive penalty oracle given in Algorithm 5, and the subsequent related analysis.
```
Input:\(\mathbf{x}_{0}\in\mathbb{R}^{n}\), \(\mathbf{z}_{0}\in\mathbb{R}^{n}\), \(\beta_{0}>0,\{\theta_{t}\}_{t>0}\), \(\epsilon>0\), \(\phi:\mathbb{R}^{n}\rightarrow\mathbb{R}\) where \(\phi\in C^{2}\) and convex.
1 Set \(\zeta_{0},\xi_{0}\gets 0\);
2for\(t=0,1,2,\ldots\)do
3\(sample\)\((\theta_{t+1}-\theta_{t})\)\(matrices\)\(\{M^{i}\}_{i=\theta_{t}+1}^{\theta_{t+1}}\);
4\(\bar{M}^{t+1}=\frac{\theta_{t}}{\theta_{t+1}}\bar{M}^{t}+\frac{1}{\theta_{t+1} }\sum\limits_{i=\theta_{t}+1}^{\theta_{t+1}}M^{i}\)
5\(y_{t+1}\in\underset{y}{\operatorname{argmin}}\{P(y)+\langle z_{t},y\rangle +\frac{\beta_{t}}{2}\|\bar{M}^{t+1}x_{t}-y\|^{2}\}\) ;
6\(x_{t+1}\in\operatorname{Crit}_{x}\{h(x)-\langle z_{t},\bar{M}^{t}x\rangle+ \frac{\beta_{t}}{2}\|\bar{M}^{t+1}x-y_{t+1}\|^{2}+D_{\phi}(x,x_{t})\}\);
7\(z_{t+1}=z_{t}-\beta_{t}(\bar{M}^{t+1}x_{t+1}-y_{t+1})\)
8\(\beta_{t+1},\zeta_{t+1},\xi_{t+1}\gets General\ APO(x_{t+1},x_{t},\beta_{t},\zeta_{t},\xi_{t}, \epsilon,\bar{M}^{t+1},\phi,h)\)
9 end for
```
**Algorithm 4**ISAD - General Case
**Remark 6.1** (a technical note on the \(\beta\) update).: Note that \(\beta\) is updated at \(t\geq 0\) only if
\[\tilde{\sigma}\neq 0\text{ and }\frac{1}{4}\rho_{t}\leq\frac{8(\zeta_{t+1}+\xi_{t+1 }+\epsilon)}{\beta\tilde{\sigma}},\]
and that if \(\beta\) was updated \(\kappa\) times, then \(\beta=2^{\kappa}\beta_{0}\) where \(\beta_{0}\) is the initial value of \(\beta\).
In Theorem 6.1 we establish that Algorithm 4, and its adaptive penalty oracle procedure Algorithm 5, satisfy the requirements outlined in Definition 3.1. The proof builds on the boundedness of the generated sequence (cf. Assumption 3) to guarantee that all the continuous elements, mainly the derivatives of \(h\) and \(\phi\), are bounded as-well, which implies that the corresponding empirical Lipschitz constants are bounded. From this point, the analysis takes a similar turn, with the necessary adjustments, to that taken in the proof of Theorem 5.1 in the previous section - showing that \(\beta\) is updated finitely many times. We move onward to formally state and prove Theorem 6.1.
**Theorem 6.1**.: _Let \(\{x_{t},y_{t},z_{t}\}_{t\geq 0}\) be the sequence generated by Algorithm 4, and let \(\{\beta_{t}\}_{t\geq 0}\) be the sequence of adaptive penalties generated. Then \(\{x_{t},y_{t},z_{t},\beta_{t}\}_{t\geq 0}\) fulfill Definition 3.1._
Proof.: Our goal is to show that there exists an index \(K_{stable}>0\) such that
1. For all \(k>K_{stable}\), \(\beta_{k}=\beta_{K_{stable}}\).
2. Let \(\{x_{t}\}_{t\geq 0}\) be the sequence generated by Algorithm 1. There exists \(\rho>0\) such that for all \(k>K_{stable}\) \[g^{t+1}(x_{t+1})-g^{t+1}(x_{t})\leq-\frac{\rho}{2}\|x_{t+1}-x_{t}\|^{2},\] and \[\rho>\frac{8}{\beta_{k}\sigma}\left(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t \geq 0})+L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\right).\]
As a starting point, first recall that Assumption 2 states that
\[\lambda_{min}(\mathbb{E}[M]^{T}\mathbb{E}[M])=\sigma>0,\]
by Assumption 3, there exists \(D>0\), such that \(\max\limits_{t\geq 1}\{\max\{\|x_{t}\|,\|y_{t}\|,\|z_{t}\|\}\}\leq D\), and that, by Lemma 4.3, there exists with probability \(1\) an index \(J>0\) such that,
\[0<0.5\sigma<\tilde{\sigma}=\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k}),\qquad \forall k>J.\]
Without loss of generality, we shall assume throughout the rest of the proof that \(J=0\), and therefore, that the condition \(\tilde{\sigma}_{k}==0\) appearing in Algorithm 5 is always false.
To show that that the number of \(\beta\) updates is finite, it is sufficient to show that the condition
\[\frac{1}{4}\rho_{t}>\frac{8(\zeta_{t+1}+\xi_{t+1}+\epsilon)}{\beta_{t}\tilde{ \sigma}_{t}}\]
is violated at most a finite number of times.
We prove that \(\lambda_{min}\left(\nabla^{2}h(x)+\nabla^{2}\phi(x)\right)\) is lower bounded over \(\mathcal{B}[0,D]\), a fact that will be used in the proof shortly. Note that both \(h\) and \(\phi\) are twice continuously differentiable, and therefore their hessians \(\nabla^{2}h(x)\) and \(\nabla^{2}\phi(x)\) are continuous. It follows that the functions \(\lambda_{min}(\nabla^{2}h(x))\), \(\lambda_{max}(\nabla^{2}h(x))\), \(\lambda_{min}(\nabla^{2}\phi(x))\), \(\lambda_{max}(\nabla^{2}\phi(x))\) are all continuous in the ball \(\mathcal{B}[0,D]\) (see Kato [19, Theorem 6.8]). Since the ball \(\mathcal{B}[0,D]\) is compact, by using Weierstrass' Extreme Value Theorem, the following maximum and minimum values exist
\[\lambda_{min}^{h+\phi}=\min_{x\in B(0,D)}\lambda_{min}(\nabla^{2}\phi(x)+ \nabla^{2}h(x))\text{ and }\lambda_{max}^{h+\phi}=\max_{x\in B(0,D)}\lambda_{max}( \nabla^{2}\phi(x)+\nabla^{2}h(x)),\]
\[\lambda_{min}^{\phi}=\min_{x\in B(0,D)}\lambda_{min}(\nabla^{2}\phi(x))\text{ and }\lambda_{max}^{\phi}=\max_{x\in B(0,D)}\lambda_{max}(\nabla^{2}\phi(x)).\]
To continue the proof, we define two expressions that are utilized in the analysis. The first is the minimal eigenvalue of the hessian of \(g\) at iteration \(t>0\) within \(\mathcal{B}[0,D]\),
\[\alpha_{t}=\min_{x\in\mathcal{B}[0,D]}\lambda_{min}\left(\nabla^{2}g^{t}(x) \right),\qquad\forall t>0.\]
The second partially acts as a counter for the number of times \(\beta\) was updated starting from some iteration \(k>J\),
\[\kappa_{\beta_{k}}=\begin{cases}0,&\beta_{k}\geq\frac{5C-\lambda_{ min}^{h+\phi}}{0.75\sigma},\\ \left\lceil\log_{2}\left(\frac{5C-\lambda_{min}^{h+\phi}}{0.75\beta_{k}\sigma} \right)\right\rceil,&\text{otherwise}.\end{cases}\]
Obviously, for any \(\beta_{k}\) there are two complementing possibilities: either \(\beta\) is updated less than \(\kappa_{\beta_{k}}\) times after the \(k\) iteration, or it is updated at least \(\kappa_{\beta_{k}}\) times after iteration \(k\). In this context, we make the following a-priori assumption, to be proven later. There exists \(C>0\) such that:
1. It holds that \[\frac{8(\zeta_{t+1}+\xi_{t+1}+\epsilon)}{\beta_{t}\tilde{\sigma}_{t}}<C,\qquad \forall t>J.\] (6.1)
2. If \(\beta\) is updated at least \(\kappa_{\beta_{k}}\) times after iteration \(k\), in which case let \(\tau_{\kappa,\beta_{k}}\) be the iteration of the \(\kappa_{\beta_{k}}\) update after \(k\), then for all \(t\geq\tau_{\kappa,\beta_{k}}\) \[\alpha_{t}>4C.\] (6.2)
We will show that the number of \(\beta\) updates is finite assuming (6.1) and (6.2), and then show that these assumptions indeed hold true.
If \(\beta\) is updated less than \(\kappa_{\beta_{k}}\) many times after iteration \(k\), the first part of Definition 3.1 trivially holds true. Otherwise, by (6.2), there exists \(\tau_{\kappa,\beta_{k}}\geq 0\), such that for all \(t>\tau_{\kappa,\beta_{k}}\) and \(x\in\mathcal{B}[0,D]\), \(\nabla^{2}g^{t}(x)\) is a positive definite matrix, meaning that \(g^{t}(x)\) is \(\alpha_{t}\)-strongly convex within \(\mathcal{B}[0,D]\). Since \(x_{t},x_{t+1}\in\mathcal{B}[0,D]\) for all \(t\), by the strong convexity inequality
\[g^{t+1}(x_{t+1})-g^{t+1}(x_{t})\leq\nabla g^{t+1}(x_{t+1})^{T}\left(x_{t+1}-x _{t}\right)-\frac{\alpha_{t+1}}{2}\|x_{t+1}-x_{t}\|^{2}\qquad\forall t>\tau_ {\kappa,\beta_{k}}.\]
From the update rule of \(x_{t+1}\), which defined as the minimizer of \(g^{t+1}(x)\), we have that \(\nabla g^{t+1}(x_{t+1})=0\), and hence,
\[g^{t+1}(x_{t+1})-g^{t+1}(x_{t})\leq-\frac{\alpha_{t+1}}{2}\|x_{t+1}-x_{t}\|^ {2}\qquad\forall t>\tau_{\kappa,\beta_{k}}.\]
By the update of \(\rho_{t+1}\) in line 11 of Algorithm 5,
\[\rho_{t+1}=-\frac{2\left(g^{t+1}(x_{t+1})-g^{t+1}(x_{t})\right)}{\|x_{t+1}-x_{ t}\|^{2}}\geq\alpha_{t+1}.\]
Thus, if \(\alpha_{t}>4C\) for all \(t\geq\tau\), then \(\rho_{t}>4C\), and the condition
\[\frac{1}{4}\rho_{t}>\frac{8(\zeta_{t+1}+\xi_{t+1}+\epsilon)}{\beta_{t}\tilde{ \sigma}_{t}}\]
is no longer violated. Therefore, the first part of the requirements in Definition 3.1 is fulfilled. That is, under the a-priori assumption above, we established the stability of \(\beta\) meaning that it is updated finitely many times.
We now move to prove that the a-priori assumptions indeed hold true, starting with (6.2), assuming that the first a-priori assumption (6.1) holds true.
First, we bound \(\lambda_{min}\left(\nabla^{2}g^{t+1}(x)\right)\) by the minimal eigenvalues of its components,
\[\lambda_{min}\left(\nabla^{2}g^{t+1}(x)\right)\geq\lambda_{min}\left(\nabla^{ 2}h(x)+\nabla^{2}\phi(x)\right)+\beta_{t}\lambda_{min}\left(\left(\bar{M}^{t +1}\right)^{T}\bar{M}^{t+1}\right), \tag{6.3}\]
where we used the fact that the hessian of \(g^{t+1}\) is given by
\[\nabla^{2}g^{t+1}(x)=\nabla^{2}h(x)+\nabla^{2}\phi(x)+\beta_{t}\left(\bar{M}^ {t+1}\right)^{T}\bar{M}^{t+1}.\]
For \(\lambda_{min}\left(\nabla^{2}h(x)+\nabla^{2}\phi(x)\right)\) in (6.3) we already have the lower bound \(\lambda_{min}^{h+\phi}\).
To lower bound \(\lambda_{min}\left(\left(\bar{M}^{t+1}\right)^{T}\bar{M}^{t+1}\right)\), we use Lemma 4.3, which states that for every \(0<\epsilon^{\prime}<1\) there exists almost surely \(K>0\) such that for all \(k>K\)
\[(1-\epsilon^{\prime})\lambda_{min}(\mathbb{E}[M]^{T}\mathbb{E}[M])=(1-\epsilon ^{\prime})\sigma\leq\tilde{\sigma}_{k}\leq(1+\epsilon^{\prime})\sigma=(1+ \epsilon^{\prime})\lambda_{min}(\mathbb{E}[M]^{T}\mathbb{E}[M]).\]
We arbitrarily choose \(\epsilon^{\prime}=0.25\), and derive that
\[\lambda_{min}\left(\left(\bar{M}^{t+1}\right)^{T}\bar{M}^{t+1}\right)>0.75 \sigma\qquad\forall t>K.\]
Combining the two lower bounds, for all \(t>K\),
\[\lambda_{min}\left(\nabla^{2}g^{t+1}(x)\right)\geq\lambda_{min}^{h+\phi}+0.75 \sigma\beta_{t}.\]
Consequently, if
\[\beta_{t}\geq\frac{5C-\lambda_{min}^{h+\phi}}{0.75\sigma},\]
then \(\alpha_{t+1}\geq 5C>4C\), and (6.2) holds true with \(\kappa_{\beta_{t}}=0\); note that the choice of the scalar \(5\) is arbitrary, any scalar strictly greater than \(4\) would suffice.
Otherwise, \(\beta_{t}<\frac{5C-\lambda_{min}^{h+\phi}}{0.75\sigma}\) and
\[\kappa_{\beta_{t}}=\left\lceil\log_{2}\left(\frac{5C-\lambda_{min}^{h+\phi}}{0.75\beta_{t}\sigma}\right)\right\rceil.\]
If \(\beta\) is updated at least \(\kappa_{\beta_{t}}\) times after iteration \(t\) (otherwise \(\beta\) is updated a finite number of times), let \(\tau_{\kappa,\beta_{t}}\) be the iteration of the \(\kappa_{\beta_{t}}\) update of \(\beta\) starting from iteration \(t\). By the choices of \(\kappa_{\beta_{t}}\) and \(\tau_{\kappa,\beta_{t}}\) and the update rule for \(\beta\) (see Remark 6.1),
\[\beta_{\kappa,\beta_{t}}=2^{\kappa_{\beta_{t}}}\beta_{t},\]
and therefore
\[\log_{2}\left(\frac{5C-\lambda_{min}^{h+\phi}}{0.75\beta_{\tau_{\kappa,\beta_{t}}\sigma}}\right) =\log_{2}\left(\frac{5C-\lambda_{min}^{h+\phi}}{0.75\cdot 2^{\kappa_{ \beta_{t}}\beta_{t}\sigma}}\right)\] \[=\log_{2}\left(\frac{5C-\lambda_{min}^{h+\phi}}{0.75\beta_{t} \sigma}\right)-\log_{2}\left(2^{\kappa_{\beta_{t}}}\right)\] \[=\log_{2}\left(\frac{5C-\lambda_{min}^{h+\phi}}{0.75\beta_{t} \sigma}\right)-\kappa_{\beta_{t}}\] \[\leq 0,\]
where the last inequality follows from the definition of \(\kappa_{\beta_{t}}\). Hence, it follows that
\[\beta_{\tau_{\kappa,\beta_{t}}}\geq\frac{5C-\lambda_{min}^{h+\phi}}{0.75 \sigma},\]
and for all \(k>\tau_{\kappa,\beta_{t}}\), and all \(x\in\mathcal{B}[0,D]\),
\[\lambda_{min}\left(\nabla^{2}g^{k}(x)\right)\geq\lambda_{min}^{h+\phi}+0.75 \sigma\beta_{k}\geq 5C>4C.\]
Thus, assuming (6.1),
\[\alpha_{k}>4C\qquad\forall k>\tau_{\kappa,\beta_{t}},\]
proving that (6.2) holds true.
All that remains is to prove that (6.1) holds true, that is, that there exits \(C>0\) such that
\[\frac{8(\zeta_{t+1}+\xi_{t+1}+\epsilon)}{\beta_{t}\tilde{\sigma}_{t}}<C.\]
To lower bound \(\beta_{t}\), note that the sequence \(\{\beta_{t}\}_{t\geq 0}\) is nondecreasing. Therefore,
\[\beta_{t}\geq\beta_{0}. \tag{6.4}\]
By the choice of \(J\), the lower bound for \(\tilde{\sigma}_{k}\) follows from
\[\tilde{\sigma}_{k}>0.5\sigma\qquad\forall k>J. \tag{6.5}\]
By the update rule of \(\zeta\), \(\xi\) and the definition of the empirical lipschitz constant (Definition 2.2), \(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})=\sup\limits_{t\geq 0} \zeta_{t}\) and \(L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})=\sup\limits_{t\geq 0}\xi_{t}\). Therefore, for all \(t\),
\[\zeta_{t}\leq L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0}) \tag{6.6}\]
and
\[\xi_{t}\leq L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0}). \tag{6.7}\]
Using (6.6) and (6.7), it is sufficient to bound \(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})\) and \(L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\).
The Lipschitz constants of the restrictions of \(\nabla h+\nabla\phi\) and \(\nabla\phi\) to the ball \(\mathcal{B}[0,D]\) are bounded by \(L^{D}_{\nabla\phi}=\max\{|\lambda^{\phi}_{min}|,|\lambda^{h+\phi}_{max}|\}\) and \(L^{D}_{\nabla\phi+\nabla h}=\max\{|\lambda^{h+\phi}_{min}|,|\lambda^{h+\phi}_{ max}|\}\), respectively.
If there is no \(t\) such that \(x_{t}\neq x_{t+1}\), then \(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})=L^{e}_{\nabla\phi}(\{x_{t}\}_{t \geq 0})=0\), and therefore \(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})\leq L^{D}_{\nabla\phi+\nabla h}\) and \(L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\leq L^{D}_{\nabla\phi}\).
Otherwise, since \(x_{t}\in\mathcal{B}(0,D)\) for every \(t\geq 0\),
\[L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0}) =\sup_{t\geq 0,x_{t+1}\neq x_{t}}\frac{\|\nabla h(x_{t+1})+\nabla \phi(x_{t+1})-\nabla h(x_{t})-\nabla\phi(x_{t})\|}{\|x_{t+1}-x_{t}\|}\] \[\leq\sup_{x,y\in\mathcal{B}(0,D),x\neq y}\frac{\|\nabla h(x)+ \nabla\phi(x)-\nabla h(y)-\nabla\phi(y)\|}{\|x-y\|}\] \[=L^{D}_{\nabla\phi+\nabla h}\] \[\leq\max\{|\lambda^{h+\phi}_{min}|,|\lambda^{h+\phi}_{max}|\}\] \[<\infty,\]
and in a similar manner, \(L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\leq\max\{|\lambda^{\phi}_{min}|,| \lambda^{\phi}_{max}|\}<\infty\). We deduce that whether there exists \(t\) such that \(x_{t+1}\neq x_{t}\) or not,
\[\zeta_{t}\leq L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})\leq L^{D}_{ \nabla\phi+\nabla h}<\infty\]
and
\[\xi_{t}\leq L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\leq L^{D}_{\nabla\phi}<\infty.\]
Combining the above with (6.4) and (6.5), it follows that (6.1) holds true with
\[C=\frac{8\left(L^{D}_{\nabla\phi+\nabla h}+L^{D}_{\nabla\phi}+\epsilon\right) }{0.5\sigma\beta_{0}}. \tag{6.8}\]
To summarize the first part of the proof, we have shown that the first a-priori assumption holds true with \(C\) given in (6.8), which implies that the second a-priori assumption holds true with the same \(C\), and subsequently, that the number of \(\beta\) updates is finite, concluding that the first part of Definition3.1 holds true.
Now we establish the second part of Definition3.1, stating the iterates of \(\{x_{k}\}_{k\geq K}\) satisfy a sufficient decrease property from some iteration \(K>0\) onward.
Denote the index of the last \(\beta\) update by \(\tilde{K}\) whose existence is guaranteed by the first part of Definition3.1 we just proved. To underscore the fact that \(\beta\) is no longer updated, we denote \(\bar{\beta}=\beta_{\tilde{K}}\), and replace \(\beta_{k}\) with \(\bar{\beta}\) in the remainder of the proof.
Since the if statement in line 12 of Algorithm5 holds true for all \(k>\tilde{K}\), we can rearrange it into the following statement that holds for all \(k>\tilde{K}\),
\[\frac{1}{4}\cdot\rho_{k}\cdot\bar{\beta}\lambda_{min}((\bar{M}^{k})^{T}\bar{M} ^{k})>8(\zeta_{k+1}+\xi_{t+1}+\epsilon). \tag{6.9}\]
As before, let \(K>0\) such that almost surely for all \(k>K\):
\[(1-\epsilon^{\prime})\lambda_{min}(\mathbb{E}[M]^{T}\mathbb{E}[M])=(1-\epsilon ^{\prime})\sigma\leq\tilde{\sigma}_{k}\leq(1+\epsilon^{\prime})\sigma=(1+ \epsilon^{\prime})\lambda_{min}(\mathbb{E}[M]^{T}\mathbb{E}[M]).\]
Choosing \(\epsilon^{\prime}=0.25\) (the choice \(0.25\) is arbitrary, any scalar in \((0,1)\) would do), yields that
\[\lambda_{min}((\bar{M}^{k})^{T}\bar{M}^{k})<1.25\sigma\]
holds almost surely. Thus, combining the latter with (6.9), we derive that
\[\frac{5}{16}\cdot\rho_{k}>\frac{8(\zeta_{k+1}+\xi_{t+1}+\epsilon)}{\bar{\beta }\sigma}\]
holds almost surely for all \(k>\max\{K,\tilde{K}\}\). Rearranging the terms, it follows that
\[\frac{5}{16}\cdot\rho_{k}-\frac{8(\zeta_{k+1}+\xi_{t+1}+\epsilon)}{\bar{\beta }\sigma}>0. \tag{6.10}\]
Since (6.10) holds true for all \(k>\max\{K,\tilde{K}\}\), by taking \(\liminf\) we obtain that
\[\liminf\frac{5}{16}\cdot\rho_{k}-\frac{8(\zeta_{k+1}+\xi_{t+1}+\epsilon)}{ \bar{\beta}\sigma}\geq 0.\]
By the update rule of \(\zeta\) and \(\xi\), and the definition of empirical Lipschitz constant, the limits
\[\lim_{k\to\infty}\zeta_{k}=L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0}) \text{ and }\lim_{k\to\infty}\xi_{k}=L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\]
exist (recall that the sequence is bounded).
Utilizing the facts that \(\lim_{k\to\infty}\zeta_{k}=L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})\) and \(\lim_{k\to\infty}\xi_{k}=L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\), we then deduce the relation
\[\liminf\frac{5}{16}\cdot\rho_{k}\geq\frac{8(L^{e}_{\nabla h+\nabla\phi}(\{x_{ t}\}_{t\geq 0})+L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})+\epsilon)}{\bar{\beta}\sigma}. \tag{6.11}\]
Decreasing the right hand side by \(\frac{\epsilon}{\bar{\beta}\sigma}>0\) and multiplying the left hand side by \(\frac{16}{5}\), results in
\[\liminf_{k\to\infty}\rho_{k}>\frac{8(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t \geq 0})+L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0}))}{\bar{\beta}\sigma}. \tag{6.12}\]
Denote \(\rho_{inf}=\liminf_{k\to\infty}\rho_{k}.\) By the definition of the limit, for every \(\epsilon^{\prime\prime}>0\), there exists \(\bar{K}>0\), such that for all \(k>\bar{K}\), it holds that \(\rho_{k}>(1-\epsilon^{\prime\prime})\rho_{inf}.\) Additionally, due to the strict inequality in (6.12), for a sufficiently small \(\epsilon^{\prime\prime}>0\), it holds that
\[(1-\epsilon^{\prime\prime})\rho_{inf}>\frac{8(L^{e}_{\nabla h+\nabla\phi}(\{x _{t}\}_{t\geq 0})+L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0}))}{\bar{\beta}\sigma}.\]
Choosing such \(\epsilon^{\prime\prime}\), for every \(k>K_{stable}=\max\{K,\tilde{K},\bar{K}\}\), it holds almost surely that
\[\rho_{k}>(1-\epsilon^{\prime\prime})\rho_{inf}>\frac{8}{\bar{\beta}\sigma} \left(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})+L^{e}_{\nabla\phi}(\{x_{t}\}_{t \geq 0})\right).\]
Hence, the second part of the theorem holds for \(\rho=(1-\epsilon^{\prime\prime})\rho_{inf}\), which concludes the proof of the theorem.
\(\Box\)
Meta Algorithm Convergence
In this section, we prove the convergence of the meta algorithm Algorithm 1. The proof is split into two milestones. In Theorem 7.1, we prove that if the size of the algorithm steps converges to zero, then every accumulation point of the algorithm is a critical point. In Theorem 7.2 we prove that the size of the algorithm steps does indeed converge to zero.
Before proving Theorem 7.1, another lemma will be needed:
**Lemma 7.1**.: _Let \(\{x_{t},y_{t},z_{t}\}_{t>0}\) be the sequence generated by Algorithm 1. Then for any \(t\geq 0\),_
1. \(0\in\partial P(y_{t+1})+z_{t+1}+\beta_{t}\bar{M}^{t+1}(x_{t+1}-x_{t})\)__
2. \(\nabla h(x_{t+1})-(\bar{M}^{t+1})^{T}z_{t+1}=-\nabla\phi(x_{t+1})+\nabla\phi(x_ {t})\)__
3. \(\bar{M}^{t+1}x_{t+1}-y_{t+1}=\frac{1}{\beta_{t}}(z_{t}-z_{t+1})\)__
Proof.: First, we note the following two equations, which follow directly from the criticality conditions for the update steps for \(y_{t+1}\) and \(x_{t+1}\):
\[0\in\partial P(y_{t+1})+z_{t}-\beta_{t}(\bar{M}^{t+1}x_{t}-y_{t+1}) \tag{7.1}\]
and:
\[0=\nabla h(x_{t+1})-(\bar{M}^{t+1})^{T}z_{t}+\beta_{t}(\bar{M}^{t+1})^{T}( \bar{M}^{t+1}x_{t+1}-y_{t+1})+\nabla\phi(x_{t+1})-\nabla\phi(x_{t}). \tag{7.2}\]
The first part follows from applying the update rule of \(z_{t+1}\) to (7.1), and the second part follows from applying the update rule of \(z_{t+1}\) to (7.2). The third part follows directly from the update rule of \(z_{t+1}\).
In the purpose of establishing the convergence guarantee for Algorithm 1 stated by Theorem 7.2, we prove a cornerstone result that will be plugged in the analysis. Since it plays its role inside the proof, it assumes that the distance between consecutive decision variables converges to zero; this presumption is established as a part of the proof of Theorem 7.2.
**Theorem 7.1**.: _Let \(\{x_{t},y_{t},z_{t}\}_{t>0}\) be the sequence generated by Algorithm 1. Assume that \(\lim_{t\to\infty}\|x_{t+1}-x_{t}\|^{2}+\|y_{t+1}-y_{t}\|^{2}+\|z_{t+1}-z_{t}\| ^{2}=0\), that the sequence is bounded, and that \((x^{*},y^{*},z^{*})\) is an accumulation point of the sequence. Then the following holds almost-surely:_
1. \(-z^{*}\in\partial P(y^{*})\)_,_
2. \(\nabla h(x^{*})=\mathbb{E}[M]^{T}z^{*}\)_,_
3. \(\mathbb{E}[M]x^{*}=y^{*}\)_._
Proof.: By the definition of \(\beta\) in Definition 3.1, there exists an index \(K_{stable}\), such that \(\beta_{k}=\beta_{K_{stable}}\) for all \(k>K_{stable}\).
Let \(\{x_{t_{i}},y_{t_{i}},z_{t_{i}}\}_{i\geq 1}\) be a subsequence converging to the accumulation point \((x^{*},y^{*},z^{*})\), and for convenience, set \(\bar{\beta}=\beta_{K_{stable}}\) and assume without loss of generality that \(t_{1}>K_{stable}\), meaning that \(\beta_{t_{i}}=\bar{\beta}\) for any \(i\geq 1\).
The first part of the theorem requires a lengthy technical result to prove, and so we start with the second and the third parts.
For the second part, we note that by the second part of Lemma 7.1, for the \(t_{i}\) element of the subsequence:
\[\nabla h(x_{t_{i}})-(\bar{M}^{t_{i}})^{T}z_{t_{i}}=-\nabla\phi(x_{t_{i}})+ \nabla\phi(x_{t_{i}-1}). \tag{7.3}\]
By the convergence of the subsequence \(\{x_{t_{i}},y_{t_{i}},z_{t_{i}}\}_{i\geq 0}\) to \((x^{*},y^{*},z^{*})\) and the assumption that \(\lim_{t\to\infty}\|x_{t+1}-x_{t}\|^{2}+\|y_{t+1}-y_{t}\|^{2}+\|z_{t+1}-z_{t}\| ^{2}=0\), it holds that \(x_{t_{i}-1}\xrightarrow{i\to\infty}x^{*}\). Consequently, utilizing the fact that \(\phi\) is continuously differentiable, it follows that \(\nabla\phi(x_{t_{i}}),\nabla\phi(x_{t_{i}-1})\xrightarrow{i\to\infty}\nabla \phi(x^{*})\), and therefore, by taking \(i\to\infty\) we obtain that
\[-\nabla\phi(x_{t_{i}})+\nabla\phi(x_{t_{i}-1})\xrightarrow{i\to\infty}0. \tag{7.4}\]
Additionally, by the strong law of large numbers (see Borovkov [7, Theorem 11.3.1]),
\[\bar{M}^{t_{i}}\xrightarrow{i\to\infty}\mathbb{E}[M]. \tag{7.5}\]
Combining (7.3), (7.4), and (7.5), yields the second part of the lemma
\[\nabla h(x^{*})=\mathbb{E}[M]^{T}z^{*}.\]
To see the correctness of the third part of the lemma, we note that by the third part of Lemma 7.1, for the \(t_{i}\) element of the subsequence we have that
\[\bar{M}^{t_{i}}x_{t_{i}}-y_{t_{i}}=\frac{1}{\beta_{t_{i}}}(z_{t_{i}-1}-z_{t_{ i}}).\]
Recall that our subsequence satisfies that \(\beta_{t_{i}}=\bar{\beta}\). Therefore, for all \(i\geq 1\),
\[\bar{M}^{t_{i}}x_{t_{i}}-y_{t_{i}}=\frac{1}{\beta}(z_{t_{i}-1}-z_{t_{i}}).\]
By our assumption that \(\lim_{t\to\infty}\|z_{t+1}-z_{t}\|^{2}=0\), we thus obtain by taking \(i\to\infty\) that
\[\mathbb{E}[M]x^{*}=y^{*},\]
which concludes the proof of the third part.
To prove the first part of the theorem, we will need to establish that
\[\lim_{i\to\infty}P(y_{t_{i}})=P(y^{*}). \tag{7.6}\]
We will a priori assume that (7.6) holds, utilize it to prove the first part of the theorem, and only then establish that (7.6) indeed holds true.
Note that by the first part of Lemma 7.1, for the \(t_{i}\) element of the subsequence,
\[0\in\partial P(y_{t_{i}})+z_{t_{i}}+\beta_{t_{i}}\bar{M}^{t+1}(x_{t_{i}}-x_{t_{ i}-1}).\]
Recall that for \(t\) such that \(t>K_{stable}\), \(\beta_{t}=\bar{\beta}\), and that \(t_{1}>K_{stable}\). Therefore, for all \(i\geq 1\),
\[0\in\partial P(y_{t_{i}})+z_{t_{i}}+\bar{\beta}\bar{M}^{t+1}(x_{t_{i}}-x_{t_{ i}-1}).\]
Since \(\lim_{i\to\infty}y_{t_{i}}=y^{*}\) and \(\lim_{i\to\infty}P(y_{t_{i}})=P(y^{*})\) by our assumption, from Rockafellar and Wets [22, Proposition 8.7] we have that
\[\limsup_{i\to\infty}\partial P(y_{t_{i}})\subset\partial P(y^{*}).\]
Taking \(i\to\infty\), using the fact that \(\limsup_{i\to\infty}\partial P(y_{t_{i}})\subset\partial P(y^{*})\) and the assumption \(\lim_{t\to\infty}\|x_{t+1}-x_{t}\|^{2}=0\), we conclude that
\[-z^{*}\in\partial P(y^{*}),\]
which establishes the first part of the theorem - given that (7.6) holds true.
All that remains is to prove the correctness of (7.6). To that end, it is sufficient to show that: (i) \(\liminf_{i\to\infty}P(y_{t_{i}})\geq P(y^{*})\), and (ii) \(\limsup_{i\to\infty}P(y_{t_{i}})\leq P(y^{*})\), hold true.
The first relation
\[\liminf_{i\to\infty}P(y_{t_{i}})\geq P(y^{*}) \tag{7.7}\]
follows immediately from the lower semicontinuity of \(P\).
By the convergence of the subsequence \(\{x_{t_{i}},y_{t_{i}},z_{t_{i}}\}_{i\geq 0}\), and our assumption that \(\lim_{t\to\infty}\|x_{t+1}-x_{t}\|^{2}+\|y_{t+1}-y_{t}\|^{2}+\|z_{t+1}-z_{t}\| ^{2}=0\), it follows that
\[x_{t_{i}-1}\xrightarrow{i\to\infty}x^{*}\text{ and }z_{t_{i}-1}\xrightarrow{i \to\infty}z^{*}.\]
To establish that \(\limsup_{i\to\infty}P(y_{t_{i}})\leq P(y^{*})\) holds true, first note that since all the components of \(\mathcal{L}_{\bar{\beta}}(x,y,z;M)\) other than \(P(y)\) are continuous and \(\{(x_{t_{i}-1},y_{t_{i}},z_{t_{i}-1},\bar{M}^{t_{i}})\}\xrightarrow{i\to \infty}(x^{*},y^{*},z^{*},\mathbb{E}[M])\), it is sufficient to show that
\[\limsup_{i\to\infty}\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y_{t_{i}},z_{t_{i} -1};\bar{M}^{t_{i}})\leq\mathcal{L}_{\bar{\beta}}(x^{*},y^{*},z^{*};\mathbb{E}[ M]):=\mathcal{L}_{\bar{\beta}}(x^{*},y^{*},z^{*}).\]
To prove the above, we will show that the sequence \(\{\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y_{t_{i}},z_{t_{i}-1};\bar{M}^{t_{i}})\}_{ i\geq 0}\) is bounded. Then, we will show that the limit of every convergent subsequence is bounded by \(\mathcal{L}_{\bar{\beta}}(x^{*},y^{*},z^{*})\).
Since \(P\) is proper, there exists \(\tilde{y}\) such that \(P(\tilde{y})<\infty\). By Assumption 3, the sequence \(\{x_{t},y_{t},z_{t}\}_{t\geq 0}\) is bounded, and as we stated previously, all the components of \(\mathcal{L}_{\bar{\beta}}(x,y,z;M)\) except \(P(y)\) are continuous. Subsequently, it follows that
\[\inf_{i\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y_{t_{i}},z_ {t_{i}-1};\bar{M}^{t_{i}})-P(y_{t_{i}})\}\text{ and }\inf_{i\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1}, \tilde{y},z_{t_{i}-1};\bar{M}^{t_{i}})-P(\tilde{y})\},\] \[\sup_{i\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y_{t_{i}},z_ {t_{i}-1};\bar{M}^{t_{i}})-P(y_{t_{i}})\}\text{ and }\sup_{i\geq 1}\{\mathcal{L}_{\bar{ \beta}}(x_{t_{i}-1},\tilde{y},z_{t_{i}-1};\bar{M}^{t_{i}})-P(\tilde{y})\},\]
exist and are finite.
By the definition of the update of \(y\), \(y_{t_{i}}\) is a minimizer of \(\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y,z_{t_{i}-1};\bar{M}^{t_{i}})\). Hence,
\[\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y_{t_{i}},z_{t_{i}-1})\leq \mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},\tilde{y},z_{t_{i}-1}).\]
Moreover, by the definition of supremum and infimum,
\[\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y_{t_{i}},z_{t_{i}-1}) =\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y_{t_{i}},z_{t_{i}-1})-P(y_ {t_{i}})+P(y_{t_{i}})\] \[\geq P(y_{t_{i}})+\inf_{j\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{ j}-1},y_{t_{j}},z_{t_{j}-1};\bar{M}^{t_{j}})-P(y_{t_{j}})\}\]
and
\[\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},\tilde{y},z_{t_{i}-1}) =\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},\tilde{y},z_{t_{i}-1})-P( \tilde{y})+P(\tilde{y})\] \[\leq P(\tilde{y})+\sup_{j\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{ j}-1},\tilde{y},z_{t_{j}-1};\bar{M}^{t_{j}})-P(\tilde{y})\}.\]
Combining the inequalities above, we get
\[P(y_{t_{i}})+\inf_{j\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{j}-1},y_{t_{j}},z_ {t_{j}-1};\bar{M}^{t_{j}})-P(y_{t_{j}})\}\leq P(\tilde{y})+\sup_{j\geq 1}\{ \mathcal{L}_{\bar{\beta}}(x_{t_{j}-1},\tilde{y},z_{t_{j}-1};\bar{M}^{t_{j}})-P (\tilde{y})\}.\]
Rearranging the terms, we obtain that
\[P(y_{t_{i}})\leq P(\tilde{y})+\sup_{j\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{ j}-1},\tilde{y},z_{t_{j}-1};\bar{M}^{t_{j}})-P(\tilde{y})\}-\inf_{j\geq 1}\{ \mathcal{L}_{\bar{\beta}}(x_{t_{j}-1},y_{t_{j}},z_{t_{j}-1};\bar{M}^{t_{j}})-P (y_{t_{j}})\}\]
for all \(i\geq 1\).
Taking the limit, and using the fact that
\[P(\tilde{y})+\sup_{j\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{j}-1},\tilde{y},z_ {t_{j}-1};\bar{M}^{t_{j}})-P(\tilde{y})\}-\inf_{j\geq 1}\{\mathcal{L}_{\bar{ \beta}}(x_{t_{j}-1},y_{t_{j}},z_{t_{j}-1};\bar{M}^{t_{j}})-P(y_{t_{j}})\}\]
is a sum comprising 3 finite elements, non of which depend on \(i\), yields
\[\liminf_{i\geq 1}P(y_{t_{i}})\] \[\leq\liminf_{i\geq 1}\{P(\tilde{y})+\sup_{j\geq 1}\{\mathcal{L}_{ \bar{\beta}}(x_{t_{j}-1},\tilde{y},z_{t_{j}-1};\bar{M}^{t_{j}})-P(\tilde{y})\} -\inf_{j\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{j}-1},y_{t_{j}},z_{t_{j}-1};\bar{ M}^{t_{j}})-P(y_{t_{j}})\}\}\] \[=P(\tilde{y})+\sup_{j\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{j}-1}, \tilde{y},z_{t_{j}-1};\bar{M}^{t_{j}})-P(\tilde{y})\}-\inf_{j\geq 1}\{\mathcal{L}_{\bar{ \beta}}(x_{t_{j}-1},y_{t_{j}},z_{t_{j}-1};\bar{M}^{t_{j}})-P(y_{t_{j}})\}.\]
Combining the latter with (7.7) then results with
\[P(y^{*})\leq P(\tilde{y})+\sup_{j\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{j}-1}, \tilde{y},z_{t_{j}-1};\bar{M}^{t_{j}})-P(\tilde{y})\}-\inf_{j\geq 1}\{ \mathcal{L}_{\bar{\beta}}(x_{t_{j}-1},y_{t_{j}},z_{t_{j}-1};\bar{M}^{t_{j}})-P(y _{t_{j}})\}.\]
Recalling once again that
\[P(\tilde{y})+\sup_{j\geq 1}\{\mathcal{L}_{\bar{\beta}}(x_{t_{j}-1},\tilde{y}, z_{t_{j}-1};\bar{M}^{t_{j}})-P(\tilde{y})\}-\inf_{j\geq 1}\{\mathcal{L}_{\bar{ \beta}}(x_{t_{j}-1},y_{t_{j}},z_{t_{j}-1};\bar{M}^{t_{j}})-P(y_{t_{j}})\}\]
is a constant, it immediately follows that
\[P(y^{*})<\infty.\]
Therefore,
\[\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y^{*},z_{t_{i}-1};\bar{M}^{t_{i}})<\infty.\]
Since \(y_{t_{i}}\) is the minimizer of \(\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y,z_{t_{i}-1};\bar{M}^{t_{i}})\),
\[\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y_{t_{i}},z_{t_{i}-1};\bar{M}^{t_{i}})< \mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y^{*},z_{t_{i}-1};\bar{M}^{t_{i}})<\infty\]
for all \(i\).
Taking the limit, we obtain that
\[\limsup_{i\to\infty}\mathcal{L}_{\bar{\beta}}(x_{t_{i}-1},y_{t_{i}},z_{t_{i}- 1};\bar{M}^{t_{i}})\leq\mathcal{L}_{\bar{\beta}}(x^{*},y^{*},z^{*};\mathbb{E} [M])=\mathcal{L}_{\bar{\beta}}(x^{*},y^{*},z^{*}).\]
Since all the elements of \(\mathcal{L}_{\bar{\beta}}(x,y,z;M)\) other than \(P\) are continuous, we have that
\[\limsup_{i\to\infty}P(y_{t_{i}})\leq P(y^{*}).\]
Combining this with (7.7) we can finally conclude that
\[\lim_{i\to\infty}P(y_{t_{i}})=P(y^{*}).\]
\(\Box\)
Theorem 7.1 essentially provides us the guarantee that any accumulation point of the meta algorithm Algorithm 1 is almost surely a critical point of Equation (1.1). All that remains is to show that the assumption \(\lim_{t\to\infty}\|x_{t+1}-x_{t}\|^{2}+\|y_{t+1}-y_{t}\|^{2}+\|z_{t+1}-z_{t}\| ^{2}=0\) indeed holds true under our blanket assumptions on the model. We do so in the proof process of our main result stated by Theorem 7.2.
**Theorem 7.2**.: _Suppose that Assumption 1 and Assumption 3 hold true. Let \(\theta_{t}\) be chosen according to the sampling regime in Definition 4.3, and let \(\{x_{t},y_{t},z_{t}\}_{t>0}\) be the sequence generated by Algorithm 1. Then for every cluster point \((x^{*},y^{*},z^{*})\) of \(\{x_{t},y_{t},z_{t}\}_{t>0}\), \(x^{*}\) is a critical point of (1.1) almost surely._
Proof.: Before initiating the proof process, let us recall a few definitions and facts.
By Definition 3.1, we have the following guarantees on the sequence \(\{\beta_{t}\}_{t\geq 0}\): There exists an index \(K_{stable}\) such that
1. \(\beta\) stability: For all \(k>K_{stable}\), \(\beta_{k}=\beta_{K_{stable}}\); for convenience we denote it by \(\bar{\beta}=\beta_{K_{stable}}\).
2. Sufficient decrease: There exists \(\rho>0\) such that for all \(k>K_{stable}\) \[g^{k+1}(x_{k+1})-g^{k+1}(x_{k})\leq-\frac{\rho}{2}\|x_{k+1}-x_{k}\|^{2},\] and \[\rho>\frac{8}{\beta_{k}\sigma}\left(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t \geq 0})+L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\right),\] where \(L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})\) and \(L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\) are the empirical Lipschitz constants defined in Definition 2.2.
Our analysis will focus on the iterations executed after reaching stability, i.e., \(t\geq K_{stable}\). To underscore the fact that \(\{\beta_{t}\}_{t\geq 0}\) is constant we will replace \(\beta_{t}\) with \(\bar{\beta}\) throughout this proof. For the sake of ease of reading, we will also use the notation
\[\mu:=L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0}), \tag{7.8}\]
and
\[\nu:=L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0}). \tag{7.9}\]
We begin the proof with outlining its main steps. The proof consists of 4 milestones:
1. Bounding \(\|z_{t+1}-z_{t}\|^{2}\) via the relation \[\|z_{t+1}-z_{t}\|^{2}\] (7.10) \[\leq \frac{4\mu}{\sigma}\|x_{t+1}-x_{t}\|^{2}+\frac{4\nu}{\sigma}\|x_{ t}-x_{t-1}\|^{2}+\frac{2}{\sigma}(\|\delta^{T}_{t+1}z_{t+1}\|+\|\delta^{T}_{t}z_{ t}\|)^{2}.\]
2. Showing that \[\lim_{t\to\infty}\|x_{t+1}-x_{t}\|^{2}=0\ \Rightarrow\ \lim_{t\to\infty}\|x_{t+1}-x_{t}\|^{2}+\|y_{t+1}-y_{t}\|^{2}+\|z_{t+1}-z_{t} \|^{2}=0.\] (7.11)
3. Showing that there exists a sequence of scalar random variables \(\{d_{t}\}_{t\geq 0}\) such that \[\sum_{t=0}^{\infty}d_{t+1}<\infty\] (7.12) almost surely, satisfying that \[\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t+1};\bar{M}^{t+1})- \mathcal{L}_{\bar{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t})\] (7.13) \[\leq\frac{4}{\sigma\bar{\beta}}(L^{e}_{\nabla h+\nabla\phi}(\{x_{ t}\}_{t\geq 0})\|x_{t+1}-x_{t}\|^{2}+L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\|x_{t}-x_{t- 1}\|^{2})-\frac{\rho}{2}\|x_{t+1}-x_{t}\|^{2}+d_{t+1}.\]
4. Showing that there exist random variables \(C_{1},C_{2}\), such that \(C_{1}<0\), \(C_{2}<\infty\) almost surely, and \[\mathcal{L}_{\beta_{t+1}}(x_{t+1},y_{t+1},z_{t+1};\bar{M}^{t+1})\leq\sum_{i=K_{ stable}+1}^{t-1}C_{1}\|x_{i+1}-x_{i}\|^{2}+\sum_{i=K_{stable}+1}^{t}d_{t+1}+C_{2}.\] (7.14)
Assuming that (7.11), (7.12) and (7.14) hold true, the proof of the theorem is quite straightforward, and therefore, we start from the end under the premise of these four milestones.
**Main proof assuming the four milestones.**
Consider an accumulation point \((x^{*},y^{*},z^{*})\) of \(\{x_{t},y_{t},z_{t}\}_{t>0}\), and its convergent subsequence \(\{x_{t_{i}},y_{t_{i}},z_{t_{i}}\}_{i\geq 0}\). We note the following facts:
1. \(\mathcal{L}_{\tilde{\beta}}(\cdot,\cdot,\cdot;A)\) is continuous in its last argument.
2. \(\bar{M}^{t}\xrightarrow{t\to\infty}\mathbb{E}[M]\).
3. \(\mathcal{L}_{\tilde{\beta}}\) is lower semicontinuous.
4. \(P\) is proper, and therefore \(\mathcal{L}_{\tilde{\beta}}\) is proper.
Using these facts, and the notation \(\mathcal{L}_{\tilde{\beta}}(\cdot,\cdot,\cdot,\mathbb{E}[M])=\mathcal{L}_{ \tilde{\beta}}\), the following inequalities hold true with respect to the accumulation point \((x^{*},y^{*},z^{*})\) and the convergent subsequence \(\{x_{t_{i}},y_{t_{i}},z_{t_{i}}\}_{i\geq 0}\),
\[\liminf_{i\to\infty}\mathcal{L}_{\tilde{\beta}}(x_{t_{i}},y_{t_{i}},z_{t_{i}}; \bar{M}^{t_{i}})\geq\mathcal{L}_{\tilde{\beta}}(x^{*},y^{*},z^{*})>-\infty. \tag{7.15}\]
Combining (7.15) with (7.14), we deduce that there exists a sequence of random variables \(\{d_{t}\}_{t\geq 0}\) such that
\[\sum_{i=K_{stable}+1}^{\infty}C_{1}\|x_{i+1}-x_{i}\|^{2}+\sum_{i=K_{stable}+1 }^{\infty}d_{t+1}+C_{2}>-\infty,\]
which implies, by (7.12) stating that
\[\sum_{i=K_{stable}+1}^{\infty}d_{t+1}<\infty\quad\text{ almost surely},\]
that
\[\sum_{i=K_{stable}+1}^{\infty}C_{1}\|x_{i+1}-x_{i}\|^{2}+C_{2}>-\infty\quad \text{ almost surely}.\]
Since \(C_{1}<0\) and \(C_{2}<\infty\), it follows that
\[\lim_{t\to\infty}\|x_{t+1}-x_{t}\|^{2}=0\quad\text{ almost surely}.\]
Finally, by (7.11), it follows that
\[\lim_{t\to\infty}\|x_{t+1}-x_{t}\|^{2}+\|y_{t+1}-y_{t}\|^{2}+\|z_{t+1}-z_{t}\|^{2 }=0,\]
which readily implies the required by invoking Theorem 7.1.
The remainder of the proof shall focus on proving that (7.11), (7.12) and (7.14) hold true; we will prove all the milestones mentioned above in order of appearance.
**Milestone 1: Proving Relation (7.10).**
Let \(t>K_{stable}\). We note that due to Assumption 2, \(\mathbb{E}[M]^{T}\mathbb{E}[M]-\sigma I\succeq 0\), and in particular, for any \(t\geq 1\)
\[\langle z_{t+1}-z_{t},(\mathbb{E}[M]^{T}\mathbb{E}[M]-\sigma I)z_{t+1}-z_{t} \rangle\geq 0,\]
which is the same as
\[\sigma\|z_{t+1}-z_{t}\|^{2}\leq\|\mathbb{E}[M]^{T}(z_{t+1}-z_{t})\|^{2}.\]
Hence, by adding the same elements on both sides of the relation, we obtain
\[\sqrt{\sigma}\|z_{t+1}-z_{t}\|-\|\delta_{t+1}^{T}z_{t+1}\|-\|\delta_{t}^{T}z_{ t}\|\leq\|\mathbb{E}[M]^{T}(z_{t+1}-z_{t})\|-\|\delta_{t+1}^{T}z_{t+1}\|-\| \delta_{t}^{T}z_{t}\|. \tag{7.16}\]
By the triangle inequality \(\|a+b\|\geq\|a\|-\|b\|\), and the definition of \(\delta_{t}\),
\[\begin{split}\|\mathbb{E}[M]^{T}(z_{t+1}-z_{t})\|-\|\delta_{t+1}^ {T}z_{t+1}\|-\|\delta_{t}^{T}z_{t}\|\\ \leq&\|\mathbb{E}[M]^{T}(z_{t+1}-z_{t})+\delta_{t+1} ^{T}z_{t+1}+\delta_{t}^{T}z_{t}\|\\ =&\|(\bar{M}^{t+1})^{T}z_{t+1}-(\bar{M}^{t})^{T}z_{ t}\|.\end{split} \tag{7.17}\]
Note that by the second part of Lemma 7.1,
\[(\bar{M}^{t+1})^{T}z_{t+1}=\nabla h(x_{t+1})+\nabla\phi(x_{t+1})-\nabla\phi(x_ {t}).\]
Consequently,
\[\begin{split}&\|(\bar{M}^{t+1})^{T}z_{t+1}-(\bar{M}^{t})^{T}z_{t}\|^ {2}\\ &=\|\nabla h(x_{t+1})-\nabla h(x_{t})+(\nabla\phi(x_{t+1})-\nabla \phi(x_{t}))-(\nabla\phi(x_{t})-\nabla\phi(x_{t-1}))\|^{2}.\end{split}\]
Using the inequality \(\|a+b\|^{2}\leq 2\|a\|^{2}+2\|b\|^{2}\),
\[\begin{split}&\|(\bar{M}^{t+1})^{T}z_{t+1}-(\bar{M}^{t})^{T}z_{t}\|^ {2}\\ &\leq 2\|\nabla h(x_{t+1})-\nabla h(x_{t})+(\nabla\phi(x_{t+1})- \nabla\phi(x_{t}))\|^{2}+2\|\nabla\phi(x_{t})-\nabla\phi(x_{t-1})\|^{2}.\end{split} \tag{7.18}\]
Recall that by (7.8) and (7.9), \(\mu=L^{\rm e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})\) and \(\nu=L^{\rm e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\). By the definition of the empirical Lipschitz constant in Definition 2.2, we have that
\[\|\nabla\phi(x_{t})-\nabla\phi(x_{t-1})\|^{2}\leq\nu\|x_{t}-x_{t-1}\|^{2} \tag{7.19}\]
\[\|\nabla h(x_{t+1})-\nabla h(x_{t})+(\nabla\phi(x_{t+1})-\nabla\phi(x_{t}))\|^{2} \leq\mu\|x_{t+1}-x_{t}\|^{2}. \tag{7.20}\]
Applying (7.19) and (7.20) to (7.18):
\[\|(\bar{M}^{t+1})^{T}z_{t+1}-(\bar{M}^{t})^{T}z_{t}\|^{2}\leq 2\mu\|x_{t+1}-x_{t} \|^{2}+2\nu\|x_{t}-x_{t-1}\|^{2}. \tag{7.21}\]
Using inequalities (7.16), (7.17), establishes that
\[\sigma\|z_{t+1}-z_{t}\|^{2}\leq(\|(\bar{M}^{t+1})^{T}z_{t+1}-(\bar{M}^{t})^{T}z _{t}\|+\|\delta_{t+1}^{T}z_{t+1}\|+\|\delta_{t}^{T}z_{t}\|)^{2}. \tag{7.22}\]
Using (7.22), (7.21) and the inequality \((a+b)^{2}\leq 2a^{2}+2b^{2}\),
\[\sigma\|z_{t+1}-z_{t}\|^{2}\leq 4\mu\|x_{t+1}-x_{t}\|^{2}+4\nu\|x_{t}-x_{t-1} \|^{2}+2(\|\delta_{t+1}^{T}z_{t+1}\|+\|\delta_{t}^{T}z_{t}\|)^{2} \tag{7.23}\]
This concludes the proof of the first milestone (7.10).
**Milestone 2: Proving the implication (7.11).**
First note that by the third part of Lemma 7.1, \(\bar{M}^{t+1}x_{t+1}-y_{t+1}=\frac{1}{\bar{\beta}}(z_{t}-z_{t+1})\), and therefore
\[y_{t+1}-y_{t}=\bar{M}^{t+1}x_{t+1}-\bar{M}^{t}x_{t}+\frac{1}{\bar{\beta}}(z_{t +1}-z_{t})-\frac{1}{\bar{\beta}}(z_{t}-z_{t-1}). \tag{7.24}\]
Since \(\bar{M}^{t+1}x_{t+1}=(\mathbb{E}[M]+\delta_{t+1})x_{t+1}\) and \(\bar{M}^{t}x_{t}=(\mathbb{E}[M]+\delta_{t})x_{t}\) (from the definition of \(\delta_{t}\)), by the triangle inequality
\[\|\mathbb{E}[M](x_{t+1}-x_{t})\|+\|\delta_{t+1}x_{t+1}\|+\|\delta_{t}x_{t}\| \geq\|\bar{M}^{t+1}x_{t+1}-\bar{M}^{t}x_{t}\|. \tag{7.25}\]
Plugging (7.25) to (7.24) we obtain that
\[\|y_{t+1}-y_{t}\|\leq\|\mathbb{E}[M](x_{t+1}-x_{t})\|+\|\delta_{t+1}^{T}x_{t+ 1}\|+\|\delta_{t}^{T}x_{t}\|+\frac{1}{\bar{\beta}}\|z_{t+1}-z_{t}\|+\frac{1}{ \bar{\beta}}\|z_{t}-z_{t-1}\|.\]
Since \(\{x_{t},y_{t},z_{t}\}_{t>0}\) is bounded (cf. Assumption 3) and \(\delta_{t}\xrightarrow{t\to\infty}0\), from the relation in (7.23) it follows that it is sufficient to show that \(\lim_{t\to\infty}\|x_{t+1}-x_{t}\|^{2}=0\) in order to derive that \(\lim_{t\to\infty}\|x_{t+1}-x_{t}\|^{2}+\|y_{t+1}-y_{t}\|^{2}+\|z_{t+1}-z_{t}\| ^{2}=0\), which concludes the implication (7.11) stated by the second milestone.
**Milestone 3: Proving the existence of a sequence of random variables satisfying (7.12) and (7.13).**
Our goal is to bound \(\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t+1};\bar{M}^{t+1})-\mathcal{L}_ {\bar{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t})\). Using the telescoping sum identity,
\[\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t+1};\bar{M}^{t+1}) -\mathcal{L}_{\bar{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t})\] \[= \mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t+1};\bar{M}^{t+1}) -\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t};\bar{M}^{t+1})\] \[+\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t};\bar{M}^{t+1})- \mathcal{L}_{\bar{\beta}}(x_{t},y_{t+1},z_{t};\bar{M}^{t+1})\] \[+\mathcal{L}_{\bar{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t+1})- \mathcal{L}_{\bar{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t}),\]
we will bound each consecutive pair separately to derive a bound on the sum itself. We begin with the bound for
\[\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t+1};\bar{M}^{t+1})-\mathcal{L}_{ \bar{\beta}}(x_{t+1},y_{t+1},z_{t};\bar{M}^{t+1}).\]
Note that
\[\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t+1};\bar{M}^{t+1})- \mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t};\bar{M}^{t+1})\] \[= h(x_{t+1})+P(y_{t+1})-\langle z_{t+1},\bar{M}^{t+1}x_{t+1} \rangle+\langle z_{t+1},y_{t+1}\rangle+\frac{\bar{\beta}}{2}\|\bar{M}^{t+1}x_{ t+1}-y_{t+1}\|^{2}\] \[- (h(x_{t+1})+P(y_{t+1})-\langle z_{t},\bar{M}^{t+1}x_{t+1}\rangle +\langle z_{t},y_{t+1}\rangle+\frac{\bar{\beta}}{2}\|\bar{M}^{t+1}x_{t+1}-y_{t +1}\|^{2})\] \[= -\langle z_{t+1}-z_{t},\bar{M}^{t+1}x_{t+1}-y_{t+1}\rangle. \tag{7.26}\]
Applying the third part of Lemma 7.1, \(z_{t}-z_{t+1}=\dfrac{\bar{M}^{t+1}x_{t+1}-y_{t+1}}{\beta_{t}}\), to (7.26) then yields
\[\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t+1};\bar{M}^{t+1})- \mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t};\bar{M}^{t+1})= -\langle z_{t+1}-z_{t},\bar{M}^{t+1}x_{t+1}-y_{t+1}\rangle\] \[= \dfrac{\|z_{t+1}-z_{t}\|^{2}}{\bar{\beta}}.\]
Using (7.23) we thus obtain that
\[\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t+1};\bar{M}^{t+1}) -\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t};\bar{M}^{t+1}) \tag{7.27}\] \[\leq\frac{4}{\sigma\bar{\beta}}(\mu\|x_{t+1}-x_{t}\|^{2}+\nu\|x_{ t}-x_{t-1}\|^{2})+\frac{2}{\sigma\bar{\beta}}(\|\delta_{t+1}^{T}z_{t+1}\|+\| \delta_{t}^{T}z_{t}\|)^{2},\]
where we used the abbreviations \(\mu\) and \(\nu\) for the empirical Lipschitz constants defined in (7.8) and (7.9) respectively.
To evaluate \(\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t};\bar{M}^{t+1})-\mathcal{L}_{ \bar{\beta}}(x_{t},y_{t+1},z_{t};\bar{M}^{t+1})\), we recall our choice of \(g^{t+1}(x)\) first introduced in (3.2)
\[g^{t+1}(x)=h(x)-\langle z_{t},\bar{M}^{t}x\rangle+\frac{\bar{\beta}}{2}\|\bar{ M}^{t+1}x-y_{t+1}\|^{2}+D_{\phi}(x,x_{t}).\]
The function \(g^{t+1}(x)\) contains all the components of \(\mathcal{L}_{\bar{\beta}}(x,y_{t+1},z_{t};\bar{M}^{t+1})+D_{\phi}(x,x_{t})\) that depend on \(x\), so that
\[\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t};\bar{M}^{t+1})+D_{\phi}(x_{t+ 1},x_{t})-\mathcal{L}_{\bar{\beta}}(x_{t},y_{t+1},z_{t};\bar{M}^{t+1})-D_{\phi }(x_{t},x_{t})=g^{t+1}(x_{t+1})-g^{t+1}(x_{t}).\]
Rearranging,
\[\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t};\bar{M}^{t+1})-\mathcal{L}_{ \bar{\beta}}(x_{t},y_{t+1},z_{t};\bar{M}^{t+1})=g^{t+1}(x_{t+1})-D_{\phi}(x_{t +1},x_{t})-g^{t+1}(x_{t})+D_{\phi}(x_{t},x_{t}).\]
We will bound each of the components of the right hand side separately.
First, by Definition 3.1, for every \(t>K_{stable}\) it holds that
\[g^{t+1}(x_{t+1})-g^{t+1}(x_{t})\leq-\frac{\rho}{2}\|x_{t+1}-x_{t}\|^{2}.\]
Then, from the definition of \(D_{\phi}(\cdot,\cdot)\) we have that
\[D_{\phi}(x,x_{t})=\phi(x)-\phi(x_{t})-\langle\nabla\phi(x_{t}),x-x_{t}\rangle,\]
which clearly implies that
\[D_{\phi}(x_{t},x_{t})=\phi(x_{t})-\phi(x_{t})-\langle\nabla\phi(x_{t}),x_{t}-x _{t}\rangle=0.\]
At last, since \(\phi\) is convex, we can apply the gradient inequality to \(\phi\) and derive that
\[D_{\phi}(x_{t+1},x_{t})\geq 0.\]
Using these facts, it follows that
\[\mathcal{L}_{\tilde{\beta}}(x_{t+1},y_{t+1},z_{t};\bar{M}^{t+1})- \mathcal{L}_{\tilde{\beta}}(x_{t},y_{t+1},z_{t};\bar{M}^{t+1}) \tag{7.28}\] \[= g^{t+1}(x_{t+1})-g^{t+1}(x_{t})-D_{\phi}(x_{t+1},x_{t})+D_{\phi} (x_{t},x_{t})\] \[= g^{t+1}(x_{t+1})-g^{t+1}(x_{t})-D_{\phi}(x_{t+1},x_{t})\] \[\leq -\frac{\rho}{2}\|x_{t+1}-x_{t}\|^{2}-D_{\phi}(x_{t+1},x_{t})\] \[\leq -\frac{\rho}{2}\|x_{t+1}-x_{t}\|^{2}.\]
To bound \(\mathcal{L}_{\tilde{\beta}}(x_{t},y_{t+1},z_{t};\bar{M}^{t+1})-\mathcal{L}_{ \tilde{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t+1})\), note that \(y_{t+1}\) is the minimizer of \(\mathcal{L}_{\tilde{\beta}}(x_{t},y,z_{t};\bar{M}^{t+1})\) (we remind the reader that for \(t\geq K_{stable}\), \(\beta_{t}=\bar{\beta}\)). Hence,
\[\mathcal{L}_{\tilde{\beta}}(x_{t},y_{t+1},z_{t};\bar{M}^{t+1})-\mathcal{L}_{ \tilde{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t+1})\leq 0. \tag{7.29}\]
Finally, we bound the difference
\[e_{t+1}:=\mathcal{L}_{\tilde{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t+1})- \mathcal{L}_{\tilde{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t}). \tag{7.30}\]
Recall that
\[\mathcal{L}_{\beta}(x,y,z;A)=h(x)+P(y)-\langle z,Ax-y\rangle+\frac{\beta}{2} \|Ax-y\|^{2},\]
which yields that
\[e_{t+1}=\langle z_{t},\bar{M}^{t}x_{t}-\bar{M}^{t+1}x_{t}\rangle+\frac{\bar{ \beta}}{2}\left(\|\bar{M}^{t+1}x_{t}-y_{t}\|^{2}-\|\bar{M}^{t}x_{t}-y_{t}\|^{2 }\right).\]
Since
\[\|\bar{M}^{t+1} x_{t}-y_{t}\|^{2}-\|\bar{M}^{t}x_{t}-y_{t}\|^{2}\] \[= \|\bar{M}^{t+1}x_{t}\|^{2}-2x_{t}^{T}\left(\bar{M}^{t+1}\right)^{ T}y_{t}+\|y_{t}\|^{2}-\|\bar{M}^{t}x_{t}\|^{2}+2x_{t}^{T}\left(\bar{M}^{t} \right)^{T}y_{t}-\|y_{t}\|^{2}\] \[= \|\bar{M}^{t+1}x_{t}\|^{2}-\|\bar{M}^{t}x_{t}\|^{2}-2x_{t}^{T} \left(\bar{M}^{t+1}-\bar{M}^{t}\right)y_{t}.\]
It follows that
\[e_{t+1}=\langle z_{t},\bar{M}^{t}x_{t}-\bar{M}^{t+1}x_{t}\rangle+\frac{\bar{\beta} }{2}\left(\|\bar{M}^{t+1}x_{t}\|^{2}-\|\bar{M}^{t}x_{t}\|^{2}-2x_{t}^{T}\left( \bar{M}^{t+1}-\bar{M}^{t}\right)y_{t}\right). \tag{7.31}\]
Utilizing the fact that \(\bar{M}^{t+1}=\mathbb{E}[M]+\delta_{t+1}\), \(\bar{M}^{t}=\mathbb{E}[M]+\delta_{t}\), we have that
\[\|\bar{M}^{t+1}x_{t}\|^{2}-\|\bar{M}^{t}x_{t}\|^{2}\] \[= \|\mathbb{E}[M]x_{t}+\delta_{t+1}x_{t}\|^{2}-\|\mathbb{E}[M]x_{t} +\delta_{t}x_{t}\|^{2}\] \[= \|\mathbb{E}[M]x_{t}\|^{2}+2x_{t}^{T}\mathbb{E}[M]^{T}\delta_{t+ 1}x_{t}+\|\delta_{t+1}x_{t}\|^{2}-\|\mathbb{E}[M]x_{t}\|^{2}-2x_{t}^{T} \mathbb{E}[M]^{T}\delta_{t}x_{t}-\|\delta_{t}x_{t}\|^{2}\] \[= 2x_{t}^{T}\mathbb{E}[M]^{T}\left(\delta_{t+1}-\delta_{t}\right)x _{t}+\|\delta_{t+1}x_{t}\|^{2}-\|\delta_{t}x_{t}\|^{2}.\]
Plugging it into (7.31) then results with
\[e_{t+1}=\langle z_{t},\bar{M}^{t}x_{t}-\bar{M}^{t+1}x_{t}\rangle +\frac{\bar{\beta}}{2}(2x_{t}^{T}\mathbb{E}[M]^{T}\left(\delta_{t +1}-\delta_{t}\right)x_{t}\] \[+\|\delta_{t+1}x_{t}\|^{2}-\|\delta_{t}x_{t}\|^{2}-2x_{t}^{T} \left(\bar{M}^{t+1}-\bar{M}^{t}\right)y_{t}).\]
Using the fact that
\[\delta_{t+1}-\delta_{t}=\bar{M}^{t+1}-\mathbb{E}[M]-\left(\bar{M}^{t}-\mathbb{ E}[M]\right)=\bar{M}^{t+1}-\bar{M}^{t},\]
we can update our previous equation for \(e_{t+1}\) to
\[e_{t+1}=\langle z_{t},\bar{M}^{t}x_{t}-\bar{M}^{t+1}x_{t}\rangle +\frac{\bar{\beta}}{2}(2x_{t}^{T}\mathbb{E}[M]^{T}\left(\bar{M}^{ t+1}-\bar{M}^{t}\right)x_{t} \tag{7.32}\] \[+\|\delta_{t+1}x_{t}\|^{2}-\|\delta_{t}x_{t}\|^{2}-2x_{t}^{T} \left(\bar{M}^{t+1}-\bar{M}^{t}\right)y_{t}).\]
We denote by \(\hat{M}^{t+1}\) the average of the \(\theta_{t+1}-\theta_{t}\) samples taken in round \(t+1\). Formally,
\[\hat{M}^{t+1}=\frac{1}{\theta_{t+1}-\theta_{t}}\sum_{i=\theta_{t}+1}^{\theta_ {t+1}}M^{i}.\]
Furthermore, we denote \(\hat{\delta}_{t}=\hat{M}^{t}-\mathbb{E}[M]\). By the update rule of \(\bar{M}^{t+1}\) and the definition of \(\hat{\delta}_{t}\) and \(\delta_{t}\)
\[\begin{split}\bar{M}^{t+1}-\bar{M}^{t}=&\frac{ \theta_{t}}{\theta_{t+1}}\bar{M}^{t}+\frac{\theta_{t+1}-\theta_{t}}{\theta_{t+ 1}}\hat{M}^{t+1}-\bar{M}^{t}\\ =&\frac{\theta_{t+1}-\theta_{t}}{\theta_{t+1}} \left(\hat{M}^{t+1}-\bar{M}^{t}\right)\\ =&\frac{\theta_{t+1}-\theta_{t}}{\theta_{t+1}} \left(\mathbb{E}[M]+\hat{\delta}_{t+1}-\mathbb{E}[M]-\delta_{t}\right)\\ =&\frac{\theta_{t+1}-\theta_{t}}{\theta_{t+1}} \left(\hat{\delta}_{t+1}-\delta_{t}\right).\end{split} \tag{7.33}\]
To reduce clutter, we denote
\[s_{t+1}=\frac{\theta_{t+1}-\theta_{t}}{\theta_{t+1}}.\]
Introducing this notation to (7.33), we derive
\[\bar{M}^{t+1}-\bar{M}^{t}=s_{t+1}\left(\hat{\delta}_{t+1}-\delta_{t}\right). \tag{7.34}\]
Combining (7.32) and (7.34) results with
\[e_{t+1} =\langle z_{t},-s_{t+1}(\hat{\delta}_{t+1}-\delta_{t})x_{t}\rangle\] \[+\frac{\bar{\beta}}{2}\left(2x_{t}^{T}\mathbb{E}[M]^{T}\left(s_{ t+1}\left(\hat{\delta}_{t+1}-\delta_{t}\right)\right)x_{t}+\|\delta_{t+1}x_{t}\|^{2}- \|\delta_{t}x_{t}\|^{2}-2x_{t}^{T}\left(s_{t+1}\left(\hat{\delta}_{t+1}-\delta _{t}\right)\right)y_{t}\right),\]
which in particular implies that
\[e_{t+1} \leq\langle z_{t},-s_{t+1}\left(\hat{\delta}_{t+1}-\delta_{t} \right)x_{t}\rangle\] \[+\frac{\bar{\beta}}{2}\left(2x_{t}^{T}\mathbb{E}[M]^{T}\left(s_{ t+1}\left(\hat{\delta}_{t+1}-\delta_{t}\right)\right)x_{t}+\|\delta_{t+1}x_{t} \|^{2}-2x_{t}^{T}\left(s_{t+1}\left(\hat{\delta}_{t+1}-\delta_{t}\right)\right) y_{t}\right).\]
By the Cauchy-Schwarz and the triangle inequalities,
\[e_{t+1} \leq s_{t+1}\|z_{t}\|\left(\|\hat{\delta}_{t+1}\|+\|\delta_{t}\| \right)\|x_{t}\|\] \[+s_{t+1}\frac{\bar{\beta}}{2}\left(\|\mathbb{E}[M]x_{t}\|\left( \|\hat{\delta}_{t+1}\|+\|\delta_{t}\|\right)\|x_{t}\|+2\|x_{t}\|\left(\|\hat{ \delta}_{t+1}\|+\|\delta_{t}\|\right)\|y_{t}\|+\|\delta_{t+1}\|^{2}\cdot\|x_{ t}\|^{2}\right).\]
By Assumption 3, there exists \(B>0\), such that \(B\geq\sup\limits_{t\geq 0}\{\max\{\|x_{t}\|,\|\mathbb{E}[M]x_{t}\|,\|y_{t}\|,\|z_{t}\|\}\}\). Using this fact, we have that
\[e_{t+1} \leq s_{t+1}B^{2}\left(\|\hat{\delta}_{t+1}\|+\|\delta_{t}\| \right)+s_{t+1}B^{2}\frac{\bar{\beta}}{2}\left(\left(\|\hat{\delta}_{t+1}\|+ \|\delta_{t}\|\right)+2\left(\|\hat{\delta}_{t+1}\|+\|\delta_{t}\|\right)+\| \delta_{t+1}\|^{2}\right) \tag{7.35}\] \[=\left(\frac{3\bar{\beta}}{2}+1\right)B^{2}s_{t+1}\left(\|\hat{ \delta}_{t+1}\|+\|\delta_{t}\|\right)+\frac{\bar{\beta}}{2}B^{2}s_{t+1}\| \delta_{t+1}\|^{2}.\]
Let \(\kappa\in\{\epsilon,1+\epsilon\}\) be chosen according to sampling regime described by Definition 4.3. By the Taylor series expansion of \((t+1)^{1+\kappa}\),
\[(t+1)^{1+\kappa}=t^{1+\kappa}+\frac{1}{1!}(t+1-t)(1+\kappa)t^{\kappa}+O(t^{-1 +\kappa})=t^{1+\kappa}+(1+\kappa)t^{\kappa}+O(t^{-1+\kappa}).\]
Therefore, there exist constants \(A_{1},A_{2}>0\), such that for all \(t\geq 1\)
\[t^{1+\kappa}+A_{1}t^{\kappa}\leq(t+1)^{1+\kappa}\leq t^{1+\kappa}+A_{2}t^{ \kappa}.\]
Hence, it follows that
\[s_{t+1}=\frac{\theta_{t+1}-\theta_{t}}{\theta_{t+1}}=\frac{(t+1)^{1+\kappa}-t ^{1+\kappa}}{(t+1)^{1+\kappa}}\leq\frac{t^{1+\kappa}+A_{2}t^{\kappa}-t^{1+ \kappa}}{(t+1)^{1+\kappa}}=\frac{A_{2}t^{\kappa}}{(t+1)^{1+\kappa}}\leq\frac{A _{2}t^{\kappa}}{t^{1+\kappa}}=\frac{A_{2}}{t}. \tag{7.36}\]
Now our mission boils down to producing (probabilistic) bounds on \(\|\delta_{t}\|\) and \(\|\hat{\delta}_{t}\|\). By Lemma 4.1 and Lemma 4.2\(\,\ Prob\left(\|\delta_{t}\|>O\left(\frac{1}{t^{0.5+0.25\epsilon}}\right)\ i.o. \right)=0\). Therefore, with probability 1, there exists an index \(\tau_{1}\) such that for all \(t>\tau_{1}\), \(\|\delta_{t}\|\leq\frac{1}{\sqrt{t}}\).
Using similar arguments to those used in Lemma 4.1 and Lemma 4.2, it can also be shown that \(Prob\left(\|\hat{\delta}_{t}\|>O\left(\frac{1}{t^{0.25\epsilon}}\right)\,i.o. \right)=0\). We provide a sketch of the formal adaptions required below.
* If Assumption 4 holds true, the proof is almost identical to Lemma 4.2, with the following differences: 1. We use \(\theta_{t+1}-\theta_{t}\) instead of \(\theta_{t}\) in the definition of \(\hat{\delta}_{t}\): \[\hat{\delta}_{t+1}=\hat{M}^{t+1}-\mathbb{E}[M]=\sum_{l=\theta_{t}}^{\theta_{ t+1}}\left(\frac{1}{\theta_{t+1}-\theta_{t}}\left(M^{l}-\mathbb{E}[M]\right) \right).\] 2. Subsequently, we derive an upper bound for \[Prob\left(|[\hat{\delta}_{t+1}]_{i,j}|>\frac{1}{t^{0.25\epsilon}}\right),\] instead of \[Prob\left(|[\delta_{t}]_{i,j}|>\frac{1}{t^{0.5+0.25\epsilon}}\right).\] 3. Before deriving (4.12), We use the lower bound for \(\theta_{t+1}-\theta_{t}\) under Assumption 4, \(A_{1}t^{\epsilon}\).
* If Assumption 4 does not hold, the proof is almost identical to Lemma 4.1, with the following differences: 1. We use the definition \[\hat{\delta}_{t+1}=\hat{M}^{t+1}-\mathbb{E}[M]=\frac{1}{\theta_{t+1}-\theta_{ t}}\sum_{l=\theta_{t}}^{\theta_{t+1}}\left(M^{l}-\mathbb{E}[M]\right)\] instead of \[\delta_{t}=\bar{M}^{t}-\mathbb{E}[M]=\frac{1}{\theta_{t}}\sum_{l=1}^{\theta_{ t}}\left(M^{l}-\mathbb{E}[M]\right).\] 2. Subsequently, the expression for \(Var[[\hat{\delta}_{t+1}]_{i,j}]\) in (4.9) will be replaced by \[Var[[\delta_{t}]_{i,j}]=\frac{1}{\theta_{t+1}-\theta_{t}}Var[M_{i,j}].\] 3. In the application of Chebyshev's inequality that follows immediately afterwards, we modify our choice of \(\eta\), from \(\frac{1}{t^{0.5+0.25\epsilon}}\) to \(\frac{1}{t^{0.25\epsilon}}\). 4. Finally, we use the lower bound for \(\theta_{t+1}-\theta_{t}\) if Assumption 4 does not hold, \(A_{1}t^{1+\epsilon}\), yielding the following replacement for (4.10) \[Prob\left(|[\hat{\delta}_{t+1}]_{i,j}|>\frac{1}{t^{0.25\epsilon}}\right)\leq \frac{Var[M_{i,j}]}{A_{1}t^{1+0.5\epsilon}}.\]
Since the proof process is identical to the one shown in Lemma 4.1 and Lemma 4.2, other than the minor differences pointed out above, we do not provide the complete proof.
Returning to the main stream of the proof of the third milestone, we have, due to \(Prob\left(\|\hat{\delta}_{t}\|>O\left(\frac{1}{t^{0.25\epsilon}}\right)\,i.o. \right)=0\) justified above, that there exists with probability 1 an index \(\tau_{2}\), so that for all \(t>\tau_{2}\)
\[\|\hat{\delta}_{t}\|\leq O\left(\frac{1}{t^{0.25\epsilon}}\right). \tag{7.37}\]
Utilizing (7.35), (7.36) and (7.37), we can show that for all sufficiently large \(t\), \(e_{t+1}\leq O\left(\frac{1}{t^{1+0.25\epsilon}}\right)\). Indeed, for any \(t>\tau\equiv\max\{\tau_{1},\tau_{2}\}\), there exist constants \(A_{3},A_{4}\) so that
\[e_{t+1}\leq\left(\frac{3\bar{\beta}}{2}+1\right)B^{2}\frac{A_{2}}{t}\left( \frac{A_{3}}{t^{0.25\epsilon}}+\frac{A_{4}}{\sqrt{t}}\right)+\frac{\bar{\beta} B^{2}}{2}\frac{A_{2}}{t}\cdot\frac{A_{4}^{2}}{t},\]
where \(A_{3}\), \(A_{4}\) are the scalars whose existence is implied by the notations \(\|\hat{\delta}_{t}\|\leq O\left(\frac{1}{t^{0.25\epsilon}}\right)\) and \(\|\delta_{t}\|\leq O\left(\frac{1}{t^{0.5+0.25\epsilon}}\right)\).
Consequently, it follows with probability 1 that
\[\sum_{t=1}^{\infty}e_{t}<\infty. \tag{7.38}\]
Combining (7.30), (7.27), (7.28), (7.29), we can bound \(\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t+1};\bar{M}^{t+1})-\mathcal{L}_ {\bar{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t})\) as follows.
\[\leq\frac{4}{\sigma\bar{\beta}}\left(\mu\|x_{t+1}-x_{t}\|^{2}+ \nu\|x_{t}-x_{t-1}\|^{2}\right)+\frac{2}{\sigma\bar{\beta}}\left(\|\delta_{t+ 1}^{T}z_{t+1}\|+\|\delta_{t}^{T}z_{t}\|\right)^{2}-\frac{\rho}{2}\|x_{t+1}-x_ {t}\|^{2}+e_{t+1}.\]
Set \(d_{t+1}\) from (7.12) to be
\[d_{t+1}=\frac{2}{\sigma\bar{\beta}}\left(\|\delta_{t+1}^{T}z_{t+1}\|+\|\delta_ {t}^{T}z_{t}\|\right)^{2}+e_{t+1},\]
so that
\[\mathcal{L}_{\bar{\beta}}(x_{t+1},y_{t+1},z_{t+1};\bar{M}^{t+1})- \mathcal{L}_{\bar{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t}) \tag{7.39}\] \[\leq\frac{4}{\sigma\bar{\beta}}\left(\mu\|x_{t+1}-x_{t}\|^{2}+\nu \|x_{t}-x_{t-1}\|^{2}\right)-\frac{\rho}{2}\|x_{t+1}-x_{t}\|^{2}+d_{t+1}.\]
The relation in (7.39) is exactly the required bound (7.13).
To prove that the sequence \(\{d_{t+1}\}_{t\geq 1}\) indeed satisfies (7.12), note that by Assumption 3 the sequence \(\{z_{t}\}_{t>0}\) is bounded - that is, there exists a constant \(B>0\), such that \(\|z_{t}\|\leq B\) for all \(t\). Using this and the Cauchy-Schwartz inequality,
\[d_{t+1}=\frac{2}{\sigma\bar{\beta}}\left(\|\delta_{t+1}^{T}z_{t+1}\|+\| \delta_{t}^{T}z_{t}\|\right)^{2}+e_{t+1}\leq\frac{2B^{2}}{\sigma\bar{\beta}} \left(\|\delta_{t+1}\|+\|\delta_{t}\|\right)^{2}+e_{t+1}.\]
Applying the inequality \((a+b)^{2}\leq 2a^{2}+2b^{2}\),
\[d_{t+1}\leq\frac{2B^{2}}{\sigma\bar{\beta}}\left(\|\delta_{t+1}\|+\|\delta_{t}\| \right)^{2}+e_{t+1}\leq\frac{4B^{2}}{\sigma\bar{\beta}}\left(\|\delta_{t+1}\|^ {2}+\|\delta_{t}\|^{2}\right)+e_{t+1}.\]
As we already argued above, by our choice of \(\theta_{t}\) it holds that \(Prob\left(\|\delta_{t}\|>O\left(\frac{1}{t^{0.5+0.25\epsilon}}\right)\ i.o. \right)=0\). Combined with (7.38), the following holds almost surely:
\[\sum_{t=K+1}^{\infty}d_{t+1}<\infty. \tag{7.40}\]
This concludes the proof of (7.12), and the third milestone of the proof.
**Milestone 4: Establishing Relation (7.14).**
Utilizing (7.39) obtained in the proof of the third milestone, for any \(q\geq p+1\geq K_{stable}\) we have that
\[\mathcal{L}_{\bar{\beta}}(x_{q},y_{q},z_{q};\bar{M}^{q})-\mathcal{ L}_{\bar{\beta}}(x_{p},y_{p},z_{p};\bar{M}^{p})\] \[\leq\frac{1}{2}\sum_{t=p}^{q}\left(\frac{8\mu}{\sigma\bar{\beta} }-\rho\right)\|x_{t+1}-x_{t}\|^{2}+\frac{1}{2}\sum_{t=p}^{q}\frac{8\nu}{ \sigma\bar{\beta}}\|x_{t}-x_{t-1}\|^{2}+\sum_{t=p}^{q}d_{t+1}.\]
Since
\[\frac{1}{2}\sum_{t=p}^{q}\frac{8\nu}{\sigma\bar{\beta}}\|x_{t}-x_{t-1}\|^{2}= \frac{1}{2}\sum_{t=p-1}^{q-1}\frac{8\nu}{\sigma\bar{\beta}}\|x_{t+1}-x_{t}\|^ {2},\]
it follows that
\[\mathcal{L}_{\bar{\beta}}(x_{q},y_{q},z_{q};\bar{M}^{q})-\mathcal{ L}_{\bar{\beta}}(x_{p},y_{p},z_{p};\bar{M}^{p})\] \[\leq\frac{1}{2}\sum_{t=p}^{q}\left(\frac{8\mu}{\sigma\bar{\beta}} -\rho\right)\|x_{t+1}-x_{t}\|^{2}+\frac{1}{2}\sum_{t=p-1}^{q-1}\frac{8\nu}{ \sigma\bar{\beta}}\|x_{t+1}-x_{t}\|^{2}+\sum_{t=p}^{q}d_{t+1}.\]
Separating the last element from the first summation, and the first element from the second summation,
\[\mathcal{L}_{\bar{\beta}}(x_{q},y_{q},z_{q};\bar{M}^{q})-\mathcal{ L}_{\bar{\beta}}(x_{p},y_{p},z_{p};\bar{M}^{p})\] \[\leq\frac{1}{2}\sum_{t=p}^{q-1}\left(\frac{8\mu+8\nu}{\sigma\bar{ \beta}}-\rho\right)\|x_{t+1}-x_{t}\|^{2}+\frac{1}{2}\left(\frac{8\mu}{\sigma \bar{\beta}}-\rho\right)\|x_{q+1}-x_{q}\|^{2}+\frac{4\nu}{\sigma\bar{\beta}}\| x_{p}-x_{p-1}\|^{2}+\sum_{t=p}^{q}d_{t+1}.\]
Recall that \(\mu:=L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})\), \(\nu=L^{e}_{\nabla\phi}(\{x_{t}\}_{t\geq 0})\), as defined in (7.8), (7.9). By
Definition 3.1
\[\frac{8L^{e}_{\nabla h+\nabla\phi}(\{x_{t}\}_{t\geq 0})}{\sigma\bar{\beta}}- \rho=\frac{8\mu}{\sigma\bar{\beta}}-\rho<0.\]
Therefore,
\[\mathcal{L}_{\bar{\beta}}(x_{q},y_{q},z_{q};\bar{M}^{q})-\mathcal{ L}_{\bar{\beta}}(x_{p},y_{p},z_{p};\bar{M}^{p}) \tag{7.41}\] \[\leq-\frac{1}{2}\sum_{t=p}^{q-1}\left(\rho-\frac{8\mu+8\nu}{ \sigma\bar{\beta}}\right)\|x_{t+1}-x_{t}\|^{2}+\frac{4\nu}{\sigma\bar{\beta}} \|x_{p}-x_{p-1}\|^{2}+\frac{1}{2}\sum_{t=p}^{q}d_{t+1}.\]
Utilizing Definition 3.1 once again, we have that,
\[\rho-\frac{8L_{\nabla h+\nabla\phi}^{e}(\{x_{t}\}_{t\geq 0})+8L_{\nabla\phi}^{e}( \{x_{t}\}_{t\geq 0})}{\sigma\bar{\beta}}=\rho-\frac{8\mu+8\nu}{\sigma\bar{\beta}}>0,\]
and therefore \(-\frac{1}{2}\cdot\left(\rho-\frac{8\mu+8\nu}{\sigma\bar{\beta}}\right)<0\) almost surely; denote
\[C_{1}=-\frac{1}{2}\cdot\left(\rho-\frac{8\mu+8\nu}{\sigma\bar{\beta}}\right).\]
That is,
\[\begin{split}\mathcal{L}_{\bar{\beta}}(x_{q},y_{q},z_{q};\bar{M}^ {q})-\mathcal{L}_{\bar{\beta}}(x_{p},y_{p},z_{p};\bar{M}^{p})\leq\sum_{t=p}^{q -1}C_{1}\|x_{t+1}-x_{t}\|^{2}&+\frac{4\nu}{\sigma\bar{\beta}}\|x _{p}-x_{p-1}\|^{2}\\ &+\frac{1}{2}\sum_{t=p}^{q}d_{t+1}.\end{split} \tag{7.42}\]
To reduce clutter, denote \(\bar{K}=K_{stable}\). Rearranging elements in (7.42) with \(q=t\) and \(p=\bar{K}+1\) and using the scalar random variable
\[C_{2}=\frac{4\nu}{\sigma\bar{\beta}}\|x_{\bar{K}+1}-x_{\bar{K}}\|^{2}+ \mathcal{L}_{\bar{\beta}}(x_{\bar{K}+1},y_{\bar{K}+1},z_{\bar{K}+1};\bar{M}^{ \bar{K}+1}),\]
we have that
\[\mathcal{L}_{\bar{\beta}}(x_{t},y_{t},z_{t};\bar{M}^{t})\leq\sum_{i=\bar{K}+1 }^{t-2}C_{1}\|x_{i+1}-x_{i}\|^{2}+\sum_{i=\bar{K}+1}^{t}d_{i+1}+C_{2}.\]
By Assumption 3, there exists \(B>0\) such that \(\|x_{t}\|\) is bounded by \(B\) for all \(t\). Therefore,
\[\frac{4\nu}{\sigma\bar{\beta}}\|x_{\bar{K}+1}-x_{\bar{K}}\|^{2}\leq\frac{16 \nu B^{2}}{\sigma\bar{\beta}}<\infty. \tag{7.43}\]
Since \(P\) is proper, there exists \(\tilde{y}\) such that \(P(\tilde{y})<\infty\). Since \(P(\tilde{y})<\infty\) and all other components are continuous,
\[\mathcal{L}_{\bar{\beta}}(x_{\bar{K}+1},\tilde{y},z_{\bar{K}+1};\bar{M}^{\bar{ K}+1})<\infty.\]
Note that \(y_{\bar{K}+1}\) is the minimizer of \(\mathcal{L}_{\bar{\beta}}(x_{\bar{K}},y,z_{\bar{K}};\bar{M}^{\bar{K}+1})\) with respect to \(y\), and hence,
\[\mathcal{L}_{\bar{\beta}}(x_{\bar{K}},y_{\bar{K}+1},z_{\bar{K}};\bar{M}^{\bar{ K}+1})\leq\mathcal{L}_{\bar{\beta}}(x_{\bar{K}+1},\tilde{y},z_{\bar{K}+1};\bar{M}^{ \bar{K}+1})<\infty. \tag{7.44}\]
Combining (7.43) and (7.44) implies that with probability 1
\[C_{2}<\infty.\]
This concludes the proof of (7.14), which is the fourth and last milestone of the proof. \(\Box\)
Conclusion
In this paper we have studied the optimization model of a sum of two functions; a smooth function and a nonsmooth composite function, where the composition is done with regards to the expectation of a random linear operator. This problem was not discussed in the literature previously to the best of our knowledge, and generalizes the previously researched deterministic linear-composite model.
We provided a meta algorithm and to algorithms that were shown to implement it. Furthermore, we have provided assumptions given which a stationary point of a sequence generated by an implementation of the meta algorithm is a critical point of the optimization model.
## Appendix A Technical Proofs
**Lemma A.1**.: _Let \(L(\theta)\) be the maximum likelihood function of Logistic Regression, and let \(-\ln(L(\theta))\) be the minus log likelihood. Then,_
\[0\preceq\frac{\partial^{2}\left(-\ln(L(\theta))\right)}{\partial\theta^{2}}( \theta)\preceq 0.25\lambda_{max}\left(\sum_{i=1}^{n}x_{i}x_{i}^{T}\right)I.\]
Proof.: The maximum likelihood function is given by
\[L(\theta)=\prod_{i=1}^{n}\left(\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp\{x_{i}^{T }\theta\}}\right)^{y_{i}}\cdot\left(1-\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp\{ x_{i}^{T}\theta\}}\right)^{1-y_{i}}.\]
and
\[l(\theta)=-\ln(L(\theta)) =-\sum_{i=1}^{n}y_{i}\ln\left(\frac{\exp\{x_{i}^{T}\theta\}}{1+ \exp\{x_{i}^{T}\theta\}}\right)+(1-y_{i})\ln\left(1-\frac{\exp\{x_{i}^{T} \theta\}}{1+\exp\{x_{i}^{T}\theta\}}\right)\] \[=-\sum_{i=1}^{n}y_{i}\ln\left(\frac{\exp\{x_{i}^{T}\theta\}}{1+ \exp\{x_{i}^{T}\theta\}}\right)+(1-y_{i})\ln\left(\frac{1}{1+\exp\{x_{i}^{T} \theta\}}\right).\]
Taking the first derivative with respect to \(\theta\),
\[\frac{\partial\left(l(\theta)\right)}{\partial\theta} =-\sum_{i=1}^{n}y_{i}\frac{1+\exp\{x_{i}^{T}\theta\}}{\exp\{x_{i} ^{T}\theta\}}\cdot\frac{\partial\left(\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp\{ x_{i}^{T}\theta\}}\right)}{\partial\theta}\] \[\quad+(1-y_{i})\left(1+\exp\{x_{i}^{T}\theta\}\right)\cdot\frac{ \partial\left(\frac{1}{1+\exp\{x_{i}^{T}\theta\}}\right)}{\partial\theta}\]
Denoting \(z_{i}=x_{i}^{T}\theta\), we have
\[\frac{\partial\left(\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp\{x_{i} ^{T}\theta\}}\right)}{\partial\theta} =\frac{\partial z_{i}}{\partial\theta}\cdot\frac{\partial\left( \frac{\exp\{z_{i}\}}{1+\exp\{z_{i}\}}\right)}{\partial z_{i}}\] \[=x_{i}\cdot\frac{\exp\{x_{i}^{T}\theta\}}{\left(1+\exp\{x_{i}^{T} \theta\}\right)^{2}},\]
\[\frac{\partial\left(\frac{1}{1+\exp\{x_{i}^{T}\theta\}}\right)}{ \partial\theta} =\frac{\partial z_{i}}{\partial\theta}\cdot\frac{\partial\left( \frac{1}{1+\exp\{z_{i}\}}\right)}{\partial z_{i}}\] \[=x_{i}\cdot\left(\frac{-\exp\{x_{i}^{T}\theta\}}{\left(1+\exp\{x _{i}^{T}\theta\}\right)^{2}}\right).\]
Therefore,
\[\frac{\partial\left(l(\theta)\right)}{\partial\theta} =-\sum_{i=1}^{n}y_{i}\frac{1+\exp\{x_{i}^{T}\theta\}}{\exp\{x_{i} ^{T}\theta\}}\cdot x_{i}\cdot\frac{\exp\{x_{i}^{T}\theta\}}{\left(1+\exp\{x_{ i}^{T}\theta\}\right)^{2}}\] \[+\left(1-y_{i}\right)\left(1+\exp\{x_{i}^{T}\theta\}\right)\cdot x _{i}\cdot\left(\frac{-\exp\{x_{i}^{T}\theta\}}{\left(1+\exp\{x_{i}^{T}\theta \}\right)^{2}}\right)\] \[=-\sum_{i=1}^{n}y_{i}x_{i}\cdot\frac{1}{1+\exp\{x_{i}^{T}\theta \}}+\left(1-y_{i}\right)x_{i}\cdot\left(\frac{-\exp\{x_{i}^{T}\theta\}}{1+ \exp\{x_{i}^{T}\theta\}}\right)\] \[=\sum_{i=1}^{n}x_{i}\cdot\left(\frac{\exp\{x_{i}^{T}\theta\}}{1+ \exp\{x_{i}\theta\}}-y_{i}\right).\]
Taking the second derivative,
\[\frac{\partial^{2}\left(l(\theta)\right)}{\partial\theta^{2}} =\sum_{i=1}^{n}x_{i}x_{i}^{T}\cdot\frac{\exp\{x_{i}^{T}\theta\}}{ \left(1+\exp\{x_{i}\theta\}\right)^{2}}\] \[=\sum_{i=1}^{n}x_{i}x_{i}^{T}\cdot\left(\frac{\exp\{x_{i}^{T} \theta\}}{1+\exp\{x_{i}\theta\}}\right)\cdot\left(1-\frac{\exp\{x_{i}^{T} \theta\}}{1+\exp\{x_{i}\theta\}}\right).\]
Since
\[0<\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp\{x_{i}\theta\}}<1\]
for all \(x_{i}\) and \(\theta\), it follows that
\[\sum_{i=1}^{n}x_{i}x_{i}^{T}\cdot\left(\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp \{x_{i}\theta\}}\right)\cdot\left(1-\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp\{x _{i}\theta\}}\right)\succeq 0.\]
For the second relation, we note that for every vector \(v\) and every \(i\in[n]\)
\[v^{T}x_{i}x_{i}^{T}v=(v^{T}x_{i})^{2}\geq 0.\]
Since \(\max\{u(1-u)\}=0.25\),
\[\left(\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp\{x_{i}\theta\}}\right)\cdot\left( 1-\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp\{x_{i}\theta\}}\right)\leq 0.25.\]
Therefore, the inequality
\[v^{T}\left(\sum_{i=1}^{n}x_{i}x_{i}^{T}\cdot\left(\frac{\exp\{x_{i}^{T}\theta \}}{1+\exp\{x_{i}\theta\}}\right)\cdot\left(1-\frac{\exp\{x_{i}^{T}\theta\}}{1 +\exp\{x_{i}\theta\}}\right)\right)v\leq v^{T}\left(0.25\sum_{i=1}^{n}x_{i}x_{i} ^{T}\right)v\]
holds for every vector \(v\).
Hence, for every \(v\),
\[v^{T}\left(0.25\lambda_{max}\left(\sum_{i=1}^{n}x_{i}x_{i}^{T} \right)I-\sum_{i=1}^{n}x_{i}x_{i}^{T}\cdot\left(\frac{\exp\{x_{i}^{T}\theta\}}{1 +\exp\{x_{i}\theta\}}\right)\cdot\left(1-\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp \{x_{i}\theta\}}\right)\right)v\] \[\geq v^{T}\left(0.25\lambda_{max}\left(\sum_{i=1}^{n}x_{i}x_{i}^{T }\right)I-0.25\sum_{i=1}^{n}x_{i}x_{i}^{T}\right)v\geq 0.\]
Therefore, by definition,
\[\sum_{i=1}^{n}x_{i}x_{i}^{T}\cdot\left(\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp\{x _{i}\theta\}}\right)\cdot\left(1-\frac{\exp\{x_{i}^{T}\theta\}}{1+\exp\{x_{i} \theta\}}\right)\preceq 0.25\lambda_{max}\left(\sum_{i=1}^{n}x_{i}x_{i}^{T} \right)I,\]
\(\Box\)
|
2302.05294 | MoreauGrad: Sparse and Robust Interpretation of Neural Networks via
Moreau Envelope | Explaining the predictions of deep neural nets has been a topic of great
interest in the computer vision literature. While several gradient-based
interpretation schemes have been proposed to reveal the influential variables
in a neural net's prediction, standard gradient-based interpretation frameworks
have been commonly observed to lack robustness to input perturbations and
flexibility for incorporating prior knowledge of sparsity and group-sparsity
structures. In this work, we propose MoreauGrad as an interpretation scheme
based on the classifier neural net's Moreau envelope. We demonstrate that
MoreauGrad results in a smooth and robust interpretation of a multi-layer
neural network and can be efficiently computed through first-order optimization
methods. Furthermore, we show that MoreauGrad can be naturally combined with
$L_1$-norm regularization techniques to output a sparse or group-sparse
explanation which are prior conditions applicable to a wide range of deep
learning applications. We empirically evaluate the proposed MoreauGrad scheme
on standard computer vision datasets, showing the qualitative and quantitative
success of the MoreauGrad approach in comparison to standard gradient-based
interpretation methods. | Jingwei Zhang, Farzan Farnia | 2023-01-08T11:28:28Z | http://arxiv.org/abs/2302.05294v1 | # MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope
###### Abstract
Explaining the predictions of deep neural nets has been a topic of great interest in the computer vision literature. While several gradient-based interpretation schemes have been proposed to reveal the influential variables in a neural net's prediction, standard gradient-based interpretation frameworks have been commonly observed to lack robustness to input perturbations and flexibility for incorporating prior knowledge of sparsity and group-sparsity structures. In this work, we propose MoreauGrad1 as an interpretation scheme based on the classifier neural net's Moreau envelope. We demonstrate that MoreauGrad results in a smooth and robust interpretation of a multi-layer neural network and can be efficiently computed through first-order optimization methods. Furthermore, we show that MoreauGrad can be naturally combined with \(L_{1}\)-norm regularization techniques to output a sparse or group-sparse explanation which are prior conditions applicable to a wide range of deep learning applications. We empirically evaluate the proposed MoreauGrad scheme on standard computer vision datasets, showing the qualitative and quantitative success of the MoreauGrad approach in comparison to standard gradient-based interpretation methods.
Footnote 1: The paper’s code is available at [https://github.com/buyeah1109/MoreauGrad](https://github.com/buyeah1109/MoreauGrad)
## 1 Introduction
Deep neural networks (DNNs) have achieved state-of-the-art performance in many computer vision problems including image classification [1], object detection [2], and medical image analysis [3]. While they manage to attain super-human scores on standard image and speech recognition tasks, a reliable application of deep learning models to real-world problems requires an interpretation of their predictions to help domain experts understand and investigate the basis of their predictions. Over the past few years, developing and analyzing interpretation schemes that reveal the influential features in a neural network's prediction have attracted great interest in the computer vision community.
A standard approach for interpreting neural nets' predictions is to analyze the gradient of their prediction score function at or around an input data point. Such gradient-based interpretation mechanisms result in a feature saliency map revealing the influential variables that locally affect the neural net's assigned prediction score. Three well-known examples of gradient-based interpretation schemes are the simple gradient [4], integrated gradients [5], and DeepLIFT [6] methods. While the mentioned methods have found many applications in explaining neural nets' predictions, they have been observed to lack robustness to input perturbations and to output a dense noisy saliency map in their application to computer vision datasets [7, 8]. Consequently, these gradient-based explanations can be considerably altered by minor random or adversarial input noise.
A widely-used approach to improve the robustness and sharpness of gradient-based interpretations is SmoothGrad [9] which applies Gaussian smoothing to the mentioned gradient-based interpretation methods. As shown by [9], SmoothGrad can significantly boost the visual quality of a neural net's gradient-based
saliency map. On the other hand, SmoothGrad typically leads to a dense interpretation vector and remains inflexible to incorporate prior knowledge of sparsity and group-sparsity structures. Since a sparse saliency map is an applicable assumption to several image classification problems where a relatively small group of input variables can completely determine the image label, a counterpart of SmoothGrad which can simultaneously achieve sparse and robust interpretation will be useful in computer vision applications.
In this paper, we propose a novel approach, which we call _MoreauGrad_, to achieve a provably smooth gradient-based interpretation with potential sparsity or group-sparsity properties. The proposed Moreau-Grad outputs the gradient of a classifier's Moreau envelope which is a useful optimization tool for enforcing smoothness in a target function. We leverage convex analysis to show that MoreauGrad behaves smoothly around an input sample and therefore provides an alternative optimization-based approach to SmoothGrad for achieving a smoothly-changing saliency map. As a result, we demonstrate that similar to SmoothGrad, MoreauGrad offers robustness to input perturbations, since a norm-bounded perturbation will only lead to a bounded change to the MoreauGrad interpretation.
Next, we show that MoreauGrad can be flexibly combined with \(L_{1}\)-norm-based regularization penalties to output sparse and group-sparse interpretations. Our proposed combinations, Sparse MoreauGrad and Group-Sparse MoreauGrad, take advantage of elastic-net [10] and group-norm [11] penalty terms to enforce sparse and group-sparse saliency maps, respectively. We show that these extensions of MoreauGrad preserve the smoothness and robustness properties of the original MoreauGrad scheme. Therefore, our discussion demonstrates the adaptable nature of MoreauGrad for incorporating prior knowledge of sparsity structures in the output interpretation.
Figure 1: Interpretation of Sparse MoreauGrad (ours) vs. standard gradient-based baselines on an ImageNet sample before and after adding a norm-bounded interpretation adversarial perturbation.
Finally, we present the empirical results of our numerical experiments applying MoreauGrad to standard image recognition datasets and neural net architectures. We compare the numerical performance of MoreauGrad with standard gradient-based interpretation baselines. Our numerical results indicate the satisfactory performance of vanilla and \(L_{1}\)-norm-based MoreauGrad in terms of visual quality and robustness. Figure 1 shows the robustness and sparsity of the Sparse MoreauGrad interpretation applied to an ImageNet sample in comparison to standard gradient-based saliency maps. As this and our other empirical findings suggest, MoreauGrad can outperform standard baselines in terms of the sparsity and robustness properties of the output interpretation. In the following, we summarize the main contributions of this paper:
* Proposing MoreauGrad as an interpretation scheme based on a classifier function's Moreau envelope
* Analyzing the smoothness and robustness properties of MoreauGrad by leveraging convex analysis
* Introducing \(L_{1}\)-regularized Sparse MoreauGrad to obtain an interpretation satisfying prior sparsity conditions
* Providing numerical results supporting MoreauGrad over standard image recognition datasets
## 2 Related Work
**Gradient-based Interpretation.** A large body of related works develop gradient-based interpretation methods. Simonyan et al. [4] propose to calculate the gradient of a classifier's output with respect to an input image. The simple gradient approach in [4] has been improved by several related works. Notably, the method of Integrated Gradients [5] is capable of keeping highly relevant pixels in the saliency map by aggregating gradients of image samples. SmoothGrad [9] removes noise in saliency maps by adding Gaussian-random noise to the input image. The CAM method [12] analyzes the information from global average pooling layer for localization, and Grad-CAM++ [13] improves over Grad-CAM [14] and generates coarse heat-maps with improved multi-object localization. The NormGrad [15] focuses on the weight-based gradient to analyze the contribution of each image region. DeepLIFT [6] uses difference from reference to propagate an attribution signal. However, the mentioned gradient-based methods do not obtain a sparse interpretation, and their proper combination with \(L_{1}\)-regularization to promote sparsity remains highly non-trivial and challenging. On the other hand, our proposed MoreauGrad can be smoothly equipped with \(L_{1}\)-regularization to output sparse interpretations and can further capture group-sparsity structures.
**Mask-based Interpretation.** Mask-based interpretation methods rely on adversarial perturbations to interpret neural nets. Applying a mask which perturbs the neural net input, the importance of input pixels is measured by a masked-based method. This approach to explaining neural nets has been successfully applied in References [16, 17, 18, 19] and has been shown to benefit from dynamic perturbations [20]. More specifically, Dabkowski and Gal [19] introduce a real-time mask-based detection method; Fong and Vedaldi [17] develop a model-agnostic approach with interpretable perturbations; Wagner et al. [16] propose a method that could generate fine-grained visual interpretations. Moreover, Lim et al. [18] leverage local smoothness to enhance their robustness towards samples attacked by PGD [21]. However, [17] and [19] show that perturbation-based interpretation methods are still vulnerable to adversarial perturbations.
We note that the discussed methods depend on optimizing perturbation masks for interpretations, and due to the non-convex nature of neural net loss functions, their interpretation remains sensitive to input perturbations. In contrast, our proposed MoreauGrad can provably smooth the neural net score function, and can adapt to non-convex functions using norm regularization. Hence, MoreauGrad can improve both the sparsity and robustness of the interpretation.
**Robust Interpretation.** The robustness of interpretation methods has been a subject of great interest in the literature. Ghorbani et al. [7] introduce a gradient-based adversarial attack method to alter the neural nets' interpretation. Dombrowski et al. [22] demonstrate that interpretations could be manipulated, and they suggest improving the robustness via smoothing the neural net classifier. Heo et al. [8] propose a manipulation method that is capable of generalizing across datasets. Subramanya et al. [23] create adversarial patches fooling both the classifier and the interpretation.
To improve the robustness, Sparsified-SmoothGrad [24] combines a sparsification technique with Gaussian smoothing to achieve certifiable robustness. The related works [16, 17, 18, 19, 25] discuss the application of adversarial defense methods against classification-based attacks to interpret the prediction of neural net classifiers. We note that these papers' main focus is not on defense schemes against interpretation-based attacks. Specifically, [16] filter gradients internally during backpropogation, and [18] leverage local smoothness to integrate more samples. Unlike the mentioned papers, our work proposes a model-agnostic optimization-based method which is capable of generating simultaneously sparse and robust interpretations.
## 3 Preliminaries
In this section, we review three standard interpretation methods as well as the notation and definitions in the paper.
### Notation and Definitions
In the paper, we use notation \(\mathbf{X}\in\mathbb{R}^{d}\) to denote the feature vector and \(Y\in\{1,\ldots,k\}\) to denote the label of a sample. In addition, \(f_{\mathbf{w}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{k}\) denotes a neural net classifier with its weights contained in vector \(\mathbf{w}\in\mathcal{W}\) where \(\mathcal{W}\) is the feasible set of the neural net's weights. Here \(f_{\mathbf{w}}\) maps the \(d\)-dimensional input \(\mathbf{x}\) to a \(k\)-dimensional prediction vector containing the likelihood of each of the \(k\) classes in the classification problem. For every class \(c\in\{1,\ldots,k\}\), we use the notation \(f_{\mathbf{w},c}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) to denote the \(c\)-th entry of \(f_{\mathbf{w}}\)'s output which corresponds to class \(c\).
We use \(\|\mathbf{x}\|_{p}\) to denote the \(\ell_{p}\)-norm of input vector \(\mathbf{x}\). Furthermore, we use notation \(\|\mathbf{x}\|_{p,q}\) to denote the \(\ell_{p,q}\)-group-norm of \(\mathbf{x}\) defined in the following equation for given variable subsets \(S_{1},\ldots,S_{t}\subseteq\{1,\ldots,d\}\):
\[\|\mathbf{x}\|_{p,q}=\big{\|}\left[\|\mathbf{x}_{S_{1}}\|_{p},\ldots,\| \mathbf{x}_{S_{t}}\|_{p}\right]\big{\|}_{q} \tag{1}\]
In other words, \(\|\mathbf{x}\|_{p,q}\) is the \(\ell_{q}\)-norm of a vector containing the \(\ell_{p}\)-norms of the subvectors of \(\mathbf{x}\) characterized by index subsets \(S_{1},\ldots,S_{t}\).
### Gradient-based Saliency Maps
In our theoretical and numerical analysis, we consider the following widely-used gradient-based interpretation baselines which apply to a classifier neural net \(f_{\mathbf{w}}\) and predicted class \(c\) for input \(\mathbf{x}\):
1. **Simple Gradient**: The simple gradient interpretation returns the saliency map of a neural net score function's gradient with respect to input \(\mathbf{x}\): \[\mathrm{SG}\big{(}f_{\mathbf{w},c},\mathbf{x}\big{)}\,:=\,\nabla_{\mathbf{x}} f_{\mathbf{w},c}(\mathbf{x}).\] (2) In the applications of the simple gradient approach, \(c\) is commonly chosen as the neural net's predicted label with the maximum prediction score.
2. **Integrated Gradients:** The integrated gradients approach approximates the integral of the neural net's gradient function between a reference point \(\mathbf{x}^{0}\) and the input \(\mathbf{x}\). Using \(m\) intermediate points on the line segment connecting \(\mathbf{x}^{0}\) and \(\mathbf{x}\), the integrated gradient output will be \[\mathrm{IG}\big{(}f_{\mathbf{w},c},\mathbf{x}\big{)}\,:=\,\frac{\Delta\mathbf{ x}}{m}\sum_{i=1}^{m}\nabla_{\mathbf{x}}f_{\mathbf{w},c}\big{(}\mathbf{x}^{0}+ \frac{i}{m}\Delta\mathbf{x}\big{)}.\] (3) In the above \(\Delta\mathbf{x}:=\mathbf{x}-\mathbf{x}^{0}\) denotes the difference between the target and reference points \(\mathbf{x},\mathbf{x}^{0}\).
3. **SmoothGrad:** SmoothGrad considers the averaged simple gradient score over an additive random perturbation \(Z\) drawn according to an isotropic Gaussian distribution \(Z\sim\mathcal{N}(\mathbf{0},\sigma^{2}I_{d})\). In practice, the SmoothGrad interpretation is estimated over a number \(t\) of independently drawn noise vectors \(\mathbf{z}_{1},\ldots,\mathbf{z}_{t}\stackrel{{\text{i.i.d.}}}{{ \sim}}\mathcal{N}(\mathbf{0},\sigma^{2}I_{d})\) according to the zero-mean Gaussian distribution: \[\text{SmoothGrad}\big{(}f_{\mathbf{w},c},\mathbf{x}\big{)}\,:=\mathbb{E}\big{[} \nabla_{\mathbf{x}}f_{\mathbf{w},c}(\mathbf{x}+Z)\big{]}\;\approx\;\frac{1}{t} \sum_{i=1}^{t}\nabla_{\mathbf{x}}f_{\mathbf{w},c}(\mathbf{x}+\mathbf{z}_{i}).\] (4)
## 4 MoreauGrad: An Optimization-based Interpretation Framework
As discussed earlier, smooth classifier functions with a Lipschitz gradient help to obtain a robust explanation of neural nets. Here, we propose an optimization-based smoothing approach based on Moreau-Yosida regularization. To introduce this optimization-based approach, we first define a function's Moreau envelope.
**Definition 1**.: _Given regularization parameter \(\rho>0\), we define the Moreau envelope of a function \(g:\mathbb{R}^{d}\to\mathbb{R}\) as:_
\[g^{\rho}(\mathbf{x})\,:=\,\min_{\widetilde{\mathbf{x}}\in\mathbb{R}^{d}}\;g \big{(}\widetilde{\mathbf{x}}\big{)}+\frac{1}{2\rho}\big{\|}\widetilde{ \mathbf{x}}-\mathbf{x}\big{\|}_{2}^{2}. \tag{5}\]
In the above definition, \(\rho>0\) represents the Moreau-Yosida regularization coefficient. Applying the Moreau envelope, we propose the MoreauGrad interpretation as the gradient of the classifier's Moreau envelope at an input \(\mathbf{x}\).
**Definition 2**.: _Given regularization parameter \(\rho>0\), we define the MoreauGrad interpretation \(\mathrm{MG}_{\rho}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) of a neural net \(f_{\mathbf{w}}\) predicting class \(c\) for input \(\mathbf{x}\) as_
\[\mathrm{MG}_{\rho}(f_{\mathbf{w},c},\mathbf{x})\,:=\,\nabla f_{\mathbf{w},c}^{ \rho}(\mathbf{x}).\]
To compute and analyze the MoreauGrad explanation, we first discuss the optimization-based smoothing enforced by the Moreau envelope. Note that the Moreau envelope is known as an optimization tool to turn non-smooth convex functions (e.g. \(\ell_{1}\)-norm) into smooth functions. Here, we discuss an extension of this result to weakly-convex functions which also apply to non-convex functions.
**Definition 3**.: _A function \(g:\mathbb{R}^{d}\to\mathbb{R}\) is called \(\lambda\)-weakly convex if \(\Phi(\mathbf{x}):=g(\mathbf{x})+\frac{\lambda}{2}\|\mathbf{x}\|_{2}^{2}\) is a convex function, i.e. for every \(\mathbf{x}_{1},\mathbf{x}_{2}\in\mathbb{R}^{d}\) and \(0\leq\alpha\leq 1\) we have:_
\[g\big{(}\alpha\mathbf{x}_{1}+(1-\alpha)\mathbf{x}_{2}\big{)}\;\leq\;\alpha g( \mathbf{x}_{1})+(1-\alpha)g(\mathbf{x}_{2})+\frac{\lambda\alpha(1-\alpha)}{2} \big{\|}\mathbf{x}_{1}-\mathbf{x}_{2}\big{\|}_{2}^{2}.\]
**Theorem 1**.: _Suppose that \(g:\mathbb{R}^{d}\to\mathbb{R}\) is a \(\lambda\)-weakly convex function. Assuming that \(0<\rho<\frac{1}{\lambda}\), the followings hold for the optimization problem of the Moreau envelope \(g^{\rho}\) and the optimal solution \(\widetilde{x}_{\rho}^{*}(\mathbf{x})\) solving the optimization problem:_
1. _The gradients of_ \(g^{\rho}\) _and_ \(g\) _are related as for every_ \(\mathbf{x}\)_:_ \[\nabla g^{\rho}(\mathbf{x})=\nabla g\big{(}\widetilde{x}_{\rho}^{*}(\mathbf{x })\big{)}.\]
2. _The difference_ \(\widetilde{x}_{\rho}^{*}(\mathbf{x})-\mathbf{x}\) _is aligned with_ \(g^{\rho}\)_'s gradient:_ \[\nabla g^{\rho}(\mathbf{x})=\frac{-1}{\rho}\big{(}\,\widetilde{x}_{\rho}^{*}( \mathbf{x})-\mathbf{x}\,\big{)}.\]
3. \(g^{\rho}\) _will be_ \(\max\{\frac{1}{\rho},\frac{\lambda}{1-\rho\lambda}\}\)_-smooth, i.e. for every_ \(\mathbf{x}_{1},\mathbf{x}_{2}\)_:_ \[\big{\|}\nabla g^{\rho}(\mathbf{x}_{1})-\nabla g^{\rho}(\mathbf{x}_{2})\big{\|} _{2}\,\leq\,\frac{1}{\min\big{\{}\rho,\frac{1}{\lambda}-\rho\big{\}}}\big{\|} \mathbf{x}_{1}-\mathbf{x}_{2}\big{\|}_{2}.\]
Proof.: This theorem is known for convex functions. In the Appendix, we provide another proof for the result.
**Corollary 1**.: _Assume that the prediction score function \(f_{\mathbf{w},c}:\mathbb{R}^{d}\to\mathbb{R}\) is \(\lambda\)-weakly convex. Then, the MoreauGrad interpretation \(\mathrm{MG}_{\rho}\) will remain robust under an \(\epsilon\)-\(\ell_{2}\)-norm bounded perturbation \(\|\mathbf{\delta}\|_{2}\leq\epsilon\) as_
\[\left\|\mathrm{MG}_{\rho}(\mathbf{x}+\mathbf{\delta})-\mathrm{MG}_{\rho}(\mathbf{ x})\right\|_{2}\leq\frac{\epsilon}{\min\bigl{\{}\rho,\frac{1}{\lambda}-\rho \bigr{\}}}.\]
The above results imply that by choosing a small enough coefficient \(\rho\) the Moreau envelope will be a differentiable smooth function. Moreover, the computation of the Moreau envelope will reduce to a convex optimization task that can be solved by standard or accelerated gradient descent with global convergence guarantees. Therefore, one can efficiently compute the MoreauGrad interpretation by solving the optimization problem via the gradient descent algorithm. Algorithm 1 applies gradient descent to compute the solution to the Moreau envelope optimization which according to Theorem 1 yields the MoreauGrad explanation.
As discussed above, MoreauGrad will be provably robust as long as the regularization coefficient will dominate the weakly-convexity degree of the prediction score. In the following proposition, we show this condition can be enforced by applying either Gaussian smoothing.
**Proposition 1**.: _Suppose that \(f_{\mathbf{w},c}\) is \(L\)-Lipschitz, that is for every \(\mathbf{x}_{1},\mathbf{x}_{2}\mid f_{\mathbf{w},c}(\mathbf{x}_{1})-f_{ \mathbf{w},c}(\mathbf{x}_{2})|\leq L\|\mathbf{x}_{2}-\mathbf{x}_{1}\|_{2}\), but could be potentially non-differentiable and non-smooth. Then, \(h_{\mathbf{w},c}(\mathbf{x}):=\mathbb{E}[f_{\mathbf{w},c}(\mathbf{x}+\mathbf{ Z})]\) where \(\mathbf{Z}\sim\mathcal{N}(\mathbf{0},\sigma^{2}I_{d\times d})\) will be \(\frac{L\sqrt{d}}{\sigma}\)-weakly convex._
Proof.: We postpone the proof to the Appendix.
The above proposition suggests the regularized MoreauGrad which regularizes the neural net function to satisfy the weakly-convex condition through Gaussian smoothing.
## 5 Sparse and Group-Sparse MoreauGrad
To further extend the MoreauGrad approach to output sparsely-structured feature saliency maps, we further include an \(L_{1}\)-norm-based penalty term in the Moreau-Yosida regularization and define the following \(L_{1}\)-norm-based sparse and group-sparse Moreau envelope.
**Definition 4**.: _For a function \(g:\mathbb{R}^{d}\to\mathbb{R}\) and regularization coefficients \(\rho,\eta>0\), we define \(L_{1}\)-Moreau envelope \(g_{L_{1}}^{\rho,\eta}\):_
\[g_{L_{1}}^{\rho,\eta}(\mathbf{x})\,:=\min_{\widetilde{\mathbf{x}}\in\mathbb{R }^{d}}\,g(\widetilde{\mathbf{x}})+\frac{1}{2\rho}\bigl{\|}\widetilde{\mathbf{ x}}-\mathbf{x}\bigr{\|}_{2}^{2}+\eta\bigl{\|}\widetilde{\mathbf{x}}- \mathbf{x}\bigr{\|}_{1}.\]
_We also define \(L_{2,1}\)-Moreau envelope \(g_{L_{2,1}}^{\rho,\eta}\) as_
\[g_{L_{2,1}}^{\rho,\eta}(\mathbf{x})\,:=\,\min_{\widetilde{\mathbf{x}}\in \mathbb{R}^{d}}\,g(\widetilde{\mathbf{x}})+\frac{1}{2\rho}\bigl{\|}\widetilde{ \mathbf{x}}-\mathbf{x}\bigr{\|}_{2}^{2}+\eta\bigl{\|}\widetilde{\mathbf{x}}- \mathbf{x}\bigr{\|}_{2,1}.\]
_In the above, the group norm \(\|\cdot\|_{2,1}\) is defined as \(\|\mathbf{x}\|_{2,1}:=\sum_{i=1}^{t}\|\mathbf{x}_{S_{i}}\|_{2}\) for given subsets \(S_{1},\ldots,S_{t}\subseteq\{1,\ldots,d\}\)._
**Definition 5**.: _Given regularization coefficients \(\rho,\eta>0\), we define the Sparse MoreauGrad (\(\mathrm{S-MG}_{\rho,\eta}\)) and Group-Sparse MoreauGrad (\(\mathrm{GS-MG}_{\rho,\eta}\)) interpretations as_
\[\mathrm{S-MG}_{\rho,\eta}(f_{\mathbf{w},c},\mathbf{x})\,:= \frac{1}{\rho}\bigl{(}\,\widetilde{\mathbf{x}}_{L_{1}}^{*}(\mathbf{x})- \mathbf{x}\,\bigr{)},\] \[\mathrm{GS-MG}_{\rho,\eta}(f_{\mathbf{w},c},\mathbf{x})\,:= \frac{1}{\rho}\bigl{(}\,\widetilde{\mathbf{x}}_{L_{2,1}}^{*}(\mathbf{x})- \mathbf{x}\,\bigr{)},\]
_where \(\widetilde{\mathbf{x}}_{L_{1}}^{*}(\mathbf{x}),\,\widetilde{\mathbf{x}}_{L_{2,1}}^{*}(\mathbf{x})\) denote the optimal solutions to the optimization tasks of \(f_{\mathbf{w},c,L_{1}}^{\rho,\eta}(\mathbf{x}),\,f_{\mathbf{w},c,L_{2,1}}^{ \rho,\eta}(\mathbf{x})\), respectively._
In Theorem 2, we extend the shown results for the standard Moreau envelope to our proposed \(L_{1}\)-norm-based extensions of the Moreau envelope. Here, we use \(\text{ST}_{\alpha}\) and \(\text{GST}_{\alpha}\) to denote sparse and group-sparse soft-thresholding functions defined entry-wise and group-entry-wise as
\[\text{ST}_{\alpha}(\mathbf{x})_{i} :=\begin{cases}0&\text{ if }|x_{i}|\leq\alpha\\ x_{i}-\text{sign}(x_{i})\alpha&\text{ if }|x_{i}|>\alpha,\end{cases}\] \[\text{GST}_{\alpha}(\mathbf{x})_{S_{i}} :=\begin{cases}\mathbf{0}&\text{ if }\|\mathbf{x}_{S_{i}}\|_{2}\leq \alpha\\ \big{(}1-\frac{\alpha}{\|\mathbf{x}_{S_{i}}\|_{2}}\big{)}\mathbf{x}_{S_{i}}& \text{ if }\|\mathbf{x}_{S_{i}}\|_{2}>\alpha.\end{cases}\]
**Theorem 2**.: _Suppose that \(g:\mathbb{R}^{d}\to\mathbb{R}\) is a \(\lambda\)-weakly convex function. Then, assuming that \(0<\rho<\frac{1}{\lambda}\), Theorem 1's parts 1 and 3 will further hold for the sparse Moreau envelope \(g_{L_{1}}^{\rho,\eta}\) and group-sparse Moreau envelope \(g_{L_{2,1}}^{\rho,\eta}\) and their optimization problems' optimal solutions \(\widetilde{\mathbf{x}}_{\rho,\eta,L_{1}}^{*}(\mathbf{x})\) and \(\widetilde{\mathbf{x}}_{\rho,\eta,L_{2,1}}^{*}(\mathbf{x})\). To parallel Theorem 1's part 2 for \(L_{1}\)-Moreau envelope, the followings hold_
\[\text{ST}_{\rho\eta}\big{(}\!-\!\rho\nabla g_{L_{1}}^{\rho,\eta}( \mathbf{x})\big{)} = \widetilde{\mathbf{x}}_{\rho,\eta,L_{1}}^{*}(\mathbf{x})- \mathbf{x},\] \[\text{GST}_{\rho\eta}\big{(}\!-\!\rho\nabla g_{L_{2,1}}^{\rho, \eta}(\mathbf{x})\big{)} = \widetilde{\mathbf{x}}_{\rho,\eta,L_{2,1}}^{*}(\mathbf{x})- \mathbf{x}.\]
Proof.: We defer the proof to the Appendix.
**Corollary 2**.: _Suppose that the prediction score function \(f_{\mathbf{w},c}\) is \(\lambda\)-weakly convex. Assuming that \(0<\rho<\frac{1}{\lambda}\), the Sparse MoreauGrad \(\text{S-MG}_{\rho,\eta}\) and Group-Sparse MoreauGrad \(\text{GS-MG}_{\rho,\eta}\) interpretations will be robust to every norm-bounded perturbation \(\|\boldsymbol{\delta}\|_{2}\leq\epsilon\) as:_
\[\big{\|}\text{GS-MG}_{\rho,\eta}(\mathbf{x}+\boldsymbol{\delta}) -\text{GS-MG}_{\rho,\eta}(\mathbf{x})\big{\|}_{2} \leq \frac{\epsilon}{\min\big{\{}\rho,\frac{1}{\lambda}-\rho\big{\}}}.\]
To compute the Sparse and Group-Sparse MoreauGrad, we propose applying the proximal gradient descent algorithm as described in Algorithm 1. Note that Algorithm 1 applies the soft-thresholding function as the proximal operator for the \(L_{1}\)-norm function present in Sparse MoreauGrad.
```
Input: data \(\mathbf{x}\), label \(c\), classifier \(f_{\mathbf{w}}\), regulatization coeff. \(\rho\), stepsize \(\gamma\), noise std. parameter \(\sigma\), number of updates \(T\) Initialize\(\mathbf{x}^{(0)}=\mathbf{x}\), for\(t=0,\ldots,T\)do ifRegularized Modethen Draw noise vectors \(\mathbf{z}_{1},\ldots,\mathbf{z}_{m}\sim\mathcal{N}(\mathbf{0},\sigma^{2}I_{d \times d})\) Compute\(\mathbf{g}_{t}=\frac{1}{m}\sum_{i=1}^{m}\nabla f_{\mathbf{w},c}(\mathbf{x}^{(t)}+ \mathbf{z}_{i})\) else Compute\(\mathbf{g}_{t}=\nabla f_{\mathbf{w},c}(\mathbf{x}^{(t)})\) end Update\(\mathbf{x}^{(t+1)}\leftarrow(1-\frac{\gamma}{\rho})\mathbf{x}^{(t)}-\gamma(\mathbf{g}_{t}- \frac{1}{\rho}\mathbf{x})\) ifSparse Modethen Update\(\mathbf{x}^{(t+1)}\leftarrow\text{SoftThreshold}_{\gamma\eta}\big{(}\mathbf{x}^{(t+1)}- \mathbf{x}\big{)}+\mathbf{x}\) end Output\(\text{MG}(\mathbf{x})=\frac{1}{\rho}\big{(}\mathbf{x}^{(T)}-\mathbf{x}\big{)}\)
```
**Algorithm 1** MoreauGrad Interpretation
## 6 Numerical Results
We conduct several numerical experiments to evaluate the performance of the proposed MoreauGrad. Our designed experiments focus on the smoothness, sparsity, and robustness properties of MoreauGrad interpretation maps as well as the feature maps of several standard baselines. In the following, we first describe the numerical setup in our experiments and then present the obtained numerical results on the qualitative and quantitative performance of interpretation methods.
### Experiment Setup
In our numerical evaluation, we use the following standard image datasets: CIFAR-10 [26] consisting of 60,000 labeled samples with 10 different labels (50,000 training samples and 10,000 test samples), and ImageNet-1K [27] including 1.4 million labeled samples with 1,000 labels (10,000 test samples and 1.34 million training samples). For CIFAR-10 experiments, we trained a standard ResNet-18 [28] neural network with the softplus activation. For ImageNet experiments, we used an EfficientNet-b0 network [29] pre-trained on the ImageNet training data. In our experiments, we compared the MoreauGrad schemes with the following baselines: 1) the simple gradient [4], 2) Integrated Gradients [14], 3) DeepLIFT [6], 4) SmoothGrad [9], 5) Sparsified SmoothGrad [24], 6) RelEx [18]. We note that for baseline experiments we adopted the official implementations and conducted the experiments with hyperparameters suggested in their work.
Figure 3: Visualization of Sparse MoreauGrad with various coefficient \(\eta\)’s. \(\eta=0\) is Vanilla MoreauGrad.
Figure 2: Visualization of MoreauGrad with various coefficient \(\rho\)’s. \(\rho=0\) is Simple Gradient.
### Effect of Smoothness and Sparsity Parameters
We ran the numerical experiments for unregularized Vanilla MoreauGrad with multiple smoothness coefficient \(\rho\) values to show the effect of the Moreau envelope's regularization. Figure 2 visualizes the effect of different \(\rho\) on the Vanilla MoreauGrad saliency map. As can be seen in this figure, the saliency map qualitatively improves by increasing the value of \(\rho\) from 0 to 1. Please note that for \(\rho=0\), the MoreauGrad simplifies to the simple gradient interpretation. However, as shown in Theorem 1 the proper performance of Vanilla MoreauGrad requires choosing a properly bounded \(\rho\) value, which is consistent with our observation that when \(\rho\) becomes too large, the Moreau envelope will be computationally difficult to optimize and the quality of interpretation maps could deteriorate to some extent. As numerically verified in both CIFAR-10 and ImageNet experiments, we used the rule of thumb \(\rho=\frac{1}{\sqrt{\mathbb{E}[\|\mathbf{X}\|_{2}]}}\) measured over the empirical training data to set the value of \(\rho\), which is equal to 1 for the normalized samples in our experiments.
Regarding the sparsity hyperparameter \(\eta\) in Sparse and Group-Sparse MoreauGrad experiments, we ran several experimental tests to properly tune the hyperparameter. Note that a greater coefficient \(\eta\) enforces more strict sparsity or group-sparsity in the MoreauGrad interpretation, and the degree of sparsity could be simply adjusted by changing this coefficient \(\eta\). As shown in Figure 3, in our experiments with different \(\eta\) coefficients the interpretation map becomes sparser as we increase the \(L_{1}\)-norm penalty coefficient \(\eta\). Similarly, to achieve a group-sparse interpretation, we used \(L_{2,1}\)-regularization on groups of adjacent pixels as discussed in Definition 4. The effect of the group-sparsity coefficient was similar to the sparse case in our experiments, as fewer pixel groups took non-zero values and the output interpretations showed more structured interpretation maps when choosing a larger coefficient \(\eta\). The results with different group-sparsity hyperparameters are demonstrated in Figure 4.
### Qualitative Comparison of MoreauGrad vs. Gradient-based Baselines
In Figure 5, we illustrate the Sparse, and Group-Sparse MoreauGrad interpretation outputs as well as the saliency maps generated by the gradient-based baselines. The results demonstrate that MoreauGrad generates qualitatively sharp and, in the case of Sparse and Group-Sparse MoreauGrad, sparse interpretation maps. As shown in Figure 5, promoting sparsity in the MoreauGrad interpretation maps has improved the visual quality, and managed to erase the less relevant pixels like the background ones. Additionally, in the case of Group-Sparse MoreauGrad, the maps exhibit both sparsity and connectivity of selected pixels.
Figure 4: Visualization of Group-Sparse MoreauGrad maps with various coefficient \(\eta\)’s.
Figure 5: Qualitative comparison between Sparse, Group-Sparse MoreauGrad and the baselines.
### Robustness
We qualitatively and quantitatively evaluated the robustness of MoreauGrad interpretation. To assess the empirical robustness of interpretation methods, we adopt a \(L_{2}\)-bounded interpretation attack method defined by [24]. Also, for quantifying the empirical robustness, we adopt three robustness metrics. The first metric is the Euclidean distance of the normalized interpretations before and after the attack:
\[D(I(\mathbf{x}),I(\mathbf{x}^{\prime}))=\big{\|}\frac{I(\mathbf{x})}{\|I( \mathbf{x})\|_{2}}-\frac{I(\mathbf{x}^{\prime})}{\|I(\mathbf{x}^{\prime})\|_ {2}}\big{\|}_{2} \tag{6}\]
Note that a larger distance between the normalized maps indicates a smaller similarity and a higher vulnerability of the interpretation method to adversarial attacks.
The second metric is the top-k intersection ratio. This metric is another standard robustness measure used in [7, 24]. This metric measures the ratio of pixels that remain salient after the interpretation attack. A robust interpretation is expected to preserve most of the salient pixels under an attack. The third metric is the structural similarity index measure (SSIM) [30]. A larger SSIM value indicates that the two input maps are more perceptively similar.
Using the above metrics, we compared the MoreauGrad schemes with the baseline methods. As qualitatively shown in Figure 7, using the same attack magnitude, the MoreauGrad interpretations are mostly similar
Figure 6: Quantitative robustness comparison between MoreauGrad and the baselines.
Figure 7: Visualization of robustness against interpretation attacks. The top and bottom rows show original and attacked maps.
before and after the norm-bounded attack. The qualitative robustness of MoreauGrad seems satisfactory compared to the baseline methods. Finally, Figure 6 presents a quantitative comparison of the robustness measures for the baselines and proposed MoreauGrad on CIFAR-10, Tiny-ImageNet, and ImageNet datasets. As shown by these measures, MoreauGrad outperforms the baselines in terms of the robustness metrics.
## 7 Conclusion
In this work, we introduced MoreauGrad as an optimization-based interpretation method for deep neural networks. We demonstrated that MoreauGrad can be flexibly combined with \(L_{1}\)-regularization methods to output sparse and group-sparse interpretations. We further showed that the MoreauGrad output will enjoy robustness against input perturbations. While our analysis focuses on the sparsity and robustness of the MoreauGrad explanation, studying the consistency and transferability of MoreauGrad interpretations is an interesting future direction. Moreover, the application of MoreauGrad to convex and norm-regularized neural nets could be another topic for future study. Finally, our analysis of \(\ell_{1}\)-norm-based Moreau envelope could find independent applications to other deep learning problems.
|
2305.09409 | Sublattice Pairing in Pyrochlore Heisenberg Antiferromagnets | We argue that classical pyrochlore Heisenberg antiferromagnets with small
further-neighbor couplings can order in a state where pairs of sublattices form
antiparallel spirals. The spiral ordering wave vectors of the two pairs are in
general different from each other, and are constrained by which sublattices are
being paired. This sublattice pairing state generally breaks inversion and most
rotation symmetries. Its existence depends on the antiferromagnetic
nearest-neighbor coupling which favors the spins on each tetrahedron to sum to
zero. To substantiate our argument, we extend the nematic bond theory; a
diagrammatic large-$N_s$ method, to non-Bravais lattices, and we demonstrate
that the predicted state is indeed realized at low temperatures in a large
region of exchange coupling space. We also carry out a spin wave calculation
which suggests that the sublattice pairing state is coplanar. | Cecilie Glittum, Olav F. Syljuåsen | 2023-05-16T12:54:06Z | http://arxiv.org/abs/2305.09409v2 | # Sublattice Pairing in Pyrochlore Heisenberg Antiferromagnets
###### Abstract
We argue that classical pyrochlore Heisenberg antiferromagnets with small further-neighbor couplings can order in a state where pairs of sublattices form antiparallel spirals. The spiral ordering wave vectors of the two pairs are in general different from each other, and are constrained by which sublattices are being paired. This sublattice pairing state generally breaks inversion and most rotation symmetries. Its existence depends on the antiferromagnetic nearest-neighbor coupling which favors the spins on each tetrahedron to sum to zero. To substantiate our argument, we extend the nematic bond theory; a diagrammatic large-\(N_{s}\) method, to non-Bravais lattices, and we demonstrate that the predicted state is indeed realized at low temperatures in a large region of exchange coupling space. We also carry out a spin wave calculation which suggests that the sublattice pairing state is coplanar.
## I Introduction
The Heisenberg antiferromagnet on the pyrochlore lattice has gotten much attention as it is a spin liquid candidate. This is mainly motivated by the antiferromagnetic (AF) nearest-neighbor classical model, which is predicted to be disordered at all temperatures [1; 2; 3; 4]. However, real pyrochlore magnetic materials are seldom described by the nearest-neighbor model alone. It is therefore important to understand the effects of further-neighbor couplings, and when and what magnetic order they may cause.
In this article we propose a new kind of ordered state for pyrochlore Heisenberg antiferromagnets: a sublattice pairing (SLP) state, where sublattices pair up, and each pair form antiparallel spirals.
Ordering transitions as a result of adding further-neighbor couplings has been studied in mean-field theory [2], and it is known that further-neighbor interactions induce symmetry breaking in the purely classical \(J_{1}\)-\(J_{2}\) model [5; 6; 7; 8].
The third nearest-neighbor couplings are known to be important for several pyrochlore materials, and in many cases more important than the second nearest-neighbor couplings [9; 10; 11]. There are two inequivalent third nearest-neighbor couplings on the pyrochlore lattice: \(J_{3a}\) which goes in the direction of \(J_{1}\), and \(J_{3b}\), which goes across the hexagons, see Fig. 1. Existing theoretical works including third nearest-neighbor couplings either treat the two as equal or set \(J_{3b}\) to zero when studying ordering transitions [5; 12; 13].
In this article we treat the Heisenberg antiferromagnets classically. We focus on the hierarchy of magnetic scales \(J_{1}>J_{3b}\geq J_{2},J_{3a}\), which might be important for the Gd\({}_{2}B_{2}\)O\({}_{7}\) pyrochlores [9] and the \(A\)Fe\({}_{2}\)O\({}_{4}\) spinels [11].
The Hamiltonian is
\[H=\frac{1}{2}\sum_{\vec{r},\vec{r}^{\,\prime}}J(\vec{r}^{\,\prime}-\vec{r})\; \vec{S}_{\vec{r}^{\,\prime}}.\vec{S}_{\vec{r}^{\,\prime}}, \tag{1}\]
where the exchange couplings are illustrated in Fig. 1. The pyrochlore lattice has four fcc sublattices. We label the spins by their unit cell \(\vec{R}\) and sublattice index \(i\) rather than position \(\vec{r}=\vec{R}+\vec{\alpha}_{i}\). \(\vec{R}\) is constructed from the fcc primitive lattice vectors \(\vec{a}_{1}=(0,1/2,1/2),\vec{a}_{2}=(1/2,0,1/2)\), and \(\vec{a}_{3}=(1/2,1/2,0)\), where we have set the cubic lattice constant to unity. The sublattice vectors \(\vec{\alpha}_{i}\) are \(\vec{a}_{0}=(0,0,0)\) and \(\vec{\alpha}_{i}=\vec{a}_{i}/2\) for \(i=\{1,2,3\}\)
Figure 1: Pyrochlore lattice with spins showing an example of an SLP state where sublattices 0 (pink) and 1 (blue) are antiparallel and ordered at \(\vec{Q}_{(0,1)}=(0,-4\pi/3,4\pi/3)\) and sublattices 2 (green) and 3 (yellow) are antiparallel and ordered at \(\vec{Q}_{(2,3)}=(0,4\pi/3,4\pi/3)\). The first (\(J_{1}\)), the second (\(J_{2}\)) and the two inequivalent third (\(J_{3a}\) and \(J_{3b}\)) nearest-neighbor couplings are shown. The blue spins show how layers of triangular planes separated by kagome layers of the remaining sublattices are ordered in \(120^{\circ}\) spirals for the \(J_{1}\)-\(J_{3b}\) model.
We choose energy units \(J_{1}=1\).
## II Sublattice pairing
An AF nearest-neighbor coupling favors the spins on each tetrahedron to sum to zero [1; 2], i.e. \(\sum_{i=0}^{3}S_{\vec{R},i}=0\) for the up-tetrahedra and \(\sum_{i=0}^{3}\vec{S}_{\vec{R}-\vec{a}_{i},i}=0\) for the down-tetrahedra. If each sublattice orders in a single-\(\vec{q}\) spiral state \(S_{\vec{R},i}=\vec{u}_{i}\cos\vec{Q}_{i}\cdot\vec{R}+\vec{v}_{i}\sin\vec{Q}_{i} \cdot\vec{R}\), the Fourier-transformed condition for the up-tetrahedra gives
\[\sum_{i}\left[\left(\vec{u}_{i}-i\vec{v}_{i}\right)\delta_{\vec{q},\vec{Q}_{i} }+\left(\vec{u}_{i}+i\vec{v}_{i}\right)\delta_{\vec{q},-\vec{Q}_{i}}\right]=0, \tag{2}\]
where the \(\delta\)'s should be understood modulo a reciprocal lattice vector. This is satisfied by what we refer to as an SLP state. In an SLP state the sublattices form pairs, such that each sublattice pair \((i,j)\) shares the same ordering wave vector \(\vec{Q}_{(i,j)}\) and has antiparallel spins:
\[\vec{S}_{\vec{R},i} = \vec{u}_{(i,j)}\cos(\vec{Q}_{(i,j)}\cdot\vec{R})+\vec{v}_{(i,j)} \sin(\vec{Q}_{(i,j)}\cdot\vec{R}), \tag{3}\] \[\vec{S}_{\vec{R},j} = -\vec{S}_{\vec{R},i}, \tag{4}\]
where \(\vec{u}_{(i,j)}\) and \(\vec{v}_{(i,j)}\) are orthonormal vectors. If the two ordering wave vectors are different, this state satisfies also the condition for the down-tetrahedra if
\[\left[\vec{Q}_{(i,j)}\cdot(\vec{a}_{i}-\vec{a}_{j})\right]\mod 2\pi=0 \tag{5}\]
for both pairs of sublattices. Figure 2 shows the planes in momentum space where \(\vec{Q}_{(0,1)}\) and \(\vec{Q}_{(2,3)}\) satisfy this equation.
The ordering wave vectors \(\vec{Q}_{(i,j)}\) are generally found by minimizing the energy. As the SLP state minimizes the \(J_{1}\) terms of the energy, it is sufficient to minimize the further-neighbor energy terms subject to the condition Eq. (5). As a first example, we consider the pure \(J_{1}\)-\(J_{3b}\) model. The third nearest neighbors on the pyrochlore lattice couple sites from the same fcc sublattices, and \(J_{3b}\) alone effectively reduces each of the four fcc sublattices to a set of decoupled parallel triangular planes. An AF \(J_{3b}\) will then favor \(120^{\circ}\) order in each triangular plane. For a single plane, there are two such chiral ordering vectors, given by clockwise and counter-clockwise \(120^{\circ}\) rotations. Since the triangular planes in a set are decoupled, the addition of any wave vector orthogonal to the triangular planes will still give \(120^{\circ}\) order in each plane, but with an additional inter-plane rotation. This gives rise to a set of lines in momentum space for each of the four sublattices, along which the \(J_{3b}\)-part of the energy is minimal. This is illustrated in Fig. 2. These lines would correspond to rods of scattering if \(J_{1}=0\). The lines intersect at points where the \(J_{3b}\) energy of two sublattices is minimized by the same \(\vec{Q}\) vector. These wave vectors are given by \((0,4\pi/3,4\pi/3)\) and symmetry-related vectors and lie also on the planes satisfying the tetrahedron condition Eq. (5). The example shown in Fig. 2 is \(\vec{Q}_{(0,1)}=\pm(0,4\pi/3,-4\pi/3)\) and \(\vec{Q}_{(2,3)}=\pm(0,4\pi/3,4\pi/3)\), and the corresponding SLP \(120^{\circ}\) configuration is illustrated in Fig. 1.
To study the ordering wave vectors for the more general \(J_{1}\)-\(J_{2}\)-\(J_{3a}\)-\(J_{3b}\) model, we make use of the fact that this model can be recast into a \(\tilde{J}_{1}\)-\(\tilde{J}_{3a}\)-\(J_{3b}\) model when the tetrahedra conditions are satisfied, with \(\tilde{J}_{1}=J_{1}-J_{2}\) and \(\tilde{J}_{3a}=J_{3a}-J_{2}\)[7]. \(\vec{Q}_{(i,j)}\) is then found as the wave vector that satisfies Eq. (5) and minimizes the \(\tilde{J}_{3a}\)-\(J_{3b}\) energy sum of the pairing fcc sublattices \(i\) and \(\tilde{j}\). For \(-3\leq\tilde{J}_{3a}/J_{3b}\leq 1\) with AF \(J_{3b}\), we find that \(\vec{Q}_{(i,j)}\) is given by the vectors symmetry related to
\[\vec{Q}=\begin{cases}(0,h,h),&\tilde{J}_{3a}/J_{3b}\leq\sqrt{2}-1\\ (2\pi,h-2\pi,h-2\pi),&\tilde{J}_{3a}/J_{3b}>\sqrt{2}-1,\end{cases} \tag{6}\]
with \(h\)=\(2\arccos\left[-(1+\tilde{J}_{3a}/J_{3b})/2\right]\), that satisfy Eq. (5).
When \(\tilde{J}_{3a}/J_{3b}<-3\) for AF \(J_{3b}\), \(\tilde{J}_{3a}/|J_{3b}|<1\) for ferromagnetic (FM) \(J_{3b}\), or \(\tilde{J}_{3a}<0\) for \(J_{3b}=0\), the minimum occurs at \(\Gamma\). The associated SLP state, SLP-\(\Gamma\), where both the ordering vectors are equal to \(\Gamma\), covers both the collinear Neel state [7] and the coplanar Palmer-Chalker state [14]. When \(\tilde{J}_{3a}>|J_{3b}|\), the minimum occurs at X\({}_{1\text{BZ}}\).
Figure 2: The first Brillouin zone (1BZ) showing planes \(\vec{Q}_{(0,1)}\) (purple) and \(\vec{Q}_{(2,3)}\) (grey) satisfying Eq. (5), and \(J_{3b}\) energy minimal lines for sublattices \(0\) (pink), \(1\) (blue), \(2\) (green), and \(3\) (yellow). The lines intersect at \((0,4\pi/3,4\pi/3)\) and symmetry-related points (light grey). The line intersection points in the planes correspond to the possible spiral ordering wave vectors for SLP between sublattices \((0,1)\) and \((2,3)\).
For \(\tilde{J}_{3a}/J_{3b}=-1\), there is a line minimum: \(\vec{Q}=(l,\pi,\pi)\) for AF \(J_{3b}\) and \(\vec{Q}=(l,0,0)\) for FM \(J_{3b}\).
## III Nematic Bond Theory
In order to investigate the occurrence of SLP states in the pyrochlore Heisenberg model, Eq. (1), we employ the nematic bond theory (NBT) [15]. The NBT is a large-\(N_{s}\) approximation, where \(N_{s}\) is the number of spin components, leading to a set of self-consistent equations for classical Heisenberg magnets. It has previously been employed to the square, cubic and triangular lattices [15; 16; 17; 18]. In this article, we extend the NBT to non-Bravais lattices with \(m\) sublattices. Consequently, quantities like the exchange coupling \(J_{\vec{q}}\) and the self-energy \(\Sigma_{\vec{q}}\) become \(m\times m\) matrices in sublattice space.
In momentum space, the Hamiltonian is
\[H=\sum_{\vec{q}}\sum_{ij}J_{\vec{q},ij}\vec{S}_{-\vec{q},i}\cdot\vec{S}_{\vec{ q},j}, \tag{7}\]
where the \(\vec{q}\)-sum goes over the first Brillouin zone, and \(i\) and \(j\) are sublattice indices. In the NBT, the classical spins are integrated out by introducing a constraint field \(\lambda_{\vec{R},i}\) ensuring unit length of the spins, \(|\vec{S}_{\vec{R},i}|=1\), through
\[\delta(|\vec{S}_{\vec{R},i}|-1)=\int_{-\infty}^{\infty}\frac{\beta d\lambda_{ \vec{R},i}}{\pi}e^{-i\beta\lambda_{\vec{R},i}\langle S_{\vec{R},i}\cdot S_{ \vec{R},i}-1\rangle}. \tag{8}\]
The remaining integrals over the constraint field are treated separately for the average constraints \(\Delta_{i}\) and the fluctuations around the average. The average constraints are treated by the saddle-point approximation, and the fluctuations are treated through diagrammatic perturbation theory (large-\(N_{s}\) approximation). As in Ref. [15], the diagrams consist of solid and wavy lines, representing spin and constraint propagators, respectively. In addition to sublattice indices, we have here also included directions to the spin propagators to allow for breaking of inversion symmetry. The spin propagator \(K_{\vec{q},ij}^{-1}\) is then to be understood as carrying momentum \(\vec{q}\) from \(i\) to \(j\).
The spin and constraint propagators are renormalized by the self-energy \(\Sigma_{\vec{q}}\) and the polarization \(\Pi_{\vec{q}}\), respectively, through the Dyson equations in Fig. 3. The large-\(N_{s}\) approximation is performed through a set of self-consistent equations for the self-energy and the polarization, which are shown diagramatically in Fig. 4. These equations approximate the self-energy and the polarization by infinite resummations of classes of diagrams excluding vertex corrections. Combining the Dyson equations and the self-consistent equations, the NBT equations are [15]
\[K_{\vec{q},ij} =J_{\vec{q},ij}+\Delta_{i}\delta_{ij}-\Sigma_{\vec{q},ij}, \tag{9}\] \[D_{\vec{q},ij}^{-1} =\frac{N_{s}}{2}\sum_{\vec{p}}K_{\vec{q}+\vec{p},ij}^{-1}K_{\vec {p},ji}^{-1},\] (10) \[\Sigma_{\vec{q},ij} =-\sum_{\vec{p}\neq 0}K_{\vec{q}-\vec{p},ij}^{-1}D_{\vec{p},ij}, \tag{11}\]
where \(K_{\vec{q}}^{-1}\) is the renormalized spin propagator and \(D_{\vec{q}}\) is the renormalized constraint propagator.
As the constraint field now has a sublattice index, we get a separate saddle-point equation for each sublattice
\[\frac{N_{s}T}{2V}\sum_{\vec{q}}K_{\vec{q},ii}^{-1}=1, \tag{12}\]
where \(V=L^{3}\) is the number of unit cells. These \(m\) saddle-point equations give the temperature \(T\). They must all give the same temperature for the solution to be physical. Following the derivation in Ref. [17], the free energy per unit cell (excluding vertex corrections) is
\[\begin{split} f=&-\sum_{i}\Delta_{i}+\frac{T}{2V} \sum_{\vec{q}}\ln\det\left(\frac{T^{2}}{2V}D_{\vec{q}}^{-1}\right)\\ &-\frac{N_{s}T}{2V}\sum_{\vec{q}}\left[\ln\det\left(TK_{\vec{q}} ^{-1}\right)-\mathrm{Tr}\left(K_{\vec{q}}^{-1}\ \Sigma_{\vec{q}}\right)\right]\\ &-m\frac{N_{s}-1}{2}T\ln\pi.\end{split} \tag{13}\]
The self-consistent equations are solved by iteration starting from a random self-energy and equal values of the \(\Delta_{i}\)s. Each iteration gives an overall negative contribution to the self-energy. To avoid the general increase
Figure 3: Dyson equations for the renormalized spin propagator \(K_{\vec{q}}^{-1}\) (bold solid line), and the renormalized constraint propagator \(D_{\vec{q}}\) (bold wavy line).
Figure 4: Self-consistent equations for the self-energy \(\Sigma_{\vec{q}}\) and the polarization \(\Pi_{\vec{q}}\).
in temperature associated with this, the \(\Delta_{i}\)s are renormalized in every iteration by subtracting from them the minimum eigenvalue among all \(\Sigma_{\vec{q}}\). In addition, each \(\Delta_{i}\) is adjusted very slightly so that Eq. (12) gives the same value of the temperature for all sublattices. We iterate until the temperature has converged, and then employ \(K_{\vec{q}}^{-1}\), \(\Sigma_{\vec{q}}\) and \(D_{\vec{q}}\) to calculate the free energy. For each initial value of the \(\Delta_{i}\)s, we thereby obtain \(f\) and \(K_{\vec{q}}^{-1}\) with a corresponding \(T\).
For a random initial self-energy the NBT might not converge to the lowest temperatures. In those cases we initialize the iterations using a guessed form of \(K_{\vec{q}}^{-1}\) with peaks at suitable momenta. If different initial conditions converge to different states, we pick the state with the lowest free energy.
To get information about the spin correlations we calculate the quantity
\[A_{\vec{q}}\equiv\sum_{ij}K_{\vec{q},ij}^{-1}e^{-i\vec{q}\cdot(\alpha_{i}- \alpha_{j})}. \tag{14}\]
\(A_{\vec{q}}\) is periodic with twice the reciprocal lattice vectors and the associated extended Brillouin zone (EBZ) is a truncated octahedron with dimensions twice those of the first Brillouin zone (1BZ) of the fcc lattice. \(A_{\vec{q}}\) is closely related to the spin structure factor \(S(\vec{q})\equiv\sum_{ij}(\vec{S}_{-\vec{q},i}\cdot\vec{S}_{\vec{q},j})e^{i \vec{q}(\alpha_{i}-\alpha_{j})}=N_{s}T\left(A_{\vec{q}}+A_{-\vec{q}}\right)/4\). While \(S(\vec{q})\) is manifestly inversion symmetric, \(A_{\vec{q}}\) is not as it reflects the symmetries of the self-energy \(\Sigma_{\vec{q},ij}\). We take lack of inversion symmetry in \(A_{\vec{q}}\) to indicate that the spin state breaks inversion symmetry.
## IV Results
For the pure AF nearest-neighbor Hamiltonian, NBT gives no symmetry breaking down to the lowest temperature studied (\(T\simeq 10^{-9}\)), and \(A_{\vec{q}}\) shows \(O_{h}\) symmetric extended maxima on the square surfaces of the EBZ with pinch points at X\({}_{\text{EBZ}}\)[19].
For the \(J_{1}\)-\(J_{3b}\) model with \(J_{3b}=0.2\), the maxima of \(A_{\vec{q}}\) at high temperature occurs at W\({}_{\text{EBZ}}\). As the temperature is lowered, these maxima move into the hexagonal EBZ surfaces keeping the full \(O_{h}\) symmetry. Then at \(T_{c}=0.195\) (for \(L=36\)) the NBT free energy reveals a first-order phase transition into a low-temperature phase with a total of eight peaks in \(A_{\vec{q}}\) in the EBZ [20]: at \(\vec{Q}_{(0,1)}+\vec{b}_{1}+n_{2}\vec{b}_{2}+n_{3}\vec{b}_{3}\) for \(n_{2},n_{3}\in\{0,1\}\), and at \(\vec{Q}_{(2,3)}+\vec{b}_{2}+n_{1}\vec{b}_{1}\), \(\vec{Q}_{(2,3)}+\vec{b}_{3}+n_{1}\vec{b}_{1}\) for \(n_{1}\in\{0,1\}\), with \(\vec{Q}_{(0,1)}=(0,-4\pi/3,4\pi/3)\) and \(\vec{Q}_{(2,3)}=(0,4\pi/3,4\pi/3)\). \(\vec{b}_{i}\) denote the reciprocal lattice vectors for the fcc Bravais lattice. The symmetry of \(A_{\vec{q}}\) is thus reduced from \(O_{h}\) to \(C_{2v}\). In particular, inversion symmetry and all three- and four-fold symmetries are broken. When adding also \(A_{-\vec{q}}\) to obtain the spin structure factor, the \(C_{2v}\) symmetry is increased to \(D_{2h}\). The peaks in \(A_{\vec{q}}\) can be explained as originating from a \(120^{\circ}\) SLP state where antiparallel spirals on sublattices \(0\) and \(1\) order at \(\vec{Q}_{(0,1)}\), and antiparallel spirals on sublattices \(2\) and \(3\) order at \(\vec{Q}_{(2,3)}\). We find that such first-order transitions into the \(C_{2v}\) symmetric \(120^{\circ}\) SLP phase occur for all positive values of \(J_{3b}\), see Fig. 5.
We next check the stability of the SLP phase when adding \(J_{2}\). The finite-temperature phase diagram obtained using NBT for \(J_{3b}=0.2\) is shown in Fig. 6. The SLP phase is stable in the region \(-0.2\leq J_{2}<0.372\), and the ordering wave vectors follow Eq. (6). For FM \(J_{2}<-0.2\), the SLP state becomes unstable to a double-\(\vec{q}\) state, reminiscent of the multi-\(\vec{q}\) states investigated in
Figure 6: \(T\) vs. \(J_{2}\) phase diagram for \(J_{3b}=0.2\). Various system sizes \(L=26-60\). \(J_{3a}=0\). The solid curves, based on discontinuities in the free energy (dots), are first-order phase transitions. The yellow lines around the SLP-\((0,\pi,\pi)\) phase are uncertain as NBT does not converge in the shaded region close to the disordered phase.
Figure 5: Phase transition temperature \(T_{c}\) (pink) and latent heat (blue) vs. \(J_{3b}\) for the transition into the SLP state. \(J_{2}=J_{3a}=0\). \(L=36\).
Refs. [6; 7; 8], and [21], where two ordering vectors are present on _all_ sublattices.
For \(J_{2}=-0.2\), we find a special case of the SLP state, labeled SLP-X in Fig. 6, where all sublattices have the same ordering vector X\({}_{\text{1BZ}}\). \(A_{\vec{q}}\) has maxima at opposite corners of four of the EBZ square surfaces; \((2\pi,4\pi,0)\), \((2\pi,0,4\pi)\), and subleading peaks with half maximum intensity at the four points \((0,\pm 2\pi,\pm 2\pi)\), consistent with the SLP ordering vector \(\vec{Q}_{(0,1)}=\vec{Q}_{(2,3)}=(2\pi,0,0)\). The peak locations transform into each other by the \(D_{4h}\) subgroup of \(O_{h}\). Thus, this phase is inversion symmetric, as opposed to the general SLP phase. This SLP-X phase extends both into the double-\(\vec{q}\) and the general SLP regions at finite temperatures, and will therefore generally cause two ordering transitions as the temperature is lowered from the disordered phase, first one into the SLP-X phase and then another into the double-\(\vec{q}\) or general SLP phase at a lower temperature, see Fig. 6.
For \(J_{2}=J_{3b}\) at low temperatures, when biased into it, NBT converges to an SLP-\((0,\pi,\pi)\) state where \(A_{\vec{q}}\) also has \(D_{4h}\) symmetry. In this state the two spirals each reduce to a collinear configuration, one with ordering wave vectors \(\vec{Q}_{(0,1)}=\pm(0,-\pi,\pi)\) and the other with \(\vec{Q}_{(2,3)}=\pm(0,\pi,\pi)\). This state exists up to a finite temperature, but not all the way up to the disordered phase. We have not found the proper state at the intermediate temperatures, as we have not been able to get NBT to converge in the shaded region in Fig. 6. Nevertheless, we have indicated phase boundaries around the SLP-\((0,\pi,\pi)\) phase as it necessarily must be separated by phase transitions from the inversion symmetry-breaking SLP phase surrounding it.
For sufficiently strong AF \(J_{2}\), the SLP phase gives way to a single-\(\vec{q}\) state with an ordering wave vector along the \(\Gamma\)X\({}_{\text{1BZ}}\) line. In this phase all sublattices order at the same wave vector, and the tetrahedra conditions are no longer satisfied. The phase boundary between the SLP phase and the \(\Gamma\)X\({}_{\text{1BZ}}\) phase is estimated to lie between \(J_{2}=0.366\) and \(J_{2}=0.374\) from our NBT calculations. The vertical line at \(J_{2}=0.372\) is chosen from where the minimum of \(J_{\vec{q}}\) changes character.
In Fig. 7 we map out the low-temperature phase diagram in the \(J_{3b}\)-\(J_{2}\) coupling space using NBT. It is seen that the SLP phase exists in a large region.
On the AF \(J_{2}\) side, the SLP phase ceases to exist when it becomes energetically favorable to violate the tetrahedra conditions. For small and intermediate values of \(J_{3b}\) we find the single-\(\vec{q}\)\(\Gamma\)X\({}_{\text{1BZ}}\) phase. For large \(J_{3b}\) the SLP phase is stable up to \(J_{2}=1\), which is the limit set by the mapping from \(J_{1}\)-\(J_{2}\) to \(\vec{J}_{1}\)-\(\vec{J}_{3a}\).
On the FM \(J_{2}\) side, the SLP phase borders the double-\(\vec{q}\) phase where _each_ sublattice has two ordering vectors. The two ordering vectors are \((2\pi,l,l)\) and \((2\pi,l,-l)\), where \(l\) increases from 0 (X\({}_{\text{1BZ}}\)) to \(\pi/2\) (U\({}_{\text{1BZ}}\)) as \(J_{3b}\) decreases from \(-J_{2}\). It reaches \(\pi/2\) for \(J_{3b}\) equal to a small positive \(J_{2}\)-dependent value. For \(J_{3b}\) less than this, the two ordering vectors shift to \((0,k,k)\) and \((0,-k,k)\) with \(k\) decreasing slowly from \(3\pi/2\) (K\({}_{\text{1BZ}}\)) as \(J_{3b}\) decreases further. Spins in such double-\(\vec{q}\) spirals will only obey the length constraint when the number of spin components \(N_{s}>3\). The NBT is derived from the large-\(N_{s}\) limit without vertex corrections, and is extrapolated down to \(N_{s}=3\). We believe that the double-\(\vec{q}\) state produced by NBT is a remnant of the large-\(N_{s}\) limit and that the extrapolation down to \(N_{s}=3\) does not take the length constraint sufficiently into account.
We have also performed a similar stability analysis in \(J_{3b}\)-\(J_{3a}\) coupling space with \(J_{2}=0\). There, for AF \(J_{3b}\), the SLP state is stable in an even wider region; whenever \(J_{3a}\leq J_{3b}\).
To get information about the relative orientation of the spiral plane vectors \(\vec{u}_{(i,j)}\) and \(\vec{v}_{(i,j)}\) of the two SLP spirals, we have performed a spin wave calculation to compute the entropy. We find that entropy favors the SLP state to be coplanar for our model, Eq. (1), with the two SLP spirals sharing spiral plane vectors. Consequently, entropy favors collinear states in the special cases of SLP-\(\Gamma\) and SLP-X.
## V Discussion
We have shown how the classical AF Heisenberg model on the pyrochlore lattice with small further-neighbor couplings orders with coplanar SLP. In SLP, pairs of sublattices form antiparallel spirals. The ordering wave vectors are in general different for the two sublattice pairs, and are found to be the wave vectors that minimize the total \(\bar{J}_{3a}\)-\(J_{3b}\) energy of the paired fcc sublattices subject to the tetrahedra conditions.
For the pure \(J_{1}\)-\(J_{3b}\) model, we find a \(120^{\circ}\) SLP state. This state simultaneously satisfies both AF \(J_{1}\) and AF \(J_{3b}\) and is thus realized for all \(J_{3b}>0\). It is separated
Figure 7: Low-temperature phase diagram in the \(J_{3b}\)-\(J_{2}\) coupling space, \(J_{3a}=0\). The inversion symmetric SLP-X and SLP-\((0,\pi,\pi)\) states are realized at \(J_{2}=-J_{3b}\) (green line) and \(J_{2}=J_{3b}\) (yellow line), respectively. For large AF \(J_{3b}\), the SLP state is stable for \(-J_{3b}\leq J_{2}<1\).
from the disordered phase by a first-order phase transition with a \(T_{c}\) and latent heat that goes to zero as \(J_{3b}\to 0\), Fig. 5. This is consistent with the AF nearest-neighbor model not ordering [1; 2; 3; 4]. Note that the critical temperatures are likely to be overestimated as NBT excludes vertex corrections [16].
The SLP state generally breaks inversion symmetry, except when four times the ordering wave vectors are reciprocal lattice vectors. We note that recent numerical results indicate that quantum fluctuations of the purely AF spin-1/2 and spin-1 models also induce inversion symmetry-breaking [22; 23; 24].
As special cases of the SLP state, where all sublattices order at the same wave vector, we find SLP-\(\Gamma\) (Neel) and SLP-X for our model. SLP-\(\Gamma\) (Neel) has previously been identified as the ground state for the \(J_{1}\)-\(J_{2}\) model with small AF \(J_{2}\)[25; 7; 21]. SLP-X is realized along the line \(\tilde{J}_{3a}=J_{3b}>0\) (and close to this line at intermediate temperatures) and should be the symmetry-broken state for the \(J_{1}\)-\(J_{3}\) model in Ref. [12]. An _ab inito_ study of the breathing pyrochlore material LiGaCr\({}_{4}\)O\({}_{8}\) finds \(\tilde{J}_{3a}=J_{3b}>0\) and SLP-X as the corresponding low-temperature state [26]. They also find SLP-X to be stabilized at intermediate temperatures for LiInCr\({}_{4}\)O\({}_{8}\), where \(\tilde{J}_{3a}\approx 1.8J_{3b}>0\), which could be explained by the finite-temperature extension of the SLP-X phase due to its collinearity.
The \(J_{2}\) and AF \(J_{3a}\) bonds favor states which do not satisfy the tetrahedra conditions. Nevertheless, we find that the SLP state dominates a large portion of the exchange coupling space, particularly in the region \(J_{1}>J_{3b}>\tilde{J}_{3a}\) for AF \(J_{1}\) and \(J_{3b}\).
This region might be relevant for the Gd\({}_{2}B_{2}\)O\({}_{7}\) class of materials (\(B\) is a nonmagnetic cation) [9]. These are believed to be described as classical pyrochlore Heisenberg AFs with further-neighbor and dipole-dipole interactions [27; 28; 29]. While we have not considered the dipole-dipole interaction, we note that a recent experiment on Gd\({}_{2}\)Ti\({}_{2}\)O\({}_{7}\) suggests a partially ordered state with two ordering vectors at different L\({}_{1\text{BZ}}\) points [30]. For SLP states such L-peaks are most likely to occur at intermediate temperatures near the line \(J_{2}=J_{3b}>0\) where there are line minima in-between the L\({}_{1\text{BZ}}\) points.
The pyrochlore spinel materials are expected to have \(J_{3a}\) at the same order of magnitude as \(J_{3b}\)[10; 11]. As we find the SLP phase to be stable for \(J_{3a}\leq J_{3b}\) for AF \(J_{3b}\), it could be of relevance also for this class of materials. Especially in the ferrites \(A\)Fe\({}_{2}\)O\({}_{4}\), the two types of third nearest-neighbor couplings have been suggested to be of comparable strength [11]. Inclusion of an AF \(J_{2}\) could also help to push the system towards the SLP phase.
We envision also the general SLP states to be realized on the breathing pyrochlore lattice when both the "nearest"-neighbor couplings \(J_{1}\) and \(J_{1}^{\prime}\) are sufficiently strong AF, such that the tetrahedra conditions are satisfied. For further work it would be interesting to study the stability of the SLP state on the breathing pyrochlore lattice as well as its stability when adding dipolar interactions and/or anisotropy. We also note that the extension of NBT to the pyrochlore lattice can be used to study other interesting problems such as possible symmetry breaking in the \(J_{2}=J_{3a}\) model [13].
###### Acknowledgements.
We would like to thank C. Castelnovo for pointing us in the direction of the \(J_{1}\)-\(J_{3b}\) model, and J. Paaske and O.O.L. Solow for useful discussions about NBT. C.G. acknowledges funding from the Aker Scholarship. The computations were performed on resources provided by Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway.
|
2306.10408 | Foliated asymptotically safe gravity in the fluctuation approach | The gravitational asymptotic safety program envisions a high-energy
completion of gravity based on a non-Gaussian renormalization group fixed
point. A key step in this program is the transition from Euclidean to
Lorentzian signature spacetimes. One way to address this challenge is to
formulate the quantum theory based on the Arnowitt-Deser-Misner decomposition
of the metric field. This equips the Euclidean spacetime with a preferred
direction which may serve as the time-direction in the Lorentzian setting. In
this work we use the Wetterich equation in order to compute the renormalization
group flow of the graviton two-point function. The resulting beta functions
possess a non-Gaussian renormalization group fixed point suitable for rendering
the theory asymptotically safe. The phase diagram underlying the flow of the
two-point function is governed by the interplay between this non-Gaussian fixed
point, the Gaussian fixed point, and an infrared fixed point. The latter
ensures that the renormalized squared graviton mass cannot take negative
values. These results are in qualitative agreement with fluctuation
computations carried out in the covariant setting. We take this as non-trivial
evidence that the asymptotic safety mechanism remains intact when considering
quantum gravity on spacetimes carrying a foliation structure. Technically, our
work constitutes the first fluctuation computation carried out within the
ADM-framework. Therefore, we also provide a detailed discussion of the
conceptual framework, highlighting the elements which differ from fluctuation
computations in the covariant setting. | Frank Saueressig, Jian Wang | 2023-06-17T18:40:55Z | http://arxiv.org/abs/2306.10408v1 | # Foliated asymptotically safe gravity
###### Abstract
The gravitational asymptotic safety program envisions a high-energy completion of gravity based on a non-Gaussian renormalization group fixed point. A key step in this program is the transition from Euclidean to Lorentzian signature spacetimes. One way to address this challenge is to formulate the quantum theory based on the Arnowitt-Deser-Misner decomposition of the metric field. This equips the Euclidean spacetime with a preferred direction which may serve as the time-direction in the Lorentzian setting. In this work we use the Wetterich equation in order to compute the renormalization group flow of the graviton two-point function. The resulting beta functions possess a non-Gaussian renormalization group fixed point suitable for rendering the theory asymptotically safe. The phase diagram underlying the flow of the two-point function is governed by the interplay between this non-Gaussian fixed point, the Gaussian fixed point, and an infrared fixed point. The latter ensures that the renormalized squared graviton mass cannot take negative values. These results are in qualitative agreement with fluctuation computations carried out in the covariant setting. We take this as non-trivial evidence that the asymptotic safety mechanism remains intact when considering quantum gravity on spacetimes carrying a foliation structure. Technically, our work constitutes the first fluctuation computation carried out within the ADM-framework. Therefore, we also provide a detailed discussion of the conceptual framework, highlighting the elements which differ from fluctuation computations in the covariant setting.
## 1 Introduction
Time is an intrinsic building block in our description of nature. General relativity implements this structure by the spacetime metric coming with indefinite signature. Adopting the mostly plus convention, the metric of Minkowski space is \(\eta_{\mu\nu}=\text{diag}(-1,+1,+1,+1)\). This entails that the spacetime geometry \(\mathcal{M}\) possesses a "preferred direction" inducing a
foliation structure, \(\mathcal{M}=\mathbb{R}\times\Sigma_{\tau}\). Here \(\tau\in\mathbb{R}\) gives the "time-coordinate" of an event and \(\Sigma_{\tau}\) are the spatial slices defined by the points in \(\mathcal{M}\) with the same value of \(\tau\).
The spacetime metric determines the causal order of events. The principle of causality then states that no effect should precede its cause. In general relativity this is realized by the propagation of signals being confined to the lightcone. At the level of a quantum field theory in a fixed Minkowski spacetime causality entails that correlation functions vanish outside the light cone. The concept of causality becomes highly non-trivial once one departs from a classical spacetime and moves into the realm of quantum gravity. In this case, quantum fluctuations in the lightcone structure could, e.g., lead to a violation of causality at microscopic scales [1]. Our understanding of these effects is far from complete though. Owed to technical reasons, many investigations of quantum gravity effects work in an Euclidean setting, where questions related to the propagation of fields are difficult to assess. Moreover, a generic Euclidean spacetime may not support the additional geometrical structures required to transit from Euclidean to Lorentzian signature settings, so that the analytic continuation only works locally [2]. Having a well-defined time-direction even in the Euclidean setting requires a foliation of spacetime into spatial slices which are then welded together to form spacetime.
Technically, the structures required for the existence of such a foliation can be provided by the Arnowitt-Deser-Misner (ADM)-decomposition of the spacetime metric [3] (also see [4] for a review),
\[g_{\mu\nu}\mapsto\left(N,N_{i},\sigma_{ij}\right). \tag{1}\]
In this case, the metric degrees of freedom are encoded in the lapse function \(N\), the shift vector \(N_{i}\), and the metric on the spatial slices \(\sigma_{ij}\). The resulting foliation allows to implement the analogue of time in the Euclidean setting. This opens the possibility of an analytic continuation to Lorentzian signature. Moreover, transverse-traceless fluctuations of \(\sigma_{ij}\) carry two degrees of freedom (polarizations) which are naturally associated with the two physical degrees of freedom related to the graviton [3; 5; 6].
As an intermediate step towards a full-fledged theory of quantum gravity formulated at Lorentzian signature, one may study an Euclidean setting implementing the foliation structure of spacetime. In the language of [7], this corresponds to a setting in which time is identified before quantizing. Within the gravitational asymptotic safety program, reviewed e.g. in [8; 9; 10; 11; 12; 13; 14; 15; 16; 17], the construction of renormalization group (RG) flows based on ADM-variables has been developed in [18; 19; 20; 21; 22; 23], also see [24] for an implementation of a foliation structure based on a gauge-fixing construction and [25; 26] for further discussions.1 Conceptually, this line differs from the covariant setting followed in e.g. [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48] in the sense that in the latter case time plays no role at all. This also has profound consequences for the concept of background independence. The construction of the Wetterich equation for gravity [30] makes manifest use of the background field formalism. Background independence is then achieved by quantizing the fluctuation field in all backgrounds simultaneously [49; 50; 51; 52], also see [53; 54] for recent developments. The adaptation to the ADM-formalism also builds on the background formalism and follows the same logic. The conceptual difference is that the
backgrounds in the ADM-formalism admit a foliation structure by construction. In this sense, the formalism quantizes the fluctuation field in all backgrounds carrying a foliation structure. This guarantees that the background spacetimes possess a Lorentzian analogue. The foliation structure is also an essential element of the Causal Dynamical Triangulations program [55; 56], initiated in [57; 58].
In this work we follow the path initiated in [59] and use the Wetterich equation to investigate the RG flow of gravity with the gravitational degrees of freedom carried by the ADM-variables (1). The goal of our study is to identify the non-Gaussian fixed points (NGFPs) which could provide a phenomenologically viable high-energy completion of the theory via the asymptotic safety mechanism. Our work goes beyond the background approximation by studying the flow of the graviton two-point function sourced by the three- and four-point vertices of the theory. It constitutes the first investigation of the asymptotic safety mechanism for foliated spacetimes in a fluctuation computation.
Given the novel type of computation, we provide a detailed exposition on setting up fluctuation computations in the ADM-framework. In particular, we highlight the novel features appearing in the projection onto the vertices of the theory which are absent in the covariant approach. Within this general setting, we perform an explicit study of the RG flow of the graviton two-point function obtained from the Einstein-Hilbert action and its extensions encoding the effect of different speeds of light. As our main result, we identify a NGFP suitable for rendering the construction asymptotically safe. While fluctuation computations projecting the RG flow on different sets of couplings than the background computation, we find that the stability properties of the fixed point in these conceptually different approximations are very similar. We interpret this result as evidence for the robustness of the background computations as well as a first hint on the manifestation of effective universality [42] in the foliation setting. Moreover, it provides a strong indication that the asymptotic safety mechanism for gravity is robust when transiting from the Euclidean setting to backgrounds carrying a foliation structure.
The remainder of this work is organized as follows. Sects. 2 and 3 review the ADM-decomposition of the metric field and the Wetterich equation formulated on foliated spacetimes, respectively. In particular, Sect. 3.3 gives a general introduction to fluctuation field computations in this setting. The beta functions arising from the foliated Einstein-Hilbert truncation and its non-relativistic two-derivative extensions are derived in Sect. 4 and the resulting fixed point structure and phase diagrams are given in Sect. 5. We conclude with a discussion and outlook in Sect. 6. Technical details underlying our computation have been relegated to two appendices.
## 2 Foliated spacetimes in the ADM-formalism
We start by reviewing the Arnowitt-Deser-Misner (ADM) decomposition of the spacetime metric [3]. In this case, the gravitational degrees of freedom are encoded in the ADM-fields \((N,N_{i},\sigma_{ij})\). Let \(\mathcal{M}\) be a \(d+1\)-dimensional manifold equipped with a Euclidean signature metric \(g_{\mu\nu}\) and coordinates \(x^{\mu}\). Here Greek letters \(\mu,\nu,\cdots\) denotes spacetime indices taking values from \(1\) to \(d+1\). We assume that \(\mathcal{M}\) can be foliated by a family of
spatial hypersurfaces \(\Sigma_{\tau}\), labeled by the "time"-parameter \(\tau\). Points in the same spatial slice then share the same time-coordinate \(\tau\) and are labeled by spatial coordinates \(y^{i}\), \(i=1,2,\cdots,d\) on \(\Sigma_{\tau}\). We then perform a change of coordinates
\[x^{\mu}\mapsto(\tau,y^{i}) \tag{1}\]
and introduce the basis vectors
\[t^{\alpha}\equiv\left.\frac{\partial x^{\alpha}}{\partial\tau}\right|_{y^{i}},\qquad e^{\alpha}_{i}\equiv\frac{\partial x^{\alpha}}{\partial y^{i}}\right|_ {\tau}. \tag{2}\]
We further define the unit vector \(n_{\alpha}\), normal to the surface \(\Sigma_{\tau}\),
\[n_{\alpha}\equiv N\partial_{\alpha}\tau\,,\qquad n_{\alpha}e^{\alpha}_{i}=0\,. \tag{3}\]
Here the lapse function \(N(\tau,y^{i})\) is introduced as a normalization factor. The vector \(t^{\alpha}\) can then be decomposed into components normal and tangent to the surface \(\Sigma_{\tau}\)
\[t^{\alpha}=Nn^{\alpha}+N^{i}e^{\alpha}_{i}\,, \tag{4}\]
where \(N^{i}(\tau,y^{i})\) is called the shift vector.
Next, we apply the change of coordinates (1) to the line element
\[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=g_{\mu\nu}(t^{\mu}d\tau+e^{\mu}_{i}dy^{i})(t ^{\nu}d\tau+e^{\nu}_{j}dy^{j})\,. \tag{5}\]
Substituting the decomposition (4) and introducing the induced metric on \(\Sigma_{\tau}\), \(\sigma_{ij}\equiv g_{\mu\nu}e^{\mu}_{i}e^{\nu}_{j}\), the line element can be expressed in terms of the lapse, shift, and \(\sigma_{ij}\):
\[ds^{2}=(N^{2}+\sigma_{ij}N^{i}N^{j})d\tau^{2}+2\sigma_{ij}N^{i}d\tau dy^{j}+ \sigma_{ij}dy^{i}dy^{j}. \tag{6}\]
Comparing eqs. (5) and (6) then allows to express the spacetime metric in terms of the ADM-fields,
\[g_{\mu\nu}=\begin{pmatrix}N^{2}+N^{i}N_{i}&N_{j}\\ N_{i}&\sigma_{ij}\end{pmatrix}\,,\qquad g^{\mu\nu}=\begin{pmatrix}\frac{1}{N^ {2}}&-\frac{N^{j}}{N^{2}}\\ -\frac{N^{i}}{N^{2}}&\sigma^{ij}+\frac{N^{i}N^{j}}{N^{2}}\end{pmatrix}, \tag{7}\]
where the spatial indices \(i,j\) are raised and lowered with the induced metric \(\sigma_{ij}\).
Next we consider the transformation properties of the spacetime metric \(g_{\mu\nu}\) under the full diffeomorphism group. For a general \(d+1\)-dimensional infinitesimal coordinate transformation \(v^{\mu}(\tau,y^{i})\), the metric transforms as
\[\delta g_{\mu\nu}=\mathcal{L}_{v}g_{\mu\nu}, \tag{8}\]
where \(\mathcal{L}_{v}\) is the Lie derivative of the metric with respect to the vector \(v^{\mu}\). Decomposing the infinitesimal coordinate transformation into spatial and time-part,
\[v^{\mu}(\tau,y^{i})=(f(\tau,y^{i}),\xi^{i}(\tau,y^{i}))\,, \tag{9}\]
eq. (8) induces the following transformation of the ADM-fields
\[\delta N= \partial_{\tau}(fN)+\xi^{k}\partial_{k}N-NN^{i}\partial_{i}f,\] \[\delta N_{i}= \partial_{\tau}(N_{i}f)+\xi^{k}\partial_{k}N_{i}+N_{k}\partial_{i }\xi^{k}+\sigma_{ki}\partial_{\tau}\xi^{k}+N_{k}N^{k}\partial_{i}f+N^{2} \partial_{i}f, \tag{10}\] \[\delta\sigma_{ij}= f\partial_{\tau}\sigma_{ij}+\xi^{k}\partial_{k}\sigma_{ij}+ \sigma_{jk}\partial_{i}\xi^{k}+\sigma_{ik}\partial_{j}\xi^{k}+N_{j}\partial_{i }f+N_{i}\partial_{j}f.\]
We also give the transformation for \(N^{i}\) for completeness
\[\delta N^{i}=\partial_{\tau}(N^{i}f)+\xi^{k}\partial_{k}N^{i}-N_{k}\partial_{k }\xi^{i}+\partial_{\tau}\xi^{i}-N^{i}N^{j}\partial_{j}f+N^{2}\sigma^{ij} \partial_{j}f. \tag{11}\]
The full diffeomorphism group contains an important subgroup, the foliation preserving diffeomorphisms. Their action is obtained by restricting the function \(f(\tau,y^{i})\) to functions of \(\tau\) only.
Finally, we are interested in constructing an action describing the dynamics of the spatial metric \(\sigma_{ij}\). At this point, we restrict ourselves to interactions containing at most two derivatives with respect to the spacetime coordinates. While this property is not invariant under the RG flow which inevitably generates higher-order derivative interactions, the resulting action serves as a starting point for generating the tensor structures which will be tracked in the RG computation later on.
Since the foliation equips the manifold with a preferred time-direction, it is natural to construct the action from the view of non-relativistic theories where we have the kinetic term first and subsequently add a potential. To describe how the spatial metric \(\sigma_{ij}\) changes between different spatial surfaces \(\Sigma_{\tau}\), we introduce the extrinsic curvature
\[K_{ij}\equiv\frac{1}{2}\mathcal{L}_{n}\sigma_{ij}=\frac{1}{2}N^{-1}(\partial_ {\tau}\sigma_{ij}-D_{i}N_{j}-D_{j}N_{i})\,. \tag{12}\]
Here \(\mathcal{L}_{n}\) is the Lie derivative of spatial metric with respect to the normal vector \(n^{\alpha}\) and \(D_{i}\) is the covariant derivative on the spatial slice, carrying the Levi-Civita connection constructed from \(\sigma_{ij}\). Since the extrinsic curvature \(K_{ij}\) also measures the rate of change of the spatial metric along the time direction, one can construct the kinetic term in terms of \(K_{ij}\),
\[S^{K}=\frac{1}{16\pi G}\int d\tau d^{d}yN\sqrt{\sigma}(\alpha_{1}\,K^{ij}K_{ij }-\alpha_{2}K^{2})\,, \tag{13}\]
with \(K=K_{ij}\sigma^{ij}\) being the trace of the extrinsic curvature. Here the coupling \(G\) will turn out to be Newton's constant and \(\alpha_{1}\) and \(\alpha_{2}\) are parameters giving the relative weight between the two kinetic terms.
The potential contains all terms which are independent of time derivatives and compatible with the symmetries we want to impose. Insisting on diffeomorphism invariance on the spatial slice and at most two spatial derivatives, this limits the construction to the volume element and the integrated spatial curvature \({}^{(d)}R\). Thus,
\[S^{V}=\frac{1}{16\pi G}\int d\tau d^{d}yN\sqrt{\sigma}(-^{(d)}R+2\Lambda)\,, \tag{14}\]
with \(\Lambda\) denoting the cosmological constant. Combining eqs. (13) we arrive at
\[S=\frac{1}{16\pi G}\int d\tau d^{d}yN\sqrt{\sigma}(\alpha_{1}K^{ij}K_{ij}- \alpha_{2}K^{2}-{}^{(d)}R+2\Lambda). \tag{15}\]
Since we started from the non-relativistic theory, all terms in the kinetic and potential terms have their own independent couplings, and we denote the couplings for the kinetic terms as \(\alpha_{1}\) and \(\alpha_{2}\). The couplings for the spatial curvature and volume element are given by the Newton coupling and cosmological constant respectively, in order to keep consistency with other works. The parameters \(\alpha_{1}\) and \(\alpha_{2}\) introduce a relative scaling between time and spatial directions.2 For \(\alpha_{1}=\alpha_{2}=1\), the action (15) is the Einstein-Hilbert action, and one recovers invariance under the full diffeomorphism group. In general, the action (15) is invariant under foliation preserving diffeomorphisms only.
Footnote 2: This is reminiscent of CDT [60; 61] where the relative scaling of fundamental (squared) length of space and time also appears.
We also remark that from a physics perspective, the parameterization (15) is redundant in the sense that one of the couplings (canonically \(\alpha_{1}\)) can be fixed to \(\alpha_{1}=1\) by a rescaling of the lapse \(N\) followed by a redefinition of the other couplings. The second parameter in the kinetic part has a physical meaning though. It captures a difference in the propagation speed for the trace- and transverse-traceless modes of the spatial metric fluctuation [62]. At this point, we keep couplings for all operators contained in (15) though.
## 3 The Wetterich equation on foliated spacetimes
The Wetterich equation [63; 64; 30; 65] encodes the change of the effective average action \(\Gamma_{k}\) when integrating out quantum fluctuations with momenta close to the coarse-graining scale \(k\)[66; 67; 68; 8; 8; 10; 17]. In this way, it realizes the Wilsonian picture of renormalization. Its adaptation to the ADM-decomposition has been made in [18] with further details given in [19]. In this section, we review the key ingredients of the construction and explain the setup for performing computations tracking the fluctuation fields.
### General background on the functional renormalization group
The most-frequently used tool for calculating RG flows in theories containing gravitational degrees of freedom is the the Wetterich equation [63; 64; 30; 65]. This equation captures the dependence of the effective average action \(\Gamma_{k}\) on the coarse-graining scale \(k\). It realizes the Wilsonian idea of renormalization in the sense that it captures the change of the effective description of the system when integrating out quantum fluctuations with momenta \(p^{2}\simeq k^{2}\). The equation takes a one-loop form and is given by [63; 64]
\[\partial_{t}\Gamma_{k}=\frac{1}{2}\mathrm{STr}\left[\left(\Gamma_{k}^{(2)}+ \mathcal{R}_{k}\right)^{-1}\,\partial_{t}\mathcal{R}_{k}\right]\,. \tag{18}\]
Here \(t\equiv\ \ln\,(k/k_{0})\) denotes the RG time with \(k_{0}\) being an arbitrary reference scale and \(\Gamma_{k}^{(2)}\) is the second functional derivative of \(\Gamma_{k}\) with respect to the fluctuation fields. The regulator \(\mathcal{R}_{k}(p^{2})\) equips the fluctuations with momenta \(p^{2}\ll k^{2}\) with a \(k\)-dependent mass term and vanishes for momenta \(p^{2}\gg k^{2}\). The latter property ensures that the trace on the right-hand side is free from UV-divergences since its argument vanishes sufficiently fast for high momenta. Both \(\Gamma_{k}^{(2)}\) and \(\mathcal{R}_{k}\) are matrix-valued in field space. The supertrace STr
then includes an integration over loop-momenta as well as sums over all fluctuation fields and internal indices. Moreover it provides a minus-sign to the contribution of the ghost fields.
In practice one obtains non-perturbative, approximate solutions of (10) by projecting the exact equation to a subspace of all admissible action functionals \(\mathcal{O}_{i}\) constructable from a given field content,
\[\Gamma_{k}\simeq\sum_{i}\bar{u}^{i}(k)\,\mathcal{O}_{i}\,. \tag{11}\]
The \(k\)-dependence of \(\Gamma_{k}\) is then captured by the dimensionful couplings \(\bar{u}^{i}(k)\). When analyzing the RG flow it is convenient to consider the dimensionless versions of these couplings constructed with respect to the coarse-graining scale \(k\), \(u^{i}(k)=\bar{u}^{i}(k)k^{-[d_{i}]}\), where \([d_{i}]\) is the mass-dimension of \(\bar{u}^{i}\). Substituting (11) into (10) and matching the coefficients multiplying the functionals \(\mathcal{O}_{i}\) on its left and right-hand side gives the beta functions encoding the flow of the couplings
\[\partial_{t}u^{i}(k)=\beta_{u^{i}}(u^{j}(k))\,. \tag{12}\]
The solutions of this system are called RG trajectories.
The most important properties of the beta functions are their fixed points (\(u^{i}_{*}\)) where
\[\beta_{u^{i}}(u^{j}_{*})=0\,,\qquad\forall\,i\,. \tag{13}\]
Depending on whether the fixed point action corresponds to a free theory or admits interactions, one distinguishes between a Gaussian fixed point (GFP) and a non-Gaussian fixed point (NGFP).
In the vicinity of a fixed point, the properties of the RG flow can be obtained by linearizing the system (12)
\[\partial_{t}u^{i}(k)=\sum_{j}B^{i}{}_{j}(u^{j}-u^{j}_{*})+O((u^{j}-u^{j}_{*})^ {2})\,,\quad B^{i}{}_{j}=\left.\frac{\partial\beta_{u^{i}}}{\partial u^{j}} \right|_{u^{i}=u^{i}_{*}}\,. \tag{14}\]
The stability properties of the flow are captured by the stability coefficients \(\theta_{I}\), defined as minus the eigenvalues of the stability matrix \(B^{i}{}_{j}\). For eigendirections of \(B^{i}{}_{j}\) associated with coefficients with \(\text{Re}\theta_{I}>0\) the flow is dragged into the fixed point as \(k\to\infty\) while eigendirections where \(\text{Re}\theta_{I}<0\) are repulsive in this limit.
The asymptotic safety hypothesis then stipulates that the high-energy completion of gravity is provided by a NGFP [69]. If the underlying NGFP has eigendirections where \(\text{Re}\theta_{I}<0\), the presupposition that the NGFP provides the high-energy completion leads to testable predictions in the following sense: any theory meeting this criterion must be situated within the subspace of RG trajectories (the UV-critical hypersurface of the fixed point) emanated from the fixed point as \(k\) decreases. This induces conditions among the coupling constants which are testable, at least in principle. Recently, it has been argued in [47], that applying this condition to the standard model of particle physics allows to identify the interacting gravity-matter fixed point providing the UV-completion of the theory.
### Introducing the foliation structure
The construction of the effective average action for theories treating the gravitational degrees of freedom at the quantum level makes manifest use of the background field method [30]. The presence of the background structure is essential for dividing the spectrum of fluctuations into long-range and short-range with respect to the coarse-graining scale \(k\). In essence, the use of background fields allows to treat quantum fluctuations in the metric along the lines of matter degrees of freedom quantized within the framework of quantum field theory in a curved spacetime. On top of this conceptual need, the background field formalism also induces auxiliary symmetries in \(\Gamma_{k}\) which constrain the type of interaction monomials which are generated along the RG flow.
In practice, we implement the background field formalism by starting from the ADM-fields \(\chi=(N,N_{i},\sigma_{ij})\) and decomposing them into a background part \(\bar{\chi}=(\bar{N},\bar{N}_{i},\bar{\sigma}_{ij})\) and fluctuations \(\hat{\chi}=(\hat{N},\hat{N}_{i},\hat{\sigma}_{ij})\) via a linear split3
Footnote 3: Since the ADM-decomposition of the metric (7) is non-linear in the ADM-fields, the vertices containing a fixed power of the fluctuation fields capture different contributions than the ones obtained in the covariant computation building on the split \(g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}\).
\[N=\bar{N}+\hat{N}\,,\qquad N_{i}=\bar{N}_{i}+\hat{N}_{i}\,,\qquad\sigma_{ij}= \bar{\sigma}_{ij}+\hat{\sigma}_{ij}. \tag{10}\]
Depending on the values of \(\alpha_{1}\) and \(\alpha_{2}\), the action (15) is invariant either with respect to the full diffeomorphism group or foliation preserving diffeomorphisms. In order to obtain well-defined propagators, the gravitational part of the effective average action must thus be supplemented by a gauge-fixing condition and the corresponding ghost contribution. Thus the ADM-adaptation of \(\Gamma_{k}\) has the general structure
\[\begin{split}\Gamma_{k}=&\bar{\Gamma}_{k}[N,N_{i}, \sigma_{ij}]+\widehat{\Gamma}_{k}[\hat{N},\hat{N}_{i},\hat{\sigma}_{ij};\bar{N},\bar{N}_{i},\bar{\sigma}_{ij}]+\Gamma_{k}^{\rm gf}[\hat{N},\hat{N}_{i},\hat{ \sigma}_{ij};\bar{N},\bar{N}_{i},\bar{\sigma}_{ij}]\\ &+\Gamma_{k}^{\rm ghost}[\hat{N},\hat{N}_{i}\hat{\sigma}_{ij}, \bar{c},\bar{b}^{i},c,b_{i};\bar{N},\bar{N}_{i},\bar{\sigma}_{ij}]\,.\end{split} \tag{11}\]
Here \(\bar{\Gamma}_{k}[N,N_{i},\sigma_{ij}]\) is the "diagonal" part of the action depending on the background and fluctuation fields in the combination (10) only and \(\widehat{\Gamma}_{k}\) encodes the "off-diagonal" contributions and genuinely depends on both arguments. \(\Gamma_{k}^{\rm gf}\) provides the gauge-fixing of the action and is accompanied by the action for the ghost \((c,b^{i})\) and anti-ghost fields \((\bar{c},\bar{b}_{i})\) capturing the contribution of the Faddeev-Popov determinant. Eq. (11) anticipates that all sectors may contain \(k\)-dependent couplings.
Concretely, we follow [22; 70] and work within the class of background gauge-fixings. The main idea underling our choice of gauge fixing is to introduce a contribution bilinear in the fluctuation fields which equips all fields with a relativistic dispersion relation. Restricting the constructions to terms with at most two derivatives, we can parameterize the gauge-fixing functional as
\[\Gamma_{k}^{\rm gf}=\frac{1}{32\pi G_{k}}\int d\tau d^{d}x\bar{N}\sqrt{\bar{ \sigma}}\left[F_{i}\bar{\sigma}^{ij}F_{j}+F^{2}\right]\,. \tag{12}\]
The functionals \(F\) and \(F_{i}\) are linear in the fluctuation fields and implement the gauge-fixing condition. The most general form of \(F\) and \(F_{i}\) is a linear combination of the fluctuation
fields including one temporal or spatial derivative. In a flat background, their generic form is given by
\[\begin{split} F=& c_{1}\,\partial_{\tau}\hat{N}+c_{2}\, \partial^{i}\hat{N}_{i}+c_{3}\,\partial_{\tau}\hat{\sigma},\\ F_{i}=& c_{4}\,\partial_{\tau}\hat{N}_{i}+c_{5}\, \partial_{i}\hat{N}+c_{6}\,\partial_{i}\hat{\sigma}+c_{7}\,\partial^{j}\hat{ \sigma}_{ji}.\end{split} \tag{11}\]
The free coefficients \(c_{1},\cdots,c_{7}\) can be chosen for later convenience. In the sequel, we will fix
\[c_{1}=-\sqrt{2},\ c_{2}=-\sqrt{2},\ c_{3}=\frac{1}{\sqrt{2}},c_{4}=-\sqrt{2}, \ c_{5}=\sqrt{2},\ c_{6}=\frac{1}{\sqrt{2}},\ c_{7}=-\sqrt{2}, \tag{12}\]
which implements the harmonic gauge condition at the level of the ADM-decomposition. The ghost action exponentiating the Faddeev-Popov determinant is obtained in the standard way
\[\Gamma_{k}^{\text{ghost}}=\int d\tau d^{d}x\bar{N}\sqrt{\bar{\sigma}}\left[ \bar{c}\,\frac{\delta F}{\delta\hat{\chi}^{a}}\left(\delta_{c,b_{j}}\chi^{a} \right)+\bar{b}^{i}\,\frac{\delta F_{i}}{\delta\hat{\chi}^{a}}\left(\delta_{c,b_{j}}\chi^{a}\right)\right]\,. \tag{13}\]
Here \((\delta_{c,b_{j}}\chi^{a})\) is the transformation of ADM-field \(\chi^{a}\) introduced in (10), with \(f\) and \(\xi^{i}\) replaced by \(c\) and \(b^{i}\). For the parameters (12), the evaluation of (13) leads to a rather lengthy expression. Its explicit form is given in App. A.3.
The dependence of \(\Gamma_{k}\) on the coarse-graining scale \(k\) is then governed by the Wetterich equation (10). At this point a technical remark about the construction of the regulator function \(\mathcal{R}_{k}(\Box)\) is in order (also see [13; 71] for a more detailed discussion). In practice, the regulator is a non-local function of a differential operator \(\Box\) which is used to discriminate between fluctuations coming with "high-" and "low-"momentum with respect to the coarse graining scale \(k\). Its basic property is that it provides a mass term for low-momentum fluctuation and decays sufficiently fast for the high-momentum modes. In the covariant setting it is natural to use the background Laplacian, \(\Box\equiv-\bar{g}^{\mu\nu}\bar{D}_{\mu}\bar{D}_{\nu}\), potentially supplemented by an endomorphism constructed from the background curvature, in order to "measure" the momentum of a fluctuation [13]. In the foliated setting, we have a natural discrimination between spatial and "time"-derivatives. This opens more options. In particular, one can resort to a regularization procedure where \(\Box\) does not contain derivatives with respect to the "time"-direction. The discrimination between low- and high-momentum fluctuations can then be based on the Laplacian constructed on the spatial slices \(\Sigma_{\tau}\), \(\Box\equiv-\bar{\sigma}^{ij}\bar{D}_{i}\bar{D}_{j}\). This choice still realizes a \(k\)-dependent mass term for the fluctuation fields and suffices to render the trace on the right-hand side of eq. (10) finite. This choice comes with the advantages that it does not induce higher-order time-derivatives in the regularization procedure. Moreover, it orders fluctuations in a positive semi-definite way which can be carried over to Lorentzian signature computations. This is the route taken in the background computations [19; 20; 21; 22; 59]. As a drawback, the regulator introduces a non-covariant element in the construction which sources diffeomorphism-violating effects in the RG flow. Since our present work is limited to the Euclidean signature setting, we follow the construction [24] and adopt a covariant choice for \(\Box\). It turns out that this choice is also technically preferred when carrying out fluctuation computations in the foliated setting.
### Solving the Wetterich equation in the fluctuation approach
Before delving into the actual computation, let us first introduce the conceptual elements underlying fluctuation computations based on the Wetterich equation, following [16]. The general setup will be introduced in Sect. 3.3.1 and we give an instructive example based on the Einstein-Hilbert action in Sect. 3.3.2. Details specific to RG flows on a foliated spacetime are discussed in Sect. 3.3.3.
#### 3.3.1 Fluctuation field computations - the general setup
We start with briefly reviewing the fluctuation approach for the functional renormalization group. Generically, we consider a theory whose field content comprises \(N\) fields collectively denoted by \(\chi\equiv(\chi_{1},\cdots,\chi_{N})\). In the ADM-formalism, \(\chi\) contains the lapse, shift, and spatial metric, as well as the ghost and anti-ghost fields. The background field method decomposes these fields into their background parts \(\bar{\chi}\equiv(\bar{\chi}_{1},\cdots,\bar{\chi}_{N})\) and fluctuations \(\hat{\chi}\equiv(\hat{\chi}_{1},\cdots,\hat{\chi}_{N})\). This decomposition could be implemented through the linear split \(\chi_{a}=\bar{\chi}_{a}+\hat{\chi}_{a}\), \(a=1,\cdots,N\). Typically, the background fields associated with the ghost and anti-ghost fields are taken to be zero.
The idea of the fluctuation approach is to expand the effective average action in powers of the fluctuation fields
\[\Gamma_{k}[\hat{\chi};\bar{\chi}]=\sum_{n=0}^{\infty}P(a_{i})\int_{x}\Gamma^{ (\hat{\chi}a_{1}\cdots\hat{\chi}a_{n})}_{k}[\bar{\chi}]\ \hat{\chi}_{a_{1}}\cdots\hat{\chi}_{a_{n}}\,. \tag{3.12}\]
The dependence of \(\Gamma_{k}\) on the coarse-graining scale is carried by the expansion coefficients \(\Gamma^{(\hat{\chi}a_{1}\cdots\hat{\chi}a_{n})}_{k}[\bar{\chi}]\). These depend on background quantities only and may contain covariant derivatives acting on the fluctuation fields. The \(P(a_{i})\) denotes a combinatorial factor ensuring that
\[\Gamma^{(\hat{\chi}a_{1}\cdots\hat{\chi}a_{n})}_{k}[\bar{\chi}]=\left.\frac{ \delta^{m}}{\delta\hat{\chi}a_{1}\cdots\delta\hat{\chi}a_{m}}\Gamma_{k}[\hat{ \chi};\bar{\chi}]\right|_{\bar{\chi}=0}\,, \tag{3.13}\]
and a sum over the indices \(a_{i}\) is implied. The expansion disentangles the contributions from the background and fluctuation fields. In explicit computations, it is useful to extract the wave-function renormalization factors of the fluctuation fields from the vertices,
\[\Gamma^{(\hat{\chi}a_{1}\cdots\hat{\chi}a_{n})}_{k}[\bar{\chi}]=\left(\prod_{ i=1}^{n}Z^{\frac{1}{2}}_{\hat{\chi}a_{i}}\right)\ \bar{\Gamma}^{(\hat{\chi}a_{1}\cdots\hat{\chi}a_{n})}_{k}[\bar{\chi}]\,. \tag{3.14}\]
In order to keep our notation light, we will keep these factors within \(\Gamma^{(\hat{\chi}a_{1}\cdots\hat{\chi}a_{n})}_{k}[\bar{\chi}]\).
At this stage, one can introduce a basis on the space of tensor structures (\(\mathcal{T}^{(\hat{\chi}a_{1}\cdots\hat{\chi}a_{n})}\)) contracting \(n\) fluctuation fields. In general, these basis elements carry the dependence of the vertices on the internal indices of the fields. The expansion coefficients \(\Gamma^{(\hat{\chi}a_{1}\cdots\hat{\chi}a_{n})}_{k}[\bar{\chi}]\) can then be expanded in this basis
\[\Gamma^{(\phi_{a_{1}}\cdots\phi_{a_{n}})}_{k}[\bar{\chi}]=\sum_{j}\bar{u}_{n,j }(k)\,\mathcal{T}^{(\phi_{a_{1}}\cdots\phi_{a_{n}})}_{j}[\bar{\chi}]. \tag{3.15}\]
The expansion coefficients \(\bar{u}_{n,j}(k)\) are the dimensionful coupling constants of the theory and depend on the coarse-graining scale \(k\).
The dependence of the couplings \(\bar{u}_{n,j}(k)\) on \(k\) can then be obtained by substituting the expansion (3.12) into the Wetterich equation (3.1) and comparing the coefficients multiplying a given tensor structure on its left- and right-hand side. The projection onto the tensor structures is conveniently performed by taking functional derivatives of the initial equation with respect to the fluctuation fields and their contracted (generalized) momenta. Making use of (3.13), this leads to equations of the form
\[\partial_{t}\Gamma_{k}^{(\hat{X}_{a_{1}}\cdots\hat{X}_{a_{n}})}[\bar{\chi}]= \frac{1}{2}\left.\frac{\delta^{n}}{\delta\hat{\chi}_{a_{1}}\cdots\delta\hat{ \chi}_{a_{n}}}\text{Str}\left[\left(\Gamma_{k}^{(2)}[\hat{\chi};\bar{\chi}]+ \mathcal{R}_{k}[\bar{\chi}]\right)^{-1}\partial_{t}\mathcal{R}_{k}[\bar{\chi}] \right]\right|_{\hat{\chi}=0}\,. \tag{3.16}\]
For the one-point correlation functions this general expression implies
\[\partial_{t}\Gamma_{k}^{(\hat{X}_{i})} = -\frac{1}{2}\text{STr}\left.\left[\left(\Gamma_{k}^{(\hat{X}_{a} \hat{X}_{b})}+\mathcal{R}_{k}\right)^{-1}\Gamma_{k}^{(\hat{X}_{i}\hat{X}_{b} \hat{X}_{c})}\left(\Gamma_{k}^{(\hat{X}_{c}\hat{X}_{d})}+\mathcal{R}_{k} \right)^{-1}\partial_{t}\mathcal{R}_{k}^{(\hat{X}_{d}\hat{X}_{a})}\right] \right|_{\hat{\chi}=0}\,. \tag{3.17}\]
while the flow of a two-point function has the general form
\[\begin{split}\partial_{t}\Gamma_{k}^{(2)}=&\,\text{ STr}\left[\left(\Gamma_{k}^{(2)}+\mathcal{R}_{k}\right)^{-1}\Gamma_{k}^{(3)} \left(\Gamma_{k}^{(2)}+\mathcal{R}_{k}\right)^{-1}\Gamma_{k}^{(3)}\left( \Gamma_{k}^{(2)}+\mathcal{R}_{k}\right)^{-1}\partial_{t}\mathcal{R}_{k} \right]\\ &-\frac{1}{2}\text{STr}\left[\left(\Gamma_{k}^{(2)}+\mathcal{R}_{ k}\right)^{-1}\Gamma_{k}^{(4)}\left(\Gamma_{k}^{(2)}+\mathcal{R}_{k}\right)^{-1} \partial_{t}\mathcal{R}_{k}\right]\,.\end{split} \tag{3.18}\]
Thus, determining the \(k\)-dependence of the expansion coefficients appearing at the \(n\)th order in the fluctuation fields requires knowledge about the coefficients at order \(n+1\) and \(n+2\). Any approximation of the exact solution has to supply this information in order to close the hierarchy at any finite order \(n\). Typically, this closure is provided by identifying couplings \(\bar{u}_{n,j}(k)\) appearing in the interaction vertices of different order.
#### 3.3.2 Vertex structures in the covariant setting
Before discussing the intricacies related to fluctuation field computations on a foliated spacetime, it is useful to illustrate the working of the approach in the covariant setting. Explicit computations based on this setup have been reported in [72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84] and reviewed in [85]. We focus on the gravitational field \(g_{\mu\nu}\) which is decomposed into background and fluctuation fields according to
\[g_{\mu\nu}=\bar{g}_{\mu\nu}+\hat{g}_{\mu\nu}\,. \tag{3.19}\]
Most of the computations utilize a flat, Euclidean background where \(\bar{g}_{\mu\nu}=\delta_{\mu\nu}\). This has the advantage that standard momentum-space techniques are available. Derivatives within the expansion coefficients can then be replaced by the momenta of the fluctuation fields. We will adopt this choice in the sequel.
Subsequently, we consider the Einstein-Hilbert action supplemented by the harmonic gauge condition
\[\Gamma_{k}[\hat{g};\bar{g}]=-\frac{1}{16\pi G_{k}}\int d^{d}x\sqrt{g}R+\frac{1 }{32\pi G_{k}}\int d^{d}x\sqrt{\bar{g}}\,\bar{g}^{\mu\nu}F_{\mu}F_{\nu}\,, \tag{3.20}\]
where
\[F_{\mu}=\left[\bar{D}^{\alpha}\delta^{\beta}_{\mu}-\frac{1}{2}\,\bar{g}^{\alpha \beta}\bar{D}_{\mu}\right]\hat{g}_{\alpha\beta}\,. \tag{3.21}\]
Expanding (3.20) in a flat background, the first non-trivial term appears at second order in the fluctuation fields
\[\Gamma^{\rm quad}_{k}[\hat{g};\bar{g}]=\frac{1}{128\pi G_{k}}\int\frac{d^{d}p} {(2\pi)^{d}}\,p^{2}\,\left[\delta^{\mu\alpha}\delta^{\nu\beta}+\delta^{\mu \beta}\delta^{\nu\alpha}-\delta^{\mu\nu}\delta^{\alpha\beta}\right]\hat{g}_{ \mu\nu}(p)\hat{g}_{\alpha\beta}(-p)\,. \tag{3.22}\]
Comparing this result with (3.12) gives the expansion coefficient
\[\Gamma^{(\hat{g}\hat{g})}_{k}=\frac{1}{64\pi G_{k}}\,p^{2}\,\left[\delta^{\mu \alpha}\delta^{\nu\beta}+\delta^{\mu\beta}\delta^{\nu\alpha}-\delta^{\mu\nu} \delta^{\alpha\beta}\right]\,. \tag{3.23}\]
The tensor structure appearing in brackets can be understood as a linear combination of the two orthogonal tensor structures
\[\mathcal{T}^{(\hat{g}\hat{g})}_{1}=\frac{1}{2}\left(\delta^{\mu}_{\alpha} \delta^{\nu}_{\beta}+\delta^{\nu}_{\alpha}\delta^{\mu}_{\beta}\right)-\frac{1} {d}\delta^{\mu\nu}\delta_{\alpha\beta}\,,\qquad\mathcal{T}^{(\hat{g}\hat{g})}_ {2}=\frac{1}{d}\delta^{\mu\nu}\delta_{\alpha\beta}\,, \tag{3.24}\]
which can be built from a flat metric without resorting to the momentum of the field. The two \(\mathcal{T}\)'s project a symmetric, rank-2 tensor onto its traceless and trace part, respectively. Eq. (3.23) can then be recast in the form (3.15),
\[\Gamma^{(\hat{g}\hat{g})}_{k}=\bar{u}_{2}(p^{2};k)\left[\mathcal{T}^{(\hat{g} \hat{g})}_{1}+\frac{d-2}{2}\,\mathcal{T}^{(\hat{g}\hat{g})}_{2}\right] \tag{3.25}\]
where
\[\bar{u}_{2}(p^{2};k)\equiv\,\frac{1}{32\pi G_{k}}\,p^{2}\,. \tag{3.26}\]
At this stage, the following remarks are in order. Eq. (3.26) indicates that _the avatar of Newton's coupling_ associated with the graviton two-point function appears at order \(p^{2}\) in the momentum of the fluctuation field. While this coupling is derived from the gauge-fixed Einstein-Hilbert action (3.20), the expansion in terms of tensor structures introduced in eqs. (3.12) and (3.15) indicates that this coupling _is different from_ Newton's coupling which arises at zeroth order in the fluctuation field by evaluating (3.20) at \(\hat{g}=0\)[86; 87].
Secondly, our definition of \(\bar{u}_{2}(p^{2};k)\) promotes Newton's coupling to a form factor depending on the squared momentum \(p^{2}\) of the fluctuation fields. The form factor then captures the non-trivial momentum dependence of the graviton two-point function.4 The Wetterich equation allows to compute the dependence of \(\bar{u}_{2}\) on \(p^{2}\). In the covariant approach this "reconstruction of the graviton propagator" has been carried out in [83]. In [43] it was reported that the two-point function interpolates between \(\bar{u}_{2}\propto\text{const}\) and \(\bar{u}_{2}\propto k^{\eta_{h}}\), \(\eta_{h}=0.96\), for small and large momenta, respectively.
Footnote 4: Structurally, this is very similar to the form factors appearing in the curvature expansion of the effective average action [88; 89; 90; 91] reviewed in [46; 92].
#### 3.3.3 Vertex structures on a foliated spacetime
In comparison to the covariant setting, the foliation present in the ADM-formalism provides an additional structure which allows to distinguish the spatial and time-components of a covariant object. In particular, the squared momentum can be written as
\[p^{2}=p_{0}^{2}+\vec{p}^{\,2}\,. \tag{3.27}\]
It is now instructive to insert this relation into (3.26)
\[\bar{u}_{2}(p_{0}^{2},\vec{p}^{\,2};k)=\,\frac{1}{32\pi G_{k}}\,\left(p_{0}^{2} +\vec{p}^{\,2}\right)\,. \tag{3.28}\]
Generically, the coefficients in front of the two terms on the right-hand side can be different. This implies that the presence of the foliation allows to avatars of Newton's coupling associated with the spatial and "time"-like contributions to the squared momentum
\[\bar{u}_{2}=\frac{1}{32\pi G_{k}}p^{2}\qquad\begin{array}{ccc}&\nearrow& \bar{u}_{2a}=\frac{1}{32\pi G_{k}}\,p_{0}^{2}\,,\\ &\searrow&\bar{u}_{2b}=\frac{1}{32\pi G_{k}}\,\vec{p}^{\,2}\,.\end{array} \tag{3.29}\]
At the level of the action (2.15), these avatars are implemented through the couplings \(\alpha_{1}\) and \(\alpha_{2}\). The symmetries of the covariant setting fix \(\alpha_{1}=\alpha_{2}=1\), indicating that the avatars generated in the split (3.29) should be identified.
The Wetterich equation adapted to the ADM-formalism incorporates contributions which break Lorentz covariance explicitly. Hence the flow of the avatars generated in the split (3.29) is expected to be different. Conceptually, reading off the flow of a coupling from the terms containing \(p_{0}^{2}\) or \(\vec{p}^{\,2}\), should be considered as computing the beta functions of _two different couplings_. The conceptual consequences of this setting are illustrated in Fig. 1. The figure depicts a generic RG fixed point and its projection to lower-dimensional subspaces spanned by a set of dimensionless couplings \(u_{i}\) (black dots). The fixed point should be visible in all projections (straight lines). The stability coefficients accessed in the projections capture different properties of the RG flow in the vicinity of the fixed point though. Thus there is no a priori reason that their values found in different projections actually agree. This applies specifically for the two projections described in eq. (3.29). Convergence of stability coefficients requires studying extended projections which include the same subsystem. Thus, it is meaningful to compare the stability coefficients obtained in the three-dimensional and each of the two-dimensional subsystems, while a comparison among the different two-dimensional systems may not show identical stability properties.
## 4 RG flows at second order in the derivative expansion
Upon introducing the general framework underlying fluctuation field computations in the ADM-formalism, we work out a specific example and compute the scale-dependence of the graviton two-point function resulting from the two-derivative action (2.15). We start by giving the gauge-fixed propagators for the fluctuation fields in Sect. 4.1. The projection
of the flow on these tensor structures and the resulting beta functions encoding the \(k\)-dependence of the couplings are given in Sect. 4.2. Many technical details are relegated to App. A. Throughout the computation we work in a four-dimensional flat Euclidean background and we manifestly make use of momentum-space methods in order to facilitate the computation.
### Gauge-fixed two-point functions
We start from the action (15) which contains all terms constructed from at most two derivatives. We then adopt this action as the gravitational part of the effective average action,
\[\Gamma^{\text{grav}}_{k}[N,N_{i},\sigma_{ij}]=\frac{1}{16\pi G_{k}}\int d\tau d ^{3}y\,N\sqrt{\sigma}\left(\alpha_{1}K^{ij}K_{ij}-\alpha_{2}K^{2}-{}^{(d)}R+2 \Lambda_{k}\right). \tag{17}\]
Here the couplings \(\alpha_{1},\alpha_{2},G_{k}\), and \(\Lambda_{k}\) have been promoted to depend on the coarse-graining scale \(k\). Background computations along the lines of [19; 20; 22] start from a similar ansatz and subsequently evaluate the Wetterich equation (11) at zeroth order in the fluctuation field.
In our fluctuation computation, eq. (17) serves as the generating functional for the tensor structures onto which we project the flow equation. In order to obtain these struc
Figure 1: Projection of a three-dimensional approximation of theory space to two-dimensional subspaces. The NGFP and its projections are marked by black dots. The arrows indicate eigendirections of the stability matrix, describing the RG flow in the vicinity of the fixed point. The value of the critical exponent is encoded in the length of the arrow. Notably, projections to different subspaces will see different projections of these arrows, illustrating that the critical exponents do not lend themselves to a meaningful comparison when considering subspaces spanned by different couplings a priori.
tures, we first supplement (4.1) by the gauge-fixing and ghost terms given in eqs. (3.8) and (3.11) (also see App. A.3 for explicit expressions). We then substitute (3.6) and expand the resulting functional in powers of the fluctuation fields in the flat background. In order to bring the Hessian \(\Gamma_{k}^{(2)}\) into (almost) diagonal form, we implement a transverse-traceless decomposition of the fluctuation fields [93]
\[\begin{split}\hat{\sigma}_{ij}=& h_{ij}+\partial_{i}v_{j}+ \partial_{j}v_{i}+\partial_{i}\partial_{j}E-\frac{1}{3}\delta_{ij}\partial^{2 }E+\frac{1}{3}\delta_{ij}\Psi,\\ \hat{N}_{i}=& u_{i}+\partial_{i}B\,,\end{split} \tag{4.2}\]
where the component fields are subject to the constraints
\[\partial^{i}h_{ij}=0\,,\quad\delta^{ij}h_{ij}=0\,,\quad\partial^{i}v_{i}=0\,,\quad\partial^{i}u_{i}=0\,,\quad\Psi=\delta^{ij}\hat{\sigma}_{ij}\,. \tag{4.3}\]
These constraints ensure that the two-point functions depend on contracted spatial derivatives \(\partial^{2}\equiv\partial_{i}\partial^{i}\) only [31; 94]. The field redefinition (4.2) has a non-trivial Jacobian. This extra term is conveniently accounted for by rescaling the fluctuations according to
\[v_{i}\to \frac{1}{\sqrt{-\partial^{2}}}v_{i}\,,\quad E\to\frac{1}{(- \partial^{2})}E\,,\quad B\to\frac{1}{\sqrt{-\partial^{2}}}B\,. \tag{4.4}\]
The projection of the fluctuation fields \(\hat{\sigma}_{ij},\hat{N}_{i}\) onto the subspaces spanned by the component fields is readily achieved through projection operators. Introducing the unit on the space of symmetric \(3\times 3\)-matrices
\[\mathbb{1}^{ij}{}_{kl}\equiv\frac{1}{2}\left(\delta^{i}_{k}\delta^{j}_{l}+ \delta^{j}_{k}\delta^{i}_{l}\right)\,, \tag{4.5}\]
the subspaces for the component fields in (4.2) are spanned by
\[\begin{split}\Pi_{\psi}{}^{ij}{}_{kl}&=\frac{1}{ 3}\sigma^{ij}\sigma_{kl}\,,\\ \Pi_{E}{}^{ij}{}_{kl}&=(\partial^{i}\partial^{j}- \frac{1}{3}\delta^{ij}\partial^{2})(\frac{2}{3}\partial^{4})^{-1}(\partial_{k }\partial_{l}-\frac{1}{3}\delta_{kl}\partial^{2})\,,\\ \Pi_{\psi}{}^{ij}{}_{kl}&=2\Big{(}\delta^{(j}_{(l} \,\partial^{i)}\,\partial^{-2}\,\partial_{k)}\Big{)}-2\partial^{i}\partial^{j }\partial^{-4}\partial_{k}\partial_{l},\\ \Pi_{\mu}{}^{ij}{}_{kl}&=\mathbb{1}^{ij}{}_{kl}- \Pi_{\psi}{}^{ij}{}_{kl}-\Pi_{E}{}^{ij}{}_{kl}-\Pi_{\psi}{}^{ij}{}_{kl}\,.\end{split} \tag{4.6}\]
Similarly, we also give the projection tensors for the space of vector fields
\[\Pi_{B}{}^{i}{}_{j}\equiv\partial^{i}\,\partial^{-2}\,\partial_{j}\,,\qquad\Pi _{u}{}^{i}{}_{j}\equiv\delta^{i}_{j}-\partial^{i}\,\partial^{-2}\,\partial_{j}\,. \tag{4.7}\]
At this point we have all the ingredients to write down the matrix elements of the Hessian \(\Gamma_{k}^{(2)}\). Including the contribution of the gauge-fixing terms, these are tabulated in Table 1. At this point a closer inspection of these expressions in order. All two-point functions are proportional to the projection operators restricting the fluctuation fields to the corresponding transverse (and traceless) subspaces. For scalars, this projection is trivial and the corresponding projectors are omitted. For \(\alpha_{1}=\alpha_{2}=1\), the gauge-fixed (inverse) propagators all come with a _relativistic dispersion relation_. In this case, the
spatial and time-components of the momentum combine into a relativistic four-momentum \(p_{0}^{2}+\vec{p}^{\,2}=p^{2}\). This singles out the gauge-fixing adopted in eq. (3.10) [21]. The cosmological constant then plays the role of a mass term in the two-point function.
At this point we have all the ingredients to specify the components of the regulator \(\mathcal{R}_{k}(\Box)\). Following the nomenclature of [13], we implement a Type I regularization, which fixes \(\mathcal{R}_{k}\) through the substitution rule
\[\Box\to P_{k}(\Box)\equiv\Box+R_{k}(\Box)\,, \tag{4.8}\]
where \(R_{k}(\Box)\) is a scalar cutoff function. Throughout this work, we chose the cutoff function to be of Litim-type [95; 96],
\[R_{k}(\Box)=(k^{2}-\Box)\Theta(k^{2}-\Box)\,, \tag{4.9}\]
where \(\Theta(x)\) is the Heaviside step function. Inspecting Table 1, one identifies various candidates for the coarse-graining operator \(\Box\). Adopting a momentum-space representation, these are within the class
\[\Box_{\alpha}=\alpha\,p_{0}^{2}+\vec{p}^{\,2}\,, \tag{4.10}\]
where \(\alpha\) depends on \(k\). Based on the kinetic terms for the various component fields, we would then encounter as many different forms of \(\Box_{\alpha}\) as there are distinct dispersion relations.
\begin{table}
\begin{tabular}{l l} \hline \hline fields \((i,j)\) & \(\Gamma_{k}^{\rm grav\,(ij)}+\frac{1}{2}\Gamma_{k}^{\rm f\,(ij)}\) \\ \hline \hline \(h_{ij}h^{kl}\) & \(\frac{1}{32\pi G_{k}}\left((\alpha_{1}p_{0}^{2}+\vec{p}^{\,2})-2\Lambda_{k} \right)\,\Pi_{h}{}^{ij}_{kl}\) \\ \hline \(v_{i}v^{j}\) & \(\frac{1}{16\pi G_{k}}\left((\alpha_{1}p_{0}^{2}+\vec{p}^{\,2})-2\Lambda_{k} \right)\Pi_{u}{}^{i}_{j}\) \\ \(EE\) & \(\frac{1}{48\pi G_{k}}\left((\alpha_{1}p_{0}^{2}+\vec{p}^{\,2})-2\Lambda_{k}\right)\) \\ \(\Psi\Psi\) & \(-\frac{1}{192\pi G_{k}}\left((6\alpha_{2}-2\alpha_{1}-3)p_{0}^{2}+\vec{p}^{\,2 }-2\Lambda_{k}\right)\) \\ \(\hat{N}\hat{N}\) & \(\frac{1}{16\pi G_{k}}(p_{0}^{2}+\vec{p}^{\,2})\) \\ \(\Psi\hat{N}\) & \(-\frac{1}{16\pi G_{k}}\left(p_{0}^{2}+\vec{p}^{\,2}-2\Lambda_{k}\right)\) \\ \(u^{i}u_{j}\) & \(\frac{1}{16\pi G_{k}}(p_{0}^{2}+\alpha_{1}\vec{p}^{\,2})\Pi_{u}{}^{i}_{j}\) \\ \(BB\) & \(\frac{1}{16\pi G_{k}}\left(p_{0}^{2}+(1+2\alpha_{1}-2\alpha_{2})\,\vec{p}^{\,2 }\right)\) \\ \hline \(\bar{c}c\) & \(\sqrt{2}\,(p_{0}^{2}+\vec{p}^{\,2})\) \\ \(\bar{b}^{i}b_{i}\) & \(\sqrt{2}\,(p_{0}^{2}+\vec{p}^{\,2})\,\Pi_{uj}{}^{i}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Matrix elements of the Hessian \(\Gamma_{k}^{(2)}\). The first line is singled out since this is the tensor structure onto which we are going to project the flow equation. The second block gives the propagators associated with the metric degrees of freedom while the third block captures the information about the propagators in the ghost sector.
The \(\alpha\)-dependence of \(\Box\) introduces a significant number of technical complications in the computation. This can be understood from noticing that each operator \(\Box_{\alpha}\) leads to a different stepfunction \(\Theta(k^{2}-\Box_{\alpha})\). Loop integrals then involve different step functions which results in quite complicated domains when integrating over loop momenta [97]. We avoid this complication by employing a modified (but perfectly admissible) replacement rule for the regularization
\[\Box_{\alpha}\to\Box_{\alpha}+R_{k}(\Box_{\alpha=1})\,. \tag{4.11}\]
This results in the matrix elements for \(\mathcal{R}_{k}\)
\[\mathcal{R}_{k}^{hh}= \frac{1}{32\pi G_{k}}R_{k}(p^{2})\,\Pi_{hk}^{\;ij}\,,\qquad \mathcal{R}_{k}^{uu}=\frac{\alpha_{1}}{32\pi G_{k}}R_{k}(p^{2})\,\Pi_{uj}^{\;i }\,, \tag{4.12}\]
and similar for the other matrix entries. We will adopt the choice (4.11) in the sequel.5
Footnote 5: In principle, one can chose other regularization procedures as well. These differ by the anomalous dimension of the coupling appearing in \(\partial_{k}\mathcal{R}_{k}\). For instance, one could adopt
\[\Box_{\alpha}\to P_{k}(\Box_{\alpha=1})+(\Box_{\alpha}-\Box_{\alpha=1}). \tag{4.13}\]
In comparison to (4.11) the two choices differ by terms proportional to the beta functions associated with \(\alpha_{1}\) and \(\alpha_{2}\). These extra contribution vanish on a fixed point by definition, so that eq. (4.13) and (4.11) lead to the same fixed point structure.
The different dispersion relations listed in Table 1 then lead to the same profile function \(R_{k}\). This function is just a function of the four-momentum \(p^{2}\) and does not include \(k\)-dependent couplings. In this way, the argument of \(R_{k}\) is covariant and preserves Lorentz symmetry. In this way, the choice of regulator minimizes the Lorentz-symmetry violating terms generated by the regularization procedure. Still the non-linearity of the field decomposition (3.6) makes the regularization procedure non-covariant.
### Projecting and closing the flow equation
At this point, we have all the ingredients for specifying the projection of the Wetterich equation underlying our computation. Specifically, we project the flow onto the graviton two-point function singled out in the first line of Table 1. Substituting this expression into the left-hand side of the flow equation gives
\[32\pi\,\partial_{t}\Gamma_{k}^{(hh)}(p_{0}^{2},\vec{p}\,^{2})=\partial_{t} \left(\frac{\alpha_{1}}{G_{k}}\right)\,p_{0}^{2}\,\Pi_{h}+\partial_{t}\left( \frac{1}{G_{k}}\right)\,\vec{p}\,^{2}\,\Pi_{h}-2\partial_{t}\left(\frac{ \Lambda_{k}}{G_{k}}\right)\Pi_{h}\,. \tag{4.14}\]
This indicates that the projection can track up to three scale-dependent couplings \(\alpha_{1}\), \(\Lambda_{k}\), and \(G_{k}\).6
Footnote 6: In principle, the formalism can also be used to track the scale-dependence of \(\alpha_{2}\). This requires considering one additional two-point function, e.g., \(\Gamma_{k}^{(\Psi\Psi)}(p_{0}^{2},\vec{p}\,^{2})\) which contains the corresponding coupling. While this computation is conceptually straightforward, it essentially doubles the complexity of the underlying algebra. For this reason, we leave this extension for future work. This projection then identifies the tensor structures which need to be extracted from the trace on the right-hand side of the equation. We define
\[\begin{split}\frac{1}{2}\text{STr}\left[\cdots\right]\simeq& \,T_{p_{0}}(G_{k},\alpha_{1},\alpha_{2},\Lambda_{k},\eta_{N}, \partial_{t}\alpha_{1})\,p_{0}^{2}\,\Pi_{h}\\ &+T_{\vec{p}}(G_{k},\alpha_{1},\alpha_{2},\Lambda_{k},\eta_{N}, \partial_{t}\alpha_{1})\,\vec{p}\,^{2}\,\Pi_{h}\\ &-T_{0}(G_{k},\alpha_{1},\alpha_{2},\Lambda_{k},\eta_{N}, \partial_{t}\alpha_{1})\,\Pi_{h},\end{split} \tag{4.15}\]
where the \(\simeq\) indicates that we neglect terms outside of the projection subspace and the sign in front of \(T_{0}\) has been adjusted for later convenience. The arguments of the traces highlight the dependence of the contributions on the couplings and we defined the anomalous dimension of \(G_{k}\) as \(\eta_{N}=\partial_{t}G_{k}/G_{k}\). Equating the coefficients multiplying the independent tensor structures in eqs. (4.14) and (4.15) leads to the following system of first order differential equations
\[\begin{split}\frac{1}{32\pi}\partial_{t}\left(\frac{\alpha_{1}}{G _{k}}\right)&=T_{p_{0}}(G_{k},\alpha_{1},\alpha_{2},\Lambda_{k}, \eta_{N},\partial_{t}\alpha_{1})\,,\\ \frac{1}{32\pi}\partial_{t}\left(\frac{1}{G_{k}}\right)& =T_{\vec{p}}(G_{k},\alpha_{1},\alpha_{2},\Lambda_{k},\eta_{N}, \partial_{t}\alpha_{1})\,,\\ \frac{1}{16\pi}\partial_{t}\left(\frac{\Lambda_{k}}{G_{k}}\right)& =T_{0}(G_{k},\alpha_{1},\alpha_{2},\Lambda_{k},\eta_{N},\partial _{t}\alpha_{1})\,.\end{split} \tag{4.16}\]
As indicated above, the projection does not capture information about the running of \(\alpha_{2}\). So this coupling has to be treated as a dimensionless parameter.
At this point the computation has reduced to determining the coefficients \(T\). Inspecting eq. (3.18) shows that this requires information about the (\(k\)-dependent) 3- and 4-point vertices of the theory. Generically, this information is not available exactly, and one has to adopt an approximation in order to close the system of equations. We then generate the relevant 3- and 4-point vertices by taking additional functional derivatives of \(\Gamma_{k}^{\rm grav}+\Gamma_{k}^{\rm ghost}\) with respect to the collection of fluctuation fields \(\hat{\chi}\) and subsequently specifying to the flat background. Note that the projection to (4.14) entails that is is not necessary to consider all vertices: only 3-point vertices with at least on leg being \(h_{ij}\) and 4-point vertices with two legs associated with the field \(h_{ij}\) contribute. Our actual computation of the coefficients \(T\) retains the full momentum dependence of these vertices on the loop-momentum. While the actual computation is conceptually straightforward, it is technically involved and we collect the details in App. B.
The \(k\)-dependence of the couplings is then conveniently encoded in the beta functions for the dimensionless couplings \((g_{k},\lambda_{k},\alpha_{1})\)
\[\partial_{t}g_{k}=\beta_{g_{k}}(g_{k},\lambda_{k},\alpha_{1};\alpha_{2})\,, \quad\partial_{t}\lambda_{k}=\beta_{\lambda_{k}}(g_{k},\lambda_{k},\alpha_{1} ;\alpha_{2})\,,\quad\partial_{t}\alpha_{1}=\beta_{\alpha_{1}}(g_{k},\lambda_{k },\alpha_{1};\alpha_{2})\,, \tag{4.17}\]
with
\[g_{k}\equiv G_{k}\,k^{2}\,,\quad\lambda_{k}\equiv\Lambda_{k}\,k^{-2}\,. \tag{4.18}\]
The parametric dependence on \(\alpha_{2}\) is highlighted by the semicolon.
The explicit expressions for the beta functions (4.17) are rather lengthy and can be provided in form of an auxiliary Mathematica notebook upon request. Here we limit ourselves to giving the results for the foliated Einstein-Hilbert action (\(\alpha_{1}=\alpha_{2}=1\)) only. In this case we find
\[\begin{split}\beta_{g}=&\,(2+\eta_{N})\,g\,,\\ \beta_{\lambda}=&\,(\eta_{N}-2)\lambda+\frac{g}{\pi} \left\{\frac{p_{\lambda}^{1}(\lambda)+\eta_{N}\,\tilde{p}_{\lambda}^{1}(\lambda )}{24\left(1-2\lambda\right)^{2}\left(2-3\lambda\right)^{2}}+\frac{p_{\lambda}^ {2}(\lambda)+\eta_{N}\,\tilde{p}_{\lambda}^{2}(\lambda)}{7200\left(1-2\lambda \right)^{3}\left(2-3\lambda\right)^{3}}\right\}\,.\end{split} \tag{4.19}\]
The polynomials are tabulated in the first block of Table 2. The anomalous dimension takes the form
\[\eta_{N}=\frac{gB_{1}(\lambda)}{1-gB_{2}(\lambda)}\,. \tag{4.20}\]
In the foliated Einstein-Hilbert case, eq. (4.14) indicates that the functions \(B_{1}(\lambda)\) and \(B_{2}(\lambda)\) can either be obtained from the coefficient of the \(p_{0}^{2}\)-term (\(p_{0}\)-projection) or the \(\vec{p}^{2}\)-term (\(\vec{p}\)-projection). The \(p_{0}\)-projection yields
\[\begin{split} B_{1}^{p_{0}}(\lambda)=&\frac{48 \lambda^{2}-60\lambda+19}{2\pi(1-2\lambda)^{2}(2-3\lambda)^{2}}-\frac{p_{ \lambda}^{3}(\lambda)}{360\pi(1-2\lambda)^{4}(2-3\lambda)^{4}}\,,\\ B_{2}^{p_{0}}(\lambda)=&-\frac{48\lambda^{2}-60 \lambda+19}{12\pi(1-2\lambda)^{2}(2-3\lambda)^{2}}+\frac{\tilde{p}_{\lambda}^ {3}(\lambda)}{720\pi(1-2\lambda)^{3}(2-3\lambda)^{2}}\,.\end{split} \tag{4.21}\]
the projection on the spatial momentum yields an extra contribution to \(B_{1}(\lambda)\) and \(B_{2}(\lambda)\):
\[\begin{split} B_{1}^{\vec{p}}(\lambda)=& B_{1}^{p_{0}}( \lambda)-\frac{p_{\lambda}^{4}(\lambda)}{315\pi(2-3\lambda)^{2}(1-2\lambda)^ {4}}\,,\\ B_{2}^{\vec{p}}(\lambda)=& B_{2}^{p_{0}}(\lambda)- \frac{\tilde{p}_{\lambda}^{4}(\lambda)}{630\pi(2-3\lambda)^{2}(1-2\lambda)^ {3}}\,.\end{split} \tag{4.22}\]
The functions \(B_{1}\) and \(B_{2}\) obtained from the two projections are compared in Fig. 2. While the results differ quantitatively, the functions agree on a qualitative level. Together with the generalization of our results including the beta function for \(\alpha_{1}\) and the dimensionless parameter \(\alpha_{2}\) provided in the supplementary material, eqs. (4.19)-(4.22) constitute the main result of this section.
We stress that the anomalous dimension \(\eta_{N}\), eq. (4.20), is conceptually different from the "dynamical anomalous dimension" \(\eta_{h}\) considered in the covariant approach. Determining the latter requires information about the scale-dependence of the graviton 3-point
\begin{table}
\begin{tabular}{l l} \hline \hline \(p_{\lambda}^{1}(\lambda)\) & \(12\left(18\lambda^{4}-116\lambda^{3}+142\lambda^{2}-61\lambda+8\right)\) \\ \(\tilde{p}_{\lambda}^{1}(\lambda)\) & \(-27\lambda^{4}+211\lambda^{3}-259\lambda^{2}+106\lambda-12\) \\ \(p_{\lambda}^{2}(\lambda)\) & \(10\left(68040\lambda^{6}+439308\lambda^{5}-1156914\lambda^{4}+1012717\lambda ^{3}-419962\lambda^{2}+94584\lambda-11760\right)\) \\ \(\tilde{p}_{\lambda}^{2}(\lambda)\) & \(87480\lambda^{6}-1420428\lambda^{5}+2828930\lambda^{4}-2242429\lambda^{3}+851 098\lambda^{2}-166200\lambda+17520\) \\ \hline \(p_{\lambda}^{3}(\lambda)\) & \(408240\lambda^{8}-2262816\lambda^{7}+6128784\lambda^{6}-10347048\lambda^{5}\) \\ & \(+10788945\lambda^{4}-6629544\lambda^{3}+2229588\lambda^{2}-344176\lambda+12704\) \\ \(\tilde{p}_{\lambda}^{3}(\lambda)\) & \(4968\lambda^{4}-67980\lambda^{3}+94335\lambda^{2}-40380\lambda+4148\) \\ \hline \(p_{\lambda}^{4}(\lambda)\) & \(88248\lambda^{5}-180807\lambda^{4}+125287\lambda^{3}+3744\lambda^{2}-30324 \lambda+6376\) \\ \(\tilde{p}_{\lambda}^{4}(\lambda)\) & \(20055\lambda^{4}-32850\lambda^{3}+19676\lambda^{2}-771\lambda-2158\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Polynomials appearing in the beta functions (4.19) and the anomalous dimension (4.20).
vertex in order to disentangle the running of the graviton self-interaction from the wave function renormalization of the fluctuation field. This information is not contained in the present projection though. As a consequence, eq. (4.19) dictates that \(\eta_{N}(g_{*},\lambda_{*})=-2\) at a NGFP.
We close the section with a conceptual remark. The differences appearing in the \(p_{0}\)- and \(\vec{p}\)-projection can be traced back to the 3- and 4-point vertices which depend on \(p_{0}\) and \(\vec{p}\) in a non-relativistic way. A prototypical example is the 4-point vertex \(\Gamma^{(hh\hat{N}\hat{N})}_{k}\). Since the potential terms in the action (4.1) are linear in \(N\) these do not contribute to this vertex. Considering the kinetic part of the action, the definition of the extrinsic curvature entails that this sector gives non-trivial contributions which contain just the time-derivatives of the spatial metric. Schematically, the vertex then comes with the momentum structure
\[\Gamma^{(hh\hat{N}\hat{N})}_{k}h^{2}\hat{N}^{2}\propto\,p_{0}^{2}\,h^{kl}(-p)h _{kl}(p)\hat{N}(-q)\hat{N}(q), \tag{4.23}\]
where \(p=(p_{0},\vec{p})\) is the external 4-momentum, and \(q=(q_{0},\vec{q})\)is the momentum running in the loop. So the vertex (4.23) contributes to the \(p_{0}\)-projection while it does not enter into the \(\vec{p}\)-projection.
This illustrates that, in general, the \(n\)-point functions arising in the ADM-framework are non-relativistic. At the level of the 2-point function this difference has been eliminated by making the specific choice for the gauge-fixing term (3.10). The Lorentz invariance breaking effect comes from the non-linear ADM-decomposition of the metric. It is then tempting to try to dispel these effects by admitting fluctuation terms of higher order in the gauge-fixing procedure. An investigation along these lines will be left for the future work.
## 5 Properties of the renormalization group flow
We proceed by analyzing the fixed point structure and phase diagrams entailed by the beta functions (4.17). We start with the foliated Einstein-Hilbert truncation, where \(\alpha_{1}=\alpha_{2}=1\) in Sect. 5.1. The flow of the full system including the beta function for \(\alpha_{1}\) is discussed in
Figure 2: Comparison between the functions \(B_{1}(\lambda)\) (left panel) and \(B_{2}(\lambda)\) (right panel) obtained from reading off the anomalous dimension \(\eta_{N}\) from the time- (blue lines) and spatial component (orange lines) of the momentum.
Sect. 5.2. As the main result, we show that all settings exhibit a non-Gaussian fixed point (NGFP) suitable for providing the high-energy completion of the RG flow.
### The foliated Einstein-Hilbert truncation
The beta functions for the foliated Einstein-Hilbert truncation are given in eq. (4.19). Depending on whether the anomalous dimension of Newton's coupling \(\eta_{N}\) is read off from the time or spatial component of the momentum appearing in the graviton two-point function, the functions \(B_{1}(\lambda)\) and \(B_{2}(\lambda)\) are given in eqs. (4.21) and (4.22), respectively. Following the terminology introduced in the last section, we refer to these cases as the \(p_{0}\)-projection and the \(\vec{p}\)-projection, respectively.
_Fixed points._ We first identify the fixed points \((g_{*},\lambda_{*})\) where, by definition, \(\beta_{g}(g_{*},\lambda_{*})=0,\beta_{\lambda}(g_{*},\lambda_{*})=0\) and obtain their stability properties by computing the critical exponents from the stability matrix (3.5). We first observe that both projection schemes admit a GFP
\[\text{GFP}:\qquad(g_{*},\lambda_{*})=(0,0)\,,\qquad\theta_{1}=2\,,\,\,\, \theta_{2}=-2\,. \tag{5.1}\]
The critical exponents agree with the ones obtained from canonical power counting, in agreement with the definition of a GFP advocated in [9; 10]. The GFP is a saddle point in the \(g\)-\(\lambda\)-plane. The eigenvector associated with the UV-repulsive eigendirection indicates that this fixed point cannot serve as a UV-completion for RG-trajectories with \(g_{k}>0\).
In addition to the GFP, our numerical search for real roots also revealed three NGFPs, present in both the \(\vec{p}\)- and \(p_{0}\)-projection. Their properties are summarized in Table 3. The
\begin{table}
\begin{tabular}{|c|c|c|c c|c c|} \hline \hline Work & Projection & Fixed Points & \multicolumn{2}{c|}{Couplings} & \multicolumn{2}{c|}{Critical Exponents} \\ \hline & & & \(g_{*}\) & \(\lambda_{*}\) & \(\theta_{1}\) & \(\theta_{2}\) \\ \hline \hline & & GFP & 0 & 0 & 2 & \(-2\) \\ \cline{3-6} this article & \(p_{0}\)-projection & \(\begin{array}{c}\text{NGFP}_{1}\\ \end{array}\) & \(1.43\) & \(-0.12\) & \(\begin{array}{c}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{ }\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{ }\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{ }\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{}\text{} \text{}\text{
fixed point of physical interest is NGFP\({}_{1}\). It is located at \(g_{*}>0\) and \(\lambda_{*}<0\) and serves as a UV-attractor for the RG flow in its vicinity. The negative sign of \(\lambda_{*}\) indicates that the dimensionless graviton mass appearing in the two-point function at the fixed point is positive. Comparing the results for NGFP\({}_{2}\) and NGFP\({}_{3}\), it is evident that the two projections do not necessarily give rise to the same qualitative behavior. It is therefore remarkable that the NGFP\({}_{1}\) found in both cases share the same qualitative features.
The last three lines summarize properties for NGFPs reported from either ADM-based computations at the background level [59; 22] or flows of the graviton two-point function studied in the covariant approach [73]. Strictly speaking, these results do not lend themselves to a direct comparison since - from the perspective of the fluctuation field approach - the couplings \(g\) and \(\lambda\) listed in the various blocks are associated with different correlation functions and projection prescriptions. Nevertheless, the results form a coherent picture in the sense that they all point towards the existence of a phenomenologically interesting NGFP in foliation approach. Qualitatively, the features of this fixed point agree with the ones observed in fluctuation computations carried out in the covariant framework.
_Phase diagram._ The phase diagram for the foliated Einstein-Hilbert truncation is shown in Fig. 3. The plots show that the \(p_{0}\)- and \(\vec{p}\)-projection lead to results which are qualitatively identical. In order to understand the structure, we first note that the line
Figure 3: Phase diagram obtained from the beta functions of the foliated Einstein-Hilbert truncation in the \(p_{0}\)-projection (left diagram) and the \(\vec{p}\)-projection (right diagram) with the arrows pointing towards lower values of the coarse-graining scale \(k\). The diagrams focus on the physically interesting region where \(g_{k}\geq 0\). The GFP (5.1), the NGFP\({}_{1}\) given in Table 3, and the IR-FP (5.2) are marked by the black dots while the red lines mark a singular locus of the beta functions. The flow is governed by the interplay of these fixed points. The RG trajectory connecting the NGFP\({}_{1}\) in the UV to the GFP in the IR (Separatrix) is highlighted by the black line. The other black lines have been added to highlight the boundary of the region where the RG trajectories interpolate between NGFP\({}_{1}\) in the UV and the IR-FP as \(k\to 0\).
\(g=0\) separates the trajectories with positive and negative \(g_{k}\) and cannot be crossed. Moreover, there is a singular locus at \(\lambda=1/2\) linked to a singularity in the beta functions. The beta functions give rise to a second, non-trivial singular locus which is depicted by the red line. We then limit the discussion to the physically interesting region \(g\geq 0\), \(\lambda\leq 1/2\), and the region below the red line. The RG flow in this region is governed by the interplay of the NGFP\({}_{1}\), the GFP, and the IR-fixed point
\[\text{IR-FP:}\qquad(g_{*}^{\text{IR}},\lambda_{*}^{\text{IR}})=(0,1/2)\,. \tag{109}\]
The NGFP\({}_{2}\) and NGFP\({}_{3}\) are separated from this region by the lines discussed above and do not influence the flow in this region. As indicated by the critical exponents, the NGFP\({}_{1}\) serves as a UV-attractor for the RG trajectories in its vicinity. At the IR-FP, the beta functions are ambiguous. Investigating the scaling properties of RG trajectories in its vicinity, one finds that
\[\lambda_{*}^{\text{IR}}-\lambda(t)\sim c_{1}\,e^{t}\,,\qquad g(t)-g_{*}^{\text {IR}}\sim c_{2}\,e^{4t}\,, \tag{110}\]
where \(c_{1}\) and \(c_{2}\) are constants depending on the RG trajectory chosen and we use the symbol \(\sim\) to indicate that the relations are valid in the scaling regime of the IR-FP.
Among the set of trajectories emanating from NGFP\({}_{1}\), two play a distinguished role in determining the structure of the phase diagram. These are highlighted by the black solid lines. Firstly, we have one trajectory which connects the NGFP in the UV to the GFP in the IR. This line has been called separatrix in [98]. Secondly, there is a "broken" trajectory which connects the NGFP to a specific point on the singular line (red curve) and subsequently to the IR-FP in the IR. These black lines bound different phases encountered in the phase diagram. RG trajectories to the left of the separatrix (and not terminating in the red, singular line) flow to \((g,\lambda)=(0,-\infty)\) and lead to negative values \(\Lambda_{k}\) in the IR. In this case, the mass appearing in the two-point function is positive. Trajectories in the triangle formed by the connecting lines approach the IR-FP in the IR. They all lead to a zero value of the squared graviton mass \(\mu^{2}\equiv-2\Lambda_{0}\).
At this stage, it is interesting to investigate the RG trajectories located between the solid black lines in more detail. A set of example solutions situated in this region is shown in Fig. 4. All trajectories are complete: the high-energy completion is provided by the NGFP\({}_{1}\). Lowering \(k\) the flow crosses over to a "classical regime" where the dimensionful couplings \(G_{k}\) and \(\Lambda_{k}\) are constant. In terms of the phase diagrams given in Fig. 3, this corresponds to the regime where the trajectories linger in the vicinity of the GFP. At even lower values of \(k\) the trajectories are captured by the IR-FP and follow the scaling law (110). This feature has profound consequences for the mass appearing in the graviton propagator: _based on the flow diagram the renormalized squared mass \(\mu^{2}\) can never be negative_. The scaling induced by the IR-FP indicates that any positive \(\Lambda_{k}\) at \(k>0\) is quenched to \(\Lambda_{0}=0\) for trajectories in this region.
### Fixed points of the non-relativistic system
We proceed by analyzing the full system of beta functions including the flow of \(\alpha_{1}\) and the dependence on the parameter \(\alpha_{2}\). Before delving into the discussion, we make the following
observation: if \(\alpha_{1}\neq 1\) or \(\alpha_{2}\neq 1\), the action (15) is no longer invariant with respect to the full diffeomorphism group \(\mathrm{Diff}(\mathcal{M})\). Instead the symmetry group is reduced to foliation-preserving diffeomorphisms \(\mathrm{Diff}(\mathcal{M},\Sigma)\subset\mathrm{Diff}(\mathcal{M})\). This results in a mismatch between the dependence of the fields on spacetime points, \(N(\tau,y)\) and the time-component of the vector (9) generating infinitesimal foliation-preserving diffeomorphism which depends on \(\tau\) only (see [19] for detailed discussions).7 The change in the symmetry group potentially entails consequences for the Faddeev-Popov procedure leading to the ghost action (3.11): it is unclear whether the scalar ghost and anti-ghost should be treated as fully spacetime-dependent fields. Solving this problem is identical to finding a good gauge-fixing procedure for non-projectable Horava-Lifshitz gravity. Resolving this issue is beyond the scope of this work. Instead we will analyze two versions of the flow equations: the first one includes the
Figure 4: Illustration of typical RG trajectories situated between the solid black lines shown in Fig. 3 as a function of RG time \(t\). The trajectories are obtained by solving the beta function obtained from the \(p_{0}\)-projection numerically with high precision. Starting from large values of \(t\) and going towards lower coarse-graining scales, the trajectories exhibit three distinct scaling regimes: for \(t\gg 1\) the running is controlled by the \(\mathrm{NGFP}_{1}\) indicating that the dimensionless couplings \(g_{k}\) and \(\lambda_{k}\) take their fixed point values. Lowering \(t\), there is a cross-over to the GFP. This phase is characterized by the dimensionful couplings \(G_{k}\) and \(\Lambda_{k}\) being constant. For even lower values of \(t\) the ritercories leave the scaling regime of the GFP and approach the IR-FP. Here the scale-dependence of \(g_{k}\) and \(\lambda_{k}\) follows the relation (5.3). This implies that \(\lim_{k\to 0}\Lambda_{k}=0\) for all trajectories situated in this region.
contribution of the scalar ghost sector (\(\bar{c}c\)) and a second version where these contributions are not included in the beta functions.
In order to close the beta functions, the analysis also requires an assumption about the free parameter \(\alpha_{2}\). Motivated by the previous section, we first consider the case where \(\alpha_{2}=1\). We briefly comment on other values for this parameter at the end of this section.
_Fixed Points._ We first investigate the generalization of the GFP in the enlarged truncation. Expanding the beta functions in powers of \(g\), one finds
\[\begin{split}\beta_{g}&=2g+\mathcal{O}[g^{2}]*f_{ g}(\lambda,\alpha_{1})\,,\\ \beta_{\lambda}&=-2\lambda+\mathcal{O}[g]*f_{ \lambda}(\lambda,\alpha_{1})\,,\\ \beta_{\alpha_{1}}&=\mathcal{O}[g]*f_{\alpha_{1}}( \lambda,\alpha_{1})\,.\end{split} \tag{108}\]
The first terms in \(\beta_{g}\) and \(\beta_{\lambda}\) are fixed by the classical mass dimensions of the couplings and \(f_{g}\), \(f_{\lambda}\) and \(f_{\alpha_{1}}\) encode the leading corrections to the classical result. From the expansion (108) one readily verifies that all three beta functions vanish for \(g=\lambda=0\). Hence the system admits _a one-parameter family of GFPs_:
\[\text{GFPs:}\qquad(g_{*},\lambda_{*},\alpha_{1,*})=(0,0,\alpha_{1,*}). \tag{109}\]
The critical exponents of these fixed points are fixed by the mass dimension of the couplings. Evaluating the stability matrix based on the expansion (108) one finds \(\theta_{1}=2\), \(\theta_{2}=-2\), and \(\theta_{3}=0\), with the latter corresponding to a marginal direction.
In addition to the GFPs, the full system also possess a NGFP generalizing the fixed point NGFP\({}_{1}\) of the foliated Einstein-Hilbert truncation. Its position and stability properties are summarized in Table 4 which gives the results for both cases where the scalar ghost contribution has been taken into account (labeled by "with \(\bar{c}c\)") and left out (labeled by "without \(\bar{c}c\)"). We checked that the two fixed points are related by a continuous deformation when turning on the scalar ghost contribution in the beta functions. We note that the fixed point is situated close to but not within the surface \(\alpha_{1}=1\) which is spanned by the foliated Einstein-Hilbert truncation. The position is rather insensitive to whether the scalar ghost contribution is included. The latter has a profound consequence on its
\begin{table}
\begin{tabular}{|c|c|c|c c c|c c|} \hline \(\alpha_{2}\) & Fixed Points & Ghost Sector & \multicolumn{3}{c|}{Couplings} & \multicolumn{3}{c|}{Critical Exponents} \\ \hline & & & \(g_{*}\) & \(\lambda_{*}\) & \(\alpha_{1,*}\) & \(\theta_{1}\) & \(\theta_{2}\) & \(\theta_{3}\) \\ \hline
1 & NGFP\({}_{1}\) & with \(\bar{c}c\) & \(1.92\) & \(-0.25\) & \(0.71\) & \(6.12\) & \(-4.24\pm 5.91i\) \\
1 & NGFP\({}_{1}\) & without \(\bar{c}c\) & \(1.21\) & \(-0.14\) & \(0.73\) & \(5.09\) & \(0.16\pm 4.97i\) \\ \hline \(\alpha_{1}\) & NGFP\({}_{2}\) & with \(\bar{c}c\) & \(65.03\) & \(-10.96\) & \(8.66\) & \(10.19\) & \(-1.16\pm 2.66i\) \\ \(\alpha_{1}\) & NGFP\({}_{2}\) & without \(\bar{c}c\) & \(43.54\) & \(-13.71\) & \(14.53\) & \(13.80\) & \(0.23\pm 3.85i\) \\ \hline \end{tabular}
\end{table}
Table 4: Fixed point structure of the full beta functions (105) including the flow of \(\alpha_{1}\). For \(\alpha_{2}=1\) (top block) and \(\alpha_{1}=\alpha_{2}\) (bottom block), the fixed point without ghost contributions are connected continuously to the ones with ghost contributions. The NGFP\({}_{1}\) and NGFP\({}_{2}\) are not continuously connected though, therefore justifying their different labels.
stability properties though: in the absence of the \(\bar{c}c\)-contribution \({\rm NGFP}_{1}\) is UV-attractive in all three directions. Upon including the scalar contribution in the beta function the fixed point turns into a saddle point with one UV-attractive and two UV-repulsive directions.
The occurrence of a line of GFPs in truncations of the form (15) has already been observed in [20]. The non-relativistic viewpoint adopted in the present work offers a natural explanation of this phenomenon. Let us set \(\Lambda_{k}=0\) for the time being, i.e., all component fields are taken to be massless. Inspecting the dispersion relations of the component fields, we encounter the generic form
\[\Gamma_{k}^{(ij)}\propto a_{i}\,p_{0}^{2}+b_{i}\,\vec{p}^{\,2}\,,\qquad c_{i}^{ 2}\equiv\frac{b_{i}}{a_{i}}\,. \tag{16}\]
The explicit values for the coefficients \(a_{i}\) and \(b_{i}\) can be read off from Table 1 and \(c_{i}\) is the resulting "speed of light" associated with a given component field. For instance, the transverse-traceless modes \(h_{ij}\) and the scalar \(B\) propagate with
\[c_{h}^{2}=\frac{1}{\alpha_{1}}\,,\quad\text{and}\qquad c_{B}^{2}=1+2\alpha_{1} -2\alpha_{2}\,, \tag{17}\]
respectively. 8 Keeping \(\alpha_{2}\) fixed, the system then admits a one-parameter family of non-interacting theories, where the component fields satisfy different dispersion relations controlled by the value of \(\alpha_{1}\). This feature then generates the line of GFPs (15). The fact that differences in propagation speeds are observable quantities suggests that each point on the line actually corresponds to a physically distinct theory. This also entails that lines of GFPs should be a rather generic property of non-relativistic systems where one has several fields with distinct dispersion relations.
Footnote 8: In the scalar sector, comprising the fields \(\Psi\) and \(\hat{N}\) the dispersion relations do not have the simple structure (16) owed to the off-diagonal terms in the propagator. Generically the eigenvalues of this matrix involve square-roots. In the special case \(\alpha_{1}=\alpha_{2}=1\), all equations reduce to their relativistic form and one has \(c_{i}=1\) for all component fields.
Table 4 also reports a NGFP found within the alternative identification \(\alpha_{1}=\alpha_{2}\). This NGFP comes with identical stability properties as the \(\alpha_{2}=1\) case, albeit at much larger values of the couplings. Again one readily checks that the NGFPs found with and without the scalar ghost contributions are connected by analytic continuation. While the positions and stability coefficients suggest that the fixed points for \(\alpha_{2}=1\) and \(\alpha_{1}=\alpha_{2}\) are also continuously connected we have not been able to construct such a deformation explicitly. Building on the interpolation \(\alpha_{2}=1+\gamma(\alpha_{2}-1)\), \(\gamma\in[0,1]\), the lines followed by the fixed point as a function of \(\gamma\) terminate around \(\gamma\approx 0.7\). We conclude that the \({\rm NGFP}_{1}\) is robust under a wide range of values for \(\alpha_{2}\). Moreover, it is the scalar ghost sector which has an decisive effect on the stability properties of the fixed point.
## 6 Summary and discussion
The presence of a foliation on spacetime is essential for transiting from quantum gravity formulated on Euclidean signature spacetime to its Lorentzian counterpart. One way to
implement this structure is through the Arnowitt-Deser-Misner (ADM)-decomposition of the metric degrees of freedom. In this work, we studied the RG flows arising within this setting using non-perturbative functional renormalization group methods. Our work reports the first fluctuation field computation in the ADM-framework, studying the RG flow of the graviton two-point function. It complements earlier investigations based on the ADM-formalism [18; 19; 20; 21; 22; 23] and covariant computations utilizing a foliation gauge fixing [24; 25; 26]. It is also closely related to the Lorentzian signature computations based on the Wetterich equation [27; 28; 29]. At the technical level, these computations are significantly more complex than their counterparts in the covariant setting reviewed in [16]. This is owed to the increased number of component fields carrying the gravitational degrees of freedom as well as the proliferation of potential vertex structures coming with independent couplings.
We give a comprehensive introduction to the general setup and structures featuring in these computations. In particular, we highlight the differences between fluctuation computations in the covariant and foliated frameworks. The fact that the latter naturally also comprises non-relativistic theories like Horava-Lifshitz gravity [99] introduces new elements. In particular, one finds that, generically, projections onto tensor structures involving the spatial and time-parts of the fluctuation field momentum may lead to different results. Conceptually, this implies that one is dealing with different coupling constants of the theory which, in the covariant setting, exhibit the same flow due to the enhanced symmetry.
In order to illustrate the general framework, we performed an explicit computation of the gravitational RG flow projected onto the graviton two-point function. The setting is based on a truncation of the effective average action containing all two-derivative terms compatible with foliation-preserving diffeomorphisms, cf. eq. (15). This action is used to derive the graviton two-point function as well as the 3- and 4-point vertices required to close the flow equation. By projecting the RG equation onto the 2-point function, we derive the beta functions for Newton's coupling, the cosmological constant (featuring as a graviton mass term in the 2-point function) as well as one coupling \(\alpha_{1}\) encoding deviations from general relativity. Furthermore, the results depend parametrically on a second Lorentz-symmetry violating coupling \(\alpha_{2}\) which does not enter in the 2-point function under consideration but appears on the right-hand side of the flow equation.
The fixed point structure and phase diagram resulting from the foliated Einstein-Hilbert action is discussed in detail. As a key result, all projections studied in this work identify a non-Gaussian fixed point (NGFP) suitable for asymptotic safety. It serves as a UV-attractor for the two couplings retained within the projection. This puts the existence of the NGFP already identified in background field computations on spherical [19; 59], toroidal [23], and cosmological backgrounds [100; 21] on firm grounds. The NGFP seen in our work is part of an intricate phase diagram, see Fig. 3. In the physically relevant region, its structure is governed by the interplay of the GFP, the NGFP, and an IR-FP. The latter controls the IR-behavior of RG-trajectories which, in principle, could lead to a negative squared graviton mass \(\mu^{2}\). The fixed point quenches this mass to zero. This restricts the values of the squared graviton mass supported by the phase diagram to \(\mu^{2}\geq 0\).
Remarkably, our phase diagram is qualitatively identical to the one derived from similar fluctuation field computations in the covariant setting [72; 73].9 This is rather striking, since the underlying computations are based on an entirely different implementation of the gravitational degrees of freedom. Moreover, the vertex structures used in the right-hand side of the flow equation are manifestly different since the relation between the covariant fluctuation field and the fluctuation fields in the ADM-formalism are non-linear. Taken together, this indicates a remarkable stability of the phase diagram related to the graviton 2-point function.
Footnote 9: The phase diagram is also very similar to the one found for the Einstein-Hilbert truncation at the background level, first constructed in [98]. In this case, the IR-FP is replaced by a singular line which leads to a termination of the RG-trajectories at finite values of the coarse-graining scale though.
The space of action functionals which are invariant with respect to diffeomorphisms spans a subspace of all action functionals compatible with foliation preserving diffeomorphisms. The enhanced symmetry implies that RG flows on the former are closed [19; 24]. The Wetterich equation adapted to the ADM-formalism allows to study RG flows on the larger theory space as well. In this work, we did this by deriving the beta function for the coupling \(\alpha_{1}\) encoding different speeds of light for the component fields in the construction. We show that all projections within this truncation possess a NGFP which is the analogue of the NGFP seen in the foliated Einstein-Hilbert case. Based on symmetry arguments, this NGFP should be located on the surface with enhanced symmetry, \(\alpha_{1}=\alpha_{2}=1\). Since the Wetterich equation for the ADM-formalism contains diffeomorphism breaking terms the fixed point is slightly shifted away from this surface though. The stability properties of the NGFP depend on the implementation of the scalar ghost sector. In case that the scalar ghost is left out (as suggested by the reduced symmetry of the setup) one finds that the NGFP is a UV-attractor on the projection space. This is in agreement with the background computation [20], which identified a similar UV-attractive fixed point once diffeomorphism-breaking couplings are included.
Our work also makes a notable step forward in connecting continuum computations and Monte Carlo simulations of the gravitational path integral. In particular, the framework introduced in our work gives access to the spatially averaged correlation functions accessible in the Causal Dynamical Triangulations (CDT) program [55; 56]. In this context, it is important to highlight that the phase diagram obtained in the present work differs from the one obtained at the background level [59; 98] by the addition of an IR-fixed point. This feature may be interesting when interpreting the RG flows [101; 102] obtained from Monte Carlo simulations in the context of CDT. Both settings share the same foundations in the sense that they build on a foliation structure and admit anisotropy parameters distinguishing between space and time. We hope to come back to this intriguing possibility in the future.
Obviously, it will also be interesting to investigate the robustness of the fixed point structure and phase diagram shown in Fig. 3 once matter degrees of freedom are included. Specifically, one may want to ad massless gauge fields supplemented by Lorentz symmetry breaking parameters, similar to \(\alpha_{1}\) and \(\alpha_{2}\) introduced in the gravitational sector. This setting may allow to study differences in the light cone structure for the two fields which
can be constrained by multi-messenger astronomy. Moreover, the framework set up in the present work allows for the direct transition to Lorentzian signature computations. Results along these lines will be reported in [103] and [104], respectively.
We thank J. Ambjorn, M. Becker, A. Bonanno, G. P. de Brito, A. Ferreiro, B. Knorr, G. Korver, A. Koshelev, A. Pereira, M. Reuter, and M. Schiffer for discussion and insightful comments on the manuscript. JW acknowledges the China Scholarship Council (CSC) for financial support.
## Appendix A Technical Annex
The derivation of the results presented in Sect. 4 is technically quite involved. This appendix collects some additional background information and lengthy formulas underlying these computations. We start by briefly outlining the implementation of a foliation structure within Mathematica in App. A.1. App. A.2 collects the propagators for the component fields and the actions underlying the derivation of the gravity-ghost vertices are given in App. A.3.
### Implementation within Mathematica
In the ADM-formalism the relations (7) decompose the spacetime \(g_{\mu\nu}\) into the lapse function \(N\), a shift vector \(N_{i}\), and spatial metric \(\sigma_{ij}\). As a result, the four-dimensional spacetime \(\mathcal{M}\) inherits a foliation structure
\[\mathcal{M}=\mathbb{R}\times\Sigma_{\tau}\,. \tag{104}\]
The dynamics of the theory is expressed in terms of time-derivative \(\partial_{\tau}\) and covariant derivatives \(D_{i}\) constructed from the metric \(\sigma_{ij}\) on the spatial slices.
In order to implement the decomposition (104) in the xAct-package [105; 106] for Mathematica, we define the three-dimensional manifold \(\Sigma_{\tau}\) and a one-dimensional manifold \(\mathbb{R}\). These correspond to the two factors in (104), respectively. Subsequently, the code defines the four-dimensional spacetime \(\mathcal{M}\) as the product of the two factors. The field content is then coded as the spatial metric \(\sigma_{ij}(\tau,y)\), one scalar field \(N(\tau,y)\), and one vector field \(N_{i}(\tau,y)\). The indices are taken to be covariant with respect to \(\Sigma_{\tau}\). The time-derivative and spatially covariant derivatives are defined as the covariant derivatives on \(\mathbb{R}\) and \(\Sigma_{\tau}\), respectively. They are compatible with the metrics on the two submanifolds, respectively.
Building on this setup, all propagators and vertices needed in our computation can be generated via the Perturbation command provided by the xAct package. The expressions for the vertices obtained in this way are very lengthy though. This holds specifically for the 3- and 4-point vertices which contain hundreds of contributions. Therefore, we will not give the explicit form of these contributions within this paper. The mathematica-code generating them is available on request.
### Propagators for the component fields
The 2-point functions including the contribution from the gauge fixing term have been summarized in Table 1. The propagators required in the evaluation of eq. (3.18) are obtained as the inverse of this matrix. Defining the propagator matrix including the contributions of the scalar regulator \(\mathcal{R}_{k}(p^{2})\),
\[\mathcal{G}\equiv\left(\Gamma_{k}^{(2)}+\mathcal{R}_{k}\right)^{-1}\,,\] (A.2)
we list the matrix elements in Table 5. In addition to the definition of the 4-momentum, \(p^{2}=p_{0}^{2}+\vec{p}^{\,2}\), the table uses the shorthand notations
\[f_{a}=\alpha_{1}p_{0}^{2}+\vec{p}^{\,2}\,,\ f_{b}=\frac{1}{\alpha_{1}}p_{0}^{2 }+\vec{p}^{\,2}\,,\ f_{d}=\alpha_{2}p_{0}^{2}+\vec{p}^{\,2}\,,\ f_{e}=\frac{1}{ \alpha_{2}}p_{0}^{2}+\vec{p}^{\,2}\,,\] (A.3)
for capturing the \(\alpha\)-dependence of the dispersion relations. The projectors \(\Pi\) carrying the index structure of the propagators are defined in eqs. (4.6) and (4.7).
### Ghost actions on a foliated spacetime
We close the technical annex by giving the ghost action resulting from evaluating the general formula (3.11) for the specific choice of gauge fixing functional (3.10). The result
\begin{table}
\begin{tabular}{l l} \hline \hline fields \((i,j)\) & \(\mathcal{G}_{k}^{(ij)}\) \\ \hline \hline \(h_{ij}h^{kl}\) & \(32\pi G_{k}\,\frac{1}{f_{a}-2\Lambda_{k}+R_{k}}\,{\Pi_{h}}^{ij}_{kl}\) \\ \(v_{i}v^{j}\) & \(16\pi G_{k}\,\frac{1}{f_{a}-2\Lambda_{k}+R_{k}}\,{\Pi_{u}}^{i}_{j}\) \\ \(EE\) & \(48\pi G_{k}\,\frac{1}{f_{a}-2\Lambda_{k}+R_{k}}\) \\ \(u^{i}u_{j}\) & \(\frac{16\pi G_{k}}{\alpha_{1}}\,\frac{1}{f_{b}+R_{k}}\,{\Pi_{u}}^{i}_{j}\) \\ \(BB\) & \(16\pi G_{k}\,\frac{1}{p^{2}+2\alpha_{1}f_{b}-2\alpha_{2}f_{e}+(2\alpha_{1}-2 \alpha_{2}+1)\,R_{k}}\) \\ \hline \(\Psi\Psi\) & \(96\pi G_{k}\Big{(}\frac{p^{2}+R_{k}}{f_{a}p^{2}-3p^{2}f_{d}+7p^{2}\Lambda_{k}-6 \Lambda_{k}^{2}+(f_{a}-2p^{2}-3f_{d}+7\Lambda_{k})R_{k}-2R_{k}^{2}}\Big{)}\) \\ \(\hat{N}\hat{N}\) & \(-8\pi G_{k}\Big{(}\frac{6f_{d}-2f_{a}-3p^{2}-2\Lambda_{k}+R_{k}}{f_{a}p^{2}-3p ^{2}f_{d}+7p^{2}\Lambda_{k}-6\Lambda_{k}^{2}+(f_{a}-2p^{2}-3f_{d}+7\Lambda_{k} )R_{k}-2R_{k}^{2}}\Big{)}\) \\ \(\Psi\hat{N}\) & \(48\pi G_{k}\Big{(}\frac{p^{2}+R_{k}-2\Lambda_{k}}{f_{a}p^{2}-3p^{2}f_{d}+7p^{2} \Lambda_{k}-6\Lambda_{k}^{2}+(f_{a}-2p^{2}-3f_{d}+7\Lambda_{k})R_{k}-2R_{k}^{2 }}\Big{)}\) \\ \hline \(\bar{c}c\) & \(\frac{1}{\sqrt{2}\,p^{2}}\) \\ \(\bar{b}^{i}b_{i}\) & \(\frac{1}{\sqrt{2}\,p^{2}}\,{\Pi_{u}}_{j}^{i}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Matrix elements of the regulated propagators required in the evaluation of (3.18). For the diagonal blocks given in Table 1, the inversion is straightforward and the results are given in the first block. For the scalar sector spanned by the fields \(\Psi\) and \(\hat{N}\) the propagator is found by inverting a \(2\times 2\)-matrix. The determinant arising from this inversion leads to the rather complicated dispersion relations listed in the second block. The propagators in the ghost sector are given in the third block.
is used to extract the propagators and interaction vertices involving the scalar (anti-)ghost \(\bar{c},c\) and the vector (anti-)ghosts \(\bar{b}^{i},b_{j}\), respectively. In order to keep the complexity at a manageable level we chose the background to be flat space. Moreover, all derivatives act on the right, \(\partial_{\tau}N_{i}c=N_{i}(\partial_{\tau}c)+(\partial_{\tau}N_{i})c\). The contribution proportional to the scalar anti-ghost \(\bar{c}\), arising from the first term in (3.11) is
\[\begin{split}\Gamma^{\text{scalar}}_{\text{ghost}}=-\sqrt{2}\, \int d\tau d^{d}y\,\bar{c}\Big{[}&\partial_{\tau}^{2}cN+ \partial_{\tau}b^{k}\partial_{k}N-\partial_{\tau}NN^{i}\partial_{i}c+\partial ^{i}\partial_{\tau}N_{i}c+\partial^{i}b^{k}\partial_{k}N_{i}\\ &+\partial^{i}N_{k}\partial_{i}b^{k}+\partial^{i}\sigma_{ki} \partial_{\tau}b^{k}+\partial^{i}N_{k}N^{k}\partial_{i}c+\partial^{i}N^{2} \partial_{i}c\\ &-\frac{1}{2}\partial_{\tau}\sigma^{ij}c\partial_{\tau}\sigma_{ ij}-\frac{1}{2}\partial_{\tau}\sigma^{ij}b^{k}\partial_{k}\sigma_{ij}-\partial_{ \tau}\partial_{k}b^{k}-\partial_{\tau}N^{i}\partial_{i}c\Big{]}\,.\end{split}\] (A.4)
In the vector sector, the terms proportional to the anti-ghost \(\bar{b}^{i}\) are
\[\begin{split}\Gamma^{\text{vec}}_{\text{ghost}}=-\sqrt{2}\,\int \,d\tau d^{d}y\,\bar{b}^{i}\Big{[}&\partial_{\tau}^{2}N_{i}c+ \partial_{\tau}b^{k}\partial_{k}N_{i}+\partial_{\tau}N_{k}\partial_{i}b^{k}+ \partial_{\tau}\sigma_{ki}\partial_{\tau}b^{k}\\ &+\partial_{\tau}N_{k}N^{k}\partial_{i}c+\partial_{\tau}N^{2} \partial_{i}c-\partial_{i}\partial_{\tau}cN-\partial_{i}b^{k}\partial_{k}N\\ &+\partial_{i}NN^{k}\partial_{k}c-\frac{1}{2}\partial_{i}\sigma^{ mn}c\partial_{\tau}\sigma_{mn}-\frac{1}{2}\partial_{i}\sigma^{mn}b^{k} \partial_{k}\sigma_{mn}-\partial_{i}\partial_{k}b^{k}\\ &-\partial_{i}N^{m}\partial_{m}c+\partial^{j}c\partial_{\tau} \sigma_{ij}+\partial^{j}b^{k}\partial_{k}\sigma_{ij}+\partial^{j}\sigma_{jk} \partial_{i}b^{k}+\partial^{j}\sigma_{ik}\partial_{j}b^{k}\\ &+\partial^{j}N_{j}\partial_{i}c+\partial^{j}N_{i}\partial_{j}c \Big{]}.\end{split}\] (A.5)
## Appendix B Trace computations
The beta functions given in Sect. 4.2 are obtained by projecting the Wetterich equation (3.1) onto the 2-point function of the transverse-traceless fluctuation field \(h_{ij}(x)\). Their computation requires evaluating the traces appearing on the right-hand side of eq. (3.18). In a flat background, this is conveniently done by using standard momentum space methods for evaluating Feynman diagrams. In this appendix, we provide some general identities useful in these computations in App. B.1. The computation of a typical diagram built from a 4-point vertex and 3-point vertices is illustrated in App. B.2. We also briefly discusses the analytic properties of the loop integrals.
### Structural aspects of the traces
The right-hand side of the projected flow equation contains one-loop diagrams built from a pair of 3-point vertices (bubble diagrams) and a 4-point vertex (tadpole diagrams)
\[\begin{split} T_{3}=&-\frac{1}{2}\text{STr}[ \mathcal{G}_{\chi_{a}i}\chi_{aj}\Gamma^{(hh\chi_{a^{j}}\chi_{a^{k}})}\mathcal{G }_{\chi_{a}k}\chi_{a^{l}}\partial_{t}R_{k}{}^{\chi_{a^{l}}\chi_{a^{l}}}]\\ T_{4}=&\,\text{STr}[\mathcal{G}_{\chi_{a^{l}}\chi_{ a^{j}}}\Gamma^{(\chi_{a^{j}}\chi_{a^{k}})}\mathcal{G}_{\chi_{a^{k}}\chi_{a^{l}}} \Gamma^{(h\chi_{a^{l}}\chi_{a^{m}})}\mathcal{G}_{\chi_{a^{m}}\chi_{a^{n}}} \partial_{t}R_{k}{}^{\chi_{a^{n}}\chi_{a^{l}}}]\,.\end{split}\] (B.1)
Here the subscript on \(T\) refers to the vertices contained in the trace. The trace contains a sum over all fluctuation fields which can propagate in the regularized loop and subsequently we will use superscripts to single out the contributions of specific fields to the complete trace. In addition, the trace also contains an integral over loop momenta \(q_{\mu}=(q_{0},q_{i})\). The
propagators and vertices depend on both the loop momentum and the external momentum associated with the transverse-traceless field \(h_{ij}(p_{0},\vec{p})\). The integrals over the spatial loop momenta have the generic form
\[I_{n}\equiv\int d^{3}q\,f(\vec{q}\ ^{2})\,T^{i_{1}i_{2}\cdots i_{n}}q_{i_{1}}q_{i_ {2}}\cdots q_{i_{n}}\,. \tag{114}\]
The \(T^{i_{1}i_{2}\cdots i_{n}}\) are tensors constructed from the fields and external momenta allowed by our projection. Canonical building blocks are, for example, \(h^{ij}(p_{0},\vec{p})\) and \(p^{i}\). The parts depending on the magnitude of the spatial loop momentum have been factored out and collected in the function \(f(\vec{q}\ ^{2})\).
In the present computation, the maximum number of loop momenta contracted with \(T^{i_{1}i_{2}\cdots i_{n}}\) turns out to be \(n=6\). Higher-order terms are outside of the projection subspace and do not contribute to the computation. Following [107], the integrals (114) can be simplified as follows. For \(n\) odd the contributions vanish due to the anti-symmetry in sending \(q_{i}\mapsto-q_{i}\). For even \(n\) the integrands can be reduced to functions depending on \(\vec{q}\ ^{2}\) only. For the integrals defined in eq. (114), the relevant identities are
\[I_{2} =\frac{1}{3}\,\int d^{3}qf(\vec{q}\ ^{2})T_{ij}\,\delta^{ij}\vec{q} \ ^{2}\,, \tag{115}\] \[I_{4} =\frac{1}{15}\int d^{3}qf(\vec{q}\ ^{2})T_{ijkl}(\delta^{ij} \delta^{kl}+\delta^{ik}\delta^{jl}+\delta^{il}\delta^{jk})(\vec{q}\ ^{2})^{2},\] \[I_{6} =\frac{1}{105}\int d^{3}qf(\vec{q}\ ^{2})T_{ijklmn}\left(\delta^{ij} \delta^{kl}\delta^{mn}+14\ \text{permutations}\right)(\vec{q}\ ^{2})^{3}\,.\]
The external legs associated with the transverse-traceless field \(h_{ij}\) have to be attached to the 3- and 4-point vertices. In terms of computational efficiency, it is convenient to retain these fields in the intermediate steps in order to deal with scalar instead of tensorial quantities. Upon carrying out the integration over loop momenta, they can be stripped. The projection onto the tensor structure \(\Pi_{h}\) appearing on the left-hand side is then evoked by contracting the resulting matrix with the projectors \(\Pi_{h}\) defined in eq. (109). The coefficients \(T_{p_{0}}\), \(T_{\vec{p}}\), and \(T_{0}\) defined in (116) are the read off by matching powers of the external momenta.
### Computations of selected traces
In general, the evaluation of the tadpole diagrams is simpler than the one of the bubble diagrams. Hence we start with an example based on the 4-point vertex containing four gravitons. Subsequently, we discuss the projection of the traces associated with bubble-diagrams, expanding the trace arguments in terms of the external momenta.
#### b.2.1 The tadpole diagram containing the \(4\)-graviton vertex
The contribution originating from the 4-graviton vertex has the form
\[T_{4}^{hh}=-\frac{1}{2}\text{Tr}[\mathcal{G}_{hh}\Gamma_{k}^{(hhhh)}\mathcal{ G}_{hh}\partial_{t}\mathcal{R}_{k}{}^{hh}]\,. \tag{116}\]
The propagators are listed in Table 5 and the 4-point vertex is generated from the computer algebra. The trace contains an integration over loop-momentum. The explicit evaluation of these integrals requires choosing a regulator function \(R_{k}\). In practice, we have opted for a Litim-type regulator [95; 96] where \(R_{k}(q^{2})=(k^{2}-q^{2})\Theta(k^{2}-q^{2})\). In this case, the integration is restricted to the domain where \(q_{0}^{2}+\vec{q}\,^{2}\leq k^{2}\). Performing all index contractions and using the identities (B.3), one finds
\[\begin{split} T_{4}^{hh}=-\int_{q^{2}\leq k^{2}}& \frac{dq_{0}d^{3}\vec{q}}{(2\pi)^{4}}\bigg{[}\frac{(k^{2}(\eta_{N}-2)-q^{2} \eta_{N})(5\vec{q}\,^{2}+9\alpha_{1}q_{0}^{2}-4\alpha_{2}q_{0}^{2}-14k^{2} \lambda_{k})}{5(q_{0}^{2}(\alpha_{1}-1)+k^{2}(1-2\lambda_{k}))^{2}}\\ &+\frac{(k^{2}(\eta_{N}-2)-q^{2}\eta_{N})}{(q_{0}^{2}(\alpha_{1}- 1)+k^{2}(1-2\lambda_{k}))^{2}}\left(\frac{9\alpha_{1}-4\alpha_{2}}{5}p_{0}^{2 }+\vec{p}\,^{2}\right)\bigg{]}h^{ij}(-p)h_{ij}(p)\,.\end{split}\] (B.5)
This expression illustrates the following feature. We have picked the regulator to be a function of \(q^{2}=q_{0}^{2}+\vec{q}\,^{2}\). The propagators entering our computation contain the loop momentum \(q_{0}^{2}\) and \(\vec{q}\,^{2}\) in different linear combination. In eq. (B.5) this manifests itself in form of the terms \(q_{0}^{2}(\alpha_{1}-1)\) in the denominators. As a consequence the integrand may exhibit poles when performing the integral over \(q_{0}\). However, we find that there are regions in the parameter space of \(\alpha_{1}\) and \(\lambda_{k}\) where the poles are absent and the integral is finite. For the integrands in eq. (B.5) this requires that the following range of couplings should be excluded:
\[hh:\qquad-1\leq\frac{1-2\lambda_{k}}{\alpha_{1}-1}\leq 0\,.\] (B.6)
Note that this bound is satisfied by the foliated Einstein-Hilbert truncation where \(\alpha_{1}=\alpha_{2}=1\) and \(q_{0}\) drops out of the denominators.
We proceed by evaluating (B.5). Since \(\vec{q}\) appears in the numerator only, performing this integration is rather straightforward. Subsequently, we integrate over \(q_{0}\). Assuming the absence of poles, this integration results in inverse trigonometric functions. Since the outcome is rather lengthy, we illustrate the generic structure based on the coefficient multiplying the tensor structure \(h^{ij}h_{ij}\ \vec{p}\ ^{2}\),
\[T_{4}^{hh}=\left(P_{1}(q_{0})+P_{2}(q_{0})\ f_{1}^{hh}(q_{0})+P_{3}(q_{0})\ f_{2}^{hh}(q_{0})\ \right)\Big{|}_{q_{0}=-k}^{q_{0}=k}\times h^{ij}h_{ij}\ \vec{p}\,^{2}\,.\] (B.7)
The trigonometric contributions are captured by
\[f_{1}^{hh}(q_{0})=\arctan\left(\frac{q_{0}}{\sqrt{k^{2}-q_{0}^{2}}}\right)\,, \quad f_{2}^{hh}(q_{0})=\text{arctanh}\left(\frac{q_{0}}{\sqrt{k^{2}-q_{0}^{2 }}}\sqrt{\frac{\alpha_{1}-2\lambda_{k}}{2\lambda_{k}-1}}\right)\,.\] (B.8)
The polynomials appearing in this expression read
\[P_{1}(q_{0})= -\frac{q_{0}\sqrt{k^{2}-q_{0}^{2}}}{60\pi^{3}\left(\alpha_{1}-1 \right){}^{2}\left(1-2\lambda_{k}\right)\left(k^{2}\left(1-2\lambda_{k}\right)+ \left(\alpha_{1}-1\right)q_{0}^{2}\right)} \tag{124}\] \[\left(\alpha_{1}k^{2}\left(\lambda_{k}\left(10-4\eta_{N}\right)+5 \right)+k^{2}\left(-10\lambda_{k}+8\lambda_{k}^{2}\eta_{N}-4\lambda_{k}\eta_{N} +\eta_{N}\right)\right.\] \[\qquad+\alpha_{1}^{2}k^{2}\left(\eta_{N}-5\right)+\left(2\lambda_ {k}-1\right)\eta_{N}q_{0}^{2}+\alpha_{1}\left(1-2\lambda_{k}\right)\eta_{N}q_{ 0}^{2}\Big{)}\] \[P_{2}(q_{0})= \frac{k^{2}}{60\pi^{3}\left(\alpha_{1}-1\right){}^{3}}\times \left(8\lambda_{k}\eta_{N}-5\alpha_{1}\left(\eta_{N}-2\right)+\eta_{N}-10 \right),\] \[P_{3}(q_{0})= -\frac{k^{2}\sqrt{\alpha_{1}-2\lambda_{k}}}{60\pi^{3}\left( \alpha_{1}-1\right){}^{3}\left(2\lambda_{k}-1\right){}^{3/2}}\times\Big{(} \alpha_{1}\left(6\lambda_{k}\eta_{N}-5\eta_{N}+20-20\lambda_{k}\right)\] \[\qquad-16\lambda_{k}^{2}\eta_{N}+10\lambda_{k}\left(\eta_{N}+2 \right)+\alpha_{1}^{2}\left(\eta_{N}-5\right)-15\Big{)}.\]
Matching the result (123) to the definition (108), one then obtains the contribution of this specific tadpole diagram to the beta functions.
The remaining tadpole contributions can be computed along the same lines. Again, one encounters inverse trigonometric functions which put bounds on the space of admissible couplings. Since the component fields come with different dispersion relations, one actually finds a set of inequalities. Retaining both parameters \(\alpha_{1}\) and \(\alpha_{2}\) we find the following exclusion regions
\[BB: -1\leq\frac{2\alpha_{1}-2\alpha_{2}+1}{2(\alpha_{2}-\alpha_{1})} \leq 0\,, \tag{125}\] \[NN: -1\leq-\frac{2-7\lambda_{k}+6\lambda_{k}^{2}}{2+\alpha_{1}-3 \alpha_{2}}\leq 0\,,\] \[uu: -1\leq\frac{\alpha_{1}}{1-\alpha_{1}}\leq 0\,.\]
For \(\alpha_{2}=1\) the condition for \(uu\) is a subset of \(BB\), so that one has to consider only three inequalities in this case. The admissible values for the couplings leading to convergent loop integrals are most conveniently found graphically and are illustrated in Fig. 5. We note that the traces found for the foliated Einstein-Hilbert truncation are always convergent due to our choice of regulator.
#### b.2.2 Bubble diagrams
We complete the appendix with a brief discussion of the contributions resulting from bubble diagrams, encoded in the first trace of eq. (107). We can always shift the loop momentum so that the argument of the regulator is \(q^{2}\). The trace contributions then take the form
\[T_{3}=\mathrm{STr}\left[\mathcal{G}_{\chi_{a}\chi_{b}}(q)\Gamma_{k}^{(h\chi_{b }\chi_{c})}(p,q)\mathcal{G}_{\chi_{c}\chi_{d}}(p+q)\Gamma_{k}^{(h\chi_{d}\chi _{c})}(p,q)\mathcal{G}_{\chi_{c}\chi_{f}}(q)\,\partial_{t}\mathcal{K}_{k}^{ \chi_{f}\chi_{a}}(q)\right]\,, \tag{126}\]
where we have highlighted the momentum dependence of the building blocks.
The schematics of (126) shows that there is always one propagator depending on both the internal and external momentum. This momentum dependence does not align with the step function contained in the regulator. In general, this would then lead to a
quite complicated integration region. This complication can be avoided by noting that our computation does not need to track the full dependence of the trace on the external momentum. It is sufficient to extract the terms proportional to \(\vec{p}^{2}\) and \(p_{0}^{2}\) to extract the beta functions. Thus, we can expand the middle propagator in orders of external momenta and only keep the terms contributing to our projection. The relevant terms follow from a standard Taylor expansion in multiple variables
\[\begin{split} f(|\vec{p}+\vec{q}|,|p_{0}+q_{0}|)=& f(|\vec{q}|,|q_{0}|)+\frac{\partial f(|\vec{q}|,|q_{0}|)}{ \partial|\vec{q}|}\frac{\vec{q}\cdot\vec{p}}{|\vec{q}|}+\frac{\partial f(| \vec{q}|,|q_{0}|)}{\partial|q_{0}|}\frac{q_{0}\cdot p_{0}}{|q_{0}|}\\ &+\frac{1}{2}\frac{\partial^{2}f(|\vec{q}|,|q_{0}|)}{\partial| \vec{q}|^{2}}\frac{(\vec{q}\cdot\vec{p})^{2}}{|\vec{q}|^{2}}+\frac{1}{2}\frac {\partial f(|\vec{q}|,|q_{0}|)}{\partial|\vec{q}|}\left(\frac{\vec{p}\cdot\vec {p}}{|\vec{q}|}-\frac{(\vec{q}\cdot\vec{p})^{2}}{|\vec{q}|^{3}}\right)\\ &+\frac{1}{2}\frac{\partial^{2}f(|\vec{q}|,|q_{0}|)}{\partial|q_ {0}|^{2}}p_{0}^{2}+\frac{\partial^{2}f(|\vec{q}|,|q_{0}|)}{\partial|q_{0}| \partial|\vec{q}|}\frac{\vec{p}\cdot\vec{q}}{|\vec{q}|}\frac{q_{0}\cdot p_{0}} {|q_{0}|}+\mathcal{O}(p^{3}).\end{split} \tag{108}\]
We can then apply this expansion to \(\mathcal{G}_{\chi_{c}\chi_{d}}(p+q)\). As a result, the propagator and its derivatives depend on the loop momentum only. The tensor structures related to the external momenta can then be simplified by applying (107) and the evaluation of the traces proceeds along the same lines as the computation of the tadpole diagrams described in the previous subsection.
|
2310.15296 | DeTiME: Diffusion-Enhanced Topic Modeling using Encoder-decoder based
LLM | In the burgeoning field of natural language processing (NLP), Neural Topic
Models (NTMs) , Large Language Models (LLMs) and Diffusion model have emerged
as areas of significant research interest. Despite this, NTMs primarily utilize
contextual embeddings from LLMs, which are not optimal for clustering or
capable for topic based text generation. NTMs have never been combined with
diffusion model for text generation. Our study addresses these gaps by
introducing a novel framework named Diffusion-Enhanced Topic Modeling using
Encoder-Decoder-based LLMs (DeTiME). DeTiME leverages Encoder-Decoder-based
LLMs to produce highly clusterable embeddings that could generate topics that
exhibit both superior clusterability and enhanced semantic coherence compared
to existing methods. Additionally, by exploiting the power of diffusion model,
our framework also provides the capability to do topic based text generation.
This dual functionality allows users to efficiently produce highly clustered
topics and topic based text generation simultaneously. DeTiME's potential
extends to generating clustered embeddings as well. Notably, our proposed
framework(both encoder-decoder based LLM and diffusion model) proves to be
efficient to train and exhibits high adaptability to other LLMs and diffusion
model, demonstrating its potential for a wide array of applications. | Weijie Xu, Wenxiang Hu, Fanyou Wu, Srinivasan Sengamedu | 2023-10-23T19:03:04Z | http://arxiv.org/abs/2310.15296v2 | # DeTiME: Diffusion-Enhanced Topic Modeling using Encoder-decoder based LLM
###### Abstract
In the burgeoning field of natural language processing, Neural Topic Models (NTMs) and Large Language Models (LLMs) have emerged as areas of significant research interest. Despite this, NTMs primarily utilize contextual embeddings from LLMs, which are not optimal for clustering or capable for topic generation. Our study addresses this gap by introducing a novel framework named Diffusion-Enhanced Topic Modeling using Encoder-Decoder-based LLMs (DeTiME). DeTiME leverages encoder-Decoder-based LLMs to produce highly clusterable embeddings that could generate topics that exhibit both superior clusterability and enhanced semantic coherence compared to existing methods. Additionally, by exploiting the power of diffusion, our framework also provides the capability to generate content relevant to the identified topics. This dual functionality allows users to efficiently produce highly clustered topics and related content simultaneously. DeTiME's potential extends to generating clustered embeddings as well. Notably, our proposed framework proves to be efficient to train and exhibits high adaptability, demonstrating its potential for a wide array of applications.
## 1 Introduction
Topic modeling methods, such as (Blei et al., 2003), are unsupervised approaches for discovering latent structures in documents and achieving great performance (Blei et al., 2009). These methods take a list of documents as input, generate a defined number of topics, and can further produce keywords and related documents for each topic. In recent years, topic modeling methods have been widely used in various fields such as finance (Aziz et al., 2019), healthcare (Bhattacharya et al., 2017), education (Zhao et al., 2021, 2021), marketing (Reisenbichler, 2019), and social science (Roberts et al., 2013). With the development of Variational Autoencoder (VAE) (Kingma and Welling, 2013), the Neural Topic Model (Miao et al., 2018; Dieng et al., 2020) has attracted attention due to its better flexibility and scalability. The topic is generated through the reconstruction of the bag-of-word representations of the document (Miao et al., 2018).
The progress of large language model (LLM) (Vaswani et al., 2017; Radford et al., 2019) brings significant advancements in the NLP community. Sentence embedding is the process of converting sentences into numerical vectors in a high-dimensional space. LLM-based sentence embedding has been applied to topic modeling by using it to reconstruct bag of word representation of documents (Bianchi et al., 2021), to cluster document directly (Grootendorst, 2022) or both (Han et al., 2023). Sentence embedding-based models have been shown to achieve high performance regarding coherence and diversity (Zhang et al., 2022). Embeddings with higher clusterability are likely to perform well in classification tasks. However, _sentence embeddings are in general not perform well in clustering_. The best performed sentence embedding has an average v-measure (Rosenberg and Hirschberg, 2007) below 0.44 even if it uses kmeans and set the cluster equal to the number of different labels (Muennighoff et al., 2022). This means that their clusterability can be even lower when the latent dimension increases. Lastly, language modeling is a powerful generative tool (Brown et al., 2020). _While topic modeling has been utilized for generation (Wang et al., 2019), its integration with Large Language Models (LLMs) for generation remains less explored._
In this study, we introduce DeTiME, an innovative topic modeling framework that exploits the capabilities of the encoder-decoder Large Language Model (LLM). Specifically, we design a task to train an adapted encoder-decoder LLM, as depicted in Figure 2. We generate an embedding using this architecture, which exhibits high clusterability
compared to established models as illustrated in Figure 1. Furthermore, we design a topic modeling approach using the last hidden layer of our modified LLM encoder as input. This technique notably outperforms standard methods across all pertinent metrics. Additionally, we leverage diffusion and our proposed framework to generate relevant documents. Our major contributions are as follows:
1. We modify the encoder-decoder LLM and design a task to create an embedding ideal for topic modeling, even using a smaller model.
2. The fabricated embeddings outperform existing methods in terms of clusterability
3. We devise a topic modeling method based on the embedding that achieves superior results in both clusterability and semantic coherence, compared to the relevant topic modeling methods.
4. We demonstrate the ability to produce relevant content based on this model by harnessing diffusion, indicating potential practical applications.
5. Our framework exhibits flexibility as it can be seamlessly adapted to various encoder-decoder LLMs and neural topic modeling methods, broadening its applicability in the field.
By documenting detailed methodology and empirical results, we aim to inspire further research in this domain, and provide a strong foundation for future work on topic modeling and LLMs.
## 2 Related work
### Language Modeling
Recent transformer-based models, such as BERT Devlin et al. (2019), GPT-3 Brown et al. (2020), and GPT-4 OpenAI (2023) have achieved unmatched performance in numerous language tasks. Utilizing self-attention mechanisms, they capture context from both past and future tokens, generating coherent text. These rapidly evolving Large Language Models (LLMs) carry significant implications for diverse sectors and society. T5 Raffel et al. (2020) treats every NLP task as a text-to-text problem, using a standard format with input and output as text sequences. It employs an encoder-decoder framework and is pretrained on extensive datasets. FlanT5 Chung et al. (2022) enhances T5 by finetuning instructions across multiple datasets. Compared to encoder only (Bert) or decoder only model(GPT), encoder-decoder models such as FlanT5 allow the encoder to extract vital input information for output generation Rothe et al. (2020).
Prefix tuning Li and Liang (2021) modifies a fixed-length "prefix" of parameters prepended to the input during fine-tuning, significantly reducing the number of parameters required. This efficiency doesn't compromise performance; it often matches or surpasses traditional fine-tuning methods across various NLP tasks. The technique enables the model to learn task-specific initial hidden states for LLM, steering the generation process appropriately without hindering the model's generality due to the fine-tuning task.
### Sentence Embedding
Contextual embeddings aim to encode sentence semantics in a machine-readable format. Word embeddings like Word2Vec Mikolov et al. (2013)
Figure 1: A summary of a few of our findings: (1) Our embeddings outperform the best clusterable methods (selected from Muennighoff et al. (2022)). (2) The same framework with a slightly different finetuned task(DeTiME Training) does not perform well. (3) When compressed, our embeddings excel in higher dimensions, making them ideal for topic modeling. Detailed settings is in Appendix E.
and GloVe (Pennington et al., 2014) capture word-level meaning but struggle with larger text structures. Advanced models like the Universal Sentence Encoder (USE) (Cer et al., 2018) and InferSent (Conneau et al., 2018) were developed to better capture sentence nuances. USE employs transformer or Deep Averages Networks, while InferSent uses a bidirectional LSTM with max pooling. Sentence-BERT (Reimers and Gurevych, 2019) utilizes siamese BERT-Networks. However, _these models often struggle to capture context-dependent sentence meanings, resulting in lower clusterability_. This might be due to their reliance on contrastive loss on sentence pairs, which might focus on specific similarities rather than the overall semantic relationship.
### Topic Modeling
The Neural Topic Model (NTM) (Miao et al., 2016) employs variational inference but struggles with semantics and interpretability, while the Embedding Topic Model (ETM) (Dieng et al., 2019) uses pre-trained word embeddings to capture semantics. However, _NTMs rely on bag-of-word representations, limiting their ability to capture document semantics effectively_.
The Contextual Topic Model (CTM) (Bianchi et al., 2021) uses sentence embeddings and bag of words as input to reconstruct bag of words embeddings, while BERTopic (Grootendorst, 2022) combines sentence embedding and clustering techniques like UMAP and HDBSCAN for topic generation. Other models (Han et al., 2023) use both clustering techniques and reconstruction to create high-quality topics. Nonetheless, _contextual embedding based topic modeling methods lack a reconstruction process or only reconstruct bag of words representations_. These disadvantages limit its ability to generate relevant content. We examined other related works in Appendix H
### Diffusion
Drawing inspiration from non-equilibrium thermodynamics, the diffusion model adds noise to the data distribution in a forward process and learns a reverse denoising process (Sohl-Dickstein et al., 2015). (Song and Ermon, 2020) further applied this for high-quality image generation, comparable to leading likelihood-based models and GANs (Goodfellow et al., 2014), but with more stable training and generation due to iterative diffusion.
Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020) have garnered attention for generating high-quality samples sans adversarial training, sometimes surpassing other generative models. Speedier sampling was achieved in (Song et al., 2022) with denoising diffusion implicit models. The success of image generation models like CLIP (Radford et al., 2021), Stable Diffusion (Rombach et al., 2022), and Midjourney (Oppelaender, 2022) leveraged such diffusion-based methods. Their use extends to NLP tasks including natural language generation, sentiment analysis, and machine translation (Zou et al., 2023). It has also demonstrated that the diffusion model is able to generate high-quality text from noise samples in the continuous embedding space(Li et al., 2022; Gong et al., 2023; Gao et al., 2022; Lin et al., 2023). Yet, _diffusion hasn't been used for topic modeling as a content generation tool._
## 3 Methods
The goal of this paper is to create a framework that leverages encoder-decoder LLM to generate topics that is highly clusterable and able to generate topic related sentence. To achieve that, we need to create an embedding that could be used to generate text as well as be highly clusterable. Thus, we designed a specific task and dataset for our use case. We add CNN encoder and decoder on top of FlanT5 to generate that can easily fit into neural topic modeling for further dimension reduction. We further design a variational autoencoder to take the output of the CNN encoder as input and generate topics and reconstruct embeddings. This is achieved by \(2\) autoencoders. The first autoencoder is a variational autoencoder which generates topic distribution and reconstructs bag of words representations. To reconstruct the embeddings from \(enc_{2}\). We use another autoencoder to generate embeddings from topic distribution and reconstructed bag of words. The detailed structure and name are in Figure 3. We do not train or finetune FlanT5 and CNN during the topic modeling process which makes our methods cost-effective. We then leverage diffusion to generate high quality text that represents the document.
This section contains four components. First, we present the dataset and define the finetuned task. Second, we elaborate on our modified FlanT5 and the fine-tuning strategy. The third component introduces a variational autoencoder designed for topic
modeling and generation. Finally, we utilize diffusion to generate content relevant to the derived topics.
### Tasks and Finetune Dataset
To achieve effective topic modeling methods, we aim to generate embeddings that are highly clusterable and capable of generating document-relevant topics. We utilize a paraphrase dataset in which the input and output sentences are equivalent in meaning. Such equivalent sentences should belong to similar topics, thereby aiding us in generating similar sentences. In contrast to methods that use the same sentence for both input and output, our task assists the language model in learning the semantic meaning of sentences rather than simply memorizing the embeddings. As illustrated in Figure. 1, the DeTiME-training model represents the model generated by the task where the input and output are identical contents. As you can see, the clusterability of this method is substantially lower than ours. Thus, **rephrase task is effective to generate clusterable contents.** Moreover, the paraphrase task is not sufficiently easy (Vahtola et al., 2022) and is less likely to impair the utility of the language model. We concatenate similar sentence pairs from the STS benchmark, a dataset for comparing meaning representations, to form our dataset (Agirre et al., 2012, 2013, 2014, 2015, 2016). We select pairs with scores above 80 percent of the maximum, yielding a total of 22,278 pairs. This dataset addresses the limitations of existing paraphrase datasets, which are either domain-specific (Dolan and Brockett, 2005; Gohsen et al., 2023), or generated by potentially unreliable language models (Shumailov et al., 2023). Our composite dataset is diverse, including data from news, captions, and forums.
### Modified Encoder Decoder LLM
The motivation for this nested autoencoder structure stems from the limitation of existing sentence embeddings, which struggle to reconstruct sentences as they are primarily trained using contrastive learning (Reimers and Gurevych, 2019) rather than reconstruction. In other words, similar sentences are distributed close to each other in the learned embedded vector space. We choose an encoder-decoder model due to its ability to preserve essential information through encoding process. Specifically, encoder-decoder approaches, like T5, encapsulate vital information in the encoder's final hidden state. We can compress this final hidden state to create our embeddings. FlanT5 (Chung
Figure 2: DeTiME framework. We have 4 encoders and 4 decoders. \(enc_{1}\) and \(enc_{2}\) are compressing the input document to the lower dimension. \(enc_{3}\) is to construct topic distribution. \(dec_{1}\) is to reconstruct bag of words representations. \(enc_{4}\) is to extract the hidden dimension from the reconstructed bag of words. \(dec_{2}\), \(dec_{3}\) and \(dec_{4}\) is to reconstruct/rephrase the input document. In our method, we name the number of dimensions for embedding \(D_{token}\) and maximum sequence length \(N_{1}\). The dimension of the compressed vector is \(D_{embed}\). The number of topics equals \(T\). The dimension of vocabulary is \(N_{BoW}\). The dimension of topic embeddings is \(D_{topic}\).
et al., 2022) outperforms T5 in standard tasks by leveraging a Wei et al. (2023) and instruction fine-tuning Chung et al. (2022). We believe that the final hidden layer of a fine-tuned FlanT5 can represent the input information.
The purpose of CNN is to compress output from FlanT5 encoder to create embeddings for topic modeling as illustrated in Append F. Using the encoder output as an embedding leads to excessive length and dimensionality, causing sparsely distributed vectors, poor clusterability, and issues in downstream tasks like topic modeling. To address this, we incorporate a variational autoencoder to reconstruct FlanT5's final encoder hidden layer. We trained MLP, RNN, and CNN-based autoencoders, but MLP introduced too many parameters and underperformed. LSTM, bidirectional LSTM, and GRU Sherstinsky (2020), with varied attention schemes Xia et al. (2021), mostly yielded empty results or identical output embeddings, likely due to the FlanT5 encoder's non-sequential information processing. Applying a 1D convolution on the sequence dimension allowed for dimensionality reduction, with nearby embeddings showing high correlation, suggesting possible compression using a convolutional network on the sequence side. **We can adapt the same framework to other existing encoder decoder LLM** such as BART Lewis et al. (2019).
We utilize Parameter Efficient Fine-tuning (PEFT) because it reduces the number of parameters to be fine-tuned, making the process more efficient and often yielding comparable or even superior performance to traditional fine-tuning Liu et al. (2022). We adopt prefix fine-tuning Li and Liang (2021) in our work. During fine-tuning, we train both prefix fine-tuning related parameters and the CNN-based autoencoder for the paraphrase tasks. We then use the output from the CNN-based autoencoder's encoder for downstream topic modeling tasks. In our experiment, we use a relatively small model FlanT5 base (248M parameters) to illustrate the effectiveness of our framework.
### VAE structure for topic modeling
Our VAE serves two purposes. First, it generates a highly clusterable topic distribution. Second, it reconstructs the output of the CNN encoder \(e\), enabling it to be input into the decoder of the CNN autoencoder. Prior research Srivastava and Sutton (2017) demonstrated that a Variational Autoencoder (VAE) aiming to reconstruct a bag of words produces high-quality topic embeddings. Our VAE has two encoders and two decoders. \(enc_{3}\) is used to encode the output of the CNN encoder (\(e\)) into a topic distribution \(t\). \(enc_{3}\) has two parts: the first is a multi-layer perceptron (MLP) that maps the input to a lower dimension, and the second consists of two MLPs to generate the mean and the log of the standard deviation vector of size T: \(\mu,log(\sigma)=enc_{3}(e)\). We sample a latent representation using the mean and standard deviation: \(\eta\sim N(\mu,\sigma)\), and apply a softmax function to generate the topic distribution \(t=softmax(\eta)\).
The \(dec_{3}\) is used to decode the topic distribution \(t\) into a bag-of-words representation \(X^{\prime}\). Existing research Dieng et al. (2020) shows that topic-word similarity matrix offers better quality in reconstructions. The decoder consists of two matrices. We use a vocabulary embedding matrix \(e_{V}\in R^{D_{Topic}\times N_{BoW}}\), where \(D_{Topic}\) represents the dimension of word embeddings and \(N_{BoW}\) represents the dimension of the vocabulary. The decoder \(\phi\) learns a topic embedding matrix \(e_{T}\in R^{T\times D_{Topic}}\). The topic-to-word distribution is denoted as
\[E=softmax(e_{T}e_{V}^{T}) \tag{1}\]
\[X^{{}^{\prime}}=t\times E \tag{2}\]
Here, \(X^{\prime}\) represents the reconstructed bag of words. The product of the generated topic distribution and this matrix \(E\) yields a bag-of-words reconstruction.
The \(enc_{4}\) is a neural network that encodes the generated bag of words back to a vector \(t^{\prime}\), having the same dimension as the topic embeddings dimension: \(t^{\prime}=enc_{4}(X)\). We add residual connections between two compressed vectors and use a neural network to generate input embeddings:
\[e^{\prime}=dec_{4}(t+t^{\prime}) \tag{3}\]
It's necessary to reconstruct input embeddings (\(e\)) to be fed into the decoder to reconstruct the rephrased input sentence. We believe that the reconstructed bag of words can enhance the ability of sentence reconstruction. The residual connection helps the model leverage both the reconstructed bag of words and topic distribution to reconstruct input embeddings. This simplifies our input embedding reconstruction and ensures that the topic embeddings can capture semantic information from the output of the CNN decoder \(e\). Our VAE leverages
only bag of words representations and contextual embeddings. **Our VAE can also take other contextual embeddings as input.** Our loss function has three components: the reconstruction loss for the bag of words, the reconstruction loss for input embeddings using mean square error, and the KL Divergence for the normal distribution. The loss for a single input \(e\) is as follows:
\[L=-Xlog(X^{\prime})+(e-e^{\prime})^{2}+KL(t|N(\mu,\sigma)) \tag{4}\]
### Diffusion for content generation
Our pretrained model can compress the text and embed them in a low-dimensional space while keeping the semantic information and high-quality clustering. It is natural to wonder if this pretrained model can be used to generate topic-guided text. One of the challenges is that the decompression process in the pretrained model may induce noise, loss some information and thus the quality of the generated text will be impacted. Specifically, the latent dimension (i.e. the vector space of \(z^{\prime\prime}\) before the \(dec_{2}\) in Figure 3) is several orders of magnitude lower than the dimension of embedding vector \(e^{\prime}\) in DeTiME. When we reconstruct text from latent vectors, it may hugely deviate from any reasonable input for FlanT5 decoder \(dec_{3}\).
To overcome this, we have leveraged the diffusion models to denoise the generated text embedding from the topic modeling with structure as shown in Figure 3. It has demonstrated that the diffusion model is able to generate high-quality text from noise samples in the continuous embedding space (Li et al., 2022; Gong et al., 2023; Gao et al., 2022; Lin et al., 2023b). In the training component, we employ a DDPM-scheduled Autoencoder with residual connections as the diffusor (Ho et al., 2020) in the text embedding continuous space (i.e. the space after \(enc_{2}\) in Figure 3) using the embedded vectors obtained from the pretrained model. Specifically, during the forward process, the Gaussian noises is gradually added to \(X_{0}\) according to a variance schedule \(\beta_{1},...,\beta_{T}\), the noisy sample at time step \(t\) is expressed as
\[q(X_{t}|X_{0})=N\left(X_{t};\sqrt{\bar{\alpha}_{t}}X_{0},\sqrt{1-\bar{\alpha}_ {t}}I\right) \tag{5}\]
where \(\bar{\alpha}_{t}=\Pi_{i=1}^{t}\alpha_{i}\) with \(\alpha_{i}=1-\beta_{i}\). Our diffusor is trained to minimize the squared error between the predicted and true noise. The predicted noise \(z(X_{t},t)\) at time step \(t\) is obtained by the diffusor as following:
\[z^{1}=X_{t}+Sinusoid(t)\] \[z^{2}=FC_{1}^{COMP}(z^{1})\] \[z^{3}=FC_{2}^{COMP}(z^{2})\] \[z^{4}=FC_{3}(z^{3})\] \[z^{5}=FC_{4}^{RECONST}(z^{4}+z^{3})\] \[z(X_{t},t)=FC_{5}^{RECONST}(z^{5}+z^{2}). \tag{6}\]
This diffusor consists of \(2\) fully connected layers \(FC^{COMP}\) to compress the input and \(2\) fully-connected layers \(FC^{RECONST}\) to reconstruct. We also add residual connections between compress and reconstruct layers. Similar to UNet (Ronneberger et al., 2015), the Sinusoidal positional embeddings \(Sinusoid(t)\) is used to encode time.
Then, in generating component, this trained diffusor is used to denoise the embedding after the \(dec_{2}\) in Figure 3. The intuition behind this denoising process is as follows. The forward process of diffusion itself is a process that converts the unknown and complex data distribution into one (normal distribution in our case) that is easy to sample from. By adding back the learned noise with small iterative steps, we are able to take a sample from the noise subspace (support a simple distribution) to the data subspace (support the unknown data distribution). Similarly, for an embedding obtained from the topic modeling that deviates from the embedding distribution corresponding to the unknown input data distribution, we should also be able to take this embedding back to the area supporting the original embedding distribution.
## 4 Experimental Results
### Topic Modeling
**Dataset** Our experiments are conducted on labeled benchmark datasets for topic modeling: **AgNews**(Zhang et al., 2016), **20Newsgroups**(Lang, 1995) and **bbc-news**(Greene and Cunningham, 2006). The average document length varies from 38 to 425. We use the text as it is for the contextual embedding generation. To get bag of words, we use the word tokenizer from nltk to tokenize, remove digits and words with lengths less than 3, and remove stop words and words that appear less than 10 time. Additional details on the dataset and places to download processed data are available in Appendix B.
**Baseline Methods** We compare with common NTM methods and contextual embedding based methods. We explain the reasons for choosing these methods in Appendix D. These methods include: **NVDM**Wang and YANG (2020), VAE architecture for topic modeling with the encoder is implemented by multilayer perceptron, the variational distribution is a Gaussian distribution; **GSM**Miao et al. (2018), an NTM replaces the Dirichlet-Multinomial parameterization in LDA with Gaussian Softmax; **ETM**Dieng et al. (2020), an NTM model which incorporates word embedding to model topics; **vONT**Xu et al. (2023), a vMF based NTM where they set the radius of vMF distribution equal to 10; **CTM**Bianchi et al. (2021) trains a variational autoencoder to reconstruct bag of words representation using both contextual embeddings as well as bag of words representation. **ZTM**Bianchi et al. (2021) is similar to CTM but only use contextual embeddings; **DeTiME resi** is the DeTiME model with out residual connections. The reconstruction of embedding is hugely dependent on the reconstructed bag of words; **DeTiME bow** is the DeTiME model without reconstruction of bag of words and \(t^{\prime}\) is used to represent topics.
**Settings** In our experiment setting, The hyperparameter setting used for all baseline models and DeTiME is the same as Burkhardt and Kramer (2019). For neural topic modeling and our encoder and decoder, we use a fully-connected neural network with two hidden layers of half of the hidden dimension and one quarter of hidden dimension and GELU Hendrycks and Gimpel (2023) as the activation function followed by a dropout layer. We use Adam Kingma and Ba (2017) as the optimizer with learning rate 0.001 and use batch size 256. We use Smith and Topin (2018) as scheduler and use learning rate 0.001. We use 0.0005 learning rate for the DeTiME bow because the loss may
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline Methods & Purity & NMI & Km-Purity & Km-NMI & diversity & \(C_{v}\) \\ \hline ETM & \(\mathbf{0.4677\pm 0.04}\) & \(0.2502\pm 0.07\) & \(0.4063\pm 0.07\) & \(0.2400\pm 0.08\) & \(0.4177\pm 0.05\) & \(0.5594\pm 0.01\) \\ GSM & \(0.2701\pm 0.02\) & \(0.0687\pm 0.03\) & \(0.3167\pm 0.03\) & \(0.1312\pm 0.03\) & \(0.2991\pm 0.01\) & \(0.3495\pm 0.01\) \\ VONT & \(0.3727\pm 0.02\) & \(0.1604\pm 0.03\) & \(0.4941\pm 0.05\) & \(0.2688\pm 0.05\) & \(0.5937\pm 0.06\) & \(0.5151\pm 0.01\) \\ NVMD & \(0.4254\pm 0.04\) & \(0.2373\pm 0.07\) & \(0.3768\pm 0.07\) & \(0.2138\pm 0.05\) & \(0.2633\pm 0.05\) & \(0.4715\pm 0.02\) \\ ZTM & \(0.3637\pm 0.003\) & \(0.1019\pm 0.003\) & \(0.3479\pm 0.003\) & \(0.1087\pm 0.001\) & \(0.6796\pm 0.03\) & \(0.6705\pm 0.02\) \\ CTM & \(0.4307\pm 0.03\) & \(0.1641\pm 0.04\) & \(0.4191\pm 0.04\) & \(0.1819\pm 0.05\) & \(\mathbf{0.7198\pm 0.01}\) & \(0.6966\pm 0.02\) \\ \hline DeTiME bow & \(0.3146\pm 0.004\) & \(0.1300\pm 0.009\) & \(0.5007\pm 0.03\) & \(0.2591\pm 0.02\) & \(0.5362\pm 0.04\) & \(0.7186\pm 0.004\) \\ DeTME resi & \(0.3239\pm 0.01\) & \(0.1098\pm 0.01\) & \(0.4230\pm 0.01\) & \(0.1741\pm 0.02\) & \(0.5802\pm 0.01\) & \(\mathbf{0.7435\pm 0.002}\) \\ \hline DeTME & \(0.4577\pm 0.03\) & \(\mathbf{0.2983\pm 0.03}\) & \(\mathbf{0.5929\pm 0.04}\) & \(\mathbf{0.3463\pm 0.05}\) & \(0.6913\pm 0.02\) & \(0.7203\pm 0.01\) \\ \hline \end{tabular}
\end{table}
Table 1: The main results for all clusterability metrics, diversity, and coherence (\(C_{v}\)). The number of topics is \(20\). The best and second-best scores of each dataset are highlighted in boldface and with an underline, respectively. The result represents the average value obtained from three datasets, where each dataset was processed 10 times to compute the mean and standard deviation.
Figure 3: The diffusion framework based on the main framework in Figure 2. In the training component, a DDPM-scheduled Autoencoder with residual connections diffusor is trained using the embedding vectors obtained from the \(enc_{2}\). In generating part, the trained diffusor is used to denoise the embedding vectors transformed from the topic vectors hidden space before the text generation. It’s important to note that we normalized the hidden space before passing it to the \(dec_{2}\).
overflow when the learning rate is 0.001. We use word embeddings Mikolov et al. (2013) to represent word embeddings on the dataset for vONT, ETM, and DeTiME and keep it trainable for DeTiME. For vONT, we set the radius of the vMF distribution equal to 10. For CTM and ZTM, we use all-mpnet-base-v2 as our embeddings since it performs the best in clusterability in Figure 1. We use the same way to find key words as suggested by CTM. Our code is written in PyTorch and all the models are trained on AWS using ml.p2.8xlarge (NVIDIA K80). Detailed code implementations for methods and metrics are in Appendix C
**Evaluation Metrics** We measure the topic clusterability, diversity, and semantic coherence of the model. To measure clusterability, we assign every document the topic with the highest probability as the clustering label and compute **Top-Purity** and Normalized Mutual Information(**Top-NMI**) as metrics Nguyen et al. (2018) to evaluate alignment. Both of them range from 0 to 1. A higher score reflects better clustering performance. We further apply the KMeans algorithm to topic proportions z and use the clustered documents to report purity(**Km-Purity**) and NMI **Km-NMI** Zhao et al. (2020). We set the number of clusters to be the number of topics for the KMeans algorithm. Topic coherence(\(C_{v}\)) uses the one-set segmentation to count word co-occurrences and the cosine similarity as the similarity measure. Compared to other metrics, \(C_{v}\) is able to capture semantic coherence. We only benchmark \(C_{v}\) because most of coherence metrics are similar to each other Lim and Lauw (2023). For **diversity**, we measure the uniqueness of the words across all topics divided by total keywords. For each topic, we set the number of keywords equal to 25. Furthermore, we run all these metrics 10 times. We report averaged mean and standard deviation. We also include evaluations on Perplexity in Appendix G
**Results** The experiment shows that DeTiME outperforms all other methods in NMI, Km-NMI, and Km-Purity, which underscores its ability to **generate highly clusterable topic distributions**. Furthermore, DeTiME has the second highest scores in coherence(The highest score is also a DeTiME variation), **validating the exceptional semantic coherence of topics generated from our methods**. Observations reveal that the CTM and DeTiME's high diversity scores highlight the benefit of incorporating bag of words inputs, enhancing diversity performance. By eliminating the bag of words reconstruction components, we found a decrease in diversity and clusterability, indicating the importance of this component in boosting purity and NMI. When we removed the residual connection, we observed an improvement in coherence but a decrease in clusterability. This trade-off suggests that the absence of a residual connection may prevent the topic distribution from effectively capturing the information from embeddings, thus reducing clusterability. DeTiME resi performs better than ZTM in clusterability related metrics, which confirms that **our embedding is more clusterable than existing sentence embeddings**.
### Diffusion for content generation
To evaluate how the diffusor improves the quality of the generated text, we compared the generated text before and after the diffusion. Specifically, we utilized the Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), and Dale-Chall Readability Score (DCRS) to measure the readability of the generated text before and after the diffusion Goldsack et al. (2022). In general, a higher FRE (lower FKGL and DCRS) indicates that the text is easier to read. In this experiment, we generated \(1000\) random topic vectors and passed them to \(dec_{2}\), then the denoising process is followed to generate text. The main results are shown in Table 2. As observed, after the denoising process, the FRE increases significantly across all datasets, which indicates that diffusion makes the content easier to understand. Meanwhile, the value of FKGL and DCRS decreases from \(T=500\) to \(T=1000\). One of the reasons for the low score of FKGL and DCRS at \(T=0\) is that some of the samples contain only repeated words, making them easy to understand. Overall, after more steps in diffusion, the generated text becomes more readable for a lower grade. This experiment demonstrates that **our generated content achieves higher readability, indicating the potential of our framework to generate topic-relevant content.**
**Human Evaluation** To ensure the generated content is valuable to humans, a human evaluation was conducted with regard to the text generated after diffusion, as seen in Figure 3. In this evaluation, we generated 400 pieces of text. Each piece was evaluated for fluency, grammar, and redundancy by three different human annotators, as suggested by Celikyilmaz et al. (2021). We com
pared our results with a baseline through t-tests and found that the generated text exhibited fluency and grammatical correctness with statistical significance (\(p<1e-14\)). This demonstrates that **our generated contents are of high quality**. More details about the survey setup, results, and examples of generated text can be found in Appendix A.
## 5 Conclusion and Future Work
We have developed a framework DeTiME for generating highly clusterable embeddings, leveraging the strengths of paraphrase tasks, FlanT5, and CNN. In addition to this, we introduced a variational autoencoder structure capable of reconstructing embeddings while simultaneously producing highly coherent, diverse, and clusterable topics. Our design incorporates a diffusion process for generating content that provides representative depictions of various topics. The flexibility of our embedding generation structure allows for easy adaptation to other encoder-decoder language model architectures, eliminating the need for retraining the entire framework, thereby ensuring cost-effectiveness. Additionally, our variational autoencoder structure is versatile, and capable of being applied to any contextual embeddings. Other methods could further improve with larger LLM.
Moving forward, we aim to further improve the performance of our embeddings by training on larger models such as Flan-T5-XL. Benchmarking other Pre-training with Fine-Tuning (PEFT) methods, such as LORA, may also enhance our system's performance. Given the high clusterability of our embeddings, we plan to extend our work to semi-supervised document classification (Xu et al., 2023, 2023, 2023). This framework could be applied to identify the most representative documents within extensive document collections. This functionality could make our model suitable for generation topic guided generation (Xu et al., 2023) Finally, we envisage utilizing this framework to generate superior summarizations for large documents. This could be achieved by training a decoder for summarization, generating a summarization for each topic, and subsequently concatenating them. This framework can also be extended to hierarchical topic modeling (Chen et al., 2023; Shahid et al., 2023; Eshima and Mochihashi, 2023), mitigate data sparsity for short text topic modeling (Wu et al., 2022), generate topic-relevant and coherent long texts (Yang et al., 2022), and construct a network of topics together with meaningful relationships between them (Byrne et al., 2022).
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline Datasets & \multicolumn{3}{c|}{**20Newsgroups**} & \multicolumn{3}{c|}{**blue-news**} & \multicolumn{3}{c|}{**ABNews**} \\ \hline Time point & \(T=0\) & \(T=500\) & \(T=1000\) & \(T=0\) & \(T=500\) & \(T=1000\) & \(T=500\) & \(T=1000\) \\ \hline FRE & -25.9600 & 51.1390 & 54.2467 & 6.8600 & 36.8589 & 60.9407 & 36.6200 & 64.1077 & 63.1074 \\ \hline FKGL & 53.2000 & 10.7017 & 9.8955 & 30.3000 & 12.6860 & 9.1856 & 8.4000 & 9.0876 & 8.6781 \\ \hline DCRS & 7.3500 & 8.4758 & 7.8822 & 4.0100 & 8.3304 & 8.2010 & 66.8500 & 8.1890 & 8.1059 \\ \hline \end{tabular}
\end{table}
Table 2: The average readability scores at different time steps during the denoising process. A general increase in readability is observed.
## 6 Limitations
While our study has made significant strides in its domain, we acknowledge certain limitations that present themselves as opportunities for future research and optimization. Firstly, we have not yet benchmarked our model with other encoder-decoder frameworks such as BART, or with alternative PEFT methods like LORA, leaving room for potential performance enhancement. We believe that the diversity could further improve with diversity aware coherence loss [10]. Secondly, our model has yet to reach the full potential of FlanT5 due to current model size constraints. This implies that scaling up the model could further improve its performance. Thirdly, we have not fine-tuned the number of dimensions for the CNN encoder output or explored structures beyond basic CNN, LSTM, and MLP, both of which could enhance our current performance. Fourthly, We noted a relatively high variance in DeTiME's performance, we interpret this as a consequence of the complicated autoencoder structure. Lastly, we have not benchmarked all coherence metrics. Though many metrics have similarities and some may not consider semantic word meaning, a more extensive benchmarking could provide a richer evaluation of our approach. Despite these limitations, each of these points serves as a promising direction for future research, thereby helping to further elevate our model's capabilities.
|
2309.01380 | Understanding Video Scenes through Text: Insights from Text-based Video
Question Answering | Researchers have extensively studied the field of vision and language,
discovering that both visual and textual content is crucial for understanding
scenes effectively. Particularly, comprehending text in videos holds great
significance, requiring both scene text understanding and temporal reasoning.
This paper focuses on exploring two recently introduced datasets, NewsVideoQA
and M4-ViteVQA, which aim to address video question answering based on textual
content. The NewsVideoQA dataset contains question-answer pairs related to the
text in news videos, while M4-ViteVQA comprises question-answer pairs from
diverse categories like vlogging, traveling, and shopping. We provide an
analysis of the formulation of these datasets on various levels, exploring the
degree of visual understanding and multi-frame comprehension required for
answering the questions. Additionally, the study includes experimentation with
BERT-QA, a text-only model, which demonstrates comparable performance to the
original methods on both datasets, indicating the shortcomings in the
formulation of these datasets. Furthermore, we also look into the domain
adaptation aspect by examining the effectiveness of training on M4-ViteVQA and
evaluating on NewsVideoQA and vice-versa, thereby shedding light on the
challenges and potential benefits of out-of-domain training. | Soumya Jahagirdar, Minesh Mathew, Dimosthenis Karatzas, C. V. Jawahar | 2023-09-04T06:11:00Z | http://arxiv.org/abs/2309.01380v2 | # Understanding Video Scenes through Text: Insights from Text-based Video Question Answering
###### Abstract
Researchers have extensively studied the field of vision and language, discovering that both visual and textual content is crucial for understanding scenes effectively. Particularly, comprehending text in videos holds great significance, requiring both scene text understanding and temporal reasoning. This paper focuses on exploring two recently introduced datasets, NewsVideoQA and M4-ViteVQA, which aim to address video question answering based on textual content. The NewsVideoQA dataset contains question-answer pairs related to the text in news videos, while M4-ViteVQA comprises question-answer pairs from diverse categories like vlogging, traveling, and shopping. We provide an analysis of the formulation of these datasets on various levels, exploring the degree of visual understanding and multi-frame comprehension required for answering the questions. Additionally, the study includes experimentation with BERT-QA, a text-only model, which demonstrates comparable performance to the original methods on both datasets, indicating the shortcomings in the formulation of these datasets. Furthermore, we also look into the domain adaptation aspect by examining the effectiveness of training on M4-ViteVQA and evaluating on NewsVideoQA and vice-versa, thereby shedding light on the challenges and potential benefits of out-of-domain training.
## 1 Introduction
Multimodal understanding, specifically, VideoQA is a challenging yet crucial problem that involves multimodal and temporal reasoning. Researchers have developed various datasets and methods to facilitate research in this field [17, 19, 20, 13, 18, 10, 21]. Xu et al. [20] and Yu et al. [21] propose datasets that contain questions about the events happening in the video but disregard the text. However, works such as Lei et al. [10] and Tapaswi et al. [17] have introduced datasets that use both visual and subtitle information to understand the story. However, these existing datasets lacked the ability to handle questions that require reading text in videos. As textual content in man-made environments carries significant semantic information, the need for visual question-answering datasets that involve reading text became evident. These text-based systems have great potential for real-life scenarios, particularly for visually-impaired users and the development of assistive devices. While previous works have explored single-image scene-text and document images [16, 11, 12, 1, 3, 6] there has been limited exploration of works that require extracting information from text present in videos. Recently, Hegde et al. [4] shed light on the bias aspects of TextVQA datasets. Recently, there have been multiple datasets [18, 7, 22] that deal with a new line of research wherein the datasets require models to read and understand the text present in the videos to answer the questions. Jahagirdar et al. [7] proposed a dataset: NewsVideoQA that contains question-answer pairs framed on news videos from multiple news channels, and these questions are formulated such that answering the questions requires an understanding of the embedded text i.e. text occurring in the news videos. Similarly, Zhao et al. [22] introduced a dataset that contains videos from multiple cat
Figure 1: Example illustrating two major concerns of existing text-based VideoQA datasets [7, 22]. Both examples showcase that only **textual information** from a **single frame** is sufficient to obtain answers to the questions.
egories such as shopping, traveling, etc, and requires both temporal reasoning and textual reasoning to answer questions. Tom et al. [18], proposed a dataset for the task of video question answering in the context of driver assistance on road videos. Additionally, a competition 1 centered on the task of answering questions based on video content using text was introduced.
Footnote 1: [https://tianchi.aliyun.com/competition/entrance/532050/information](https://tianchi.aliyun.com/competition/entrance/532050/information)
In this work, we explore the task of text-based video question answering. Firstly, we study and analyze two recently introduced datasets, namely NewsVideoQA [7] and M4-ViteVQA [22], which includes various types of videos, such as news videos, vlogging, traveling and shopping. We conduct an exploratory analysis to examine the level of visual understanding and multi-frame comprehension required for answering the questions in both datasets. Additionally, we conduct experiments using BERT-QA [2], a text-only model, and demonstrate its effectiveness by achieving comparable results to the original methods that consider both visual and textual information. We also analyze domain adaptation by training on M4-ViteVQA and testing on NewsVideoQA, and vice-versa, revealing insights into cross-domain understanding challenges.
## 2 Benchmarking and Experiments
In this section, we present details of the exploratory analysis and the experiments we conduct. BERT-QA is a transformer-based encoder-only model pre-trained on a large corpus and further finetuned on SQuAD dataset [15] for question answering (Extractive QA). Extractive QA is the task of extracting a short snippet from the document/context on which the question is asked. The answer'span' is determined by its start and end tokens. It is selected for its effective extractive QA performance, implementation ease, and finetuning, despite limitations like no answer generation or handling yes/no questions. Its ability in extracting answers from textual content makes it a suitable choice for tasks where answers are primarily found in the text of the video. To convert both M4-ViteVQA and NewsVideoQA datasets in SQuAD format, we find the first substring of the answer in the context, which is an approximation of the answer span as followed in [12].
### Exploratory Analysis
For exploratory analysis, we randomly sample 100 QA pairs from both M4-ViteVQA and NewsVideoQA. For each QA pair, we check the following aspects: i) if the question can be answered by a single frame or need multi-frame information, ii) if the question needs visual information and/or textual information to obtain the answer, iii) if the frame which is essential to obtain the answer, is crowded with text (approximately more than 15 OCR tokens). From Table. 1, it can be seen that for both datasets, information from a single frame is sufficient to obtain answers, which is counter-intuitive to the video question-answering task. From Table. 1, it can also be seen that most of the questions in both datasets need textual information to obtain answers. As M4-ViteVQA contains videos from multiple categories, it contains more questions of visual type compared to NewsVideoQA that contains only news videos. Since both datasets are designed for questions that require reading text to answer questions, this has resulted in minimal questions that require multimodal information. We also check for the answer type: i) extractive, ii) reasoning based, and iii) knowledge-based, and combinations of each type. From Table. 1, it can be seen that most of the questions are extractive in nature and have fewer reasoning-based and knowledge-based questions. However, having more reasoning/knowledge-based questions is crucial, thereby creating the need for better methods beyond the scope of text-only models.
### BERT-QA experiments
**M4-ViteVQA [22]:** The M4-ViteVQA dataset consists of two tasks. The first task is divided into two splits and both splits contain evenly distributed question-answer pairs from all video categories in train-val-test sets. In the second task, the training set comprises videos from seven categories, while the question-answer pairs and videos in validation in test splits are exclusively sourced from the remaining two categories. Zhao et el. [22] also propose a multimodal video question-answering method: T5-ViteVQA, that combines information from multiple modalities including OCR features, question features, and video features.
In our experiments on the BERT-QA model, we first sample frames at 1fps and order the OCR tokens of the frames to the default reading order based on the position of the top-left corner of the OCR token. We further concatenate the ordered OCR tokens which becomes the context of the BERT-QA model. After the training phase, we conduct two types of testing to evaluate the performance of the BERT-QA model. For the first type, we evaluate the model
\begin{table}
\begin{tabular}{l c c} \hline Category & M4-ViteVQA (\%) & NewsVideoQA (\%) \\ \hline Single Frame & 92.0 & 95.0 \\ Multi Frame & 8.0 & 5.0 \\ Visual Info & 33.0 & 6.0 \\ Textual Info & 95.0 & 100.0 \\ Frame crowded with text & 18.0 & 64.0 \\ Extractive-based & 81.0 & 98.0 \\ Reasoning-based & 5.0 & 2.0 \\ Knowledge-based & 1.0 & 0.0 \\ \hline \end{tabular}
\end{table}
Table 1: Analysis of 100 random QA pairs from M4-ViteVQA and NewsVideoQA datasets.
on the entire validation set without checking if the answer is present in the context. This experiment allows us to assess the model's overall ability to obtain answers. In the second type of testing, we specifically focus on questions that have answers in the context.
**NewsVideoQA [7]:** This dataset proposes questions on news videos. The dataset has timestamps for each question indicating the frame at which the question was defined. This work also proposes a repurposed baseline: OCR-aware SINGULARITY, which was originally inspired by SINGULARITY [9]. OCR-aware SINGULARITY is a multi-modal transformer-based video question-answering model that combines information from OCR tokens, questions, and visual information from a randomly sampled frame.
In this work, we conduct two types of training on this dataset. In the first approach, we train the BERT-QA model using the OCR tokens of the single frame on which the question was defined (BERT-QA-SF: BERT-QA Single Frame). In the second approach, we concatenate the OCR tokens from frames sampled at 1fps which forms the context of the BERT-QA model. (BERT-QA-MF: Multi-frame). By conducting training in both single-frame (BERT-QA-SF) and multi-frame (BERT-QA-MF) setups, we aim to explore the impact of variations in the length of context on the performance of the BERT-QA model. These two training approaches provide insights into the model's ability to obtain answers based on either a specific frame or a broader contextual understanding derived from multiple frames.
### Domain Adaptation Experiments
We conduct experiments to determine if the BERT-QA model can perform or generalize well with the out-of-domain context. This evaluation aims to determine if the model can provide accurate answers even in unfamiliar video categories and their corresponding contexts. To achieve this understanding, we perform several experiments. We check for the performance of the BERT-QA model trained on the **Source dataset** followed by testing on the **Target dataset**. We do this in two settings: i) without finetuning on the target dataset, and ii) with finetuning on the target dataset (Example: Train on NewsVideoQA and test on M4-ViteVQA in two settings i.e. with/without finetuning and vice-versa). By doing these, we try to examine the impact of domain shift and the importance of training the model on videos from diverse categories, where scene text serves as the textual content in one dataset that is M4-ViteVQA, as opposed to embedded text in NewsVideoQA. These experiments help us determine the model's ability to generalize and adapt to the specific categories of videos.
ferent tasks and splits on the validation set of the M4-ViteVQA dataset. It can be seen that a simple text-only model achieves comparable results and beats the scores of T5-ViteVQA for certain splits. The results indicate that we need more datasets that require information from multiple modalities and multiple frames which is a concerning limitation in the current datasets. It can be seen that the BERT-QA relies purely on the OCR output to infer and extract the answer. Therefore, if the OCR output is noisy or if the tokens are incorrectly ordered (errors in default reading order) the model might fail to find the right answer. However, since the ANLS metric acts softly on OCR errors, BERT-QA outperforms T5-ViteVQA on the ANLS metric. In Table. 3, we show the performance of BERT-QA for the questions that contain answers in the context. We create this test set by checking if the answer is a substring of context. For each of the splits, nearly half of the original questions in the validation set have answers in the context. In Table. 4, we show the performance comparison--in terms of Accuracy--of two methods: i) M4C [5]: It uses a multimodal transformer and an iterative answer prediction module. The model answers questions based on scene-text questions on a single image. ii) T5-ViteVQA: method proposed as a baseline in [22], with BERT-QA on the validation set of Task 1 Split 1. It can be seen that BERT-QA outperforms M4C and T5-ViteVQA on different sets. Here, the "sets" correspond to the type of questions which is provided with the dataset. These sets are: i) easy - answering requires information from a single frame, ii) hard - answering requires information from multiple frames, iii) text - answering requires only reading text, and iv) vision - answering requires both visual and textual information. Only for questions that require visual information, BERT-QA underperforms, yet still manages to obtain decent performance.
In Table. 5, we show results of the performance of different methods on the test set of NewsVideoQA [7] dataset. OCR-aware SINGULARITY is a model trained in a single-frame setup and is tested on a multi-frame setup (by combining visual and textual information from 12 frames - more details in [7]). This is followed by results of BERT-QA-SF i.e. trained on OCR context from a single frame and tested by picking a random frame. In the third row, we show the results of BERT-QA when tested with OCR tokens of the frame on which the question was defined (correct frame). In the fourth row, BERT-QA-MF: BERT-QA is trained and tested on a multi-frame setup. In Table. 6, we show the results of out-of-domain training performance on both [7, 22] datasets. It can be seen that testing a model initially trained on M4-ViteVQA (Source dataset) achieves decent performance on an out-of-domain NewsVideoQA (target dataset) and vice-versa. By further finetuning on the target dataset, the performance of the model increases. This indicates that the BERT-QA model can effectively generalize across domains through out-of-domain training. More details are present in the supplementary.
## 3 Conclusion
This paper focused on the important task of understanding textual information within videos for question-answering. The study provides insights that current text-based VideoQA datasets majorly focus on extractive answers and provide insights that the degree of visual understanding and multi-frame comprehension in current datasets is limited for better VideoQA using text in videos. Additionally, the paper demonstrates the effectiveness of BERT-QA, a text-only model, in achieving comparable performance to original methods on both datasets and also looks into the domain transfer aspect, by comparing the performances by training on one type of dataset and testing on the other. In future developments, we hope to see datasets that prioritize non- extractive answers and incorporate multimodal questions based on multiple frames to facilitate improved multimodal learning.
**Acknowledgements.** This work is supported by MeitY, Government of India.
|
2305.07424 | Instance Smoothed Contrastive Learning for Unsupervised Sentence
Embedding | Contrastive learning-based methods, such as unsup-SimCSE, have achieved
state-of-the-art (SOTA) performances in learning unsupervised sentence
embeddings. However, in previous studies, each embedding used for contrastive
learning only derived from one sentence instance, and we call these embeddings
instance-level embeddings. In other words, each embedding is regarded as a
unique class of its own, whichmay hurt the generalization performance. In this
study, we propose IS-CSE (instance smoothing contrastive sentence embedding) to
smooth the boundaries of embeddings in the feature space. Specifically, we
retrieve embeddings from a dynamic memory buffer according to the semantic
similarity to get a positive embedding group. Then embeddings in the group are
aggregated by a self-attention operation to produce a smoothed instance
embedding for further analysis. We evaluate our method on standard semantic
text similarity (STS) tasks and achieve an average of 78.30%, 79.47%, 77.73%,
and 79.42% Spearman's correlation on the base of BERT-base, BERT-large,
RoBERTa-base, and RoBERTa-large respectively, a 2.05%, 1.06%, 1.16% and 0.52%
improvement compared to unsup-SimCSE. | Hongliang He, Junlei Zhang, Zhenzhong Lan, Yue Zhang | 2023-05-12T12:46:13Z | http://arxiv.org/abs/2305.07424v2 | # Instance Smoothed Contrastive Learning for Unsupervised Sentence Embedding
###### Abstract
Contrastive learning-based methods, such as unsup-SimCSE, have achieved state-of-the-art (SOTA) performances in learning unsupervised sentence embeddings. However, in previous studies, each embedding used for contrastive learning only derived from one sentence instance, and we call these embeddings **instance-level** embeddings. In other words, each embedding is regarded as a unique class of its own, which may hurt the generalization performance. In this study, we propose IS-CSE (instance smoothing contrastive sentence embedding) to smooth the boundaries of embeddings in the feature space. Specifically, we retrieve embeddings from a dynamic memory buffer according to the semantic similarity to get a positive embedding group. Then embeddings in the group are aggregated by a self-attention operation to produce a **smoothed instance** embedding for further analysis. We evaluate our method on standard semantic text similarity (STS) tasks and achieve an average of \(78.30\%\), \(79.47\%\), \(77.73\%\), and \(79.42\%\) Spearman's correlation on the base of BERT-base, BERT-large, RoBERTa-base, and RoBERTa-large respectively, a \(2.05\%\), \(1.06\%\), \(1.16\%\) and \(0.52\%\) improvement compared to unsup-SimCSE.
## Introduction
Learning better universal sentence embedding [11] can benefit many natural language processing tasks, such as sentiment analysis, information retrieval and semantic search [13, 14], and thus has received much attention. Recently, it has been shown that the contrastive learning-based methods give strong results for sentence embeddings [11, 12, 13]. The core idea of contrastive learning is that positive and negative embedding pairs are generated given a batch of training sentences. Whereas the positive embeddings are often obtained via augmentation, and negative embeddings are sampled from a random collection of sentences. Following the construction of pairs, contrastive learning forces the model to learn discriminative embeddings by pulling positive sentence pairs together and pushing apart negative ones.
In the unsupervised contrastive learning framework, while some works seek to optimize for selecting "hard" negative examples [12] or using pre-defined prompt [15] to extract features, other methods investigate the effects of augmentation on constructing sentence pairs. One of the most influential methods for learning sentence embeddings is SimCSE [11], which takes drop-out as data augmentation, providing expressive semantically similar embeddings to construct positive pairs. ESimCSE[21] augmented the input sentences by word repetition, insertion, and deletion. Similarly, CARDS [20] randomly flip the first letter in a word to augment the inputs.
However, most of these methods take each of the sentences as a unique class and discriminate it from other sentences in a batch. This could make models become "overconfident" about each sentence being a separate class, be
Figure 1: Comparison between our method with SimCSE. In SimCSE, two views of the same input sentence are regarded as positive pairs. Other sentences in the same batch are regarded as negative examples. (a): In SimCSE, each embedding is derived from one sentence, and one view of the input sentence is regarded as a label of another view. (b): Our method uses additional soft labels (weighted average of closing-by embeddings).
cause there may be some false negative pairs in an unsupervised setting. To address this problem, DCLR [22] generates negative examples by sampling them from a learned gaussian distribution and filtering out negative examples with high similarities. However, DCLR does not make use of rich positive embeddings. Inspired by the success of label smoothing [17] where soft labels are applied to release the "overconfident" of a network caused by hard labels, we propose to smooth the positive examples to release the "over-confident" problem. For the positive pairs in contrastive learning, one positive embedding can be regarded as a label which another positive one should fit. Following the label smoothing method, we smooth the label by a weighted average operation with retrieved semantically similar embeddings. Specifically, we hold a First-in-First-out memory buffer which saves the sentence embeddings in the previous steps during the training process. While constructing the positive pairs, we retrieve sentence embeddings from the memory buffer based on the cosine similarity and do a weighted average operation with the positive embedding to get smooth embeddings. This can push each sentence to be similar to other closing-by sentences, not just itself. This new practice has a label smoothing effect [23]. We call it instance smoothing to contrast sentence embedding (IS-CSE).
We evaluate IS-CSE on seven standard semantic textual similarity (STS) tasks [1, 2,
where \(\tau\) is a temperature parameter and the \(sim(\cdot,\cdot)\) represents the cosine similarity function:
\[sim(h_{i},h_{i}^{+})=\frac{h_{i}^{T}h_{i}^{+}}{\left\|h_{i}\right\|\left\|h_{i}^{+ }\right\|}. \tag{2}\]
All embeddings \(h\) in Equ.1 are instance-level embeddings, each of which is derived from one sentence instance. In this paper, we propose an instance-smoothing mechanism to regularize the InfoNCE loss by applying smoothed instance embeddings (derived from a group of semantically similar sentences).
### Dynamic Memory Buffer
Instead of only using embeddings derived from input sentences, we construct smoothed embeddings by averaging the closing-by embeddings. One key process of IS-CSE is to retrieve these closing-by embeddings at each step during finetuning. Directly retrieving sentence embeddings from the whole dataset can lead to a huge computational burden. To bound the memory usage, we propose to use a dynamic memory buffer in the unsupervised contrastive learning task. Specifically, given a dynamic memory buffer \(\mathcal{B}\in\mathbb{R}^{L\times d}\), where \(L\) is the length of the buffer and \(d\) is the dimension of an embedding. For each step, we feed the buffer normalized augmented embeddings \(h^{+}\) with a First-in-first-out (FIFO) strategy. The embeddings in memory buffer \(\mathcal{B}\) are stop-gradient embeddings. Formally, the method for updating the memory buffer \(\mathcal{B}\) is:
\[\mathcal{B}_{new}=Concat(\mathcal{B}_{old}[l:L],sg\{\frac{h_{1}^{+}}{||h_{1}^ {+}||},...,\frac{h_{l}^{+}}{||h_{1}^{+}||}\}), \tag{3}\]
where \(l\) is the number of coming/discarded embeddings for the FIFO strategy (\(l\) equals the batch size in our experiment), \(sg\) is the stop-gradient operation and \(Concat\) operation is used to maintain the buffer size and dynamically update the buffer. Based on the memory buffer, several semantically similar embeddings are retrieved for smoothing the augmented positive embedding \(h^{+}\).
### Retrieving Sentence Embeddings
After setting up the dynamic memory buffer, we retrieve sentence representations and apply the weighted average operation to get the smoothed embeddings. We compare two types of retrieval methods: kNN and K-means.
kNNA simple way to obtain semantically similar embeddings is kNN [1]. Given the augmented embedding \(h^{+}\), we calculate the cosine similarly (Equ.2) between \(h^{+}\) and each of the embedding in buffer \(\mathcal{B}\). Then \(k\) nearest embeddings are retrieved from \(\mathcal{B}\).
K-meansWe perform the K-means algorithm [1] on \(\mathcal{B}\) with a pre-defined number of clusters \(k^{\prime}\). We assign each embedding to a cluster based on semantic similarity. We directly retrieve the center embedding to which \(h^{+}\) belongs.
We empirically compare the performances of kNN and K-means in Table 3 and select the kNN as our final retrieval method.
### Smoothing Instance Embeddings
In IS-CSE, the augmented embeddings \(h^{+}\) are smoothed by retrieved embeddings with high semantic similarity from the dynamic buffer. For kNN, we apply a self-attention aggregation method. Specifically, given \(k\) retrieved embeddings \(\{h^{r}\}_{i=1}^{k}\) and the augmented embedding \(h^{+}\), we normalized and then concatenate them to get a combined matrix \(K=\{h^{+},h_{1}^{r},h_{2}^{r},...,h_{k}^{r}\}\in\mathbb{R}^{(k+1)\times d}\). We thus obtain smoothed embedding \(h^{s+}\) by:
\[h^{s+}=softmax(\frac{h^{+}K^{T}}{\beta})K, \tag{4}\]
where \(\beta\) is a temperature parameter.
For K-means, we cluster all the embeddings in the buffer based on the cosine similarity. Then we obtain a list of cluster centers, and select the center \(c^{+}\) of the cluster which \(h^{+}\) belongs to. We get our smoothed embedding \(h^{s+}\) by:
\[h^{s+}=\gamma h^{+}+(1-\gamma)c^{+}, \tag{5}\]
where \(\gamma\) is a hyper-parameter. In Equ.4 and Equ.5, \(h^{+}\) is not a stop-gradient embedding but the retrieved embeddings \(h^{r}\) and centers \(c^{+}\) are stop-gradient embeddings.
### Instance Smoothing Contrastive Sentence Embedding (IS-CSE)
The main difference between our method and SimCSE is that we add an additional contrastive loss whose augmented positive embeddings are smoothed. Given a batch of input sentences, we obtain the projected instance-level embeddings of \(h_{i}\) and \(h_{i}^{+}\). We calculate our smoothed embedding
Figure 2: Overview of our method. We retrieve embeddings from the memory buffer (orange) and the smoothed embeddings are the weighted average of the retrieved and positive embeddings.
\(h_{i}^{s+}\) using Equ.4. The smoothed embedding loss can be calculated by:
\[\mathcal{L}_{smoothing}=-log\frac{e^{sim(h_{i},h_{i}^{s+})/\tau}}{\sum_{j=1}^{N}e ^{sim(h_{i},h_{j}^{s+})/\tau}}. \tag{6}\]
Combining Equ.1 and Equ.6, we treat the smoothing loss as a regularizer. The final form of our training objective is:
\[\mathcal{L}=\mathcal{L}_{instance}+\alpha\mathcal{L}_{smoothing}, \tag{7}\]
where \(\alpha\) is a coefficient.
The quality of retrieved embeddings may be low at the initial stages because the model has not been fully finetuned. A big \(\alpha\) may hurt the model performance at the initial stages of finetuning. We adopt a cosine scheduler for \(\alpha\):
\[\alpha=\min\{\cos(\pi\cdot\frac{T_{i}}{T_{max}})*(\alpha_{start}-\alpha_{end}),0\}+\alpha_{end}, \tag{8}\]
where \(\alpha_{start}\), \(\alpha_{end}\), \(T_{i}\) and \(T_{max}\) are the inital value of \(\alpha\), end value of \(\alpha\), the current step and the max step, respectively.
## Experiments
### Setup
For unsupervised sentence embedding learning, we follow the same training process as SimCSE [14]. We conduct our main experiments on 7 standard semantic textual similarities (STS) tasks: STS 2012-2016 [1, 1, 1, 2, 3, 4], STS Benchmark [1] and SICK-Relatedness [1]. We compare our IS-CSE against methods reported in SimCSE [14] and SimCSE-related methods: DCLR [15], CARD [20]. Although our method does not perform as good as CARD [20], we argue that CARD is an orthogonal method in that it finetunes BERT/RoBERTa with the help of finetuned models and additional data augmentation method, and can be combined with IS-CSE. We also include 7 transfer learning tasks [10], taking STS as the main result for comparison following previous SimCSE-related papers [14, 20, 21]. Our experients are conducted on one NVIDIA A100 GPU.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline Model & STS12 & STS13 & STS14 & STS15 & STS16 & STS-B & SICK-R & Avg. \\ \hline GloVe embeddings(avg.)\({}^{*}\) & 55.14 & 70.66 & 59.73 & 68.25 & 63.66 & 58.02 & 53.76 & 61.32 \\ BERT\({}_{base}\)(first-last avg.)\({}^{*}\) & 39.70 & 59.38 & 49.67 & 66.03 & 66.19 & 53.87 & 62.06 & 56.80 \\ BERT\({}_{base}\)-flow\({}^{*}\) & 58.40 & 67.10 & 60.85 & 75.16 & 71.22 & 68.66 & 64.47 & 66.55 \\ BERT\({}_{base}\)-whitening\({}^{*}\) & 57.83 & 66.90 & 60.90 & 75.08 & 71.31 & 68.24 & 63.74 & 66.28 \\ IS-BERT\({}_{base}^{*}\) & 56.77 & 69.24 & 61.21 & 75.23 & 70.16 & 69.21 & 64.25 & 66.58 \\ CT-BERT\({}_{base}^{*}\) & 61.63 & 76.80 & 68.47 & 77.50 & 76.48 & 74.31 & 69.19 & 72.05 \\ SimCSE-BERT\({}_{base}^{*}\) & 68.40 & 82.41 & 74.38 & 80.91 & 78.56 & 76.85 & 72.23 & 76.25 \\ DLCR-BERT\({}_{base}^{*}\) & 70.81 & 83.73 & 75.11 & 82.56 & 78.44 & 78.31 & 71.59 & 77.22 \\ IS-CSE-BERT\({}_{base}\) & **72.86** & **84.02** & **76.35** & **82.64** & **78.65** & **79.53** & **74.05** & **78.30** \\ \hline SimCSE-BERT\({}_{large}^{*}\) & 70.88 & 84.16 & 76.43 & 84.50 & 79.76 & 79.26 & 73.88 & 78.41 \\ DCLR-BERT\({}_{large}\) & 71.87 & 84.83 & 77.37 & 84.70 & **79.81** & 79.55 & 74.19 & 78.90 \\ IS-CSE-BERT\({}_{large}\) & **73.76** & **85.06** & **78.14** & **85.02** & 79.59 & **80.43** & **74.30** & **79.47** \\ \hline RoBERTa\({}_{base}\)(first-last avg.)\({}^{*}\) & 40.88 & 58.74 & 49.07 & 65.63 & 61.48 & 58.55 & 61.63 & 56.57 \\ RoBERTa\({}_{base}\)-whitening\({}^{*}\) & 46.99 & 63.24 & 57.23 & 71.36 & 68.99 & 61.36 & 62.91 & 61.73 \\ DeCLUTR-RoBERTa\({}_{base}^{*}\) & 52.41 & 75.19 & 65.52 & 77.12 & 78.63 & 72.41 & 68.62 & 69.99 \\ SimCSE-RoBERTa\({}_{base}^{*}\) & 70.16 & 81.77 & 73.24 & 81.36 & 80.65 & 80.22 & 68.56 & 76.57 \\ DCLR-RoBERTa\({}_{base}\) & 70.01 & **83.08** & **75.09** & **83.66** & 81.06 & **81.86** & **70.33** & **77.87** \\ IS-CSE-RoBERTa\({}_{base}\) & **71.39** & 82.58 & 74.36 & 82.75 & **81.61** & 81.40 & 69.99 & 77.73 \\ \hline SimCSE-RoBERTa\({}_{large}^{*}\) & 72.86 & 83.99 & 75.62 & 84.77 & 81.80 & 81.98 & 71.26 & 78.90 \\ DCLR-RoBERTa\({}_{large}^{*}\) & 73.09 & 84.57 & 76.13 & 85.15 & 81.99 & 82.35 & 71.80 & 79.30 \\ DCLR-RoBERTa\({}_{large}\) (ours) & 71.30 & 84.67 & 76.17 & 84.65 & 81.62 & 81.93 & 72.29 & 78.95 \\ CARDS-RoBERT\({}_{large}\) (ours) & **74.78** & 86.42 & 79.02 & 85.95 & 82.36 & 83.65 & 70.81 & 80.46 \\ IS-CSE-RoBERTa\({}_{large}\) & 72.84 & 85.02 & 76.99 & 85.58 & 80.93 & 82.87 & 71.68 & 79.42 \\ + DCLR & 73.67 & 85.46 & 76.86 & 85.16 & 81.31 & 82.25 & 71.71 & 79.49 \\ + CARDS & 74.30 & **86.47** & **79.06** & **85.99** & **82.78** & **84.02** & **72.80** & **80.77** \\ \hline \end{tabular}
\end{table}
Table 1: Sentence embedding performance on STS tasks (Spearman’s correlation). The best performance and the second-best performance with the same pre-trained encoder are denoted in bold and underlined fonts respectively. \({}^{*}\): results from [14]; \({}^{\dagger}\): results from [15]; \({}^{\dagger}\): results from [15]; (ours): our reproduced results based on code released by their authors; We add our \(L_{smoothing}\) to the DCLR to get combined results and show it on ”+DCLR”. All the experiments are conducted in an unsupervised setting.
### Training Details
Our experimental settings are consistent with the SimCSE Gao, Yao, and Chen (2021). Specifically, all our models are trained to start from the pre-trained checkpoints given by Huggingface Wolf et al. (2020). Following SimCSE, the training corpus contains \(10^{6}\) sentences randomly sampled from English Wikipedia. We adopt [CLS] representation as the sentence embedding and an MLP pooler is used during training but discarded during inference. Hyperparameters for our model are the same as those for SimCSE. We train our model for 1 epoch and use the Adam optimizer Kingma and Ba (2014). Cosine similarity with \(\tau=0.05\) is used to calculate sentence similarity. The details of batch size and learning rate are shown in Table 2. In IS-CSE, we set the buffer size \(L=1024\) and the number of kNN neighbors \(k=16\). According to the STS-B score on the development set in Table 3, we finally select the kNN group to apply our smoothing method. The temperature \(\beta\) for self-attention aggregation is set to 2. For BERT\({}_{base}\) and RoBERTa\({}_{base}\) we set \(\alpha=0.1\). For BERT\({}_{large}\) and RoBERTa\({}_{large}\) we set a cosine schedule (Equ. 8) for \(\alpha\) from 0.005 to 0.05.
### Main Results
We compare IS-CSE against previously published state-of-the-art unsupervised sentence embedding learning methods on STS tasks. We take the results reported in SimCSE for average GloVe embeddings Pennington, Socher, and Manning (2014), average BERT or RoBERTa embeddings Gao, Yao, and Chen (2021), BERT-flow Li et al. (2020), BERT-whitening Su et al. (2021), unsup-SimCSE. For DCLR Zhou et al. (2022), we take both the results reported on paper and our reproduced results based on their released code.
The results on 7 STS tasks are shown in Table 1. IS-CSE can outperform most previous competitive results on the basis of four different encoders (BERT\({}_{base}\), BERT\({}_{large}\), RoBERTa\({}_{base}\) and RoBERTa\({}_{large}\)). Although we do not perform as well as DCLR on some of the tasks, IS-CSE is an orthogonal method in that it finetunes models with instance weighting, and may be combined with our methods. To evaluate it, we reproduce DCLR based on their released code and strictly follow their training settings. We further adding \(L_{smoothing}\) to DCLR and the results ( "+DCLR" in Table 1) indicate that IS-CSE can improve DCLR on most STS tasks.
### Ablation Studies
We investigate the impact of buffer size \(L\), the hyper-parameter \(\alpha,\beta\), and the number of neighbors in a group. All reported results in this section are based on the STS-B development set.
Buffer SizeTable 4 shows the results of IS-CSE-BERT\({}_{base}\) with different buffer sizes. As can be seen from Table 4, when \(L\) increases from 256 to 1024, the performance also improves, which shows that larger buffers can allow more similar instances to be retrieved. However, a large buffer size beyond 1024 may cause performance degradation, this can be because a large buffer stores embeddings of several batches, and older embeddings are inconsistent with the current model parameters.
Number of NeighborsTable 5 shows the effects of different numbers of neighbors in kNN. We empirically find that IS-CSE performs well when \(k=16\), which is probably because that smoothing is not sufficient when \(k\) is smaller than 16 and some noise samples will be introduced when \(k\) is greater than 16.
Hyperparameter \(\alpha\)In IS-CSE, \(\alpha\) is used as the weight of the \(L_{smoothing}\). We tried two types of \(\alpha\): constant \(\alpha\) and dynamic \(\alpha\). For the former, we just assign a constant value to \(\alpha\) and never change it during finetuning; For the latter, we use a cosine schedule function (Equ. 8) to gradually increase the value from \(\alpha_{start}\) to \(\alpha_{end}\). Table 6 shows the result of applying different constant \(\alpha\) and dynamic \(\alpha\) on IS-CSE-RoBERTa\({}_{large}\). We empirically find that BERT\({}_{large}\) and RoBERTa\({}_{large}\) can perform better with dynamic \(\alpha\).
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{2}{c}{BERT} & \multicolumn{2}{c}{RoBERTa} \\ & base & large & base & large \\ \hline Batch size & 64 & 64 & 512 & 512 \\ Learning rate & 3e-5 & 1e-5 & 1e-5 & 3e-5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Batch sizes and learning rates for IS-CSE
\begin{table}
\begin{tabular}{l c c c} \hline \hline Group type & kNN & K-means & kNN+K-means \\ \hline STS-B & **84.18** & 83.74 & 84.14 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results on STS-B development set of kNN group and K-means group using BERT\({}_{base}\) backbone. For the K-means group, the number of groups is 64 so the average number of embeddings in each group is equal to that in the kNN group. kNN+K-means denotes that two groups will be used and thus two smoothing objectives will be added. kNN retrieval method is finally selected.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \(N_{neighbors}\) & 8 & 12 & 16 & 20 & 24 \\ \hline STS-B & 83.18 & 83.31 & **84.18** & 82.97 & 82.63 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation studies of the number of neighbors on the STS-B development set using IS-CSE-BERT\({}_{base}\).
Hyperparameter \(\beta\)After finishing the retrieval process, we perform self-attention aggregation on a group of embeddings to smooth the representation. In Table 7, we compare the impact of choosing different \(\beta\) on STS-B development set. \(\beta\) is used to adjust the attention weights, and a larger \(\beta\) will make the attention weights more even.
in most seeds and tasks.
Cosine-Similarity DistributionTo directly evaluate our approaches to STS tasks, we illustrate the cosine similarity distribution of sentence pairs in the STS-B dataset with different groups of human ratings in Figure 4. Compared with SimCSE, our method has a more scattered distribution with lower variance and has a similar discrimination ability. This observation validates that our method can achieve a better alignment-uniformity balance.
Case Study of Retrieved SentencesWe smooth the instance-level embeddings by aggregating retrieved embeddings. To better understand the smoothing process, we list the top three highest retrieved sentences based on kNN in Table 9. The "Query Sentence" is used as the query embedding during retrieval and the "Retrieved Sentences" are the top three highest sentences retrieved from the dynamic memory buffer according to the similarity. Though the meaning of retrieved sentences and the query sentences is not totally the same, they are similar semantically in some text segments. For example, the query sentence "ravenswood may refer to" has the same structure as retrieved sentence "roanoke" may refer to". Thus the retrieved sentences help to smooth the query sentence and achieve better performance on STS tasks.
## Conclusion
We proposed IS-CSE, an instance smoothing contrastive learning framework for unsupervised sentence representation learning. Our main idea is to improve the generalization ability by smoothing the positive examples. Specifically, in our framework, we aggregate retrieved semantically similar instances from a dynamic memory buffer to produce group-level positive embeddings, which are then used for discrimination. Experimental results on seven STS tasks have shown that our approach outperforms several competitive baselines. Our instance-level smoothing method is general and can be applied to other settings in Contrastive Learning.
In the future, we will explore more granularities for smoothing positive sentences for discrimination. Whether negative examples can be smoothed will also be studied. We will also consider applying our method for more natural language processing tasks, such as summarization.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Model & MR & CR & SUBJ & MPQA & SST & TREC & MRPC & Avg. \\ \hline GloVe embeddings (avg.) & 77.25 & 78.30 & 91.17 & 87.85 & 80.18 & 83.00 & 72.87 & 81.52 \\ Skip-thought & 76.50 & 80.10 & 93.60 & 87.10 & 82.00 & 92.20 & 73.00 & 83.50 \\ Avg. BERT embeddings & 78.66 & 86.25 & 94.37 & 88.66 & 84.40 & **92.80** & 69.54 & 84.94 \\ BERT-\([CLS]\) embedding & 78.68 & 84.85 & 94.21 & 88.23 & 84.13 & 91.40 & 71.13 & 84.66 \\ IS-BERT\({}_{base}\) & 81.09 & **87.18** & **94.96** & 88.75 & **85.96** & 88.64 & 74.24 & **85.83** \\ SimCSE-BERT\({}_{base}\) & **81.18** & 86.46 & 94.45 & 88.88 & 85.50 & 89.80 & 74.43 & 85.81 \\ \hline IS-CSE-BERT\({}_{base}\) & 80.48 & 85.32 & 94.67 & **89.44** & 85.06 & 87.40 & **75.77** & 85.45 \\ \hline SimCSE-RoBERTa\({}_{base}\) & 81.04 & 87.74 & **93.28** & 86.94 & 86.60 & **84.60** & 73.68 & 84.84 \\ IS-CSE-RoBERTa\({}_{base}\) & **81.93** & **87.76** & 93.24 & **87.61** & **87.48** & 83.20 & **76.35** & **85.37** \\ SimCSE-RoBERTa\({}_{large}\) & **82.74** & **87.87** & **93.66** & 88.22 & 88.58 & 92.00 & 69.68 & 86.11 \\ IS-CSE-RoBERTa\({}_{large}\) & 82.70 & 87.79 & 93.30 & **88.36** & **89.02** & **92.40** & **74.96** & **86.93** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Transfer task results of different sentence embedding models (measured as accuracy). Results for comparison are reported in published paper SimCSE [1]. We highlight the highest numbers among models with the same pre-trained encoder.
\begin{table}
\begin{tabular}{l l l} \hline \hline Query Sentence & \multicolumn{2}{l}{This can probably be attributed to the intelligence-gathering of german civilians based} \\ & & inreland during the 1930s. \\ \hline \multirow{4}{*}{Retrieved Sentences} & 1 & The “luftwaffe” carried out a number of air raids against the midlands and england in the middle part of 1942. \\ & 2 & During the world war ii, the area became an important station for anti-activities \\ & 3 & Many union members were jewish and were killed during world war ii. \\ \hline Query Sentence & \multicolumn{2}{l}{“ravenswood” may refer to} \\ \hline \multirow{4}{*}{Retrieved Sentences} & 1 & “roanoke” may refer to \\ & 2 & “yasir ali” may refer to \\ \cline{1-1} & 3 & “datuna” may refer to \\ \hline \hline \end{tabular}
\end{table}
Table 9: We show the retrieved sentences in our method. “Query Sentence” represents the sentence used as a query. “Retrieved Sentences” represents the sentence retrieved from the dynamic memory buffer.
## Acknowledgements
This research has been supported by the Key R&D program of Zhejiang Province (Grant No. 2021C03139). We also would like to thank Westlake University HPC Center for providing HPC support.
|
2306.16079 | On Card guessing games: limit law for one-time riffle shuffle | We consider a card guessing game with complete feedback. A ordered deck of n
cards labeled 1 up to n is riffle-shuffled exactly one time. Then, the goal of
the game is to maximize the number of correct guesses of the cards, where one
after another a single card is drawn from the top, and shown to the guesser
until no cards remain. Improving earlier results, we provide a limit law for
the number of correct guesses. As a byproduct, we relate the number of correct
guesses in this card guessing game to the number of correct guesses under a
two-color card guessing game with complete feedback. Using this connection to
two-color card guessing, we can also show a limiting distribution result for
the first occurrence of a pure luck guess. | Markus Kuba, Alois Panholzer | 2023-06-28T10:26:26Z | http://arxiv.org/abs/2306.16079v2 | # On card guessing games: limit law for one-time riffle shuffle
###### Abstract.
We consider a card guessing game with complete feedback. A ordered deck of \(n\) cards labeled \(1\) up to \(n\) is riffle-shuffled exactly one time. Then, the goal of the game is to maximize the number of correct guesses of the cards, where one after another a single card is drawn from the top, and shown to the guesser until no cards remain. Improving earlier results, we provide a limit law for the number of correct guesses. As a byproduct, we relate the number of correct guesses in this card guessing game to the number of correct guesses under a two-color card guessing game with complete feedback. Using this connection to two-color card guessing, we can also show a limiting distribution result for the first occurrence of a pure luck guess.
Key words and phrases:Card guessing, riffle shuffle, two-color card guessing game, limit law 2000 Mathematics Subject Classification: 05A15, 05A16, 60F05, 60C05
## 1. Introduction
Different card guessing games have been considered in the literature in many articles [4, 5, 12, 13, 15, 16, 17, 20, 21, 22, 23]. An often discussed setting is the following. A deck of a total of \(M\) cards is shuffled, and then the guesser is provided with the total number of cards \(M\), as well as with the individual numbers of say hearts, diamonds, clubs and spades. After each guess of the type of the next card, the person guessing the cards is shown the drawn card, which is then removed from the deck. This process is continued until no more cards are left. Assuming that the guesser tries to maximize the number of correct guesses, one is interested in the total number of correct guesses. Such card guessing games are not only of purely mathematical interest, but there are applications to the analysis of clinical trials [3, 6], fraud detection related to extra-sensory perceptions [4], guessing so-called Zener Cards [20], as well as relations to tea tasting and the design of statistical experiments [7, 21].
The card guessing procedure can be generalized to an arbitrary number \(n\geq 2\) of different types of cards. In the simplest setting there are two colors, red (hearts and diamonds) and black (clubs and spades), and their numbers are given by non-negative integers \(m_{1}\), \(m_{2}\), with \(M=m_{1}+m_{2}\). One is then interested in the random variable \(C_{m_{1},m_{2}}\), counting the number of correct guesses. Here, not only the distribution and the expected value of the number of correct guesses is known [5, 13, 17, 22, 23], but also multivariate limit laws and additionally interesting relations to combinatorial objects such as Dyck paths and un models are given [5, 15, 16]. For the general setting of \(n\) different types of cards we refer the reader to [5, 12, 20, 21] for recent developments.
Different models of card guessing games involving so-called riffle shuffles are also of importance and are the main topic of this work. Liu [18] and also Krityakierne and Tha-natipanonda [14] studied a card guessing game carried out after a single riffle shuffle under the famous _Gilbert-Shannon-Reeds_ model (see Subsection 2.1 for details): one starts with an ordered deck of \(n\) cards, labeled one up to \(n\), and the deck is once riffle shuffled. The number of correct guesses \(X_{n}\), assuming that complete feedback is given, i.e., the drawn card is shown to the guessing person, and further assuming that the guesser is using the optimal strategy, is then of interest. An analysis of this procedure including an asymptotic
## 1. Introduction
In this paper we consider a class of _non-commutative_ quantum groups (or _finite_) over a field \(k\), and a set \(\mathcal{F}\) of \(k\)-tuples \(\{1,\ldots,n\}\)
\(k\leq n\) many, by the increasing sequence \(1,2,\ldots,k\), and the \(b\)'s in the sequence by the increasing sequence \(k+1,k+2,\ldots,n\). Thus, the \(a\)'s and \(b\)'s, respectively, correspond to the packet of cards below and above the cut, respectively. Let us denote by \(\mathcal{D}_{n}\) this multiset of permutations on \([n]=\{1,2,\ldots,n\}\) generated by the family \(\mathcal{W}_{n}=\{a,b\}^{n}\) of length-\(n\) words. Then the \(n+1\) words in \(\mathcal{W}_{n}\) of the kind \(a^{k}b^{n-k}\), with \(0\leq k\leq n\), all generate the identity permutation \(\text{id}_{n}\) in \(\mathcal{D}_{n}\), whereas the remaining \(2^{n}-n-1\) words in \(\mathcal{W}_{n}\) generate pairwise different permutations in \(\mathcal{D}_{n}\).
### First drawn card and the optimal strategy
The optimal strategy for maximizing the number \(X_{n}\) of correctly guessed cards, starting with a deck of \(n\) cards, after a one-time riffle shuffle stems from the following Proposition.
**Proposition 1** (Guessing the first card [14, 18]).: _Assume that a deck of \(n\) cards has been riffle shuffled once. The probability \(p_{n}(m)\) that the first card being \(m\), \(1\leq m\leq n\), is given by_
\[p_{n}(m)=\begin{cases}\frac{1}{2}+\frac{1}{2^{n}},&\text{for }m=1,\\ \frac{\binom{n-1}{m-1}}{2^{n}},&\text{for }2\leq m\leq n.\end{cases} \tag{1}\]
For the sake of completeness we include a short proof.
Proof.: First, we condition on the cut leading to two decks containing \(\{1,\ldots,k\}\) and \(\{k+1,\ldots,n\}\), \(0\leq k\leq n\), which happens with probability \(\frac{\binom{n}{k}}{2^{n}}\). Each resulting deck has the same probability \(1/\binom{n}{k}\). Then, we observe that the probability of a top one is in the case \(k>0\) given by
\[\frac{\binom{n-1}{k-1}}{\binom{n}{k}}=\frac{k}{n},\]
as there are \(\binom{n-1}{k-1}\) different ways of choosing the positions of the other cards. Of course, for \(k=0\) the top card is always one. Thus, we obtain
\[p_{n}(1)=\frac{1}{2^{n}}+\sum_{k=1}^{n}\frac{\binom{n}{k}}{2^{n}}\cdot\frac{k} {n}=\frac{1}{2^{n}}+\frac{1}{2}.\]
Similarly, for \(m>1\) we observe that only a cut at \(m-1\) may lead to a top card labeled \(m\), thus in this situation the subsequences to be interleaved have to be \(1,\ldots,m-1\) and \(m,\ldots,n\). If \(m\) is the top card, there are \(\binom{n-1}{n-m}\) different ways of choosing the positions of the other cards, which yields
\[p_{n}(m)=\frac{\binom{n}{m-1}}{2^{n}}\cdot\frac{\binom{n-1}{n-m}}{\binom{n}{m -1}}=\frac{\binom{n-1}{m-1}}{2^{n}}.\]
Now we turn to the optimal strategy. The guesser should guess 1 on the first card, as his chance of success is more than \(50\%\) by Proposition 1
If the first guess is incorrect, say the shown card has label \(m\geq 2\), this implies that the cut was made exactly at \(m-1\). The person is left with two increasing subsequences \(1,2,\ldots,m-1\) and \(m+1,\ldots,n\). The remaining numbers are then guessed according to the proportions of the lengths of the remaining subsequences until no cards are left.
If the first guess was correct, then the person continues with guessing the number two, etc., i.e., as long as all previous such predictions turned out to be correct, the guesser makes a guess of the number \(j\) for the \(j\)-th card. This is justified, since by considerations as before one can show easily that the probability that the \(j\)-th card has the number \(j\) conditioned
on the event that the first \(j-1\) cards are the sequence of numbers \(1,2,\ldots,j-1\) is for \(1\leq j\leq n\) given by
\[\frac{2^{n-j}+j}{2^{n-j+1}+j-1}=\frac{1}{2}+\frac{(1+j)2^{-(n-j+2)}}{1+(j-1)2^{-( n-j+1)}},\]
and thus exceeds \(50\%\). If such a prediction turns out to be wrong, i.e., gives a number \(m>j\) for the \(j\)-th card, then again one can determine the two involved remaining subsequences \(j,j+1,\ldots,m-1\) and \(m+1,\ldots,n\), and all the numbers of the remaining cards are again guessed according to the proportions of the lengths of the remaining subsequences until no cards are left.
### Enumeration and distributional decomposition
Our starting point is the recurrence relation for the generating function
\[D_{n}(q):=\sum_{\sigma\in\mathcal{D}_{n}}q^{\#\text{ correct guesses for deck }\sigma}=2^{n}\cdot\mathbb{E}(q^{X_{n}})=2^{n}\sum_{\ell=0}^{n}\mathbb{P}(X_{n }=\ell)\,q^{\ell},\]
counting the number of correct guesses using the optimal strategy when starting with a once-shuffled deck of \(n\) different cards, which has been stated in [14] and basically stems from Proposition 1.
**Lemma 1** (Recurrence relation for \(D_{n}(q)\)[14]).: _The generating function \(D_{n}(q)\) satisfies the following recurrence:_
\[D_{n}(q)=qD_{n-1}(q)+q^{n}+\sum_{j=0}^{n-2}F_{n-1-j,j}(q),\quad n\geq 1, \qquad D_{0}(q)=1, \tag{2}\]
_where the auxiliary function \(F_{m_{1},m_{2}}(q)\) is for \(m_{1}\geq m_{2}\geq 0\) defined recursively by_
\[F_{m_{1},m_{2}}(q)=qF_{m_{1}-1,m_{2}}(q)+F_{m_{1},m_{2}-1}(q),\]
_with initial values \(F_{m_{1},0}(q)=q^{m_{1}}\), and for \(m_{2}>m_{1}\geq 0\) by the symmetry relation_
\[F_{m_{1},m_{2}}(q)=F_{m_{2},m_{1}}(q).\]
Proof.: To keep the work self-contained we give a proof of this recurrence, where we use the before-mentioned combinatorial description of once-shuffled decks of \(n\) cards \(\sigma=(\sigma_{1},\ldots,\sigma_{n})\in\mathcal{D}_{n}\) by means of length-\(n\) words \(w=w_{1}\ldots w_{n}\in\mathcal{W}_{n}\). We count the number of correct guesses, where we distinguish according to the first letter \(w_{1}\). If \(w_{1}=a\) then the first drawn card is \(1\), \(\sigma_{1}=1\), and this card will be predicted correctly by the guesser. The guesser keeps his strategy of guessing for the deck of remaining cards, which is order-isomorphic to a deck of \(n-1\) cards generated by the length-\((n-1)\) word \(w^{\prime}=w_{2}\ldots w_{n}\); to be more precise, if \(\sigma=(1,\sigma_{2},\ldots,\sigma_{n})\in\mathcal{D}_{n}\) and \(\sigma^{\prime}=(\sigma^{\prime}_{1},\ldots,\sigma^{\prime}_{n-1})\in\mathcal{ D}_{n-1}\) are the labels of the cards in the deck generated by the words \(w=aw^{\prime}\in\mathcal{W}_{n}\) and \(w^{\prime}\in\mathcal{W}_{n-1}\), respectively, then it simply holds \(\sigma_{i}=\sigma^{\prime}_{i-1}+1\), \(2\leq i\leq n\). Since \(w^{\prime}\) is a random word of length \(n-1\) if started with a random word \(w\) of length \(n\), this yields the summand \(qD_{n-1}(q)\) in equation (2).
If \(w_{1}=b\) then we first consider the particular case that \(w=b^{n}\), i.e., that the cut of the deck has been at \(0\). Since in this case the deck of cards corresponds to the identity permutation \(\sigma=\text{id}_{n}\), the guesser will predict all cards correctly using the optimal strategy, which leads to the summand \(q^{n}\) in (2). Apart from this particular case, \(w_{1}=b\) corresponds to a deck of cards where the first card is \(m\geq 2\) and thus will cause a wrong prediction by the guesser; however, due to complete feedback, now the guesser knows that the cut is at \(m-1\), or in alternative terms, he knows that the remaining deck is generated from a word \(w^{\prime}=w_{2}\ldots w_{n}\) that has \(j:=n-m\)\(b\)'s and \(n-1-j=m-1\)\(a\)'s, with \(0\leq j\leq n-2\). From this point on the guesser changes the strategy, which again could be formulated in alternative terms by saying that the guesser makes a guess for the next letter in the word, in a way that the guess is \(a\) if the number of \(a\)'s exceeds the number
of \(b\)'s in the remaining subword, that the guess is \(b\) in the opposite case, and (in order to keep the outcome deterministic) that the guess is \(a\) if there is a draw between the number of \(a\)'s and \(b\)'s. More generally, let us assume that the word consists of \(m_{1}\geq 0\)\(a\)'s and \(m_{2}\geq 0\)\(b\)'s and each of these \(\binom{m_{1}+m_{2}}{m_{1}}\) words occur with equal probability, then let us define the r.v. \(\hat{C}_{m_{1},m_{2}}\) counting the number of correct guesses by the before-mentioned strategy as well as the generating function \(F_{m_{1},m_{2}}(q)=\binom{m_{1}+m_{2}}{m_{1}}\mathbb{E}(q^{\hat{C}_{m_{1},m_{2 }}})\). It can be seen immediately that \(\hat{C}_{m_{1},m_{2}}\) and so \(F_{m_{1},m_{2}}(q)\) is symmetric in \(m_{1}\) and \(m_{2}\), and that \(F_{m_{1},m_{2}}(q)\) satisfies the recurrence stated in (2). Moreover, these considerations yield the third summand \(\sum_{j=0}^{n-2}F_{n-1-j,j}(q)\) in equation (2).
When considering the two-color card guessing game (with complete feedback) starting with \(m_{1}\) cards of type (color) \(a\) and \(m_{2}\) cards of type (color) \(b\) it apparently corresponds to the guessing game for the letters of a word over the alphabet \(\{a,b\}\) consisting of \(m_{1}\)\(a\)'s and \(m_{2}\)\(b\)'s as described in the proof of Lemma 1. Thus, the r.v. \(C_{m_{1},m_{2}}\) counting the number of correct guesses when the guesser uses the optimal strategy for maximizing correct guesses, i.e., guessing the color corresponding to the larger number of cards present [5, 13, 15, 16], and the r.v. \(\hat{C}_{m_{1},m_{2}}\) are equally distributed, \(C_{m_{1},m_{2}}\mathop{=}^{\mathcal{L}}\hat{C}_{m_{1},m_{2}}\). Consequently, the auxiliary function \(F_{m_{1},m_{2}}(q)\) stated in Lemma 1 is the generating function of \(C_{m_{1},m_{2}}\):
\[F_{m_{1},m_{2}}(q)=\binom{m_{1}+m_{2}}{m_{1}}\cdot\mathbb{E}(q^{C_{m_{1},m_{2 }}}). \tag{3}\]
**Remark 1**.: In most works considering \(C_{m_{1},m_{2}}\) it is assumed without loss of generality that \(m_{1}\geq m_{2}\geq 0\). However, we note that by definition of the two-color card guessing game the order of the parameters is not of relevance under the optimal strategy: \(C_{m_{1},m_{2}}=C_{m_{2},m_{1}}\).
**Remark 2**.: As has been pointed out in [15], the two-color guessing procedure for the cards of a deck with \(m_{1}\) cards of type \(a\) (say color red) and \(m_{2}\) cards of type \(b\) (say color black) can be formulated also by means of the so-called sampling without replacement urn model starting with \(m_{1}\) and \(m_{2}\) balls of color red and black, respectively, where in each draw a ball is picked at random, the color inspected and then removed, until no more balls are left. Then the urn histories can be described via weighted lattice paths from \((m_{1},m_{2})\) to the origin with step sets "left" \((-1,0)\) and "down" \((0,-1)\): at position \((k_{1},k_{2})\), a left-step and a down-step have weights \(\frac{k_{1}}{k_{1}+k_{2}}\) and \(\frac{k_{2}}{k_{1}+k_{2}}\), respectively, and reflect the draw of a red ball or a black ball, resp., occurring with the corresponding probabilities. Several quantities of interest for card guessing games can be formulated also via parameters of the sample paths of this urn, such as the first hitting of the diagonal or the first hitting of one of the coordinate axis, which is used in a subsequent section.
Concerning a distributional analysis of \(X_{n}\), an important intermediate result is the following distributional equation, which we obtain by translating the recurrence relation (2) into a recursion for probability generating functions.
**Theorem 2** (One-time riffle and two-color card guessing).: _The random variable \(X=X_{n}\) of correctly guessed cards, starting with a deck of \(n\) cards, after a one-time riffle satisfies the following decomposition:_
\[X_{n}\mathop{=}^{\mathcal{L}}I_{1}\big{(}X_{n-1}^{*}+1\big{)}+(1-I_{1})\Big{(} I_{2}\cdot n+(1-I_{2})\cdot C_{n-1-J_{n},J_{n}}\Big{)}, \tag{4}\]
_where \(I_{1}\mathop{=}^{\mathcal{L}}\mathrm{Be}(0.5)\), \(I_{2}=\mathop{=}^{\mathcal{L}}\mathrm{Be}(0.5^{n-1})\), and \(C_{m_{1},m_{2}}\) denotes the number of correct guesses in a two-color card guessing game. Additionally, \(X_{n-1}^{*}\) is an independent copy of \(X\) defined on \(n-1\) cards. Moreover, \(J_{n}\mathop{=}^{\mathcal{L}}\mathrm{B}^{*}(n-1,p)\) denotes a truncated binomial
distribution:_
\[\mathbb{P}(J_{n}=j)=\binom{n-1}{j}/(2^{n-1}-1),\quad 0\leq j\leq n-2.\]
_All random variables \(I_{1}\), \(I_{2}\), \(J_{n}\), as well as \(C_{m_{1},m_{2}}\) are mutually independent._
Proof.: By definition, the probability generating function of \(X_{n}\) is given as follows:
\[\mathbb{E}(q^{X_{n}})=\frac{D_{n}(q)}{2^{n}}.\]
Thus, we get from (2) the equation
\[\mathbb{E}(q^{X_{n}})=\frac{1}{2}\cdot\mathbb{E}(q^{X_{n-1}+1})+\frac{1}{2^{n} }\cdot q^{n}+\frac{1}{2^{n}}\sum_{j=0}^{n-1}F_{n-1-j,j}(q). \tag{5}\]
As pointed out above, the probability generating function of \(C_{m_{1},m_{2}}\) is given via
\[\mathbb{E}(q^{C_{m_{1},m_{2}}})=\frac{F_{m_{1},m_{2}}(q)}{\binom{m_{1}+m_{2}} {m_{1}}}.\]
Thus, the last summand in (5) yields the following representation
\[\frac{1}{2^{n}}\sum_{i=0}^{n-2}F_{n-1-j,j}(q)=\frac{1}{2^{n}}\sum _{i=0}^{n-2}\mathbb{E}(q^{C_{n-1-j,j}})\cdot\binom{n-1}{j}\] \[\quad=\frac{2^{n-1}-1}{2^{n}}\sum_{i=0}^{n-2}\mathbb{E}(q^{C_{n-1 -j,j}})\cdot\frac{\binom{n-1}{j}}{2^{n-1}-1}\] \[\quad=\frac{1}{2}\Big{(}1-\frac{1}{2^{n-1}}\Big{)}\sum_{i=0}^{n-2 }\mathbb{E}(q^{C_{n-1-j,j}})\mathbb{P}\{J_{n}=j\}=\frac{1}{2}\Big{(}1-\frac{1 }{2^{n-1}}\Big{)}\mathbb{E}(q^{C_{n-1-J_{n},J_{n}}}).\]
Translating these expressions for the probability generating functions involved into a distributional equation leads to the stated result. Note that the fact that \(X_{n-1}^{*}\) indeed has the same distribution as \(X\) defined on a deck of \(n-1\) cards follows from equation (2).
The distributional decomposition together with the properties of the binomial distribution and the limit laws of two-color card guessing game allow to obtain a limit law for \(X_{n}\). By the classical de Moivre-Laplace theorem, we can approximate the binomial distribution \(J_{n}\) with mean \(\frac{n}{2}\) and standard deviation \(\sqrt{n}/2\) by a normal random variable. This suggests that we need to study \(C_{n-1-j,j}\) for \(j=\frac{n}{2}+x\sqrt{n}\), as \(n\) tends to infinity. We recall the limit law for the two-color card guessing game in the required range (see [15, 16] for a complete discussion of all different limit laws of \(C_{m_{1},m_{2}}\) depending on the growth behaviour of \(m_{1}\), \(m_{2}\); additionally, we also refer to [5, 23] for the case \(m_{1}=m_{2}\)).
**Theorem 3** (Limit law for two-color card guessing [15, 16]).: _Assume that the numbers \(m_{1}\), \(m_{2}\) satisfy \(m_{1}-m_{2}\sim\rho\cdot\sqrt{m_{1}}\), as \(m_{1}\to\infty\), with \(\rho>0\). Then, the number of correct guesses \(C_{m_{1},m_{2}}\) is asymptotically linear exponentially distributed,_
\[\frac{C_{m_{1},m_{2}}-m_{1}}{\sqrt{m_{1}}}\xrightarrow{\xi}\mathrm{LinExp}( \rho,2),\]
_or equivalently by explicitly stating the cumulative distribution function of \(\mathrm{LinExp}(\rho,2)\):_
\[\mathbb{P}\{C_{m_{1},m_{2}}\leq m_{1}+\sqrt{m_{1}}z\}\to 1-e^{-z(\rho+z)}, \quad\text{for }z\geq 0.\]
In order to derive a limit law for \(X_{n}\) we require first a limit law for \(C_{n-1-J_{n},J_{n}}\) as occurring in Theorem 2.
**Lemma 4**.: _The random variable \(C_{n-1-J_{n},J_{n}}\), with \(J_{n}\) as defined in Theorem 2, satisfies the following limit law:_
\[\frac{C_{n-1-J_{n},J_{n}}-\frac{n}{2}}{\sqrt{n}}\to G,\]
_where \(G\) denotes a generalized gamma distributed random variable with probability density function_
\[f(x)=\sqrt{\frac{2}{\pi}}\cdot 8x^{2}e^{-2x^{2}},\quad x\geq 0.\]
**Remark 3**.: This special instance of a generalized Gamma distribution is also known as a Maxwell-Boltzmann distribution with parameter \(a=1/2\), which is of important for describing particle speeds in idealized gases.
The first three raw integer moments of \(G\) are
\[\mathbb{E}(G)=\mu_{G}=\sqrt{\frac{2}{\pi}}\approx 0.7979,\quad\mathbb{E}(G^{2}) =\frac{3}{4},\quad\mathbb{E}(G^{3})=\sqrt{\frac{2}{\pi}}.\]
Consequently, the standard deviation \(\sigma_{G}\) and the skewness \(\gamma_{G}\) are given by
\[\sigma_{G}=\sqrt{\mathbb{E}(G^{2})-\mu_{G}^{2}}\approx 0.3367,\quad\gamma_{G}= \frac{\mathbb{E}(G^{3})-3\mu_{G}\mathbb{E}(G^{2})+2\mu_{G}^{3}}{\sigma_{G}^{3 }}\approx 0.4857,\]
leading to a right-skewed distribution, in agreement with the numerical observations of the limit law of \(X_{n}\) (which turns out to be \(G\) as well) in [14]. See Figure 2 for a plot of the density function of \(G\).
Proof.: We consider the distribution function
\[F_{n}(x)=\mathbb{P}\big{\{}C_{n-1-J_{n},J_{n}}\leq\frac{n}{2}+x\sqrt{n}\big{\}}\]
for fixed positive real \(x\). Conditioning on the truncated binomial distribution gives
\[F_{n}(x)=\sum_{j=0}^{n-2}\mathbb{P}\big{\{}C_{n-1-j,j}\leq\frac{n}{2}+x\sqrt{n }\big{\}}\mathbb{P}\{J_{n}=j\}.\]
We can exploit the symmetry of the binomial distribution, as well as \(C_{m_{1},m_{2}}\), to get
\[F_{n}(x)\sim 2\cdot\sum_{j=\lfloor n/2\rfloor}^{n-2}\mathbb{P}\big{\{}C_{j,n-1-j} \leq\frac{n}{2}+x\sqrt{n}\big{\}}\cdot\mathbb{P}\{J_{n}=j\}.\]
Figure 2. Plot of the density function \(f(x)\) of the generalized Gamma distribution occurring in Theorem 5 and Lemma 4.
By the de Moivre-Laplace limit theorem for the binomial distribution we get for large \(n\)
\[F_{n}(x)\sim 2\int_{n/2}^{n-2}\mathbb{P}\big{\{}C_{j,n-1-j}\leq\frac{n}{2}+x \sqrt{n}\big{\}}\cdot\frac{e^{-\frac{(j-\mu_{n})^{2}}{2\sigma_{n}^{2}}}}{\sigma _{n}\sqrt{2\pi}}dj,\]
where \(\mu_{n}=n/2\) and \(\sigma_{n}=\sqrt{n}/2\). Substituting \(j=\mu_{n}+t\sigma_{n}\), we obtain further
\[F_{n}(x)\sim 2\int_{0}^{\infty}\mathbb{P}\big{\{}C_{n/2+t\sqrt{n}/2,n-1-n/2-t \sqrt{n}/2}\leq\frac{n}{2}+x\sqrt{n}\big{\}}\cdot\frac{e^{-\frac{t^{2}}{2}}}{ \sqrt{2\pi}}dt.\]
Next, we asymptotically evaluate the integrand by using the limit law from Theorem 3 for the two-color card guessing game with
\[m_{1}=n/2+t\sqrt{n}/2,\quad m_{2}=n-1-n/2-t\sqrt{n}/2.\]
Since \(C_{m_{1},m_{2}}\geq\max\{m_{1},m_{2}\}\) (see, e.g., [15]), we deduce that for \(t>2x\) it holds
\[\mathbb{P}\big{\{}C_{n/2+t\sqrt{n}/2,n-1-n/2-t\sqrt{n}/2}\leq\frac{n}{2}+x \sqrt{n}\big{\}}\sim 0.\]
Furthermore, in the range \(0\leq t\leq 2x\) we obtain from Theorem 3, by setting \(\rho=\sqrt{2}t\) and \(z=\sqrt{2}(x-t/2)\),
\[\mathbb{P}\big{\{}C_{n/2+t\sqrt{n}/2,n-1-n/2-t\sqrt{n}/2}\leq \frac{n}{2}+x\sqrt{n}\big{\}}\\ \to 1-\exp\Big{(}-\sqrt{2}\big{(}x-\frac{t}{2}\big{)}\big{(} \sqrt{2}t+\sqrt{2}(x-\frac{t}{2})\big{)}\Big{)}=1-\exp\Big{(}-2x^{2}+\frac{t^ {2}}{2}\Big{)}.\]
This implies that
\[F_{n}(x) \sim\frac{2}{\sqrt{2\pi}}\cdot\int_{0}^{2x}e^{-t^{2}/2}\Big{(}1- \exp\Big{(}-2x^{2}+\frac{t^{2}}{2}\Big{)}\Big{)}dt\] \[=\frac{2}{\sqrt{2\pi}}\cdot\int_{0}^{2x}\Big{(}e^{-t^{2}/2}-e^{-2 x^{2}}\Big{)}dt=\frac{2}{\sqrt{2\pi}}\cdot\Big{(}\int_{0}^{2x}e^{-t^{2}/2}dt- \frac{2x}{\sqrt{2\pi}}e^{-2x^{2}}\Big{)}.\]
Differentiating the last expression with respect to \(x\) leads to the desired density function of the limiting r.v. \(G\),
\[f(x)=\frac{2}{\sqrt{2\pi}}\Big{(}e^{-2x^{2}}\cdot 2-2e^{-2x^{2}}+8x^{2}\cdot e ^{-2x^{2}}\Big{)}=\sqrt{\frac{2}{\pi}}\cdot 8x^{2}e^{-2x^{2}}.\]
Next we state the main result of this work, a limit law for the number of correct guesses \(X_{n}\). The limit law is the same as in Lemma 4, involving the generalized Gamma distribution.
**Theorem 5**.: _The normalized random variable \(Y_{n}=(X_{n}-\frac{n}{2})/\sqrt{n}\) converges in distribution to a generalized gamma distributed random variable \(G\), \(Y_{n}\mathop{\rightarrow}^{\mathcal{L}}G\), with density \(f(x)=\sqrt{\frac{2}{\pi}}\cdot 8x^{2}e^{-2x^{2}}\), \(x\geq 0\)._
**Remark 4** (A fixed-point equation).: Once we know that the limit law exists, one can informally derive the limit law from the distributional equation (4) by omitting asymptotically negligible terms:
\[Y_{n}\sim I_{1}\cdot Y_{n-1}+(1-I_{1})\frac{C_{n-1-J_{n},J_{n}}-\frac{n}{2}}{ \sqrt{n}},\]
where \(I_{1}=\mathrm{Be}(0.5)\). Thus, for large \(n\) we anticipate a sort of fixed-point equation for the limit law \(Y\) of \(Y_{n}\):
\[Y\sim I_{1}\cdot Y+(1-I_{1})\cdot G,\]
with \(G\) the generalized Gamma limit law. Similarly, we may anticipate that all integer moments of \(Y\) are simply the moments of \(G\):
\[\mathbb{E}(Y^{r})=\frac{1}{2}\mathbb{E}(Y^{r})+\frac{1}{2}\mathbb{E}(G^{r}), \quad\text{and further}\quad\mathbb{E}(Y^{r})=\mathbb{E}(G^{r}),\quad r\geq 0.\]
Proof.: According to Theorem 2 we get
\[\mathbb{P}\big{\{} X_{n}\leq\frac{n}{2}+x\sqrt{n}\big{\}}\] \[=\frac{1}{2}\mathbb{P}\big{\{} X_{n-1}+1\leq\frac{n}{2}+x\sqrt{n}\big{\}}+\Big{(}\frac{1}{2}-\frac{1}{2^{n}} \Big{)}\mathbb{P}\big{\{} C_{n-1-J_{n},J_{n}}\leq\frac{n}{2}+x\sqrt{n}\big{\}}.\]
Moreover, by iterating this recursive representation we observe that, for \(n\to\infty\),
\[\mathbb{P}\big{\{} X_{n}\leq\frac{n}{2}+x\sqrt{n}\big{\}}\sim\sum_{\ell\geq 1}\frac{1}{ 2^{\ell}}\cdot\mathbb{P}\big{\{} C_{n-\ell-J_{n-\ell},J_{n-\ell}}\leq\frac{n}{2}+x\sqrt{n}\big{\}}.\]
As \(n\) tends to infinity, Lemma 4 ensures that all the distribution functions occurring converge to the same limit, from which the stated result follows.
### Moment convergence
Krityakierne and Thanatipanonda [14] provided extremely precise results for the first few (factorial) moments of \(X_{n}\), as well as for the centered moments \(\mathbb{E}((X_{n}-\mu)^{r})\), for \(r=1,2,3\). We state a simplified version of their result:
\[\mu=\mathbb{E}(X_{n})=\frac{n}{2}+\sqrt{\frac{2n}{\pi}}-\frac{1}{ 2}+\mathcal{O}(n^{-1/2}),\quad\mathbb{E}\big{(}(X_{n}-\mu)^{2}\big{)}=\Big{(} \frac{3}{4}-\frac{2}{\pi}\Big{)}n+\mathcal{O}(1),\] \[\mathbb{E}\big{(}(X_{n}-\mu)^{3}\big{)}=\sqrt{\frac{2}{\pi}} \Big{(}\frac{4}{\pi}-\frac{5}{4}\Big{)}n^{3/2}+\mathcal{O}(n^{1/2}). \tag{6}\]
First we use above expansions of \(\mathbb{E}((X_{n}-\mu)^{r})\) to determine the asymptotics of the first moments of \(Y_{n}=(X_{n}-\frac{n}{2})/\sqrt{n}\) in a straightforward way. One observes that the limits of \(\mathbb{E}\big{(}Y_{n}^{r}\big{)}\), \(r=1,2,3\), are in agreement with the limit law \(G\) stated in Theorem 5.
**Proposition 2**.: _Let \(Y_{n}=(X_{n}-\frac{n}{2})/\sqrt{n}\). The moments \(\mathbb{E}(Y_{n}^{r})\) converge for \(r=1,2,3\) to the moments of the limit law \(G\):_
\[\mathbb{E}\big{(}Y_{n}\big{)}\to\sqrt{\frac{2}{\pi}}=\mathbb{E}(G),\quad \mathbb{E}\big{(}Y_{n}^{2}\big{)}\to\frac{3}{4}=\mathbb{E}(G^{2}),\quad \mathbb{E}\big{(}Y_{n}^{3}\big{)}\to\sqrt{\frac{2}{\pi}}=\mathbb{E}(G^{3}).\]
Proof.: The result for the expected value \(\mathbb{E}(Y_{n})\) follows directly from (6). In the following let \(\mu=\mathbb{E}(X_{n})=\frac{n}{2}+\delta_{n}\). Due to (6) it holds
\[\delta_{n}=\sqrt{\frac{2n}{\pi}}-\frac{1}{2}+\mathcal{O}(n^{-1/2}). \tag{7}\]
Consequently, the second centered moment can be rewritten as follows:
\[\mathbb{E}\big{(}(X_{n}-\mu)^{2}\big{)}=\mathbb{E}\big{(}(X_{n}-\frac{n}{2}- \delta_{n})^{2}\big{)}=\mathbb{E}\big{(}(X_{n}-\frac{n}{2})^{2}\big{)}-2\delta _{n}\mathbb{E}\big{(}X_{n}-\frac{n}{2}\big{)}+\delta_{n}^{2},\]
which gives, by using expansions (6) and (7),
\[\mathbb{E}\big{(}Y_{n}^{2}\big{)} =\frac{1}{n}\mathbb{E}\big{(}(X_{n}-\frac{n}{2})^{2}\big{)}=\frac {1}{n}\Big{[}\mathbb{E}\big{(}(X_{n}-\mu)^{2}\big{)}+2\delta_{n}\mathbb{E} \big{(}X_{n}-\frac{n}{2}\big{)}-\delta_{n}^{2}\Big{]}\] \[=\frac{1}{n}\Big{[}\mathbb{E}\big{(}(X_{n}-\mu)^{2}\big{)}+ \delta_{n}^{2}\Big{]}\sim\frac{3}{4}.\]
In a similar way, by rewriting the third centered moment and using (6) and (7), one obtains the stated result for \(\mathbb{E}(Y_{n}^{3})\).
Actually, in the following we are going to show that indeed all integer moments of \(Y_{n}\) converge to the corresponding moments of the limit law \(G\). Let us first state them.
**Proposition 3**.: _The integer moments of the generalized gamma distributed random variable \(G\) with probability density function as defined in Lemma 4 are given as follows:_
\[\mathbb{E}\big{(}G^{r}\big{)}=\frac{\Gamma\big{(}\frac{r+3}{2}\big{)}}{2^{\frac{ r}{2}-1}\sqrt{\pi}},\quad r\geq 0.\]
Proof.: A straightforward evaluation of the defining integral of the \(r\)-th moment of \(G\) by means of the \(\Gamma\)-function after substituting \(t=2x^{2}\) yields the stated result:
\[\mathbb{E}\big{(}G^{r}\big{)} =\int_{0}^{\infty}f(x)x^{r}dx=8\sqrt{\frac{2}{\pi}}\cdot\int_{0}^ {\infty}x^{r+2}e^{-2x^{2}}dx=\frac{1}{2^{\frac{r}{2}-1}\sqrt{\pi}}\cdot\int_{0 }^{\infty}t^{\frac{r+1}{2}}e^{-t}dt\] \[=\frac{\Gamma\big{(}\frac{r+3}{2}\big{)}}{2^{\frac{r}{2}-1}\sqrt{ \pi}}.\]
**Theorem 6**.: _Let \(Y_{n}=(X_{n}-\frac{n}{2})/\sqrt{n}\). The \(r\)-th integer moments \(\mathbb{E}(Y_{n}^{r})\) converge, for arbitrary but fixed \(r\) and \(n\to\infty\), to the moments of the limit law \(G\):_
\[\mathbb{E}\big{(}Y_{n}^{r}\big{)}\to\mathbb{E}\big{(}G^{r}\big{)}=\frac{\Gamma \big{(}\frac{r+3}{2}\big{)}}{2^{\frac{r}{2}-1}\sqrt{\pi}},\quad r\geq 0.\]
**Remark 5**.: Since the generalized gamma distributed r.v. \(G\) is uniquely characterized by its moments (which easily follows, e.g., from simple growth bounds), we note that an application of the moment's convergence theorem of Frechet and Shohat (see, e.g., [19]) immediately shows convergence in distribution of \(Y_{n}\) to \(G\), thus gives an alternative proof of Theorem 5.
To show Theorem 6 we will again start with the recursive description of \(D_{n}(q)\) given in Lemma 1, but in order to deal with this recurrence we use an alternative approach based on generating functions and basic techniques from analytic combinatorics [8]. Furthermore, we use explicit formulae for a suitable bivariate generating function of \(\mathbb{E}\big{(}q^{C_{m_{1},m_{2}}}\big{)}\) and the so-called diagonal as have been derived in [13, 15]. They can be stated in the following form.
**Proposition 4** ([13, 15]).: _The g.f. \(\tilde{F}(x,y,q)=\sum\limits_{m_{1}\geq m_{2}\geq 0}\binom{m_{1}+m_{2}}{m_{1}} \mathbb{E}\big{(}q^{C_{m_{1},m_{2}}}\big{)}x^{m_{1}}y^{m_{2}}\) and \(\tilde{F}_{0}(x,y,q)=\sum\limits_{m\geq 0}\binom{2m}{m}\mathbb{E}\big{(}q^{C_{m,m}}\big{)}x^{m}y^{m}\) are given as follows:_
\[\tilde{F}(x,y,q) =\frac{1-y}{1-qx-y}+\frac{qxy(q-(1+q)y)}{(1-qx-y)(1-B(qxy))(1-(1+ q)B(qxy))},\] \[\tilde{F}_{0}(x,y,q) =\frac{1}{1-(1+q)B(qxy)},\]
_where \(B(t)=\frac{1-\sqrt{1-4t}}{2}=\sum_{n\geq 1}\frac{1}{n}\binom{2n-2}{n-1}t^{n}\) denotes the g.f. of the shifted Catalan-numbers._
With these results we obtain a generating functions solution of recurrence (2) for \(D_{n}(q)\).
**Lemma 7**.: _The bivariate generating function_
\[D(z,q)=\sum_{n\geq 0}D_{n}(q)z^{n}=\sum_{n\geq 0}2^{n}\mathbb{E}\big{(}q^{X_{n} }\big{)}z^{n}\]
_is given by the following explicit formula, with \(B(t)=\frac{1-\sqrt{1-4t}}{2}\):_
\[D(z,q)=\frac{1-z}{(1-qz)^{2}}+\frac{z}{1-qz}\left[\frac{2(1-z)}{1-(1+q)z}\right.\]
\[\frac{1}{8}\left(1+\mathcal{O}\big{(}\mathcal{Z}^{-\frac{1}{2}}\big{)}\right),\quad r \geq 0.\]
**Remark 6**.: We remark that a closer inspection shows that the second dominant singularity \(\rho_{2}=-\frac{1}{2}\) occurring in the functions \(g_{r}(z)\) defined by Lemma 8 yield contributions that do not affect the main terms stemming from the contributions of the singularity \(\rho=\rho_{1}=\frac{1}{2}\). Since we are here only interested in the main term contribution, we will restrict ourselves to elaborate the expansion around \(\rho\). However, the presence of two dominant singularities is reflected by the fact, that lower order terms of the asymptotic expansions of the \(r\)-th moments of \(X_{n}\) are different for \(n\) even and \(n\) odd, resp., as has been observed in [14].
Proof.: Using (9) and the explicit formula of \(D(z,q)\) given in Lemma 7, one gets after simple manipulations
\[\hat{D}(z,q)=\frac{\sqrt{q}(\sqrt{q}-z)}{(\sqrt{q}-qz)^{2}}+\frac{2 z(\sqrt{q}-z)}{(\sqrt{q}-(1+q)z)(\sqrt{q}-qz)}\\ +\frac{z\big{(}\sqrt{q}(q-1)+\big{(}(1+q)z-q^{\frac{3}{2}}\big{)} \big{(}1-2B(z^{2})\big{)}\big{)}}{(1-(1+q)B(z^{2}))(\sqrt{q}-(1+q)z)(\sqrt{q}- qz)}. \tag{11}\]
We set \(q=1+u\) and carry out a series expansion of the summands of (11) around \(u=0\). Since this is a rather straightforward task using essentially the binomial series, but leads to rather lengthy computations when one intends to be exhaustive in every step, we will here only give a sketch of such computations and are omitting some of the details.
When treating the first summand in (11) and inspecting the coefficients in the series expansion around \(u=q-1=0\),
\[\hat{D}^{[1]}(z,q):=\frac{\sqrt{q}\,(\sqrt{q}-z)}{(\sqrt{q}-qz)^{2}}=\sum_{r \geq 0}g_{r}^{[1]}(z)u^{r},\]
one easily observes that the functions \(g_{r}^{[1]}(z)\) are analytic for \(|z|<1\) (to be more precise, the unique dominant singularity is at \(z=1\)), which causes exponentially small contributions for the coefficients \([z^{r}]g_{r}^{[1]}(z)\) compared to the remaining summands. Thus, these contributions are negligible and do not have to be considered further.
When expanding the second summand of (11) around \(u=q-1=0\),
\[\hat{D}^{[2]}(z,q):=\frac{2z(\sqrt{q}-z)}{(\sqrt{q}-(1+q)z)(\sqrt{q}-qz)}= \sum_{r\geq 0}g_{r}^{[2]}(z)u^{r}, \tag{12}\]
we have to treat with more care the factor \((\sqrt{q}-(1+q)z)^{-1}\). First, by using the binomial series we get
\[\sqrt{q}-(1+q)z=\sqrt{1+u}-(2+u)z=(1-2z)\Big{(}1+\frac{u}{2}+\sum_{k\geq 2} \frac{c_{k}}{1-2z}u^{k}\Big{)},\]
with \(c_{k}=\big{(}\frac{1}{k}\big{)}\), and further, by using the geometric series,
\[\frac{1}{\sqrt{q}-(1+q)z}=\frac{1}{(1-2z)\big{(}1+\frac{u}{2} \big{(}1+\sum_{k\geq 1}\frac{2c_{k+1}}{1-2z}u^{k}\big{)}\big{)}}\\ =\mathcal{Z}\Big{(}1+\sum_{\ell\geq 1}(-\frac{1}{2})^{\ell}u^{ \ell}\big{(}1+\sum_{k\geq 1}2c_{k+1}\mathcal{Z}u^{k}\big{)}^{\ell}\Big{)}. \tag{13}\]
From this expansion it is apparent that all the coefficients of \(u^{r}\) in the series expansion, considered as functions in \(z\), have a unique dominant singularity at \(z=\rho=\frac{1}{2}\). Furthermore, for \(\ell\geq 1\) we obtain the following expansion in powers of \(u\) and locally around \(z=\rho\), i.e., \(\mathcal{Z}=\infty\):
\[\big{(}1+\sum_{k\geq 1}2c_{k+1}\mathcal{Z}u^{k}\big{)}^{\ell}=1+ \ell(2c_{2})\mathcal{Z}u+\sum_{j=2}^{\ell}\binom{\ell}{j}(2c_{2})^{j}\mathcal{ Z}^{j}\big{(}1+\mathcal{O}(\mathcal{Z}^{-1})\big{)}u^{j}\\ +\ell(2c_{2})^{\ell-1}(2c_{3})\mathcal{Z}^{\ell}\big{(}1+\mathcal{ O}(\mathcal{Z}^{-1})\big{)}u^{\ell+1}+\sum_{k\geq\ell+2}\mathcal{O}(\mathcal{Z}^{ \ell})u^{k},\]
which, after plugging into (13) and using \(c_{2}=-\frac{1}{8}\), \(c_{3}=\frac{1}{16}\) leads to the required expansion:
\[\frac{1}{\sqrt{q}-(1+q)z}=\mathcal{Z}-\frac{\mathcal{Z}}{2}u+\sum_{\ell\geq 1 }\bigg{[}(\frac{1}{8})^{\ell}\mathcal{Z}^{\ell+1}(1+\mathcal{O}(\mathcal{Z}^{-1 }))u^{2\ell}\]
\[-\frac{1}{2}(\frac{1}{8})^{\ell}(2\ell+1)\mathcal{Z}^{\ell+1}(1+ \mathcal{O}(\mathcal{Z}^{-1}))u^{2\ell+1}\right]. \tag{14}\]
Next, it is easy to see that the coefficients in the expansion around \(u=0\) of the remaining factors of \(\hat{D}^{[2]}(z,q)\) are functions in \(z\) with radius of convergence \(1\), and one gets
\[\frac{2z(\sqrt{q}-z)}{\sqrt{q}-qz}=1+\mathcal{O}(\mathcal{Z}^{-1})+(1+ \mathcal{O}(\mathcal{Z}^{-1}))u+\sum_{r\geq 2}\mathcal{O}(\mathcal{Z}^{0})u^{r}. \tag{15}\]
Combining the expansions (14) and (15), we obtain that the functions \(g_{r}^{[2]}(z)\) in expansion (12) have a unique dominant singularity at \(z=\rho\) and allow there the local expansions
\[g_{r}^{[2]}(z)=\begin{cases}(\frac{1}{8})^{\ell}\mathcal{Z}^{\ell+1}(1+ \mathcal{O}(\mathcal{Z}^{-1})),&\text{for $r=2\ell$ even},\\ -\frac{1}{2}(\frac{1}{8})^{\ell}(2\ell-1)\mathcal{Z}^{\ell+1}(1+\mathcal{O}( \mathcal{Z}^{-1})),&\text{for $r=2\ell+1$ odd}.\end{cases} \tag{16}\]
Finally, we consider an expansion in powers of \(u=q-1\) of the third summand of (11),
\[\hat{D}^{[3]}(z,q):=\frac{z\big{(}\sqrt{q}(q-1)+\big{(}(1+q)z-q^{\frac{3}{2}} \big{)}\big{(}1-2B(z^{2})\big{)}\big{)}}{(1-(1+q)B(z^{2}))(\sqrt{q}-(1+q)z)( \sqrt{q}-qz)}. \tag{17}\]
Let us define \(\tilde{\mathcal{Z}}=\frac{1}{1-4z^{2}}\). Since \(B(z^{2})=\frac{1}{2}(1-\tilde{\mathcal{Z}}^{-\frac{1}{2}})\), we get
\[1-(1+q)B(z^{2})=\tilde{\mathcal{Z}}^{-\frac{1}{2}}\Big{(}1-\frac{1}{2}( \tilde{\mathcal{Z}}^{\frac{1}{2}}-1)\Big{)}\]
and thus
\[\frac{1}{1-(1+q)B(z^{2})}=\frac{\tilde{\mathcal{Z}}^{\frac{1}{2}}}{1-\frac{1} {2}(\tilde{\mathcal{Z}}^{\frac{1}{2}}-1)u}=\tilde{\mathcal{Z}}^{\frac{1}{2}} \Big{(}1+\sum_{r\geq 1}\Big{(}\frac{1}{2}(\tilde{\mathcal{Z}}^{\frac{1}{2}}-1)u \Big{)}^{r}\Big{)}. \tag{18}\]
Therefore, for this factor of \(\hat{D}^{[3]}(z,q)\) we obtain that the coefficients of \(u^{r}\) are functions in \(z\) with two dominant singularities \(\rho_{1,2}=\pm\frac{1}{2}\). However, as already pointed out in Remark 6, the contributions stemming from the singularity \(\rho_{2}=-\frac{1}{2}\) do not affect the main term contributions and thus they are not considered any further. Since \(\tilde{\mathcal{Z}}=\frac{1}{(1-2z)(1+2z)}=\frac{1}{2}\mathcal{Z}(1+\mathcal{ O}(\mathcal{Z}^{-1}))\), we thus obtain from (18) the local expansion around \(z=\rho\):
\[\frac{1}{1-(1+q)B(z^{2})}=\sum_{r\geq 0}(\frac{1}{2})^{\frac{3r+1}{2}}\mathcal{ Z}^{\frac{r+1}{2}}(1+\mathcal{O}(\mathcal{Z}^{-\frac{1}{2}}))u^{r}. \tag{19}\]
In a similar fashion one obtains the expansion
\[\frac{z\big{(}\sqrt{q}(q-1)+\big{(}(1+q)z-q^{\frac{3}{2}}\big{)} \big{(}1-2B(z^{2})\big{)}\big{)}}{\sqrt{q}-qz}\\ =-2^{\frac{1}{2}}\mathcal{Z}^{-\frac{3}{2}}(1+\mathcal{O}(\mathcal{ Z}^{-1}))+(1+\mathcal{O}(\mathcal{Z}^{-\frac{1}{2}}))u+\sum_{r\geq 2} \mathcal{O}(\mathcal{Z}^{0})u^{r}, \tag{20}\]
whereas the last factor of \(\hat{D}^{[3]}(z,q)\) has been treated already in (14). Combining expansions (19), (20) and (14), we get
\[\hat{D}^{[3]}(z,q)=\Big{(}\sum_{r\geq 0}(\frac{1}{8})^{ \frac{r}{2}}\mathcal{Z}^{\frac{r}{2}}(1+\mathcal{O}(\mathcal{Z}^{-\frac{1}{2}}) )u^{r}\Big{)} \tag{21}\] \[\cdot\Big{(}\sum_{\ell\geq 0}(\frac{1}{8})^{\ell}\mathcal{Z}^{ \ell}(1+\mathcal{O}(\mathcal{Z}^{-1}))u^{2\ell}+(-\frac{1}{2})(\frac{1}{8})^{ \ell}(2\ell+1)\mathcal{Z}^{\ell}(1+\mathcal{O}(\mathcal{Z}^{-1}))u^{2\ell+1} \Big{)}\] \[\cdot\Big{(}-(1+\mathcal{O}(\mathcal{Z}^{-1}))+(\frac{1}{2})^{ \frac{1}{2}}\mathcal{Z}^{\frac{3}{2}}(1+\mathcal{O}(\mathcal{Z}^{-\frac{1}{2} }))u+\sum_{r\geq 2}\mathcal{O}(\mathcal{Z}^{\frac{3}{2}})u^{r}\Big{)}.\]
To compute the Cauchy product of the first two factors of (21) we use (with some coefficients \(\alpha_{r},\beta_{r}\in\mathbb{R}\)):
\[\Big{(}\sum_{r\geq 0}\alpha_{r}\mathcal{Z}^{\frac{r}{2}}(1+\mathcal{O}( \mathcal{Z}^{-\frac{1}{2}}))u^{r}\Big{)}\cdot\Big{(}\sum_{r\geq 0}\beta_{r} \mathcal{Z}^{\lfloor\frac{r}{2}\rfloor}(1+\mathcal{O}(\mathcal{Z}^{-\frac{1}{2 }}))u^{r}\Big{)}\] \[=\sum_{r\geq 0}\gamma_{r}\mathcal{Z}^{\frac{r}{2}}(1+\mathcal{O}( \mathcal{Z}^{-\frac{1}{2}}))u^{r},\qquad\text{with}\quad\gamma_{r}=\sum_{\ell= 0}^{\lfloor\frac{r}{2}\rfloor}\beta_{2\ell}\,\alpha_{r-2\ell}.\]
In particular, for \(\alpha_{r}=(1/8)^{\frac{r}{2}}\) and \(\beta_{2\ell}=(1/8)^{\ell}\) one gets \(\gamma_{r}=(1/8)^{\frac{r}{2}}(\lfloor r/2\rfloor+1)\), which eventually shows that the coefficients \(g_{r}^{[3]}(z)\) in the expansion of \(\hat{D}^{[3]}(z,q)\) around \(u=q-1=0\) are given as follows:
\[g_{r}^{[3]}(z)=\begin{cases}-(1+\mathcal{O}(\mathcal{Z}^{-1})),&\text{for }r=0,\\ 2(\frac{1}{8})^{\frac{r}{2}}(\lfloor\frac{r-1}{2}\rfloor+1)\mathcal{Z}^{\frac {r}{2}+1}(1+\mathcal{O}(\mathcal{Z}^{-\frac{1}{2}})),&\text{for }r\geq 1.\end{cases} \tag{22}\]
Thus, combining (16) and (22) one obtains, after simple manipulations, the stated local expansion of the coefficients \(g_{r}(z)=g_{r}^{[1]}(z)+g_{r}^{[2]}(z)+g_{r}^{[3]}(z)\) in the series expansion of \(\hat{D}(z,q)\) around \(u=q-1=0\).
The expansion of \(\hat{D}(z,q)\) stated in Lemma 8 easily yields the asymptotic behaviour of the moments of \(Y_{n}\).
Proof of Theorem 6.: According to the definition of \(\hat{X}_{n}\) and relation (10) we get for the factorial moments:
\[\mathbb{E}\big{(}\hat{X}_{n}^{\mathtt{r}}\big{)}=\frac{r![z^{n}u^{r}]\hat{D} (z,1+u)}{2^{n}}=\frac{r![z^{n}]g_{r}(z)}{2^{n}},\]
with \(g_{r}(z)\) as defined in Lemma 8. Since the dominant singularity of \(g_{r}(z)\) relevant for the asymptotic behaviour of the main term is at \(z=\rho=\frac{1}{2}\) (see Remark 6) with a local expansion stated in above lemma, we can apply basic transfer lemmata [8] to obtain for the coefficients:
\[[z^{n}]g_{r}(z) =[z^{n}](r+1)(\frac{1}{8})^{\frac{r}{2}}\frac{1}{(1-2z)^{\frac{r} {2}+1}}\cdot\big{(}1+\mathcal{O}(\sqrt{1-2z})\big{)}\] \[=(r+1)(\frac{1}{8})^{\frac{r}{2}}\frac{2^{n}n^{\frac{r}{2}}}{ \Gamma(\frac{r}{2}+1)}\cdot\big{(}1+\mathcal{O}(n^{-\frac{1}{2}})\big{)}.\]
Thus, the asymptotic behaviour of the factorial moments is given by
\[\mathbb{E}\big{(}\hat{X}_{n}^{\mathtt{r}}\big{)}=\frac{(r+1)!(\frac{1}{8})^{ \frac{r}{2}}}{\Gamma(\frac{r}{2}+1)}n^{\frac{r}{2}}\cdot\big{(}1+\mathcal{O}( n^{-\frac{1}{2}})\big{)},\quad r\geq 0. \tag{23}\]
Since the \(r\)-th integer moments can be obtained by a linear combination of the factorial moments of order \(\leq r\), due to \(\mathbb{E}\big{(}\hat{X}_{n}^{r}\big{)}=\mathbb{E}\big{(}\hat{X}_{n}^{ \mathtt{r}}\big{)}+\mathcal{O}\big{(}\mathbb{E}\big{(}\hat{X}_{n}^{\mathtt{r} -1}\big{)}\big{)}=\mathbb{E}\big{(}\hat{X}_{n}^{\mathtt{r}}\big{)}\cdot\big{(}1 +\mathcal{O}(n^{-\frac{1}{2}})\big{)}\) the same asymptotic behaviour (23) also holds for the raw moments. An application of the duplication formula for the \(\Gamma\)-function gives then the alternative representation
\[\mathbb{E}\big{(}\hat{X}_{n}^{r}\big{)}=\frac{\Gamma\big{(}\frac{r+3}{2}\big{)} }{2^{\frac{r}{2}-1}\sqrt{\pi}}\cdot\big{(}1+\mathcal{O}(n^{-\frac{1}{2}})\big{)},\quad r\geq 0. \tag{24}\]
Since \(\hat{X}_{n}=X_{n}-n/2=\sqrt{n}\,Y_{n}\), equation (24) implies \(\mathbb{E}(Y_{n}^{r})\to\mathbb{E}(G^{r})\) as stated.
## 3. First pure luck guess
So far, we have been interested in the total number of correct guesses. As the guesser follows the optimal strategy, the chances of a correct guess are always greater or equal \(50\) percent. Starting with a deck of \(n\) cards, we might be interested in the number of cards \(P_{n}\) (divided by two) remaining in the deck when the first "pure luck guess" with only a \(50\) percent success chance occurs. By Proposition 1 and Theorem 2, this can only happen after the "first phase" of always guessing the smallest number remaining in the deck has failed
and thus finished and so the "two-color card guessing process" has been started already. Similar to Theorem 2 we obtain for \(P:=P_{n}\) the distributional equation
\[P_{n}\mathop{=}^{\mathcal{L}}I_{1}\cdot P_{n-1}^{*}+(1-I_{1})(1-I_{2})\cdot H_{n -1-J_{n},J_{n}},\]
where \(I_{1}\mathop{=}^{\mathcal{L}}\operatorname{Be}(0.5)\), \(I_{2}\mathop{=}^{\mathcal{L}}\operatorname{Be}(0.5^{n-1})\), and \(H_{m_{1},m_{2}}\) denotes the number of cards present, divided by two, in a two-color card guessing game when for the first time a pure luck guess occurs. Additionally, \(P_{n-1}^{*}\) is an independent copy of \(P\) defined on \(n-1\) cards. Moreover, as in Theorem 2, \(J_{n}\mathop{=}^{\mathcal{L}}\operatorname{B}^{*}(n-1,p)\) denotes a truncated binomial distribution:
\[\mathbb{P}(J_{n}=j)=\binom{n-1}{j}/(2^{n-1}-1),\quad 0\leq j\leq n-2.\]
All random variables \(I_{1}\), \(I_{2}\), \(J_{n}\), as well as \(H_{m_{1},m_{2}}\) are mutually independent.
We use a limit law for \(H_{m_{1},m_{2}}\), for a certain regime of \(m_{1},m_{2}\) when both parameters tending to infinity, relying on results of [15, 16].
First, we require a new distribution, a functional of a Levy distributed random variable \(L=\operatorname{Levy}(c)\), \(c>0\), with density
\[f_{L}(x)=\sqrt{\frac{c}{2\pi}}\frac{e^{-c/(2x)}}{x^{3/2}},\quad x>0. \tag{25}\]
**Definition 1** (Reciprocal of a shifted Levy distribution).: Let \(L=\operatorname{Levy}(c)\), \(c>0\). Then, let \(R=R(c)\) denote the reciprocal of the shifted random variable \(1+L\):
\[R=\frac{1}{1+L},\quad\text{with support }(0,1).\]
The density of \(R\) is given by
\[f_{R}(x)=\sqrt{\frac{c}{2\pi}}\cdot\frac{1}{(1-x)^{3/2}x^{1/2}}\cdot e^{-\frac {cx}{2(1-x)}},\quad 0<x<1.\]
This random variable in terms of above density function has been appeared already in several applications. See for example [16] for the limit law of the hitting time in sampling without replacement or [10, 11] for its occurrence in the limit law of an uncover process for random trees. Moreover, this random variable has appeared earlier in context of the standard additive coalescent, where also the relation to the Levy distribution has been observed by Aldous and Pitman [1, Corollary 5 and Theorem 6]. We also note the random variable appears as the limit law of random dynamics on the edges of a uniform Cayley tree, a so-called "fire on tree" model [2]. In contrast to the Levy distribution, the random variable \(R\) has integer moments of all orders. In the special case of \(c=1\) the moments have a particularly interesting structure [2, Lemma 3]:
\[\mathbb{E}(R^{k})=\mathbb{E}\big{(}\exp(-\chi(2k))\big{)},\]
where \(\chi(2k)\) is a chi-variable with \(2k\) degrees of freedom, with density
\[\frac{2^{1-k}}{(k-1)!}x^{2k-1}\exp(-x^{2}/2)dx,\quad x\geq 0.\]
Finally, we note that it is easy to see that \(R\) has the stated density function:
\[F_{R}(x) =\mathbb{P}\{R\leq x\}=\mathbb{P}\Big{\{}\frac{1}{1+L}\leq x \Big{\}}=\mathbb{P}\Big{\{}\frac{1}{x}\leq 1+L\Big{\}}\] \[=\mathbb{P}\Big{\{}L\geq\frac{1}{x}-1\Big{\}}=1-\mathbb{P}\Big{\{} L<\frac{1-x}{x}\Big{\}}.\]
Consequently,
\[f_{R}(x)=-f_{L}\big{(}(1-x)/x\big{)}\cdot(-1)\cdot x^{-2}=\sqrt{\frac{c}{2\pi} }\frac{e^{-cx/(2(1-x))}x^{3/2}}{(1-x)^{3/2}}\cdot\frac{1}{x^{2}},\]
immediately leading to the stated density.
Next, we use the following result.
**Lemma 9** (Hitting time and first pure luck guess).: _Let \(H_{m_{1},m_{2}}\) denote the random variable counting the number of remaining cards, divided by two, when for the first time a pure luck guess happens in the two-color card guessing game, starting with \(m_{1}\) red and \(m_{2}\) black cards. Assume further that \(m_{1},m_{2}\to\infty\) and \(m_{2}=m_{1}-\rho\sqrt{m_{1}}\), with \(\rho>0\). Then,_
\[\frac{H_{m_{1},m_{2}}}{m_{1}}\xrightarrow{\mathcal{L}}R(\rho^{2}/2).\]
Proof.: We combine arguments of [15, 16]: by the results of [15], the weighted sample paths of the two-color card guessing game coincide with the sample paths of the sampling without replacement urn (see also Remark 2). In particular, this holds with respect to the hitting position of the diagonal \(x=y\), as a crossing of the diagonal without hitting cannot happen. In [16] such hitting positions have been studied in a general setting for paths starting at \((m_{1},m_{2})\), with \(m_{1}\geq tm_{2}+s\), and absorbing lines \(y=x/t-s/t\), for \(t\in\mathbb{N}\) and \(s\in\mathbb{N}_{0}\). For our purpose we set \(t=1\) and \(s=0\) in [16, Theorem 2 (4)], which gives for \(0<x<1\):
\[\mathbb{P}\Big{\{}\frac{H_{m_{1},m_{2}}}{m_{1}}\leq x\Big{\}}\sim\int_{0}^{x} \frac{\rho}{\sqrt{2}\sqrt{2\pi}}\frac{1}{\sqrt{u}\left(1-u\right)^{\frac{3}{2 }}}\cdot e^{-\frac{\rho^{2}u}{4(1-u)}}du=\int_{0}^{x}f_{R}(u)du,\]
with \(f_{R}(x)\) the density of the reciprocal of a shifted Levy distribution with parameter \(c=\rho^{2}/2\). Thus, this shows the stated limit law.
In order to obtain the limit law of \(P_{n}\) we require the limit law of \(H_{n-1-J_{n},J_{n}}\), which will be determined next.
**Lemma 10**.: _The random variable \(H_{n-1-J_{n},J_{n}}\) has an Arcsine limit law \(\beta(\frac{1}{2},\frac{1}{2})\):_
\[\frac{H_{n-1-J_{n},J_{n}}\xrightarrow{\mathcal{L}}\beta}{\frac{1}{2}}\beta \big{(}\frac{1}{2},\frac{1}{2}\big{)},\]
_i.e., after suitable scaling, it converges in distribution to a Beta-distributed r.v. with parameters \(1/2\) and \(1/2\) that has the probability density function_
\[f_{\beta}(x)=\frac{1}{\pi}\frac{1}{\sqrt{x(1-x)}},\quad 0<x<1.\]
In a way analogous to the proof of Theorem 5, this lemma readily leads to the main result of this section.
**Theorem 11**.: _The random variable \(P_{n}\) counting the number of remaining cards, divided by two, when the first pure luck guess with only a \(50\) percent success chance occurs, starting with \(n\) ordered cards and performing a single riffle shuffle, has a \(\beta\big{(}\frac{1}{2},\frac{1}{2}\big{)}\) limit law, a so-called Arcsine distribution:_
\[\frac{P_{n}}{\frac{n}{2}}\to\beta\big{(}\frac{1}{2},\frac{1}{2}\big{)}.\]
Proof of Lemma 10.: We proceed similar to the proof of Lemma 4. We study the distribution function \(F(k)=\mathbb{P}\{H_{n-1-J_{n},J_{n}}\leq k\}\) and obtain
\[F(k)=\sum_{j=0}^{n-2}\frac{\binom{n-1}{j}}{2^{n-1}-1}\mathbb{P}\{H_{n-1-j,j} \leq k\}.\]
We use the symmetry of the binomial distribution around \(\lfloor n/2\rfloor\) as well as \(H_{m_{1},m_{2}}=H_{m_{2},m_{1}}\) and approximate the binomial distribution using the de Moivre-Laplace theorem. This leads to
\[F(k)\sim 2\int_{\lfloor n/2\rfloor}^{n}e^{-\frac{(j-kp)^{2}}{2\sigma_{n}^{2}}} \cdot\frac{1}{\sigma_{n}\sqrt{2\pi}}\cdot\mathbb{P}\{H_{j,n-1-j}\leq k\}dj,\]
where \(\mu_{n}=n/2\) and \(\sigma_{n}=\sqrt{n}/2\). Changing the range of integration and the choice \(k=x\cdot n/2\), with \(0<x<1\), leads then, together with Lemma 9, to the improper integral
\[F(k) =\mathbb{P}\Big{\{}\frac{H_{n-1-J_{n},J_{n}}}{n/2}\leq x\Big{\}}\] \[\sim 2\int_{0}^{\infty}e^{-t^{2}/2}\frac{1}{\sqrt{2\pi}}\int_{0}^{x }\frac{t}{\sqrt{2\pi u}(1-u)^{3/2}}\cdot e^{-\frac{\sigma^{2}u}{2(1-u)}}du\;dt.\]
Derivation with respect to \(x\) gives then the desired density function, where the arising improper integral is readily evaluated:
\[\int_{0}^{\infty}e^{-t^{2}/2}\cdot t\cdot e^{-t^{2}g/2}dt=\frac{1}{1+g},\qquad \text{for }g\geq 0.\]
Setting \(g=x/(1-x)\) immediately yields the Arcsine law density function
\[f_{\beta}(x)=\frac{1}{\pi}\frac{1}{\sqrt{x(1-x)}},\quad 0<x<1.\]
Finally, we note that the number of guesses with success probability one, i.e., where the guesser knows in advance to be correct, can be treated in a similar way.
## Declarations of interest
The authors declare that they have no competing financial or personal interests that influenced the work reported in this paper.
|
2310.02932 | Assessing Large Language Models on Climate Information | As Large Language Models (LLMs) rise in popularity, it is necessary to assess
their capability in critically relevant domains. We present a comprehensive
evaluation framework, grounded in science communication research, to assess LLM
responses to questions about climate change. Our framework emphasizes both
presentational and epistemological adequacy, offering a fine-grained analysis
of LLM generations spanning 8 dimensions and 30 issues. Our evaluation task is
a real-world example of a growing number of challenging problems where AI can
complement and lift human performance. We introduce a novel protocol for
scalable oversight that relies on AI Assistance and raters with relevant
education. We evaluate several recent LLMs on a set of diverse climate
questions. Our results point to a significant gap between surface and
epistemological qualities of LLMs in the realm of climate communication. | Jannis Bulian, Mike S. Schäfer, Afra Amini, Heidi Lam, Massimiliano Ciaramita, Ben Gaiarin, Michelle Chen Hübscher, Christian Buck, Niels G. Mede, Markus Leippold, Nadine Strauß | 2023-10-04T16:09:48Z | http://arxiv.org/abs/2310.02932v2 | # Assessing Large Language Models
###### Abstract
Understanding how climate change affects us and learning about available solutions are key steps toward empowering individuals and communities to mitigate and adapt to it. As Large Language Models (LLMs) rise in popularity, it is necessary to assess their capability in this domain. In this study, we present a comprehensive evaluation framework, grounded in science communication principles, to analyze LLM responses to climate change topics. Our framework emphasizes both the presentational and epistemological adequacy of answers, offering a fine-grained analysis of LLM generations. Spanning 8 dimensions, our framework discerns up to 30 distinct issues in model outputs. The task is a real-world example of a growing number of challenging problems where AI can complement and lift human performance. We introduce a novel and practical protocol for scalable oversight that uses AI Assistance and relies on raters with relevant educational backgrounds. We evaluate several recent LLMs and conduct a comprehensive analysis of the results, shedding light on both the potential and the limitations of LLMs in the realm of climate communication.
## 1 Introduction
As concerns surrounding _climate change_ continue to intensify worldwide (Poushter et al., 2022; WHO, 2021), more and more people are turning to digital media as their primary source of information (Newman et al., 2021). However, in spite of ubiquitous access to information, there remains a considerable gap in public climate literacy, exacerbated by the spread of mis- and disinformation (Leiserowitz et al., 2022). The challenge of conveying climate data arises from the nature of scientific communication itself: science, as an evolving domain, is laden with specialized knowledge, technical complexity, and inherent uncertainties (Moser, 2016). The digital media landscape, characterized by limited attention spans and adversarial dynamics, further compounds these challenges (Pearce et al., 2019).
This research explores the potential of AI in curating and presenting climate information in an accessible manner. While AI's promise in addressing global challenges, especially climate change, is evident through its applications in climate modeling, energy optimization, and disaster management (Rolnick et al., 2022), its intersection with Natural Language Processing (NLP) is still under-explored. Recent advancements in LLMs (Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023) have captured the attention of the scientific community and the general public for their performance on standard benchmarks, and their broad approachability as information technology. Given their tremendous potential, there is hope that LLMs may support us in addressing climate information challenges.
However, using LLMs to address science-related information needs raises safety concerns, due to their limitations in assessing factuality (Weidinger et al., 2021; Birhane et al., 2023). Fluent, grammatical responses and advanced linguistic dialogue behaviors are preferred and trusted by users, even in the absence of trustworthy information (Chiesurim et al., 2023). This makes evaluating LLMs, especially with non-expert human raters, treacherous. Research on how to evaluate systems that may achieve or exceed human abilities, or _scalable oversight_(Amodei et al., 2016) is so far mostly theoretical (Irving et al., 2018; Leike et al., 2018; Christiano et al., 2018).
Our work contributes to this growing field. We have meticulously developed a principled evaluation framework based on Science Communication research, tailored to the responses of LLMs within the climate change context. Research points out the importance of how information is presented (Jamieson et al., 2017). Drawing on the wealth of scientific knowledge, we examine relevant principles and best practices to propose an implementation of a human assessment framework that delivers high-quality results with educated (but non-expert) raters. We systematically assess **presentational** properties such as _style, clarity_, linguistic _correctness_, and _tone_. More importantly, we also assess **epistemological** issues: _accuracy, specificity, completeness_, and _uncertainty_.
Our main contributions are as follows: (1) We introduce a principled evaluation framework for LLMs on climate information,1 developed through a rigorous interdisciplinary approach. (2) To improve rating performance, we introduce a novel and practical protocol for scalable oversight that uses AI Assistance (cf. Figure 0(a)) and relies on raters with relevant educational background. (3) Our experiments involve the most recent and prominent LLMs to demonstrate the usefulness of the evaluation. (4) Results (Figure 0(b)) show that, while exceptionally fluent, current LLMs have much room for improvement regarding content quality on climate information. Thus, our framework provides concrete directions for improving future LLMs for communicating scientific information. (5) Finally, we analyze the relation of these dimensions to attribution-based evaluations of LLMs (Rashkin et al., 2022) and find that they emerge as mostly orthogonal and complementary aspects.
Footnote 1: To aid reproducibility of the framework, we provide the exact evaluation protocols and all prompts we used to generate additional data.
## 2 Evaluative Dimensions for Climate Information
Scholarship on science communication - originating from disciplines such as communication science, sociology, psychology, human geography, and education, among others (Trench and Bucchi, 2021; Nisbet et al., 2018; Jamieson et al., 2017) - offers conceptual arguments and empirical evidence for appropriately disseminating scientific information, e.g., on climate change, to the general public (Konig et al., 2023; Lewis Jr. and Wai, 2021). Two basic dimensions have to be distinguished here. (1) Presentational features of the messages that contain the information, such as their comprehensibility (Lang, 2000), to ensure that recipients can receive, understand, memorize, and retrieve
Figure 1: Rated example and average results for several LLMs.
such information. We conceive this dimension as _presentational adequacy_. (2) The conveyed information must represent the current state of scientific knowledge as adequately and comprehensively as possible while being specific and appropriately communicating associated uncertainties (Fahnrich et al., 2023). We conceive this dimension as _epistemological adequacy_.
### Presentational Adequacy
Scholarly literature from science communication (Jamieson et al., 2017) suggests that an adequate _presentation_ should comply with three criteria: It should (1) be comprehensible, (2) aid understanding through layout and visualizations, and (3) use appropriate sources and references. In this paper we focus primarily on comprehensibility. We return to sources and references in Section 4 and discuss layout and visualization in Section 5. The _comprehensibility_ of a text is of utmost importance and can be conceptualized along four criteria: style, clarity, linguistic correctness, and tone.
**Style.** Scholarship on how to achieve comprehensible science and climate communication suggests that the language style should not be too informal or colloquial (Mazer and Hunt, 2008), as this can undermine the credibility of information and cause users to rely on their own rather than expert judgements (Scharrer et al., 2012). Moreover, texts should not be too short, because exposure to brief snippets of scientific information may lead recipients to get a "feeling of knowing" from reading messages that contain insufficient information (Leonhard et al., 2020). Long texts, however, require high motivation and cognitive resources that readers may not want to invest, hence they should be avoided as well (Lang, 2000). In addition, some stylistic dimensions can be borrowed from the Multidimensional Quality Metrics (MQM) framework, which was designed to assess the quality of (machine) translated texts (Lommel et al., 2013). One of the MQM's core dimensions is 'terminology', referring to the correct and consistent use of (in this case scientific) terminology.
**Clarity.** Climate-related messages should be formulated in a clear and simple way (Maibach et al., 2023). Risk and health communication research also shows that language should be clear and easy to understand - avoiding long sentences, for example - as less detailed texts require less cognitive effort and are preferred by users (Fagerlin and Peters, 2011; Neuhauser and Paul, 2011). In addition, the use of jargon should be avoided (Baram-Tsabari and Lewenstein, 2013; Baram-Tsabari et al., 2020), as technical terms can inhibit readers' ability to process information (Bullock et al., 2019; Brooks, 2017; Shulman et al., 2020). Clarity seems particularly relevant for individuals with lower numeracy skills (Bruine de Bruin and Bostrom, 2013). If numbers are used, communicators should tailor the presentation to the recipient's numeracy level (Fagerlin and Peters, 2011).
**Correctness.** MQM (Lommel et al., 2013) emphasizes that messages should adhere to linguistic quality criteria to be comprehensible: One of its core components is adherence to linguistic conventions, i.e., the correct use of punctuation, spelling, and grammar.2 Violating these criteria can damage the perceived credibility of the message or its sender (Berger, 2020) and has been shown to influence behavior (e.g., Mollick, 2014). Accordingly, linguistic correctness is an important aspect of the presentational adequacy of science communication (Mercer-Mapstone and Kuchel, 2017).
Footnote 2: [https://themqm.info/typology](https://themqm.info/typology)
**Tone.** Science communication scholars maintain that the tone of a message is important. This concerns the neutrality of the tone, its persuasiveness and its positivity or negativity. Research suggests that messages should not adopt or lean towards a certain valence, worldview, or ideological conviction in order to be effective (Blanton and Ikizer, 2019; Yuan and Lu, 2020). Climate-related messages with a neutral tone can be more effective than messages with a persuasive tone (Kerr et al., 2022; Munoz-Carrier et al., 2020). Likewise, messages should not use too positively or negatively valenced language, particularly if the goal is to convey factual information (Palm et al., 2020).
### Epistemological Adequacy
The epistemological adequacy of climate-related messages is of greatest importance. According to research, this entails several aspects: (1) accuracy, (2) specificity, (3) completeness, (4) the degree of (un)certainty, and (5) the presentation of methods and methodology. We focus on the first four dimensions here, leaving the latter for future work (cf. also the discussion in Section 5).
**Accuracy.** A basic principle of epistemological adequacy in science communication is that scientific information - such as climate change information presented by LLMs - should be _accurate_(Kellesidou and Chabrol, 2021). _Incorrect, wrong,_ or _self-contradictory_ information that takes scientific findings or anecdotal evidence out of context should be prevented. (Hinnant et al., 2016). This is particularly important when considering known accuracy issues of LLMs (Schafer, 2023) such as _hallucination_, i.e. presenting, or referring to, non-existent information (Ji et al., 2023).
**Specificity.** Epistemologically adequate science and climate communication should not miss information that is important to the audience while ignoring irrelevant information, and should address the regional and temporal contexts of target audiences. In other words, it should be _relevant_ to the respective audience, i.e., should fit their personal contexts _spatially and temporally._ Spatial fit implies that the information given for the respective answer is also relevant for the geographical area the question refers to. For example, if a question is posed about climate change in India, the reply should provide data and insights relevant to the Indian context. Research, in fact, shows that specific, local information leads to a higher perceived relevance among recipients (Leiserowitz and Smith, 2017; Lee et al., 2015). For an answer to have high temporal fit, it should address the time frame mentioned in the question. For questions where a specific time frame is not specified, the answer should generally be based on information and data that is up to date. Research has also shown that "here & now" associations can be powerful in science communication (Holmes et al., 2020).
**Completeness.** Answers should be _complete_. Rather than only referring to a part of the question posed, the answer should be formulated in a way that addresses all parts of the question in full (Bergquist et al., 2022; Leiserowitz and Smith, 2017). At the same time, to answer all aspects of the question, the information given should reflect the depth and breadth of relevant scientific knowledge available regarding the topic(s) addressed by the question (Kelesidou and Chabrol, 2021).
**Uncertainty.** Communicating the level of agreement and certainty for scientific findings can be crucial to adequately informing the audience (Budescu et al., 2012; Howe et al., 2019). Likewise, when the level of agreement or quantified certainty is unknown, the audience should be informed about the uncertainty and/or isolation of the supporting evidence (Keohane et al., 2014). This is particularly important in climate communication (Chinn and Hart, 2021; Goldberg et al., 2022; Maertens et al., 2020), as the scientific consensus on climate change has been found to function as a "gateway belief", implying that perceived scientific agreement can positively influence the belief in human-caused climate change and motivate public action (van der Linden et al., 2015).
## 3 Presentational and Epistemological Adequacy Evaluation
We evaluate the presentational and epistemological dimensions using a human rating framework. We collect experimental data by employing LLMs, primarily GPT-4, one of the most popular and powerful models at the time of writing.
### Data
**Questions.** Our first goal is to assess a representative sample of common climate-related information needs. For this, we turned to search engines, popular climate forums and Wikipedia. We collect a diverse set of \(300\) questions from three different sources. For the first set (Wikipedia), we use GPT-4 to generate questions from the English Wikipedia articles. First, we select articles that are related to climate change, then we feed in the paragraphs of each of the selected articles to GPT-4 and task the model to generate questions that can be answered by the paragraph. For the second set (SkS), we turn to Skeptical Science, a website that publishes authoritative information about climate science. We take the list of debated _myths_3 and manually rephrase them as questions. For the third set of questions (GTrends), we use Google Trends, a tool that provides data on public interest in specific search terms and topics.4 We collect the most popular questions, by search volume, from the U.S., for the topics 'Climate Change' and 'Global Warming'. We post-process all questions to remove duplicates, questions that are not related to climate change, and questions that are taken out of context. Finally, we sample \(100\) questions from each set. Please see Appendix C.1 for the details.
**Answers.** We prompt each LLM with the instruction: _You are an expert on climate change communication. Answer each question in a 3-4 sentence paragraph._
**Keypoints.** We extract keypoints from each answer. These are used to find supporting evidence for the answer. To do so, we instruct GPT-4 to examine all the statements in the answer, and identify 1 to 3 key statements that are made to answer the question. We specifically ask the model to copy each statement verbatim from the answer.
**Evidence.** We fetch evidence for each keypoint in the answer. Given the question and the answer, we first ask GPT-4 to provide URL(s) of Wikipedia articles that support the answer. We limit evidence to Wikipedia because GPT-4 is fairly consistent in generating relevant, valid Wikipedia URLs, while the quality is lower for other web sources. Furthermore, Wikipedia is uniform in style and quality as it adheres to established guidelines.5 We break down each article into its paragraphs. For each keypoint, we ask the model to rank the paragraphs based on their relevance to the keypoint and the question, and pick the \(3\) highest ranking as the evidence. Table 6 shows an example.
Footnote 5: [https://en.wikipedia.org/wiki/Wikipedia:Policies_and_guidelines](https://en.wikipedia.org/wiki/Wikipedia:Policies_and_guidelines).
**AI Assistance.** To assist human raters, we use GPT-4 to generate assistance along the dimensions introduced in Section 2. For each dimension, we ask the model to express its agreement or disagreement that the information is presented well according to that dimension. For epistemological dimension, we also provide the retrieved evidence and instruct the model to use that verbatim to support its disagreement (if any). Please refer to Table 3 for a complete list of prompts used to generate the data, and to Appendix E for some statistics of the generated answers.
### Rating Framework and Raters
Our rating task involves evaluating an answer to a climate-related question, based on the four presentational (Section 2.1) and the four epistemological dimensions (Section 2.2). Screenshots of the template can be found in Appendix M.4. We select candidate raters with relevant educational background (see Appendix M.1). To be admitted to the task, after finishing a brief tutorial, the raters need to pass an admission test that evaluates their performance on three full examples (see Appendix M.3). A summary of the broad demographics of raters that participated can be found in Appendix M.1. Each answer is assessed by three human raters. We compute agreement metrics for all experiments and report the numbers in Appendix H.
### Experimental Results
**High-level view.** Figure 0(b) provides an overview of the rating results, aggregated at the presentational and epistemological level, for the following LLMs: GPT-4 (OpenAI, 2023), Chat6PT-3.5, InstructGPT (turbo), InstructGPT (text-davinci-003), InstructGPT (text-davinci-002)6, as well as PalM2 (text-bison) (Anil et al., 2023) and Falcon-180B-Chat7. For a full summary of results, for all the individual dimensions, see Figure 2.
Footnote 6: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models).
Footnote 7: [https://falconllm.tii.ae/falcon.html](https://falconllm.tii.ae/falcon.html).
All models, except for InstructGPT (text-davinci-002), perform well on presentation (Figure 0(b) and Table 1). This demonstrates how far LLMs have come in terms of surface form quality, in particular after the introduction of learning from human preferences (Ouyang et al., 2022). We note, however, a marked performance drop for _tone_ (cf. Figure 2). This dimension captures more subtle challenges for LLMs, touching on aspects related to Pragmatics (Levinson, 1983). Table 22 shows an example, while Appendix B elaborates on the subject in the broader context of argumentative style.
The epistemological evaluation reveals lower performance on all systems (Figure 0(b)): it is around one point worse than presentation, across all models. Except for _accuracy_ (Figure 2), performance is consistently below average, especially for _completeness_ and _uncertainty_. We also note that the latter epistemological dimensions-completeness and uncertainty-may be difficult to satisfy in short 3-4 sentence answers. Being comprehensive in such a short space is harder than being accurate. On the other hand, we notice that LLMs can also make sub-optimal use of the available space with generic statements (cf. Appendix B). Overall, on climate information, current top-of-the-line LLMs have
significant headroom for improvement. For examples, please see Tables 23 to 26. Table 2 reports complete results and confidence intervals.
**Resolution and Range.** Our evaluation has sufficient resolution to tell models apart, indicate where they differ and suggest interesting trends. ChatGPT-3.5 is the best overall in presentation, but, amongst the LLMs we tested, places fifth on epistemological scores (Figure 0(b)). This brings up the relationship between presentational and epistemological properties. The data suggests the possibility of trade-offs, but models like P4LM2 (text-bion) can strike a good balance. The three Instruct-GPT models' performance is consistent with their version. Noticeably, the best performing model on the epistemological quality of generated content is a recent open model, Falcon-180B-Chat. Falcon-180B-Chat's performance may be related to its large size, but we can only speculate as this information is not generally available for all models. Finally, the difference between the best LLM and the worst (often InstructGPT (text-davinci-002), the oldest) is large, and well beyond confidence intervals, providing evidence that the evaluation has sufficient dynamic range.
**Impact of AI Assistance.** We expect raters to identify more (real) issues when they are shown assistance, because it may make them aware of additional issues. We find supporting evidence in two separate experiments.
Figure 2(a) reports the number of issues detected for each dimension on GPT-4 answers in three different settings, each with a different degree of the raters' exposure to assistance. The setting 'Without AI Assistance' refers to a setting where no assistance is provided. The second setting 'Without AI Assistance, but previous exposure' refers to a setting where no assistance was shown, but the raters have worked on several previous studies where they were exposed to assistance.8 Lastly, 'With AI Assistance' denotes the standard setting where specific assistance is shown. The results suggest that the presence of assistance is key for detecting more issues. This is consistent with the results from Saunders et al. (2022), who found improved rater performance on summarization tasks with assistance. Raters with extensive previous exposure to assistance are in an interesting "middle" position: They detect more issues than the assistance-unaware group, but less than the group provided with specific assistance for the experiment. This suggests that raters learn from repeated exposure to assistance, and show improved performance even when no assistance is present.
Footnote 8: We do make sure that the raters have not worked on the same examples before and have never seen assistance for the specific examples they are working on.
Further evidence of the usefulness of AI Assistance comes from our validation experiments (cf. Appendix G for more details). Similar to Saunders et al. (2022), we want to determine if assistance helps surface real issues, without general access to gold truth in our data. To do this, the authors manually generated \(30\) different examples, each exhibiting a particular issue. We found that the majority of three raters detected \(77\%\) of issues when shown assistance, while the majority of three raters only detected \(60\%\) of the issues when not shown assistance.
The data we collected on the helpfulness of assistance suggests that when raters do not find assistance helpful, they give higher ratings (see Figure 2(b)). This indicates that the raters think critically
Figure 2: Results for all presentational and epistemological dimensions.
about the assistance and do not follow it blindly; when they disagree with it, they do give high ratings. These experiments provide strong evidence that the AI Assistance helps the raters find real issues that they would not have otherwise been discovered.
**Other Findings.** Comparing the rating outcome by source of the question - Skeptical Science, GTrends, and synthetic questions based on Wikipedia paragraphs - we find no major differences, with a slight trend that Wikipedia questions tend to be more specific and thus harder to answer. In particular, we see no evidence that GPT-4 performs better on questions that were generated with GPT-4 compared to the other sources. Similarly, the topic of the question does not show a strong correlation with answer quality. See Appendix I for additional discussion and figures.
## 4 Epistemological Adequacy and Attribution
Audiences of science and climate communication are more likely to trust information if the source is perceived as credible, engaged and concerned about the audience's interests (Brown and Bruhn, 2011; Maibach et al., 2023; Hayhoe, 2018). An adequate presentation of climate information should include curated references. To address the factuality limitations of LLMs, researchers have proposed Attribution to Identified Source (AIS) as a dedicated evaluation (Rashkin et al., 2022; Dziri et al., 2022). An attributable answer must include an explicit quote, from an existing document, in order to support its claims and reduce hallucination (Menick et al., 2022; Bohnet et al., 2023).
Evaluating the ability of LLMs to properly reference the statements they make goes beyond the scope of this paper. For instance, as proposed by Liu et al. (2023), this may involve evaluating generative search engines. However, we started examining the relationship between attribution and the epistemological dimensions with an AIS experiment. We run this experiment only on GPT-4. In our data, each answer is associated with a set of keypoints which, in turn, are used to identify Wikipedia articles that are likely to contain supporting evidence. For 87.7% of the questions, GPT-4 produces a valid Wikipedia article from which evidence passages can be extracted. We evaluate the attribution of each keypoint individually by asking the annotators whether a keypoint is fully, partially or not supported by the evidence. \(66.79\%\) of keypoints are either fully or partially supported. At the answer level, \(46.08\%\) of the answers are fully or partially supported by the evidence. While far from perfect, the data suffices for a first analysis (cf. Appendix F for details).
Figure 4 compares the distribution of average epistemological ratings, with respect to the attribution of answers, revealing interesting trends. In both the _accuracy_ and _specificity_ dimensions, we observe that answers that are fully attributed have higher minimum ratings compared to answers that are only partially attributed, or not attributed at all. Interestingly, we see an opposite pattern in the _completeness_ dimension: Answers that are fully attributed have lower minimum ratings on _completeness_. This result highlights a blind spot for attribution methods; AIS can only consider what _is_ included in the answers, and not what important information is missing. In the _uncertainty_
Figure 3: Evidence of the impact of AI Assistance.
dimension, we observe that there are more answers with low uncertainty ratings among the answers that are not attributed, compared to answers that are either partially or fully attributed.
More generally, there does not seem to be any correlation between AIS and epistemological results. The Spearman's coefficient between AIS and the 3-raters mean rating value for _accuracy_, _specificity_, _uncertainty_ and _completeness_ are, respectively: \(0.03\), \(-0.06\), \(0.002\), \(-0.02\), with corresponding p-values: \(0.65,0.31,0.97,0.78\). We interpret this as evidence that AIS and epistemological assessments are orthogonal and complementary. We provide more qualitative support in Table 7. At a high level, this suggests that attribution, either human or model-based, is not a reliable proxy for epistemological quality. On the other hand, grounding in authoritative sources is required of good science communication. We leave it to future work to extend our framework to include references in a principled way.
## 5 Limitations and Future Work
Our rating dimensions inherently have a subjective component, introducing noise when evaluating at the answer-level. However, our findings show that the evaluation is robust at the system level. Another limitation of our work is that we do not have access to gold ratings. As procuring reliable human judgements becomes unfeasible and/or uneconomical, especially for complex and difficult tasks, such a setting is likely to become more common in the future. Hence, this poses an exciting challenge for future studies, and we envision evaluation frameworks of the kind proposed here serving as a valuable testbed to develop new protocols for _scalable oversight_.
Ideally, an answer would be tailored towards the audience, and take into account their specific attributes (Hendriks et al., 2016; Klinger & Metag, 2021). Unless specifically prompted, LLMs do not do this. We explore in Appendix B how the kind of arguments LLMs seem to gravitate towards may hurt their efficacy with some audiences, and leave further exploration to future work.
Interesting challenges and opportunities may be lying ahead also in the area of _storytelling techniques_ and _narratives_. Research shows that these characteristics make information more comprehensible (Heath & Heath, 2007; Zebregs et al., 2015). They can be powerful tools to communicate science to non-expert audiences, help understanding complex topics, including climate change (Dahlstrom, 2014; Ettinger et al., 2021; Nisbet & Markowitz, 2016; Ettinger & Painter, 2023). Another important topic is multi-turn interaction. Delving deeper in the presentation may also help resolving the tension between presentation and epistemological performance.
Research provides abundant evidence on the importance of supplementing textual information with visual aids in the form of cartoons, charts, pictographs and videos (Flemming et al., 2018; Brown & Bruhn, 2011). Visual complements can be especially useful for understanding quantitative
Figure 4: Comparing AIS ratings with average ratings of the \(4\) epistemological dimensions.
data (Fagerlin and Peters, 2011) and in the case of limited literacy (Wolf et al., 2010). Visual components must be appropriate, contextually relevant, carefully labeled, and fit seamlessly into the textual narrative. The abstract nature of climate change, and its distant implications, makes visualization particularly challenging (Schafer, 2020). Visual information is likely to contribute key attributable evidence and multimodal LLMs (Wang et al., 2022; Alayrac et al., 2022; Chen et al., 2023) provide the foundation for future research on this topic.
Another important direction is the presentation of more technical questions dealing with topics such as research designs, methods, causal explanations, or claim and evidence validation (Bromme et al., 2015; Downs and Fischhoff, 2011). Such aspects will require a deeper look at the role of raters' expertise, and of attribution. A related topic is the role of LLMs as raters. Preliminary experiments are promising (Appendix L). We found that, as with humans, LLMs benefit from AI Assistance and that humans and LLM raters tend to agree on major points. What bias gets introduced by assistance (and rating), and how to measure and control it properly, is a significant open question that needs to be addressed. This links this research to the broader AI alignment field.
## 6 Related Work
**Evaluating LLMs.** While LLMs can generate fluent text on the surface level, it is not yet obvious to which degree the generations are grounded, attributable to reliable sources, and complete. For instance, Liu et al. (2023) assess four generative search engines and report that, although responses are fluent and perceived as high quality, only half are fully supported. Their findings reveal an inverse correlation between fluency/utility and evidential support. Xu et al. (2023) advocate for expert-level human evaluations in question answering, cautioning against over-reliance on single metrics instead of comprehensive assessments. Another domain that needs expert-level evaluation is the medical domain. Singhal et al. (2023) propose Med-PaLM, an LLM for medical information, and introduces a clinical evaluation framework.These cover criteria like alignment with scientific consensus, potential harm, and comprehension. Evaluating LLMs on climate information is without doubt another domain that can benefit from expert-level evaluation. However, prior work mainly emphasizes text classification (Diggelmann et al., 2020; Varini et al., 2020) and sustainability report analysis (Webersinke et al., 2022; Bingler et al., 2022). This study aims to fill this gap by providing a comprehensive evaluation framework for climate change.
**Scalable Oversight.** This area, introduced by Amodei et al. (2016), studies the question of how to scale human oversight, especially in the setting where evaluating (or supervising) models becomes increasingly difficult. Contributions in this area have initially focused on theoretical proposals for how AI can help humans supervise models that exceed their abilities (Irving et al., 2018; Leike et al., 2018; Christiano et al., 2018). Following Irving et al. (2018), one can see our AI Assistance as a single-turn debate, where the human annotator is shown the answer proposed by the model and a single response to that answer.9 Two recent studies provide interesting proofs of concepts for AI Assistance: Bowman et al. (2022) study _sandwiching_, an approach where non-experts align a model with the help of a model while experts provide validation. They show that non-expert raters perform better on an (artificially) difficult multiple-choice task when interacting with a dialogue agent. Saunders et al. (2022) report that human raters of summarization tasks produce more critiques when given the opportunity to accept or edit critiques written by a model. Our work contributes a study of a _scalable oversight_ protocol to improve rating quality in a realistic setting.
Footnote 9: In the setting of Irving et al. (2018), this corresponds to the second level of the polynomial hierarchy \(\Sigma_{2}^{P}\).
**AI Ratings.** Recent studies explore the feasibility of evaluations performed by AI. Kocmi and Federmann (2023) indicate that LLMs can perform state-of-the-art quality assessment of translations, even without references. Their work has been extended to automatic MQM annotation by Fernandes et al. (2023). Gilardi et al. (2023) reports that ChatGPT has a higher agreement with expert-level raters than with less qualified ones. Chiang and Lee (2023) argue that humans and LLMs ratings are correlated on several tasks but point out LLM's factuality and bias limitations. Instead of replacing human raters entirely, in our work we demonstrate the effectiveness of using AI Assistance to aid educated raters.
Conclusion
We introduce an evaluation framework informed by science communication research and assess LLMs on a first set of common climate information needs. Our task is difficult for human raters. To support them, an important part of our framework relies on a novel and practical protocol for scalable oversight that leverages AI Assistance. Our results show that, while presentationally adequate, current LLMs have much room for improvement regarding the epistemological qualities of their outputs. Our evaluation provides concrete directions for improving LLMs and provides enough resolution to quantify gains or regressions along each dimension. A comparison to attribution-based evaluations shows that approaches beyond attribution are needed. Moreover, we believe the implications of our findings extend beyond climate information, and contribute to making generative AI systems both safer and more useful.
## Ethics Statement
The details of our study design, including compensation rates, were reviewed by an independent ethical review committee. All raters provided informed consent prior to completing tasks and received fair compensation with respect to local markets. It is our policy that researchers must pay workers/participants at least the living wage for their location. No personally identifiable information (PII) was collected or will be released.
We conducted the experiments in English, therefore we do not claim generalization of our findings across languages. However, we believe that the proposed methods could be transferred to other languages.
LLMs are already an important source of information for many people, and it is important to assess whether they can adequately address information needs around climate change. Our work contributes to this effort and sheds light on both the potential and the limitation of LLMs in this domain.
|
2301.11799 | Factors influencing to use of Bluezone | This study aims to understand the main factors and their influence on the
behavioral intention of users about using Bluezone. Surveys are sent to users
through the Google Form tool. Experimental results through analysis of
exploratory factors on 224 survey subjects show that there are 4 main factors
affecting user behavior. Structural equation modeling indicates that trust,
performance expectations, effort expectations, and social influence have a
positive impact on behavioral intention of using Bluezone | Vinh T. Nguyen, Anh T. Nguyen, Tan H. Nguyen, Dinh K. Luong | 2023-01-25T03:40:25Z | http://arxiv.org/abs/2301.11799v1 | # Moi Tuong Quan Cua Cac Nhan to Anh Huong
###### Abstract
The emergence of the Covid-19 pandemic has been causing many negative impacts on all aspects of life. The government has taken many measures to minimize the impact and transmission of the disease. Among them is the application of digital transformation to the management and tracing of people infected with Covid through the Bluezone app (now PC-Covid). However, using and installing Bluezone is not as expected. Therefore, this study aims to understand the main factors and their influence on the behavioral intention of users about using Bluezone. Surveys are sent to users through the Google Form tool. Experimental results through analysis of exploratory factors on 224 survey subjects show that there are 4 main factors affecting user behavior. Structural equation modeling indicates that trust, performance expectations, effort expectations, and social influence have a positive impact on behavioral intention of using Bluezone. Meanwhile, privacy risks have a negative effect on this behavior.
EFA, SEM, UTAUT, trust, privacy, Covid-19. |
2303.16857 | Did You Mean...? Confidence-based Trade-offs in Semantic Parsing | We illustrate how a calibrated model can help balance common trade-offs in
task-oriented parsing. In a simulated annotator-in-the-loop experiment, we show
that well-calibrated confidence scores allow us to balance cost with annotator
load, improving accuracy with a small number of interactions. We then examine
how confidence scores can help optimize the trade-off between usability and
safety. We show that confidence-based thresholding can substantially reduce the
number of incorrect low-confidence programs executed; however, this comes at a
cost to usability. We propose the DidYouMean system which better balances
usability and safety. | Elias Stengel-Eskin, Benjamin Van Durme | 2023-03-29T17:07:26Z | http://arxiv.org/abs/2303.16857v3 | # Did You Mean...? Confidence-based Trade-offs in Semantic Parsing
###### Abstract
We illustrate how a calibrated model can help balance common trade-offs in task-oriented parsing. In a simulated annotator-in-the-loop experiment, we show that well-calibrated confidence scores allow us to balance cost with annotator load, improving accuracy with a small number of interactions. We then examine how confidence scores can help optimize the trade-off between usability and safety. We show that confidence-based thresholding can substantially reduce the number of incorrect low-confidence programs executed; however, this comes at a cost to usability. We propose the DidYouMean system (cf. Fig. 1) which better balances usability and safety.
## 1 Introduction
Task-oriented dialogue systems Gupta et al. (2018); Cheng et al. (2020); Semantic Machines et al. (2020) represent one path towards achieving the long-standing goal of using natural language as an API for controlling real-world systems by transforming user requests into executable programs, i.e. translating natural language to code. Central to the systems' success is the ability to take rational actions under uncertainty Russell and Norvig (2010). When model confidence is low and the system is unlikely to succeed, we would prefer it defer actions and request clarification, while at high confidence, clarification requests may annoy a user. Relying on model confidence requires it to be well-correlated with accuracy, i.e. it requires a _calibrated_ model.
Recent work has focused on the calibration of semantic parsing models. Specifically, Stengel-Eskin and Van Durme (2022) benchmarked the calibration characteristics of a variety of semantic parsing models, finding some of them to be well-calibrated, especially on parsing for task-oriented dialogue. Given the relatively well-calibrated nature of these models, we first examine how they could be used in an annotation interface, with a view to balancing the trade-off between _annotation cost_ and _correctness_. We simulate a human-in-the-loop (HITL) experiment where high-confidence tokens are automatically annotated and low-confidence tokens trigger a dialogue with an oracle annotator, who either picks the correct token from a top-K list or manually inserts it. With a small number of interactions we substantially boost annotator accuracy.
A similar trade-off exists between _usability_ and _safety_ in task-oriented user interfaces. We examine how sequence-level model confidence scores can be used to balance this trade-off by reducing the number of incorrect programs executed while also minimizing the number of follow-up user interactions and their cognitive burden. We find that thresholding outputs based on model confidence (i.e. rejecting outputs falling below a tuned threshold) reduces the number of incorrect programs exe
Figure 1: The DidYouMean system. At high confidences, we simply execute the predicted parse. At low confidences, DidYouMean rephrases the query based on the predicted program and asks a user to confirm the paraphrase. The program is executed if the user accepts.
cuted by \(76\%\) compared to the baseline. However, this comes at a cost to usability, where roughly half the correctly-predicted parses are also rejected. To strike a balance between safety and usability, we introduce the DidYouMean system (cf. Fig. 1), which rephrases the input conditioned on the predicted parse and asks users to confirm the accuracy of the paraphrase. In a user study, we obtain an \(36\%\) improvement in usability over the thresholded system while maintaining a \(58\%\) reduction in the number of incorrect programs executed.
## 2 Related Work
Our experiments in Section 4 involve a predictive model for human-in-the-loop coding: similar models have been integrated into IDEs, e.g. Chen et al. (2021). DidYouMean relates to the interactive semantic parsing domain Li and Jagadish (2014); Chaurasia and Mooney (2017); Su et al. (2018), where humans are included in the semantic parsing loop. In this domain, Yao et al. (2019) introduce a confidence-based interactive system in which a parsing agent can ask users for clarification. Our work follows in this spirit, but asks the user to confirm a parse rather than generating questions for the user to answer. DidYouMean also relates broadly to selective prediction, where a model is expected to abstain from making decisions at low confidence Chow (1957); El-Yaniv et al. (2010); Varshney et al. (2022); Xin et al. (2021); Whitehead et al. (2022). Our system extends beyond selective prediction's setting by including a human-in-the-loop. Finally, DidYouMean shares a motivation with Fang et al. (2022), who introduce a method for reliably summarizing programs. Their work provides post-hoc action explanations while we focus on resolving misunderstandings _before_ execution.
## 3 Methods
**Datasets** Our data is drawn from the SMCalFlow Semantic Machines et al. (2020) task-oriented dialogue dataset, which contains Lisp-like programs (cf. Appendix A). We follow the same preprocessing as Platanios et al. (2021), and use the SMCalFlow data splits given by Roy et al. (2022): \(108{,}753\) training, \(12{,}271\) validation, and \(13{,}496\) testing dialogue turns.
**Models** We use MISO Zhang et al. (2019, 2019), a well-calibrated model from Stengel-Eskin and Van Durme (2022). Rather than predict the SMCalFlow surface form, including syntactic tokens like parentheses, MISO directly predicts the underlying execution graph. The graph can deterministically be "de-compiled" into its surface form, and vice-versa. The fact that MISO predicts an underlying graph makes it attractive for applications which require confidence scores, as it only predicts content tokens (i.e. functions, arguments) rather than syntactic tokens (parentheses). For details on MISO's architecture, see Zhang et al. (2019) and Stengel-Eskin et al. (2022).
For token confidence estimation, we use the maximum probability across the output vocabulary at each timestep. This has been shown to be a relatively robust confidence estimator in classification Hendrycks and Gimpel (2016); Varshney et al. (2022). For sequence-level scores, we follow Stengel-Eskin and Van Durme (2022) and take the minimum over token-level confidence scores.
## 4 Human-in-the-Loop Simulation
Production datasets like SMCalFlow are constantly evolving as new functionalities are added. The expensive and time-consuming nature of annotating data can be mitigated by the use of predictive parsing models which suggest speculative parses for new utterances. However, the model's output can be incorrect, especially given out-of-distribution inputs. We need to ensure that annotators are not introducing errors by overly trusting the model.
If the model is well-calibrated, we can use the confidence to reduce such errors. For example, we can alert annotators to low confidence predictions and ask them to intervene Lewis and Gale (1994). Using a threshold, we can prioritize time or correctness: a higher threshold would result in more annotator-model interactions, decreasing the speed but increasing program correctness (reducing the need for debugging) while a lower threshold would increase speed but also lower the accuracy.
Since we do not have access to expert SMCalFlow annotators, we simulate an oracle human-in-the-loop (HITL) annotator who always provides a correct answer by using the gold annotations provided in the dataset. Specifically, for a given input, we decode the output tokens of a predicted program \(o_{0},\dots o_{n}\) normally as long as predictions are confident (above a given threshold). If at time \(t\) the confidence \(p(o_{t})\) falls the threshold, we attempt to match the decoded prefix \(o_{0},\dots,o_{t-1}\) to the gold prefix \(g_{0},\dots g_{t-1}\). If the prefixes do not match, we count the example as incorrect. If they do match, we replace \(o_{t}\) with \(g_{t}\), the gold prediction from our oracle annotator, and continue decoding. We consider three metrics in this experiment: (1) The exact
match accuracy of the decoded programs (higher is better). (2) The percentage of total tokens for which we have to query an annotator (lower is better). (3) The percentage of uncertain tokens (below the threshold) for which the gold token \(g_{t}\) is in the top 5 predictions at timestep \(t\). Here, higher is better, as selecting a token from a candidate list is typically faster than producing the token.
**Results and Analysis** Fig. 2 shows our three metrics as the threshold is increased in increments of \(0.1\). We see that accuracy grows exponentially with a higher threshold, and that the percentage of tokens for which an annotator intervention is required grows at roughly the same rate. The exponential growth reflects the distribution of token confidences, with most tokens having high confidence. Finally, we see that while at low confidence, most tokens must be manually inserted, the rate at which they are chosen from the top 5 list rapidly increases with the threshold. Thus, the increased number of annotator interactions required at higher thresholds may be offset by the fact that many of these interactions are a choice from the top-5 list.
## 5 User Correction via DidYouMean
Section 4 showed that token-level confidence scores can be used to balance speed and correctness in an annotation interface. We see a similar trade-off between safety and usability in user interfaces using semantic parsing models. Here, we define safety as rejecting unsuccessful programs _before executing them_. This strict definition is motivated by physical domains: imagine that rather than controlling a digital assistant, a user is guiding a robot via language commands (e.g. Winograd (1972); Lynch and Sermanet (2020); Stengel-Eskin et al. (2021); Lynch et al. (2022); Nair et al. (2022)). In this setting, actions may have irreversible consequences, so determine safety before execution is key. Safety considerations need to be balanced with usability of the system: an unplugged agent would be very safe but unusable. To increase usability, an agent might make follow-up requests to a user, like asking for clarification or confirmation. The types of requests the agent makes have varying cognitive load on the user: for example, providing confirmation takes less effort than rephrasing.
We measure how well we can reject incorrect programs _before_ executing them. Following past work in selective prediction, we measure success by coverage and risk, as well as F score w.r.t. program correctness. Coverage is the percentage of inputs for which a program is executed and risk is the percentage of executed programs that were _incorrect_. Precision is inverse risk, and recall is the percentage of correct programs which were accepted. We additionally consider F1 and F0.5, which upweights precision (safety) by a factor of 2. A low-coverage, low-risk system may be safer but have more false negatives, i.e. reject more correct programs, decreasing its usability. A high-coverage, high-risk system is more usable at the cost of false positives, i.e. executing incorrect programs. We do not commit to setting an optimal threshold for this trade-off, since it is task-specific.
We consider 3 systems. As a baseline, we consider a system that executes everything it predicts (**accept**); this will result in the highest-possible coverage, but also high risk. We can also use MISO's calibrated nature to improve safety outcomes by tuning a sequence-level confidence threshold for rejecting programs (**tuned**). We tune on the full validation set using F1; we explore the range \([0.0,1.0)\) in increments of \(0.01\), finding \(0.40\) to be optimal. Finally, we introduce the **DidYouMean** system for filtering low-confidence programs. For a given utterance, DidYouMean shows the user a paraphrase of the input; the user then decides to accept the parse based on this paraphrase. This allows correctly-predicted low-confidence programs to be accepted and executed, while reducing the user load: making a binary choice to accept a paraphrase is a receptive task, while rephrasing an instruction is a more costly productive task.
**Glossing Model** Since users are typically unfamiliar with formats like Lisp, we need to present the user with a natural language paraphrase - or _gloss_ - of the candidate parse. To train a glossing model, we modify Roy et al. (2022)'s seq2seqBenchCLAMP framework: rather than using the user utterance with the previous turn's context \((\mathcal{U}_{0},\mathcal{A}_{0},\mathcal{U}_{1})\) as input and a program \(\mathcal{P}\) as output, we take the context and _program_\((\mathcal{U}_{0},\mathcal{A}_{0},\mathcal{P})\) as the input and user instruction \(\mathcal{U}_{1}\) as the output. We use
Figure 2: Simulated annotator-in-the-loop results across increasing confidence thresholds.
the BART-large architecture Lewis et al. (2020). See Appendix A for model details.
**DidYouMean System** When a low-confidence parse is detected, DidYouMean triggers a dialogue with the user in order to recover some usability over simply rejecting all low-confidence parses. Fig. 1 shows the system workflow. DidYouMean shows the original utterance \(\mathcal{U}_{1}\) and the gloss \(\hat{\mathcal{U}}^{*}\) to the user, who determines whether they are identical or not. If they accept the gloss, we optionally re-parse the gloss \(\hat{\mathcal{U}}^{*}\) rather than the original utterance \(\mathcal{U}_{1}\); this can remove typos and other idiosyncracies. We call this the **re-parsed** setting, while choosing the original prediction \(\hat{\mathcal{U}}\) is the **chosen** setting. We predict that allowing users to accept and reject glosses will improve the balance between safety and usability (i.e. F1) over the threshold system by allowing them to accept correct low-confidence parses. In other words, adding human interaction will allow us to achieve a balance which cannot be attained given the tradeoffs resulting from thresholding.
**User Study** We conduct a static user study of DidYouMean using examples from the SMCalFlow validation set. We sample 100 MISO predictions with a minimum confidence below \(0.6\) (to ensure that the set contains a number of mistakes). This sample is stratified across 10 equally-spaced bins with 10 samples per bin. MTurk annotators were shown the dialogue history, the user utterance, and the gloss, and asked to confirm that the gloss was correct. The template and instructions can be seen in Appendix B. We obtained 3 judgments per example.
**Annotation Statistics** 8 annotators completed at least one judgement, with 4 completing the majority. All 3 annotators agreed on \(79\%\) examples, indicating the task is well-formulated. For the remaining \(21\%\), we use the majority decision to accept or reject. After majority voting, annotators accepted \(68/100\) glosses and rejected \(32\).
**Results** Table 1 shows the results of the user study.
In addition to standard selective prediction metrics like coverage (the percentage of inputs for which a program is executed) and risk (the percentage of executed programs that are incorrect) we report the number of false positives (incorrect programs executed) and F1 and F0.5 scores. Tuning a threshold results yields better safety outcomes than accepting everything, with lower risk. However, this safety comes at a cost to the usability of the system; a coverage of only 0.32 indicates that only \(32\%\) of inputs have their programs executed. The "tuned" system's low usability is reflected in the F1 and F0.5 scores, which balance precision and recall. The "chosen" system, while better in F1, is comparable to the "tuned" system in F0.5, which takes both usability and safety into account but prioritizes safety at a 2:1 ratio. Users are able to recover some usability (as measured by coverage) in this setting but also add to the risk, which is higher for "chosen" than "tuned". The number of incorrect programs executed increases when glosses are chosen (as compared to the tuned threshold). When the accepted glosses are re-parsed, we see a shift back towards a system favoring safety, with fewer incorrect programs being executed than in the "chosen" setting; this is reflected in a lower risk score. For both F1 and F0.5, the "re-parsed" system best balances usability and safety.
These results show that a calibrated model can be used with a threshold to greatly improve safety, reducing the number of incorrect programs accepted by \(76\%\). DidYouMean allows users to recover some low-confidence programs by accepting and rejecting programs based on their glosses, resulting in the best aggregated scores. Note also that the threshold was tuned on F1 score on the entire dev set. This means that the F1 performance of that tuned system is as high as possible for confidence-threshold-based system. Thus, DidYouMean achieves a balance outside what can be achieved by tuning: simply increasing the threshold would decrease safety and result in a lower F1 than the current threshold of 0.40.
## 6 Conclusion
We examine two common trade-offs in semantic parsing, and how a well-calibrated model can be used to balance them. In Section 4 we illustrated how token-level model confidences could be used in a simulated HITL task-oriented parsing annotation task. Our experiments in Section 5 extended these results to sequence-level confidences and non-expert users; we found that model confidence could be used to improve the usability-safety trade-off and introduced DidYouMean, which improved us
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Setting & Cov. \(\uparrow\) & Risk \(\downarrow\) & FP \(\downarrow\) & F1 \(\uparrow\) & F0.5 \(\uparrow\) \\ \hline \hline Accept & 1.00 & 0.67 & 67 & 0.50 & 0.38 \\ \hline Tuned & 0.32 & 0.50 & 16 & 0.49 & 0.50 \\ Chosen & 0.68 & 0.54 & 37 & 0.61 & 0.51 \\ Re-parsed & 0.68 & 0.41 & 28 & 0.66 & 0.62 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Coverage, risk, number of false positives (FP), and F measures for accepting correct parses and rejecting incorrect parses.
ability by asking users to accept predictions.
## 7 Limitations
Our study is limited by the models, datasets, and languages we consider. Firstly, we examine only English datasets, limiting the impact of our results. We also only consider one task-oriented parsing dataset, and focus on one model architecture.
We make several limiting assumptions in Section 4 and Section 5. Foremost amongst these is the assumption of access to an oracle annotator in Section 4; clearly, no such annotator exists. Our results may vary when real annotators are brought into the loop. For one, we do not know exactly how choosing from the top-k list will compare to insertion w.r.t. speed. We also do not know how automation bias (Cummings, 2004) would affect the top-k list: given that the correct answer is often in the list, real annotators might overly rely on the list and prefer it to token insertion, resulting in incorrect programs.
The experiments in Section 5 rely on a glossing model to translate predicted programs into natural language (NL). We approached with a neural Lisp-to-NL model; this has several limitations. Neural text generation models often hallucinate outputs, i.e. generated glosses may not be faithful to their corresponding programs. Unlike Fang et al. (2022), who use a grammar-based approach for response generation, we do not assume access to a grammar but note that our method is compatible with grammar-based constraints. Our annotators in Section 5 face the additional challenge of interpreting and choosing glosses. SMCalFlow programs are nuanced and slight input variations can result in different programs. These nuances are often obscured by the glossing model, resulting in two different programs glossing to semantically equivalent utterances; we explore this further in Appendix C. Annotators might mistakenly accept glosses from incorrect programs or reject correct glosses; this would be difficult to address even with a faithful translation method.
## Acknowledgements
We would like to thank Anthony Platanios, Subhro Roy, Zhengping Jiang, Kate Sanders, Yu Su, and Daniel Khashabi for their feedback on an earlier draft. This work was supported by NSF \(\#1749025\), and Elias Stengel-Eskin is supported by an NSF Graduate Research Fellowship.
|
2306.15558 | Collider constraints on massive gravitons coupling to photons | We study the discovery potential of massive graviton-like spin-2 particles
coupled to standard model fields, produced in photon-photon collisions at the
Large Hadron Collider (LHC) as well as in electron-positron ($e^+e^-$)
collisions, within an effective theory with and without universal couplings.
Our focus is on a massive graviton G coupled to the electromagnetic field,
which decays via $\mathrm{G}\to \gamma \gamma$ and leads to a resonant excess
of diphotons over the light-by-light scattering continuum at the LHC, and of
triphoton final states at $e^+e^-$ colliders. Based on similar searches
performed for pseudoscalar axion-like particles (ALPs), and taking into account
the different cross sections, $\gamma \gamma$ partial widths, and decay
kinematics of the pseudoscalar and tensor particles, we reinterpret existing
experimental bounds on the ALP-$\gamma$ coupling into G-$\gamma$ ones. Using
the available data, exclusion limits on the graviton-photon coupling are set
down to $g_{\mathrm{G}\gamma\gamma}\approx 1$--0.05~TeV$^{-1}$ for masses
$m_\mathrm{G} \approx 100$~MeV--2~TeV. Such bounds can be improved by factors
of 100 at Belle~II in the low-mass region, and of 4 at the HL-LHC at high
masses, with their expected full integrated luminosities. | David d'Enterria, Malak Ait Tamlihat, Laurent Schoeffel, Hua-Sheng Shao, Yahya Tayalati | 2023-06-27T15:35:42Z | http://arxiv.org/abs/2306.15558v3 | # Collider constraints on massive gravitons coupling to photons
###### Abstract
We study the discovery potential of massive graviton-like spin-2 particles coupled to standard model fields, produced in photon-photon collisions at the Large Hadron Collider (LHC) as well as in electron-positron (\(e^{+}e^{-}\)) collisions, within an effective theory with and without universal couplings. Our focus is on a massive graviton G coupled to the electromagnetic field, which decays via \(\mathrm{G}\to\gamma\gamma\) and leads to a resonant excess of diphotons over the light-by-light scattering continuum at the LHC, and of triphoton final states at \(e^{+}e^{-}\) colliders. Based on similar searches performed for pseudoscalar axion-like particles (ALPs), and taking into account the different cross sections, \(\gamma\gamma\) partial widths, and decay kinematics of the pseudoscalar and tensor particles, we reinterpret existing experimental bounds on the ALP-\(\gamma\) coupling into G-\(\gamma\) ones. Using the available data, exclusion limits on the graviton-photon coupling are set down to \(g_{\mathrm{G}\gamma}\approx 1\)-0.05 TeV\({}^{-1}\) for masses \(m_{\mathrm{G}}\approx 100\) MeV-2 TeV. Such bounds can be improved by factors of 100 at Belle II in the low-mass region, and of 4 at the HL-LHC at high masses, with their expected full integrated luminosities.
## 1 Introduction
The CERN Large Hadron Collider (LHC) does not only provide the highest energy and luminosity hadronic interactions recorded to date, but also delivers the most intense and energetic photon-photon collisions ever studied in the laboratory. In proton-proton, proton-ion, and ion-ion collisions, the ultrarelativistic beam charged particles can interact electromagnetically through photon exchange when passing by at large impact parameters (ultraperipheral collisions, UPCs) without hadronic overlap, and remain intact after the interaction [1; 2]. In the equivalent photon approximation (EPA) [3; 4], the collision of the two electromagnetic (EM) fields can be identified with the fusion of two quasireal photons, which can produce particles in the central detectors of the LHC experiments.
Pairs of bosons or fermions can thus be produced, back-to-back in azimuth, via \(\gamma\gamma\) processes (Fig. 1, left) and --by virtue of the Landau-Yang theorem [5; 6] and conservation of charge-conjugation (C) symmetry-- C-even neutral objects (scalars, pseudoscalars, and tensor particles) can also be _singly_ produced (Fig. 1, center). In all cases, \(\gamma\gamma\) collisions present a very clean environment for measurements of processes with very few particles produced exclusively in the final state, very small or negligible irreducible backgrounds, and with the possibility, in the p-p case, to further constrain the collision kinematics with the simultaneous reconstruction of the momenta of the forward/backward \(\gamma\)-emitting protons in dedicated Roman Pots (RPs) detectors located inside the beamline [7; 8; 9; 10].
At the LHC, photon-photon interactions happen at unprecedentedly large effective luminosities at low masses in heavy-ion UPCs [1], and up to very large \(\gamma\gamma\) center-of-mass energies (up to a few TeV) with UPCs with proton beams [2]. These facts have first revived the field of quantum electrodynamics (QED) at very high intensity initiated with the E-144 experiment at SLAC [11; 12]. In this context, the LHC has provided the first observation of light-by-light (LbL) scattering [13] (Fig. 1, left) in lead-lead UPCs at the LHC, PbPb \(\stackrel{{\gamma\gamma}}{{\rightarrow}}\) Pb \(\gamma\gamma\) Pb, at a nucleon-nucleon center of mass energy of \(\sqrt{s_{{}_{\rm NN}}}=5.02\) TeV [14; 15; 16]. Similarly, searches for LbL at the TeV scale have been carried out in pp at \(\sqrt{s}=13\) TeV via pp \(\stackrel{{\gamma\gamma}}{{\rightarrow}}\) p \(\gamma\gamma\) p by tagging one or both protons in very forward RPs [17; 18; 19]. Such measurements have been used e.g., to set competitive limits on nonlinear (Born-Infeld) extensions of QED [20].
Photon-photon collisions at the LHC provide also very clean conditions for searches for particles beyond the standard model (BSM) that couple to photons [21; 22; 23; 24; 25; 26; 27]. In particular, massive spin-0 particles, such as axion-like-particles (ALPs) [28; 29; 30; 31], as well as spin-2 tensor particles, such as gravitons [32; 26; 33; 34; 35; 23], can be produced in photon-fusion processes (Fig. 1, center), and manifest themselves as diphoton resonances on top of the LbL invariant mass continuum. Recent searches for excesses of exclusive diphotons produced above the LbL continuum [28] have allowed placing the most competitive limits on ALPs over masses \(m_{a}\approx 5\)-100 GeV in PbPb UPCs [15; 36], and over \(m_{a}\approx 0.5\)-2 TeV in pp collisions [17; 18; 19]. Limits on ALP-photon coupling have also been set
Figure 1: Schematic diagrams of photon-photon collisions producing a pair of exclusive photons, aka. LbL scattering (left), and an ALP or graviton decaying to two photons (center), and of \(e^{+}e^{-}\) collisions producing an ALP or graviton leading to a triphoton final state (right).
from searches for triphoton final states at electron-positron (\(e^{+}e^{-}\)) colliders (Fig. 1, right), using recent results from Belle II and BES-III as well as from previous studies at LEP [37, 38, 39].
Whereas massive spin-0 particles have been extensively studied, the physics case for the two-photon production of massive spin-2 states at colliders is still at an early stage, notwithstanding some exploratory works [23, 26, 32, 33, 35]. In this paper, we extract new bounds on the photon-graviton coupling as a function of the graviton mass, by properly recasting the existing experimental searches for ALPs coupling to photons mentioned above. We do so by applying the experimental selection criteria to simulated ALP and graviton pseudodata generated within an effective field theory (EFT) approach, taking into account the different cross sections, diphoton partial widths, and decay kinematic distributions of the pseudoscalar and tensor particles, and using standard statistical methods.
Let us start by recalling that General Relativity (GR), as a classical field theory, describes the gravitational force in terms of an interacting massless tensor (spin-2) field. When the field is quantized, massless spin-2 particles, called gravitons, appear. The masslessness of the graviton is generally considered to be guaranteed by diffeomorphism invariance of GR [40]. However, it is also known that gauge invariance does not always imply zero masses for gauge states. Quantum effects from other fields can, for example, give gravitons masses without breaking fundamental properties of GR. The fact that propagating degrees of freedom of gravity have mass is a fundamental issue with implications in many areas of physics including the propagation of gravitational waves [41, 42]. Also, the possible existence of a selfconsistent quantum field theoretical framework of GR valid at all energy scales is an open question [43, 44]. Although such a theory remains elusive, one can however study practical and reliable consequences of the underlying quantum theory of GR by employing an EFT approach [45]. In the following, we work under an EFT framework where the spacetime metric can be linearized and written in the form1: \(g_{\mu\nu}=\eta_{\mu\nu}+\kappa G_{\mu\nu}\), with \(G_{\mu\nu}\) the spin-2 quantum field (graviton) that we will assume to be potentially massive, and where \(\kappa\sim 1/M_{\rm Pl}\) with \(M_{\rm Pl}\) being the Planck mass. In this framework, the Einstein-Hilbert Lagrangian density takes the Fierz-Pauli expression, first formulated by them in 1939 to describe the linear theory of a massive spin-2 field [46], and the interaction of the graviton with SM fields reads,
Footnote 1: It is worth noting that the separation of the metric into a background flat metric and a quantum perturbation \(\kappa G_{\mu\nu}\), allows avoiding the conceptual problems of the standard interpretation of quantum mechanics applied to quantum gravity, i.e. that the classical observers doing preparations and measurements live themselves in the spacetime which they prepare and measure. Here, the observers and the experiment live in the classical, flat spacetime.
\[\mathscr{L}^{\rm G}_{V,f}=\frac{k_{V,f}}{\Lambda}\ T^{V,f}_{\mu\nu}\ G^{\mu \nu}. \tag{1}\]
Here, \(k_{V,f}\) a factor that describes the strength of the coupling of the graviton field \(G\) to the boson \(V\) (including gauge and Higgs bosons) or fermion \(f\), \(\Lambda\) is an energy scale, and \(T^{V,f}_{\mu\nu}\) is the energy-momentum tensor for bosons or fermions. In particular, for gravitons coupled to photons,
the expression above gives:
\[\mathscr{L}_{\gamma}^{\rm G}=g_{\rm G\gamma}\left(-F_{\mu\rho}F_{\nu}^{\rho}+\frac {1}{4}\eta_{\mu\nu}(F_{\rho\sigma})^{2}\right)\ G^{\mu\nu},\ \text{with}\ g_{\rm G\gamma}\equiv\frac{k_{\gamma}}{\Lambda}, \tag{2}\]
where \(F_{\mu\rho}\) is the EM field, \(\eta_{\mu\nu}\) the flat spacetime metric, and \(g_{\rm G\gamma}\) is the G-\(\gamma\) coupling.
In this work, we derive upper limits on \(g_{\rm G\gamma}\) as a function of the graviton mass \(m_{\rm G}\) using the experimental LHC and \(e^{+}e^{-}\) data mentioned above [15; 17; 18; 19; 37; 38; 39] that correspond to probing \(m_{\rm G}\) values from 100 MeV up to 2 TeV. We obtain these limits under two different scenarios. First, we take a simplified approach with a 100% decay branching fraction of the graviton into two photons, \(\mathcal{B}_{\rm G\to\gamma\gamma}=1\). Such a "photophilic" scenario is often assumed in ALPs searches [30; 47], and leads to a maximum sensitivity to the graviton-photon coupling. A second more realistic scenario is also considered with universal couplings of the graviton to all Standard Model (SM) particles [48]. In this case, the graviton decay into diphotons is dominant only at low \(m_{\rm G}\) values whereas above a few GeV, once the kinematic phase space for decays to massive SM fermions or bosons opens up, it amounts to \(\mathcal{B}_{\rm G\to\gamma\gamma}\approx 0.05\). The universal couplings scenario also allows a proper computation of the \(e^{+}e^{-}\to{\rm G\gamma}\) cross sections without problems linked to violation of perturbative unitarity as described in Section 2.2.
The paper is organized as follows. The theoretical setup used to compute the graviton and ALP cross sections in photon-photon collisions at the LHC and in \(e^{+}e^{-}\) collisions is presented in Section 2. The generation and analysis of graviton and ALPs simulated samples, and the method of extraction of G-\(\gamma\) coupling bounds from the experimental ALP limits, are discussed in Section 3. The derived limits as a function of \(m_{G}\), including current bounds and future projections is presented in Section 4, together with their comparison to other existing results. The paper is closed with a summary in Section 5.
## 2 Theoretical setup
The theoretical framework employed to study the production of gravitons and ALPs is presented, first, for photon fusion processes in UPCs at the LHC, \(\gamma\gamma\to{\rm G},a\to\gamma\gamma\) (Fig. 1 middle), and via \(e^{+}e^{-}\to\left({\rm G},a\right)\gamma\to 3\gamma\) final states (Fig. 1 right), second.
### Photon-photon collisions
The description of the \(\gamma\gamma\to{\rm G},a\to\gamma\gamma\) process is based on the EPA applied to ultrarelativistic protons or ions with low-virtuality equivalent photon fluxes, as implemented in the the gamma-UPC code [26]. The cross section for the production of a given final state \(X\) via photon fusion in an UPC of hadrons A and B with charges \(Z_{1,2}\), AB \(\stackrel{{\gamma\gamma}}{{\to}}\)\(\Lambda X\)B can be written as a convolution integral of the product of the elementary cross section at a given \(\gamma\gamma\) c.m. energy, \(\sigma_{\gamma\gamma\to X}(W_{\gamma\gamma})\), and the two-photon differential distribution of the colliding beams,
\[\sigma({\rm AB}\stackrel{{\gamma\gamma}}{{\to}}{\rm A}X{\rm B}) = \int\frac{{\rm d}E_{\gamma_{1}}}{E_{\gamma_{1}}}\frac{{\rm d}E_{ \gamma_{2}}}{E_{\gamma_{2}}}\frac{{\rm d}^{2}N_{\gamma_{1}/Z_{1},\gamma_{2}/Z_{ 2}}^{({\rm AB})}}{{\rm d}E_{\gamma_{1}}{\rm d}E_{\gamma_{2}}}\sigma_{\gamma \gamma\to X}(W_{\gamma\gamma}), \tag{3}\]
where \(W_{\gamma\gamma}^{2}=4E_{\gamma_{1}}E_{\gamma_{2}}\) is the c.m. energy of the collision of photons with energies \(E_{\gamma_{1}}\) and \(E_{\gamma_{2}}\), and
\[\frac{\mathrm{d}^{2}N_{\gamma_{1}/Z_{1},\gamma_{2}/Z_{2}}^{\mathrm{(AB)}}}{ \mathrm{d}E_{\gamma_{1}}\mathrm{d}E_{\gamma_{2}}} = \int\mathrm{d}^{2}\mathbf{b}_{1}\mathrm{d}^{2}\mathbf{b}_{2}P_{ \mathrm{no\,\,inel}}(\mathbf{b}_{1},\mathbf{b}_{2})N_{\gamma_{1}/Z_{1}}(E_{ \gamma_{1}},\mathbf{b}_{1})N_{\gamma_{1}/Z_{1}}(E_{\gamma_{2}},\mathbf{b}_{2}), \tag{4}\]
is the effective two-photon luminosity accounting for the probability \(P_{\mathrm{no\,\,inel}}(\mathbf{b}_{1},\mathbf{b}_{2})\) of hadrons A and B to remain intact after their interaction. In the expressions above, \(N_{\gamma_{i}/Z_{i}}(E_{\gamma_{i}},\mathbf{b}_{i})\) is the photon number density with the photon energy \(E_{\gamma_{i}}\) at the impact parameter \(\mathbf{b}_{i}\) from the \(i\)th initial hadron. The photon number densities are usually derived from two different hadron form factors, such as the electric-dipole (EDFF, Eq. (11) in [26]) and charge (ChFF, Eq. (13) in [26]) form factors. In the EDFF case, because the photon number density is divergent at low values of the impact parameter \(b\equiv|\mathbf{b}|\), arbitrary \(b_{1}>R_{A}\) and \(b_{2}>R_{B}\) cuts must be imposed, with \(R_{A,B}\) being the radii of hadrons A and B. On the other hand, such an issue is absent in the ChFF case, and one can safely integrate \(b_{1,2}\) down to zero. Although the \(\gamma\gamma\) cross sections obtained with EDFF and ChFF fluxes are in general similar, the ChFF is a more realistic, and therefore preferable, choice. In the latter formula, we have integrated over the virtualities \(Q^{2}\) of the initial photons, which can be certainly unintegrated in order to make explicit their very small values, typically of order \(Q^{2}\sim R_{A}^{-2}\lesssim 0.08\) GeV\({}^{2}\) for protons (\(R_{\mathrm{p}}\approx 0.8\) fm), and \(Q^{2}\lesssim 10^{-3}\) GeV\({}^{2}\) for Pb nuclei (\(R_{\mathrm{A}}\approx 7\) fm) [26].
In the case of heavy-ion beams, the action of all the charges in the nucleus adds coherently and the photon flux is enhanced by a \(Z^{2}\) factor compared to the proton case, leading to a \(Z_{1}^{2}Z_{2}^{2}\) increase in the corresponding \(\gamma\gamma\) cross sections. The nonoverlap hadronic interaction probability density \(P_{\mathrm{no\,\,inel}}(\mathbf{b}_{1},\mathbf{b}_{2})\) depends on the spatial separation of the two initial hadrons, i.e., \(P_{\mathrm{no\,\,inel}}(\mathbf{b}_{1},\mathbf{b}_{2})=P_{\mathrm{no\,\,inel }}(|\mathbf{b}_{1}-\mathbf{b}_{2}|)\), and can be derived from the standard opacity (optical density) computed from realistic hadronic transverse profile overlap functions with a Glauber Monte Carlo (MC) model [49].
The expected LbL continuum cross sections can be calculated through Eq. (3) plugging in the elementary \(\gamma\gamma\to\gamma\gamma\) cross section and using a proper setup for the photon fluxes and nonoverlap probabilities. For the resonant graviton and ALP total cross sections, a more convenient equation can be employed. The cross section for the exclusive production of a C-even resonance \(X\) of spin \(J\) and two-photon decay width \(\Gamma_{\gamma\gamma}(X)\), through \(\gamma\gamma\) fusion in an UPC of charged particles A and B, reads now [4]
\[\sigma(\mathrm{A\,\,B}\,\xrightarrow{\gamma\gamma}\mathrm{A\,\,}X\,\mathrm{B} )=4\pi^{2}(2J+1)\frac{\Gamma_{\gamma\gamma}(X)}{m_{X}^{2}}\left.\frac{\mathrm{ d}\mathcal{L}_{\gamma\gamma}^{\mathrm{(A\,\,B)}}}{\mathrm{d}W_{\gamma\gamma}} \right|_{W_{\gamma\gamma}=m_{X}}, \tag{5}\]
where \(\frac{d\mathcal{L}_{\gamma\gamma}^{\mathrm{(A\,\,B)}}}{dW_{\gamma\gamma}} \big{|}_{W_{\gamma\gamma}=m_{X}}\) is the value of the effective two-photon luminosity at the resonance mass \(m_{X}\) in an UPC at nucleon-nucleon c.m. energy \(\sqrt{s_{{}_{\mathrm{NN}}}}\), and amounts to
\[\frac{\mathrm{d}\mathcal{L}_{\gamma\gamma}^{\mathrm{(AB)}}}{ \mathrm{d}W_{\gamma\gamma}} = \frac{2W_{\gamma\gamma}}{s_{{}_{\mathrm{NN}}}}\int\frac{\mathrm{d}E _{\gamma_{1}}}{E_{\gamma_{1}}}\frac{\mathrm{d}E_{\gamma_{2}}}{E_{\gamma_{2}}} \delta\left(\frac{W_{\gamma\gamma}^{2}}{s_{{}_{\mathrm{NN}}}}-\frac{4E_{\gamma_ {1}}E_{\gamma_{2}}}{s_{{}_{\mathrm{NN}}}}\right)\frac{\mathrm{d}^{2}N_{\gamma _{1}/Z_{1},\gamma_{2}/Z_{2}}^{\mathrm{(AB)}}}{\mathrm{d}E_{\gamma_{1}} \mathrm{d}E_{\gamma_{2}}}\,. \tag{6}\]
From Eq. (5), one can straightforwardly see that, for the same values of resonance masses and diphoton partial decay widths, the photon-fusion production of gravitons (\(J=2\)) will be enhanced by a factor of \((2J+1)=5\) compared to the ALPs (\(J=0\)) case. Such an apparent benefit will be, however, outplayed by a comparatively reduced graviton coupling to photons, as explained below. The calculation of their expected photon-fusion cross sections through Eq. (5) relies on computing their \(\Gamma_{\gamma\gamma}(X)\) two-photon widths with a given interaction Lagrangian. For the ALP case, \(\gamma\gamma\to a\to\gamma\gamma\), the relevant Lagrangian is
\[\mathscr{L} \supset \frac{1}{2}\partial_{\mu}a\partial^{\mu}a-\frac{m_{a}^{2}}{2}a^{ 2}-\frac{g_{a\gamma}}{4}aF^{\mu\nu}\tilde{F}_{\mu\nu},\;\text{with}\;g_{a\gamma }\equiv C_{\gamma\gamma}/\Lambda, \tag{7}\]
where \(a\) is the ALP field, \(\tilde{F}_{\mu\nu}\) is the photon field strength dual tensor, and the dimensionful ALP-\(\gamma\) coupling strength \(g_{a\gamma}\) is inversely proportional to the high-energy scale \(\Lambda\) associated with the spontaneous breaking of an approximate Peccei--Quinn global U(1) symmetry [50], and the effective dimensionless coefficient \(C_{\gamma\gamma}\) rescales the ALP-\(\gamma\) coupling whenever the ALP also interacts with (and, therefore, decays to) other SM particles (although most often the photon-dominance, or photophilic \(C_{\gamma\gamma}=1\) case is considered in the literature) [47].
The production cross sections for massive gravitons via \(\gamma\gamma\to\text{G}\to\gamma\gamma\) can be similarly obtained from the Fierz-Pauli Lagrangian, Eq. (2). Writing explicitly the kinetic content for the graviton field of mass \(m_{\text{G}}\), it reads:
\[\mathscr{L}_{\text{FP}}=-\frac{1}{2}(\partial_{\rho}G_{\mu\nu})^{2}+\partial_{ \mu}G_{\nu\rho}\partial^{\nu}G^{\mu\rho}-\partial_{\mu}G^{\mu\nu}\partial_{ \nu}G+\frac{1}{2}(\partial_{\rho}G)^{2}-\frac{1}{2}m_{\text{G}}^{2}\left((G_{ \mu\nu})^{2}-G^{2}\right), \tag{8}\]
from which the propagator for the graviton field, represented by the dotted line in Fig. 1 (center), can be computed directly as
\[T^{\mu\nu\rho\sigma}=\frac{i}{p^{2}-m_{\text{G}}^{2}+i\epsilon}\left(\frac{1} {2}(P_{\mu\rho}P_{\nu\sigma}+P_{\mu\sigma}P_{\nu\rho})-\frac{1}{3}P_{\mu\nu}P_ {\rho\sigma}\right), \tag{9}\]
with \(P_{\mu\nu}=\eta_{\mu\nu}+p_{\mu}p_{\nu}/m_{\text{G}}^{2}\). In this latter expression we see the \(m_{\text{G}}\) pole in mass that gives the resonant effect in the invariant mass LbL spectrum2
Footnote 2: Let us note that in the massless case, the structure of the propagator is preserved, but with some modifications. For a massless graviton, Eq. (9) would give:
\[T^{\mu\nu\rho\sigma}_{m=0}=\frac{1}{2}\frac{i}{p^{2}+i\epsilon}\left(\eta_{ \mu\rho}\eta_{\nu\sigma}+\eta_{\mu\sigma}\eta_{\nu\rho}-\eta_{\mu\nu}\eta_{ \rho\sigma}\right), \tag{10}\]
which, interestingly, does not lead to a resonant effect but a particular behavior of the cross section in the forward limit.
The generation of ALP and graviton simulated events in this work is carried out with the gamma-UPC code [26], using ChFF \(\gamma\) fluxes for protons and ions and computing the nonoverlap probabilities with a Glauber MC [51], combined with MadGraph5_aMC@NLO[52, 53] (hereafter identified as MG5_aMC) where the corresponding Lagrangians, Eqs. (2) and (7), are coded as input models in the Universal Feynman Output (ufo) format [54, 55]. We have compared the computed
cross section for ALP or graviton production with the results of several alternative codes [23; 26; 52] finding fully consistent results (and, thus, also the corresponding graviton exclusion limits).
### Electron-positron collisions
We consider next the graviton and ALP production cross sections in \(e^{+}e^{-}\) collisions through the process shown in Fig. 1 (right), and describing their photon couplings with the same Lagrangians, Eqs. (2) and (7) respectively, used for photon-photon collisions. For \(e^{+}e^{-}\) collisions at Belle II and LEP energies, the leading-order inclusive cross section, neglecting the tiny electron mass \(m_{e}\), reads
\[\sigma(e^{+}e^{-}\to a\gamma\to\gamma\gamma\gamma)=\frac{\alpha g _{a\gamma}^{2}}{24}\frac{(s-m_{a}^{2})^{3}}{s^{3}}\;\mathcal{B}_{a\to\gamma \gamma},\;\text{for ALPs, and} \tag{11}\]
\[\sigma(e^{+}e^{-}\to\text{G}\gamma\to\gamma\gamma\gamma)=\frac{ \alpha}{36}\left(\frac{k_{\gamma}}{\Lambda}\right)^{2}\frac{(s-m_{\text{G}}^{ 2})^{3}}{s^{3}}\frac{s^{2}+3sm_{\text{G}}^{2}+6m_{\text{G}}^{4}}{m_{\text{G}}^ {4}}\;\mathcal{B}_{\text{G}\to\gamma\gamma},\;\text{for gravitons}, \tag{12}\]
where \(s\) is the squared center-of-mass energy of the collision, and \(\mathcal{B}_{a,\text{G}\to\gamma\gamma}\) the corresponding \(a,\text{G}\to\gamma\gamma\) branching fractions. This latter expression indicates that the graviton cross section, as opposed to the ALP one, has the asymptotic form
\[\lim_{s\gg m_{\text{G}}^{2}}\sigma(e^{+}e^{-}\to\text{G}\gamma \to\gamma\gamma\gamma) = \frac{\alpha}{36}\left(\frac{k_{\gamma}}{\Lambda}\right)^{2} \frac{s^{2}}{m_{\text{G}}^{4}}\;\mathcal{B}_{\text{G}\to\gamma\gamma}, \tag{13}\]
which is divergent in the \(m_{\text{G}}^{2}/s\to 0\) limit. Such a unitarity-violating behavior is due to the assumption that the graviton couples only to photons, and not to electrons. A more realistic universal-coupling scenario for gravitons can solve this perturbative unitarity problem [48]. In such a universal-coupling scenario, the expression for the \(e^{+}e^{-}\to\text{G}\to 3\gamma\) cross section, reads
\[\sigma(e^{+}e^{-}\to\text{G}\gamma\to\gamma\gamma\gamma) = \frac{\alpha}{24}\left(\frac{k_{U}}{\Lambda}\right)^{2}\frac{(s-m _{\text{G}}^{2})^{3}}{s^{3}}\;\mathcal{B}_{\text{G}\to\gamma\gamma}, \tag{14}\]
which, as its ALPs counterpart given by Eq. (11), is now well-behaved for all \(m_{\text{G}}\).
Of course, allowing for other couplings reduces also the diphoton decay probability for massive gravitons. In principle, for the graviton production in \(e^{+}e^{-}\) collisions via the diagram shown in Fig. (1) (right), one could just consider a simplified model with universal couplings to photons and electrons alone, \(k_{U}=k_{\gamma}=k_{e}\), neglecting all other couplings. In this case, the asymptotic cross section for \(s\gg m_{\text{G}}^{2}\) can be written as: \(\sigma\approx\frac{\alpha}{6}(\frac{k_{U}}{\Lambda})^{2}\mathcal{B}_{\text{G }\to\gamma\gamma}\), and the two partial widths would be: \(\Gamma(\text{G}\to\gamma\gamma)=(\frac{k_{\gamma}}{\Lambda})^{2}\frac{m_{ \text{G}}^{3}}{80\pi}\) and \(\Gamma(\text{G}\to e^{+}e^{-})=(\frac{k_{e}}{\Lambda})^{2}\frac{m_{\text{G}}^ {3}}{160\pi}(1-\frac{4m_{e}^{2}}{m_{\text{G}}^{2}})^{3/2}(1+\frac{8m_{e}^{2}} {3m_{\text{G}}^{2}})\). Asymptotically, one would then have \(\mathcal{B}_{\text{G}\to\gamma\gamma}=\frac{2}{3}\) when \(m_{\text{G}}\gg 2m_{e}\), and only when \(m_{\text{G}}\lesssim 2m_{e}\) the diphoton branching fraction would be unity. This simple example shows that for the range of graviton masses probed by the Belle II and LEP data (\(m_{\text{G}}\approx 0.1\)-\(100\) GeV), the assumption of \(\mathcal{B}_{\text{G}\to\gamma\gamma}=1\) would be incorrect. The actual decay branching fractions of the graviton to all SM particle pairs as a function of \(m_{\text{G}}\) in the universal-couplings scenario are shown in Fig. 2, and Table 1 collects a few reference values
as a guideline. One can see now that the diphoton decay is relatively dominant only in the case of gravitons with masses below twice the pion mass (\(m_{\rm G}\lesssim 0.25\) GeV) with values \({\cal B}_{\rm G\to\gamma\gamma}\approx 40\%\), whereas hadronic decays take over for heavier gravitons. Above \(m_{\rm G}\approx 5\) GeV, the diphoton decay amounts to \({\cal B}_{\rm G\to\gamma\gamma}\approx 5\%\), which would at face value translate into factors of \(\sim\)20 less constraining limits placed on gravitons compared to ALPs searches in the photon-dominance assumption often consider for the latter (\(C_{\gamma\gamma}=1\) in Eq. (7) leading to \({\cal B}_{a\to\gamma\gamma}=1\)).
At BES-III, the underlying production process differs from the Belle II and LEP cases as the \(a,{\rm G}\) resonance is not directly radiated from the \(s\)-channel (virtual) \(\gamma^{*}\) or \({\rm Z}^{*}\) boson (Fig. 1, right), but an intermediate \(J/\psi\) meson is first produced that decays into the ALP or graviton plus a photon, leading to the three-photon final state. At leading order, the partial width of the \(J/\psi\to a\gamma\to\gamma\gamma\gamma\) decay reads
\[\Gamma(J/\psi\to a\gamma\to\gamma\gamma) = \frac{\alpha}{81}g_{a\gamma}^{2}\left(1-\frac{m_{a}^{2}}{m_{J/ \psi}^{2}}\right)^{3}\langle O^{J/\psi}\rangle\;{\cal B}_{a\to\gamma\gamma}, \tag{15}\]
where \(\langle O^{J/\psi}\rangle\) is the long-distance matrix element of the \(J/\psi\) particle. For the graviton production,
Figure 2: Branching ratios for the various decay modes of a massive graviton as a function of its mass \(m_{\rm G}\) assuming its universal-coupling with the SM particles. Specific \({\cal B}_{\rm G\to XX}\) numerical values are given in Table 1.
a photon-only coupling will lead to the same perturbative unitarity violation problem mentioned above, and we have to work in the universal coupling scenario. In this case, the leading order partial width of \(J/\psi\to\mathrm{G}\gamma\to\gamma\gamma\gamma\) is given by:
\[\Gamma(J/\psi\to\mathrm{G}\gamma\to\gamma\gamma\gamma) = \frac{2\alpha}{243}\left(\frac{k_{U}}{\Lambda}\right)^{2}\left(1- \frac{m_{\mathrm{G}}^{2}}{m_{J/\psi}^{2}}\right)\left(1+3\frac{m_{\mathrm{G}}^ {2}}{m_{J/\psi}^{2}}+6\frac{m_{\mathrm{G}}^{4}}{m_{J/\psi}^{4}}\right)\langle O ^{J/\psi}\rangle\;\mathcal{B}_{\mathrm{G}\to\gamma\gamma}. \tag{16}\]
Combining Eqs. (15) and (16), we can derive a bound on the G-\(\gamma\) coupling from any given one obtained for the ALP-\(\gamma\) case via
\[\left(\frac{k_{U}}{\Lambda}\right) = \left(g_{a\gamma}\right)\,\frac{\left(m_{J/\psi}^{2}-m_{\mathrm{G }}^{2}\right)}{\sqrt{\left(4m_{\mathrm{G}}^{4}+2m_{\mathrm{G}}^{2}m_{J/\psi}^ {2}+\frac{2}{3}m_{J/\psi}^{4}\right)\,\mathcal{B}_{\mathrm{G}\to\gamma\gamma}}}, \tag{17}\]
where we have assumed \(\mathcal{B}_{a\to\gamma\gamma}=1\).
For \(e^{+}e^{-}\) collisions, the generation of simulated graviton and ALPs events is performed with MG5_aMC, with the universal-couplings setup of Ref. [48] for the graviton case and using the Lagrangian Eq. (7) for the ALP samples, coded both also in the ufo format.
## 3 Analysis of the simulated data
Simulated events are generated using the theoretical setup discussed in the previous section, for all ALP and graviton production processes at the LHC and in \(e^{+}e^{-}\) collisions at BES-III and Belle II3 listed in Table 2 for the relevant mass ranges.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \multicolumn{4}{c}{Graviton decay \(\mathcal{B}_{\mathrm{G}\to XX}(\%)\)} \\ \hline Channel & \(m_{\mathrm{G}}=100\) MeV & \(m_{\mathrm{G}}=5\) GeV & \(m_{\mathrm{G}}=100\) GeV & \(m_{\mathrm{G}}=1\) TeV & \(m_{\mathrm{G}}=2\) TeV \\ \hline \(\gamma\)\(\gamma\) & 44.4 & 6.1 & 5 & 4.3 & 4.2 \\ \(\nu\)\(\bar{\nu}\) & 33.3 & 4.5 & 4 & 3.2 & 3.2 \\ \(l^{+}l^{-}\) & 22.2 & 7.8 & 8 & 6.4 & 6.4 \\ Hadrons & – & 81.5 & 82 & 66 & 65.8 \\ ZZ & – & – & – & 4.7 & 4.6 \\ W\({}^{+}\)W\({}^{-}\) & – & – & – & 9.4 & 9.2 \\ HH & – & – & – & 0.3 & 0.4 \\ t\(\bar{\mathrm{t}}\) & – & – & – & 5.7 & 6.2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Branching ratios for various decay modes of a graviton of different masses, assuming its universal coupling with SM particles.
As an example, Fig. 3 shows the computed total cross sections for graviton and ALP production versus mass in PbPb UPCs at \(\sqrt{s_{{}_{\rm NN}}}=5.02\) TeV, for the same photon-coupling values \(g_{\rm G\gamma}=g_{a\gamma}=1\) TeV\({}^{-1}\). The general trend for both particles is similar, featuring a decrease of the cross sections as a function of mass due to the \(\sigma\propto m_{X}^{-2}\) dependence of Eq. (5) and the reduced effective \(\gamma\gamma\) luminosity \(\frac{{\rm d}{\cal L}_{\gamma\gamma}^{(\rm A\,B)}}{{\rm d}W_{\gamma\gamma}} \big{|}_{W_{\gamma\gamma}=m_{X}}\) for increasing \(W_{\gamma\gamma}\) c.m. energy. Assuming \({\cal B}_{a,{\rm G}\to\gamma\gamma}=1\), one can observe graviton production cross sections (solid red curve) about five times larger than the ALP ones (blue solid curve), as given from the different spin counting of the two particles in Eq. (5). However, considering the more realistic scenario of universal couplings for the graviton, \({\cal B}_{\rm G\to\gamma\gamma}\approx 0.05\) (dashed red curve), and keeping the photon-dominance case for the ALP, we see that the final cross sections for PbPb \(\stackrel{{\gamma\gamma}}{{\to}}\) Pb \(X(\gamma\gamma)\) Pb are about four times smaller for gravitons than for ALPs.
\begin{table}
\begin{tabular}{l c c c} \hline Process & Colliding system & nucleon-nucleon or \(e^{+}e^{-}\) c.m. energy & \(m_{a,{\rm G}}\) range \\ \hline \(\gamma\gamma\to a,{\rm G}\to\gamma\gamma\) & PbPb & 5.02 TeV & 5–100 GeV \\ \(\gamma\gamma\to a,{\rm G}\to\gamma\gamma\) & pp & 14 TeV & 0.15–2 TeV \\ \(a,{\rm G}\to\gamma\gamma\gamma\;\gamma\) & \(e^{+}e^{-}\) & 3–11 GeV & 0.16–10 GeV \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the six ALP and graviton production processes considered in this work, along with the mass ranges experimentally probed.
Figure 3: Total \(\gamma\gamma\) cross sections for graviton and ALP production in PbPb UPCs at 5.02 TeV as a function of resonance mass, for the same photon couplings values, \(g_{\rm G\gamma}=g_{a\gamma}=1\) TeV\({}^{-1}\), and two different assumptions (photophilic or universal) on the graviton-photon coupling.
For the extraction of upper limits on the photon-fusion graviton cross sections and, thus, on the \(g_{\rm G\gamma}\) coupling, one can proceed along two different but equivalent approaches:
1. One can use full simulations for the production cross sections for gravitons and associated backgrounds, applying the same requirements as in the experimental analyses, accounting for all detector effects, and employing a standard statistical framework for limits setting based on the experimental results and the generated pseudodata, as described in [25, 29].
2. Or else, one can use the existing ALPs limits derived from the data, and properly reinterpret them for the graviton case, taking into account all differences between the production and decay properties of both BSM particles after applying all experimental analysis selection criteria.
We employ here the second technique, and show in the Appendix A its statistical equivalence to the first method. In the approach (ii), the \(\sigma_{a}\) cross section for ALP production can be derived from the interacting Lagrangian, Eq. (7), and is proportional to \(g_{a\gamma}^{2}\times\mathcal{B}_{a\to\gamma\gamma}\). Similarly, following Eq. (2), the cross section for gravitons, \(\sigma_{\rm G}\), is proportional to \(g_{\rm G\gamma}^{2}\times\mathcal{B}_{\rm G\to\gamma\gamma}\). Then, any bound obtained for the ALP-\(\gamma\) coupling at a given \(m_{\gamma\gamma}\) bin can be converted into the corresponding bound for the G-\(\gamma\) coupling via
\[g_{\rm G\gamma}=\sqrt{\frac{\sigma_{a}}{\sigma_{\rm G}}}\times\frac{\mathcal{ A}_{\rm G}}{\mathcal{A}_{\rm a}}\times g_{a\gamma}. \tag{18}\]
Here \(\mathcal{A}_{a}/\mathcal{A}_{\rm G}\) is the ratio of experimental fiducial acceptances for ALPs and gravitons decaying into a pairs of photons. Tensor particles decay on average into softer and more isotropic photons than pseudoscalar particles. This latter factor is derived from our full simulations, after applying the fiducial criteria of each experiment, and amounts to about a 10% (50%) correction at high (low) masses. The same formula can be used to set graviton limits from those placed on ALPs at the Belle II and LEP experiments. The case of BES-III is slightly different, and the graviton limits are directly obtained through Eq. (17). In the Appendix A, a proof of the equivalence between both techniques (i) and (ii) is given. In particular, we demonstrate that a graviton search limit based on method (i) implies Eq. (18).
In order to obtain the final limits on \(g_{\rm G\gamma}\) through Eq. (18), we need to implement all experimental analyses and apply on our simulated samples the same selection requirements applied for ALP searches in the data. The searches carried out in PbPb UPCs are currently the most competitive for ALPs in the range \(m_{a}\approx 5\)-100 GeV. In this case, the final state of interest involves the observation of two exclusive photons with transverse energy \(E_{T}\gtrsim 2\) GeV, emitted over \(|\eta|\lesssim 2.4\) pseudorapidities, and pair invariant masses exceeding 5 GeV, with a rapidity gap requirement of no other significant hadronic activity occurring within \(|\eta|<5\). To further refine the analysis and reduce background contamination, additional kinematic criteria are applied to the photon pair, including selections on diphoton transverse momentum (\(p_{\rm T}^{\gamma\gamma}\)) below 1 GeV, and on acoplanarity (\(A_{\phi}^{\gamma\gamma}\equiv 1-|\Delta\phi_{\gamma\gamma}|/\pi\)) less than \(\approx 0.01\). These two additional criteria enhance the sensitivity to
photon-fusion production processes that are characterized by the production of a central system at rest that decays into two photons in a back-to-back configuration, while minimizing contributions from misidentified \(\gamma\gamma\to e^{+}e^{-}(\gamma,\gamma\gamma)\) events. The full list of requirements applied to our simulated data to reproduce the ATLAS and CMS measurements are summarized in Table 3.
Figure 4 shows a typical diphoton invariant mass distribution for a generated graviton signal (with \(m_{\mathrm{G}}=45\) GeV and \(g_{\mathrm{G}\gamma}=1\) TeV\({}^{-1}\)) and SM backgrounds after applying the ATLAS selection criteria, with a full emulation of the detector resolutions for the energies and angles of the outgoing photons, as well as the \(p_{\mathrm{T}}\)-dependent reconstruction efficiencies. All distributions are generated with gamma-UPC+MG5_aMC, except the contribution from central exclusive production (CEP, from gluon-gluon fusion in a color-singlet exchange, \(gg\to\gamma\gamma\)) that is obtained with Superchic v.3.0 [24]. A full statistical analysis of this sort of signal and background distributions for varying \(m_{\mathrm{G}}\) values and taking into account the experimental diphoton counts observed in each mass bin, would be the basis for the alternative limits-setting method (i) described above [25, 29, 56].
In proton-proton UPCs at the LHC, ALP constraints have been obtained in the masses range 150 GeV to 2 TeV requiring two exclusive photons produced with \(m_{\gamma\gamma}\gtrsim 150\) GeV over \(|\eta|\lesssim 2.4\) and low acoplanarity \(A_{\phi}^{\gamma\gamma}<0.01\). Given the very large backgrounds from other multiple pp pileup events, it is impossible to apply rapidity gap requirements as in the PbPb case, and the experiments require instead kinematic coincidences between the central diphoton system and one (single tagging) or both (double tagging) forward/backward protons detected in the RPs. As the forward detectors cannot get arbitrarily close to the proton beam, and the position of the LHC beam collimators limits their acceptance, the resulting coverage of the longitudinal fractional momentum loss of the
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline Variable & \multicolumn{2}{c}{PbPb \(\overset{\gamma\gamma}{\rightarrow}\) Pb} & \multicolumn{2}{c}{pp \(\overset{\gamma\gamma}{\rightarrow}\) p} \\ & (ATLAS) & (CMS) & (ATLAS) & (CMS) \\ \hline \(\sqrt{s_{{}_{\mathrm{NN}}}}\) c.m. energy (TeV) & 5.02 & 5.02 & 13.0 & 13.0 \\ Integrated luminosity \(\mathcal{L}\) & 2.2 nb\({}^{-1}\) & 0.4 nb\({}^{-1}\) & 14.6 fb\({}^{-1}\) & 9.4 fb\({}^{-1}\) \\ Exclusive number of photons & 2 & 2 & 2 & 2 \\ Single photon \(p_{\mathrm{T}}^{\gamma}\) & \(>2.5\) GeV & \(>2\) GeV & \(>40\) GeV & \(>100\) GeV \\ Single photon \(|\eta^{\gamma}|\) & \(<2.37\) & \(<2.4\) & \(<2.37\) & \(<2.5\) \\ Pair \(p_{\mathrm{T}}^{\gamma\gamma}\) & \(<1\) GeV & \(<1\) GeV & \(<1\) GeV & \(<1\) GeV \\ Pair \(m_{\gamma\gamma}\) & \(>5\) GeV & \(>5\) GeV & \(>150\) GeV & \(>200\) GeV \\ Pair acoplanarity \(A_{\phi}^{\gamma\gamma}\) & \(<0.01\) & \(<0.01\) & \(<0.01\) & \(<0.01\) \\ Rapidity gap range \(|\eta^{\mathrm{gap}}|\) & \(<5\) & \(<5\) & – & – \\ Proton tagging & – & – & single & double \\ Proton energy loss \(\xi\) & – & – & [0.035–0.08] & [0.02–0.2] \\ \hline \hline \end{tabular}
\end{table}
Table 3: Selection criteria applied in the analyses of simulated ALP and graviton samples, following the ATLAS and CMS measurements of exclusive diphotons in PbPb [15, 36] and pp [18, 19] UPCs.
protons, \(\xi\), is limited. Such a requirement and all the others are summarized in Table 3.
For the Belle II limits at low graviton masses, we apply the same analysis criteria used for searches for ALPs in the three-photon final state over the mass range 0.2-9.7 GeV [37]. At least three photon candidates are considered with energy \(E_{\gamma}\) above 0.65 GeV (for \(m_{a}>4\) GeV) or 1.0 GeV (for \(m_{a}\leq 4\) GeV) and the invariant mass \(m_{\gamma\gamma\gamma}\) of the three-photon is required to be in the range: \(0.88\sqrt{s}\leq m_{\gamma\gamma\gamma}\leq 1.03\sqrt{s}\). As mentioned above, for BES-III, the production process is a bit different and Eq. (17), instead of Eq. (18), is employed.
## 4 Results and discussion
Using Eq. (18) for LHC and Belle-II, and Eq. (17) for BES-III, we are able to reinterpret the existing limits on the ALP-\(\gamma\) coupling versus ALP mass [15; 16; 17; 18; 19; 37; 38; 39] into the corresponding limits for graviton-\(\gamma\) couplings. For the graviton limits from PbPb or pp UPCs, one can in principle keep the simplifying assumption of unity diphoton-decay branching fractions, \(\mathcal{B}_{\mathrm{G},a\to\gamma\gamma}=1\), without unitarity problems in the cross section calculations. The corresponding exclusion limits (upper limits) at 95% confidence level (CL) for the graviton-photon coupling \(g_{\mathrm{G}\gamma}=k_{\gamma}/\Lambda\) as a function of the mass of the graviton are displayed in Fig. 5. A comment is in order concerning the hypothesis \(\mathcal{B}_{\mathrm{G}\to\gamma\gamma}=1\) that, as we shall see below, is not always possible to keep. For \(\mathcal{B}_{\mathrm{G}\to\gamma\gamma}<1\), obviously,
Figure 4: Simulated invariant mass distribution of exclusive photon pairs produced in PbPb UPCs at 5.02 TeV for a graviton signal (\(m_{\mathrm{G}}=45\) GeV mass and \(g_{\mathrm{G}\gamma}=1\) TeV\({}^{-1}\) coupling), and LbL scattering (orange), CEP, and \(\gamma\gamma\to e^{+}e^{-}\) background processes (dark and light yellows). All distributions are presented with an emulation of diphoton detector resolution and inefficiencies.
the sensitivity of the search for gravitons with exclusive diphotons decreases, due to a lower signal rate. On the other hand, the _total_ decay width automatically increases for decreasing \(\mathcal{B}_{\mathrm{G}\to\gamma\gamma}\), but the efficiency of the search is independent of the width as it consists essentially of counting event numbers. Similarly, the region from LHC diphoton bump searches shrinks for reducing \(\mathcal{B}_{\mathrm{G}\to\gamma\gamma}\) values [57]. Thus, there is an interplay that makes the exclusive diphotons search to gain competitiveness in the case of a broad resonance.
Let us note that the fact that no event is observed in the data at an invariant mass \(m_{\gamma\gamma}=45\) GeV [19] with the simulation results of Fig. 4, implies a direct statistical derivation (method (i) mentioned above, see Appendix A) of \(g_{\mathrm{G}\gamma}<4.5\cdot 10^{-2}\) TeV\({}^{-1}\) at 95% CL for \(m_{\mathrm{G}}=45\) GeV, which is coherent with the value obtained in Fig. 5. In Fig. 5, we also show the limits (dashed curves) obtained by extrapolating the current results to the integrated luminosities to be recorded in PbPb and pp collisions at the HL-LHC. We take \(\mathcal{L}=20\) nb\({}^{-1}\) for PbPb [27], and a conservative \(\mathcal{L}=300\) fb\({}^{-1}\) for the pp case, instead of the nominal value of \(\mathcal{L}=3000\) fb\({}^{-1}\), given that the availability of RPs at ATLAS/CMS is not yet guaranteed over the full HL-LHC phase.
Figure 5: Exclusion limits at 95% CL on the graviton-photon coupling as a function of the graviton mass derived from the latest ATLAS and CMS measurements of exclusive \(\gamma\gamma\) production in PbPb and pp UPCs [15, 16, 17, 18, 19]. A photophilic scenario with \(\mathcal{B}_{\mathrm{G}\to\gamma\gamma}=1\) is assumed. Extrapolated limits (dashed lines) are also shown for expected HL-LHC integrated luminosities.
The derivation of \(g_{\rm G\gamma}\) from the measured \(g_{a\gamma}\) limits at \(e^{+}e^{-}\) colliders, requires to consider the universal coupling scenario (Section 2), for which the diphoton branching ratio of the graviton is fixed at any given \(m_{\rm G}\) to the values shown in Fig. 2 and Table 1. Within the more realistic universal coupling approach, it becomes possible to compute the cross section of the graviton production processes at ATLAS, CMS, and \(e^{+}e^{-}\) colliders and thus to recast all ALPs limits into graviton limits using Eq. (18). Results are presented in Fig. 6. Using all the experimental data, upper limits on the graviton-photon coupling are set up over \(g_{\rm G\gamma}\approx 1\)-0.05 TeV\({}^{-1}\) for masses \(m_{\rm G}\approx 100\) MeV-2 TeV. Figure 6 also shows extrapolated limits (dashed curves) for the total integrated luminosities expected to be collected over the entire lifetime of the HL-LHC and Belle II [58] experiments, which show that the current bounds can be improved by factors of about 100 in the low-mass region, and of 4 at high masses.
It is worth noting that the universal-coupling graviton has also a branching fraction of \(\mathcal{B}_{\rm G\to\ell^{+}\ell^{-}}\approx 2.5\%\) into each pair of charged leptons, and of \(\mathcal{B}_{\rm G\to W^{+}W^{-}}\approx 10\%\) into W\({}^{\pm}\) pairs at high masses
Figure 6: Exclusion limits at 95% CL on the graviton-photon coupling as a function of the graviton mass derived from the latest ATLAS, CMS, Belle II, BES-III, and LEP exclusive diphoton and triphoton results [15, 16, 17, 18, 19, 38, 39]. A universal coupling of the graviton to SM particles is assumed, which fixes its \(\gamma\gamma\) decay branching fractions as shown in Fig. 2 and Table 1. Extrapolated limits (dashed lines) are also presented for the final integrated luminosities expected at Belle II and LHC.
(Table 1). Exclusive measurements of \(\gamma\gamma\to e^{+}e^{-},\mu^{+}\mu^{-},\tau^{+}\tau^{-}\) in PbPb UPCs over \(m_{\ell^{+}\ell^{-}}\approx 5\)-100 GeV [59, 60, 61, 62, 63] and in pp UPCs over \(m_{\ell^{+}\ell^{-}}\approx 100\)-1000 GeV [64, 65, 66, 67], as well as of \(\gamma\gamma\to\mathrm{W}^{+}\mathrm{W}^{-}\) in pp UPCs over \(m_{\mathrm{W}^{+}\mathrm{W}^{-}}\approx 160\)-2000 GeV [68, 69, 70, 71, 72], have not shown any significant excess with respect to the SM predictions. Such measurements are in agreement with (but less stringent than) the graviton limits derived from the exclusive diphoton measurements discussed here.
A discussion is also in place regarding the comparison of our massive graviton limits with those set by other inclusive searches at the LHC. Massive spin-2 particles are typically predicted by BSM models proposed to explain the very large gap between the electroweak (\(10^{2}\) GeV) and Planck (\(10^{19}\) GeV) scales ("hierarchy problem") based on the existence on new compact spatial dimensions. Graviton-like particles appear as Kaluza-Klein (KK) excitations of these extra dimensions in the Randall-Sundrum (RS) [73], and Arkani-Hamed-Dimopoulos-Dvali (ADD) [74] approaches (with model differences arising mostly from the number of extra dimensions considered, and their compactification). Both RS and ADD gravitons have been searched for in standard parton-parton collisions at the LHC, in the form of high-mass dijet, dilepton, and/or diphoton resonances, \(\mathrm{pp}\to\mathrm{G}\to jj,\ell^{+}\ell^{-},\gamma\gamma\), above the corresponding dominant perturbative quantum chromodynamics (pQCD) continuum backgrounds, \(\mathrm{pp}\to jj,\,\ell^{+}\ell^{-},\,\gamma\gamma+X\). In the high mass range, our universal-coupling scenario predicts gravitons predominantly decaying into two high-\(p_{\mathrm{T}}\) hadronic jets with \(\mathcal{B}_{\mathrm{G}\to jj}\approx 65\%\) (Fig. 2). To date, exploiting the full Run-2 integrated luminosities (140 fb\({}^{-1}\)) of pp collisions at 13 TeV, no localized dijet excess has been found up to a few TeV [75, 76, 77]. For a graviton mass of 1 TeV, our limits predict \(g_{\mathrm{G}\gamma}\lesssim 4.5\cdot 10^{-2}\) TeV\({}^{-1}\) which, using the branching fraction of the graviton decaying into two jets, would translate into a production cross section smaller than 0.44 pb for the \(\mathrm{pp}\to\mathrm{G}\to jj\) process at 13 TeV [48]. However, the pQCD cross section for \(\mathrm{pp}\to jj\) at 13 TeV at \(m_{jj}=1\) TeV (with the difference in rapidities of the two jets being smaller than 1.2) has been measured to be more than 200 times larger, \(\mathcal{O}(100)\) pb within a few % total uncertainty [78]. At lower masses, the situation is even more dire, with pQCD dijet background invariant cross sections per mass bin increasing as a power law with exponent \(n\approx 5\). This explains why such a potential graviton would not have been observed in inclusive dijet searches, and that the exclusive photon-fusion-based search presented in this paper is competitive in the broader range of masses covered.
Similar searches for the inclusive production of RS and ADD gravitons have been performed in the diphoton channel in pp collisions at the LHC, \(\mathrm{pp}\to\mathrm{G}\to\gamma\gamma\). No diphoton spin-2 resonance excess has been neither found above the inclusive pQCD diphoton background, and exclusion limits for RS gravitons have been set by both ATLAS and CMS over \(m_{\mathrm{G}}\approx 100\)-3000 GeV [79, 80]. Since inclusive searches for diphoton resonances include, by definition, also any potential \(\gamma\gamma\to\mathrm{G}\to\gamma\gamma\) production, the reader may wonder what advantage the exclusive searches presented here provide in terms of limits settings. First, the exclusive final states in UPCs can probe much lower diphoton masses without pileup and collision backgrounds that prevent photon isolation in
inclusive searches. Second, any exclusive \(\gamma\gamma\) graviton searches are complementary to the inclusive ones, as they have different sources of systematic (experimental and theoretical) uncertainties. Third, arguably the clearer advantage is in the very different sizes of the irreducible backgrounds as shown in Fig. 7, which compares the cross sections for the continuum pQCD (\(\mathrm{pp}\to\gamma\gamma+X\)) and exclusive \(\mathrm{LbL}\) (\(\mathrm{pp}\to\mathrm{p}\gamma\gamma\mathrm{p}\)) diphoton backgrounds as a function of mass for proton-proton collisions at \(\sqrt{s}=14\) TeV. The parton-induced pQCD curve has been obtained at LO with MG5_aMC and scaled up by a \(K\)-factor of \(K\approx 4\)-2, at low and high masses respectively, derived from next-to-next-to-leading-order (NNLO) calculations [81, 82]. The LbL curve has been computed with gamma-UPC+MG5_aMC as explained in Section 2, the local "bump" at \(m_{\gamma\gamma}\approx 350\) GeV is due to the onset of top-antitop quark boxes (aka. the resonant anomalous threshold [83]). This figure shows that the cross sections for inclusive \(\gamma\gamma\) are up to 6 orders-of-magnitude larger than the exclusive \(\gamma\gamma\) ones: At \(m_{\gamma\gamma}\approx 1\) TeV, \(\mathrm{d}\sigma(\mathrm{pQCD},\mathrm{LbL})/\mathrm{d}m_{\gamma\gamma} \approx 50\) ab/GeV, 1 zb/GeV, respectively. Namely, the exclusive graviton \(\gamma\gamma\) production and decay mode considered in this paper is subject to negligible SM irreducible backgrounds, and with proper control of instrumental effects (and for equal integrated luminosities) the G-\(\gamma\) coupling limits that can be set from exclusive analyses can be more competitive than those from standard inclusive graviton searches at the LHC. This is particularly true for potentially nonresonant gravitons (or with a width much larger than the detector diphoton resolution), where the signal would be further washed out and swamped by the pQCD background in inclusive searches, but would still appear as an excess over the negligible LbL cross section in exclusive studies.
Figure 7: Cross sections for continuum NNLO (\(\mathrm{pp}\to\gamma\gamma+X\)) and exclusive \(\mathrm{LbL}\) (\(\mathrm{pp}\to\mathrm{p}\gamma\gamma\mathrm{p}\)) diphoton backgrounds as a function of mass, for proton-proton collisions at \(\sqrt{s}=14\) TeV.
## 5 Conclusions
We have examined the possibility of searching for massive spin-2 (graviton) particles produced via two-photon processes and decaying back to photons (\(\gamma\gamma\to\mathrm{G}\to\gamma\gamma\)), in ultraperipheral collisions (UPCs) of lead ions, \(\mathrm{PbPb}\to\mathrm{Pb}\,\mathrm{G}(\gamma\gamma)\,\mathrm{Pb}\), and of protons, \(\mathrm{pp}\to\mathrm{p}\,\mathrm{G}(\gamma\gamma)\,\mathrm{p}\), at the LHC, as well in three-photon final states in \(e^{+}e^{-}\) collisions measured at the Belle II, BES-III, and LEP experiments, \(e^{+}e^{-}\to\mathrm{G}(\gamma\gamma)\gamma\). We have considered a minimal effective field theory model that describes a linearized kinetic Lagrangian for a spin-2 graviton and its coupling to all standard model particles. Such a universal-coupling graviton model allows to consider a free G-\(\gamma\) coupling for the case of \(e^{+}e^{-}\) collisions with three-photon final states without breaking the perturbative unitarity of the calculations. Based on similar searches performed for pseudoscalar axion-like particles (ALPs), and taking into account the different cross sections, \(\gamma\gamma\) partial widths, and decay kinematics of the pseudoscalar and tensor particles, we can reinterpret existing experimental bounds on the ALP-\(\gamma\) coupling into G-\(\gamma\) ones. With this goal, simulations have been run for graviton and ALPs samples, reproducing the experimental searches for diphoton and triphoton excesses. For PbPb and pp collisions, 95% CL upper limits \(g_{\mathrm{G}\gamma}\approx\) 1-0.1 TeV\({}^{-1}\) have been set over \(m_{\mathrm{G}}=5\) GeV to 100 GeV, and over \(g_{\mathrm{G}\gamma}\approx 0.5\)-0.05 TeV\({}^{-1}\) for \(m_{\mathrm{G}}=150\) GeV and 2 TeV, respectively. Compared to standard inclusive searches of high-mass diphoton bumps above the pQCD continuum at the LHC, the exclusive UPC final states benefit from reduced pileup backgrounds, negligible SM model irreducible continuum backgrounds, and the possibility to probe graviton masses in the few-GeV range. The \(e^{+}e^{-}\) measurements allow further constraining the graviton-photon coupling down to \(g_{\mathrm{G}\gamma}\approx 1\) TeV\({}^{-1}\) at even smaller graviton masses, from 100 MeV up to about 10 GeV. Such bounds can be improved by factors of 100 at Belle II in the low-mass region, and of 4 at the HL-LHC at high masses, with their expected full integrated luminosities.
_Acknowledgments--._ Support from the European Union's Horizon 2020 research and innovation program (grant agreement No.824093, STRONG-2020, EU Virtual Access "NLOAccess"), the ERC grant (grant agreement ID 101041109, "BOSON"), and the French ANR (grant ANR-20-CE31-0015, "PrecisOnium"), are acknowledged.
**Appendix A: Statistical equivalence of limit-setting procedures (i) and (ii) of Section 3.**
In order to derive an exclusion limit on the signal cross section and its associated coupling (with \(\sigma_{g}=g^{2}\sigma_{g\equiv 1}\)), we need to assume a set of observed data. As commonly done, we assume that no statistical fluctuations are present in these pseudodata, which are usually dubbed "Asimov" data and that we denote here with a prime. As experimental data are used to derive the limits, the collected integrated luminosity \({\cal L}_{d}\) is considered in this discussion.
The observed events follow a Poisson distribution, and to simplify the discussion we can neglect the systematic uncertainties here. The statistical size of the event data sample, together with the prediction for the event rates, define the likelihood function needed:
\[L(\sigma)={\rm Pr}(n^{\prime}|b+\sigma{\cal L}_{d})\ {\rm with}\ {\rm Pr}(\hat{n}|n)= \frac{n^{\hat{n}}e^{-n}}{\hat{n}!}. \tag{1}\]
Here \({\rm Pr}(\hat{n}|n)\) is the probability density function of finding \(\hat{n}\) events if \(n\) (\(n^{\prime}\)) events are expected in the domain selected after experimental requirements, and \(b\) is the expected number of events from the background, given here mostly by the SM LbL prediction. We aim to obtain projected exclusion limits at 95% CL. Then, we define the posterior probability density for \(\sigma\) as \(L(\sigma)\pi(\sigma)\) where the prior is \(\pi(\sigma)=1\) if \(\sigma>0\), and 0 otherwise. In order to derive the limits, we assume that no event is observed, _i.e._\(n^{\prime}=0\), with the consequence that an upper bound on the signal event rate can be set. The higher posterior density region at \(1-\alpha\) credibility level is solved analytically and is simply given by:
\[1-\alpha=\frac{\int_{0}^{\sigma_{\alpha}}L(\sigma)\pi(\sigma)}{\int_{0}^{ \infty}L(\sigma)\pi(\sigma)}=1-e^{-\sigma_{\alpha}{\cal L}_{d}}. \tag{2}\]
This gives the upper limit cross section for the signal:
\[\sigma_{\alpha}=-\frac{1}{{\cal L}_{d}}\log(\alpha). \tag{3}\]
Then for a 95% credible interval, we take \(\alpha=0.05\) and the exclusion limit is simply given by \(\sigma_{\alpha}\approx 3{\cal L}_{d}^{-1}\). This implies that the corresponding upper limit on the ALP-photon coupling \(g_{a\gamma}\) is given by:
\[g_{a\gamma}=\sqrt{\frac{\sigma_{\alpha}}{\sigma_{a,{\rm gen}}}}\ g_{a\gamma,{ \rm gen}}. \tag{4}\]
Here, \(\sigma_{a,{\rm gen}}\) is the generated cross section for the ALPs production and \(g_{a\gamma,{\rm gen}}\) the corresponding ALP-\(\gamma\) coupling. Obviously, we can perform the same exercise for the graviton-\(\gamma\) coupling, leading to:
\[g_{{\rm G}\gamma}=\sqrt{\frac{\sigma_{\alpha}}{\sigma_{{\rm G},{\rm gen}}}}\ g_{{\rm G}\gamma,{\rm gen}}. \tag{5}\]
Then, the ratio of Eqs. (4) and (5) gives Eq. (18) of the method (ii) discussed in Section 3, as expected. |
2305.13479 | Rethinking Machine Learning Collective Communication as a
Multi-Commodity Flow Problem | We show communication schedulers' recent work proposed for ML collectives
does not scale to the increasing problem sizes that arise from training larger
models. These works also often produce suboptimal schedules. We make a
connection with similar problems in traffic engineering and propose a new
method, TECCL, that finds better quality schedules (e.g., finishes collectives
faster and/or while sending fewer bytes) and does so more quickly on larger
topologies. We present results on many different GPU topologies that show
substantial improvement over the state-of-the-art. | Behnaz Arzani, Siva Kesava Reddy Kakarla, Miguel Castro, Srikanth Kandula, Saeed Maleki, Luke Marshall | 2023-05-22T20:42:57Z | http://arxiv.org/abs/2305.13479v1 | # Rethinking Machine Learning Collective Communication as a Multi-Commodity Flow Problem
###### Abstract.
We show communication schedulers' recent work proposed for ML collectives does not scale to the increasing problem sizes that arise from training larger models. These works also often produce suboptimal schedules. We make a connection with similar problems in traffic engineering and propose a new method, TE-CCL, that finds better quality schedules (_e.g.,_ finishes collectives faster and/or while sending fewer bytes) and does so more quickly on larger topologies. We present results on many different GPU topologies that show substantial improvement over the state-of-the-art.
## 1. Introduction
Near-optimal collective communication optimizers [5, 27, 29] -- that optimize the communication routes and schedules of distributed training -- cannot scale to what cloud operators need. This is because cloud operators run large multi-tenant GPU clusters where they schedule distributed training jobs over many GPUs. Tools that find optimum topologies, hardware architectures, or co-optimize various aspects of distributed training [19, 30, 31] also rely on these optimizers and call them multiple times during their search.
Without communication optimizers GPU clusters spend a significant amount of time with idle GPUs: prior work reports the GPUs in BERT [8] and DeepLight [7] spent \(11\%\) and \(63\%\) of the time idle respectively [27]. The problem becomes worse as we move to faster GPUs. Current communication optimizers leave significant room for improvement: for example, we show we can improve upon state-of-the-art solutions such as TACCL [27]_by over \(2\times\)_ on its two chassis NDv2 topology [2] (Figure 11).
We scale near-optimal collective communication optimizers (_e.g.,_ SCCL [5]) -- that model the problem imperfectly but optimally solve their model -- to enable cloud operators to use them for today's large GPU collectives and improve their runtime to make them more usable as part of other collective optimizers such as [19, 30, 31] -- our goal is to improve the solution quality of state-of-the-art _heuristics_ (_e.g.,_ TACCL [27]) while maintaining the same ability to scale.
The input to a collective communication optimizer is a _demand_ (_e.g.,_ AllToAll, AllGather, AllReduce): a set of interconnected GPUs where each GPU has a certain amount of data to send to other GPUs in the interconnect. The goal of the optimizer is to produce routes and schedules that either maximize bandwidth utilization [29] or minimize job completion time [5, 27] for the input demand or both.
Near-optimal optimizers (_e.g.,_[5]) apply to a single chassis [5]. In contrast, operators require solutions that scale to topologies with 30-60 chassis (and project larger topologies) [6]. Heuristic scale but often produce highly sub-optimal solutions [27, 29]. This is becoming a problem as topologies grow and more users share the same underlying network.
SCCL cannot scale because it uses SMT solvers [28]. The heuristics avoid using SMT solvers and scale better but fail to account for one or more factors (_e.g.,_ identifying where traffic should be copied inside the network, enforcing synchronization barriers, and properly accounting for latency of transfers) and produce sub-optimal solutions as a result.
We propose an alternate solution: TE-CCL. Our insight is that we can model the problem of collective communication optimization through techniques from a class of problems known as multi-commodity flow.
Operators use multi-commodity flow problems in traffic engineering (TE) and use flow conservation constraints to model the flow of traffic -- they assign paths to maximize a cost function [3]. They, too, take a set of demands as input and produce routes and schedules that optimize various objectives. But the collective problem has nuances that are not present in a traditional multi-commodity flow model:
**Temporal variations.** Multi-commodity flow problems assume "sustained demand": such problems rely on a continuous flow of data between a source and destination (for several minutes), and this is why the demand in these problems is a bandwidth request (with units such as bits/sec). But GPUs in a collective have finite data to send - the demand in these problems is a transfer request (with units such as bits).
This means we can no longer minimize the delay on the longest path to minimize the transfer time as traditional flow problems do: we can no longer assume an uninterrupted flow of traffic to approximate the delay cost of transfers (see SS 2).
**Support for store and forward.** Traditional flow problems [3] do not model caches. We show in SS 6 that we can speed up the solver to find schedules faster if we use the available memory in GPUs.
**Supporting copy.** Unlike typical use-cases of the network flow formulation (_e.g.,_ in the TE context [14, 16]), collective communication often multicasts the same data to multiple parties, which requires the model to appropriately copy data within the network (and adjust the traditional flow conservation constraints accordingly).
Some prior works do extend multi-commodity flow problems to incorporate these concerns: _e.g.,_ Calendaring [17] supports deadlines on fixed-size transfers, NetStitcher [18] allows for store-and-forward, and several multicast TE works [10, 22] support copying (see SS 7). But, it is non-trivial to combine these techniques to add support for all three dimensions _simultaneously_ without affecting scalability.
We adapt multi-commodity flow problems to model all three behaviors and solve the general collective communication optimization problem. Our solution is a scalable mixed-integer linear program with optimality gap guarantees (based on the primal-dual theorem [3]). We show that this solution scales to much larger collectives than techniques such as TACCL [27] and SCCL [5] and improves the solution quality.
For certain collectives we can scale this solution even further by converting the MILP into an LP by removing all integer variables. In the general case, we improve scalability by partitioning the problem in time -- a technique inspired by the \(A^{*}\)[12] from robotics.
TE-CCL's solutions match the solution quality of SCCL and outperform the quality of state-of-the-art solutions such as TACCL [27] -- we show _a minimum of \(2\times\)_ performance improvement on the same 2 chassis NDv2 topology TACCL uses -- and shortest path schedules [31] because the optimization models the end-to-end problem (whereas these works contain consecutive optimizations that only see a partial view of the problem at each stage), and adds support for copy and store-and-forward. As part of TE-CCL we are also able to account for multi-tenant, heterogeneous topologies where links have different latency, and bandwidth costs and tenants have different priorities to support cloud-scale GPU clusters better.
Our contributions are as follows:
* We present a novel, scalable, solution to the collective communication optimization problem. To the best of our knowledge, this is the first multi-commodity based solution to this problem. This new mode of thinking provides an opportunity to improve other aspects of machine learning collectives such as topology design and adapting to failures.
* We show how to scale this solution to larger topologies through a linear program for AllToAll-like demands and a technique inspired by \(A^{*}\) in the general case.
* We evaluate TE-CCL both on popular topologies and on the proprietary, large-scale topologies from a large public cloud. We show our solution improves the solution quality of TACCL [27] by a minimum of \(2\times\) in many scenarios. We find TACCL's heuristic is unreliable (produces different solutions in each run) and cannot find a feasible solution in many cases. In contrast, TE-CCL is reliable, produces the same solution in each run, and finds a feasible solution in instances where TACCL was infeasible. TE-CCL and TE-CCL have similar abilities to scale although TE-CCL was able to run on much larger topologies.
## 2 Background and Motivation
We present the necessary background on collective communication and motivate the need for scalable communication schedules for ML collectives. We then describe the multi-commodity flow formulation, how it relates to collective communication optimization, and show why we should modify them to model delay, store-and-forward, and copy.
### The need for fast collective scheduling
ML collectives have pronounced communication patterns with flavors of multicast aggregation trees: _e.g.,_ AllGather, AllToAll, ScatterGather (Figure 2 in TACCL [27] illustrates these communication patterns and how they differ).
These communication patterns constitute a _demand_ on the network where each GPU wants to send data to other GPUs. For example, in an AllGather demand, each source GPU intends to send all of its data to all other GPUs, and in an AllToAll demand, each GPU wants to send data to all other GPUs, but the data it sends to each GPU is different.
Collective communication optimizers take these demands as input and find solutions that route and schedule them efficiently to minimize transfer time. Operators use these optimizers in their multi-tenant GPU clusters and as part of solutions that help improve their offerings [19, 30, 31].
Most optimizers use the \(\alpha-\beta\) cost model [13]. \(\beta\) is the transmission time of bytes on a link (how long it takes for the NIC to get the bytes on the wire): if we send \(\mathcal{B}\) bytes on a link with capacity \(C\) bytes per second, it takes \(\frac{\mathcal{B}}{C}\) seconds for the bytes to cross that link and \(\beta=\frac{1}{C}\). \(\alpha\) is the constant delay of a link. In its simplest form, we can think of it as the propagation delay over a link, but it can also include other factors such as the fixed compute cost of consolidating the data and making the call to the network stack to transmit it. It takes \(\alpha+\beta S\) seconds to send a chunk of size \(S\) over a link.
Most existing optimizers fail to scale to large topologies (_e.g.,_ SCCL [5]) or produce sub-optimal schedules (_e.g.,_ NCCL [5; 31], TACCL [27]). SCCL uses SMT solvers and does not scale. TACCL separates the routing and scheduling problems and fails to co-optimize the two. The shortest path first algorithm in [31] fails to leverage copy.
### Background on network flow solutions
Many works find optimal routes for wide area traffic engineering (WAN-TE) and for multicast networks (_e.g.,_[1; 9; 10; 14; 16; 17; 18; 21; 22]). These problems also take as input a set of _demands_: "rate requests" between a source-destination pair. The solutions aim to meet these demands and maximize the total flow the network carries, or the network utilization, or maintain fairness without violating capacity constraints.
Although these formulations take different forms (most notable of these is the path-formulation which takes a set of paths as input and only allows flows to go over the input paths [14; 16]) they share the following key components:
**Capacity constraints.** Ensure that the traffic the solution allocates on a link never exceeds its capacity.
**Flow conservation constraints.** Ensure that the solution does not create traffic "out of thin air" but that each non-source node forwards what it receives or consumes it.
**An objective.** The objective encodes what the optimization is trying to minimize or maximize: the cost model. The most common TE objectives include max-min fair rate allocations, total satisfied demand, or the link utilization.
We observe that the multi-commodity flow and the collective communication optimization problems have many commonalities: both take a set of demands and a topology as input and produce routes (and schedules) to optimize an objective.
But the two are different as the collective optimizer requires we account for: copy, store-and-forward, and temporal behavior (and the impact on the latency cost as a result). We next discuss each of these in detail:
**Temporal behavior.** In the collective problem, the source wants to transfer a fixed number of bits -- once the network satisfies the demand from that source, the demand goes to zero and frees up capacity. The network can then re-assign this capacity to other demands. This happens in the traditional TE as well but at larger time-scales and most deployed TE solvers periodically re-solve the optimization to handle it. This is not a problem at face-value -- after all we solve the problem offline -- but it impacts the scalability of the solution in the collective setting. Calendaring [17] and Netstitcher [18] both model this, but they do not model propagation delay and hence fail to address an important side-effect:
_Modeling delay (the a-cost)._ Most TE solutions (_e.g.,_[10; 22]) compute the delay-cost as the maximum delay across all paths where the delay of a path is the sum of the delay on each of its links. These models assume the total time needed to fulfill a demand is the transmission delay (or \(\beta\)-cost) + this delay-cost.
We show why this model breaks through an example (Figure 0(a)). Here, two sources (\(s_{1}\) and \(s_{2}\)) want to send a unit of traffic (\(\circled{3}\) and \(\circled{2}\)) to destination \(d\). The links on the path from \(s_{1}\) to \(h_{3}\) have a propagation delay \(\alpha_{1}\) and those on \(s_{2}\) to \(h_{3}\) have a propagation delay of \(\alpha_{2}\) where \(\alpha_{2}=2\beta+3\alpha_{1}\). If we take the traditional TE approach to model the delay, the path with the maximum delay is the one between \(S_{2}\) and \(d\) which has a propagation delay of \(\alpha_{2}\). It also takes an additional \(4\beta\) for the traffic to get from both \(S_{1}\) and \(S_{2}\) to \(d\): the TE solutions estimate \(\alpha_{2}+4\beta\) as the completion time.
But because of the higher propagation delay on the link \(s_{2}\)-\(h_{3}\) the data from \(s_{1}\) and \(s_{2}\) both arrive at \(h_{3}\) at the same time (\(t=\beta+\alpha_{2}\)), since the propagation delay on the link \(h_{3}\)-\(d\) is zero, the total time to complete the transfer is \(\alpha_{2}+3\beta\).
The impact of \(\alpha\) is greater for smaller transfers (Figure 2): the error in our estimate of algorithm bandwidth for a schedule where we do not model \(\alpha\) to one where we do goes up to \(100\times\).
**Store-and-forward.** Most nodes in a collective topology can buffer incoming traffic before sending it out. We can use this to improve solver time (Figure 0(b)) as the number (space) of optimal solutions increases. In Figure 0(b), without store and forward, in the first second, any two nodes (\(3\) schedules) can send their chunks to \(h\). With store and forward, we can have three additional schedules where all three sources send to \(h\) in the first second, and we then choose in which order to send them to the destination in the next. The solution quality is the same in both cases (we satisfy the demand in \(3\)s). WE confirmed this in our experiments in SS 6.3 across all the scenarios we considered. For some collective demands store-and-forward may also help with transfer time (though it did not in our experiments).
But traditional TE does not model buffering [14; 16]. Netstitcher [18] models store and forward but assumes flows do not compete for bandwidth and solves a separate optimization for each flow: it is sub-optimal and does not scale. Some multi-cast TE solutions model intermediate caches [10], but they fail to account for the delay, and it is difficult to modify them to do so.
**Copy.** Some collective demands (_e.g.,_ AllGather) consist of sources that send the same data to multiple destinations (_i.e.,_ multicast). Traditional TE does not model copy (_e.g.,_ SWAN and B4 [14; 16]) and produces sub-optimal solutions (see Figure 0(c)). Multi-cast TE [10; 22] addresses this problem but fails to model delay (these works assume sustained demands) and, in some instances [22], store-and-forward.
We formulate the collective communication optimization problem as a TE problem that supports these elements. The challenge is to maintain scalability. We show our model, as-is, outperforms current state-of-the-art solutions such as
SCCL [5] in its ability to scale and TACCL [27] in its solution quality. We further improve it's scalability through a technique inspired by \(A^{*}\) from robotics.
## 3. Solution
We next describe how we model the collective communication problem as a multi-commodity flow problem. We build on the ideas in Calendaring [17] and Netstitcher [18] to model delay, model store-and-forward, and copy.
But this solution does not scale to topologies with more than 64 GPUs. We scale it by changing our mixed integer program (MILP) into a linear program (LP) for demands such as AllToAll where sources send different data to each destination and do not benefit from copy (SS 4.1); and through a more general solution we call \(A^{*}\)(Appendix D).
### The general model
We describe our notation in Table 1. Like any other multi-commodity flow problem we need to specify: capacity and flow conservation constraints, and an objective.
But, to model delay, store-and-forward, and copy we need to introduce a few new concepts: chunks, epochs, and buffers.
Our notion of chunks is similar to prior work (_e.g.,_ SCCL): a chunk (like packets) is a block of bytes1.
Footnote 1: We allow our solution to split chunks into smaller blocks when we move to the linear program form.
We use epochs (similar to how SCCL uses rounds) to make time discrete: epochs are fixed periods of time -- our solution produces a schedule that tells the user in which epoch they should send a chunk and on which link.
We discuss chunk sizes and epoch durations in detail in SS 5. For now, we assume \(\tau\) is the epoch duration and \(T_{ij}\) is the capacity of a link (where the units are chunks per second), and that epoch is sufficient for at least one chunk to traverse any link.
We use buffers to model store-and-forward. To simplify the explanation we assume each node has enough buffer to store the entire network demand if it needs to (we show how to remove this assumption in Appendix B).
To model copy, we need to track each chunk: we use \(F_{s,t,i,j,k,c}\) and \(B_{s,i,k,c}\) to track whether chunk \(c\) from source \(s\) is going over link \((i,j)\) or is in node \(i\)'s buffer at epoch \(k\) respectively.
Figure 1. Examples that show why we should model properly: (a) \(\alpha\)-delay: the maximum delay across all the paths is an incorrect estimate; (b) store-and-forward: buffers improve the solver time as there are more solutions (c) Copy: we can leverage copy to use the available bandwidth more efficiently.
Figure 3. An example of why we need integer variables to track each chunk. If we allow partial chunks (§3, §3) and copy at the same time, we run into a situation where the optimization can send the same copy of part of a chunk (§5) to two neighboring nodes (in this case \(d_{1}\) and \(d_{2}\)) and they can forward it along to the destination (\(d_{3}\)). Since the formulation has no way of knowing these two halves are the same, it thinks \(d_{3}\) has received the full chunk.
Figure 2. The relative error in the algorithm bandwidth estimate (the output buffer size / transmission time) of a collective schedule that does not model alpha compared to one that does. We use a proprietary topology from a public cloud with \(2\) chassis, \(8\) GPUs, and \(40\) edges where the \(\alpha\) of inter-GPU and the GPU to the switch links is \(0.6\) and \(0.75\) microseconds respectively.
We need to use integer variables for \(F_{s,i,j,k,c}\) and \(B_{s,i,k,c}\) to model copy -- we cannot allow chunks to be split into smaller pieces. We use the example in Figure 3 to explain why. Source \(s\) sends the first half of a chunk () to both destinations \(d_{1}\) and \(d_{2}\). These nodes then both forward it to \(d_{3}\): they have no way of knowing this is the same half. The optimization now thinks it has delivered the full chunk to \(d_{3}\) while it has only delivered one half of it twice: it will send the second half of the chunk to both \(d_{1}\) and \(d_{2}\) but not to \(d_{3}\). Using integers for \(F_{s,i,j,k,c}\) and \(B_{s,i,k,c}\) allows us to avoid this problem (we do not need this for demands that do not benefit from copy \(\lx@sectionsign\)4.1). We can increase the number of chunks to decrease the size of each individual chunk and support smaller transmission blocks (the optimization automatically consolidates them to bigger transmission units if needed) -- but this increases the number of variables and slows down the optimization.
We now have everything we need:
**Capacity constraints.** Capacity constraints ensure we do not send more data than the link can carry in an epoch. We have:
\[\text{Capacity Constraint}(i,j,k)\triangleq\] \[\sum_{s\in N}\sum_{c\in C}F_{s,i,j,k,c}\leq T_{ij}\tau\]
**Flow conservation constraints.** The purpose of these constraints is to ensure the network does not create or lose traffic. The traditional form of these constraints specifies: a node should either consume or forward all of the traffic it receives. Here, we need to change these constraints to account for: (a) copy -- nodes can create new traffic; (b) delay.
To model delay, we need to ensure a node does not forward a chunk if it has not received it. We first compute \(\delta_{ij}=\frac{\alpha_{ij}}{\tau}\): number of epochs it takes for a chunk to traverse a link. Traffic that node \(i\) sends to node \(j\) at the beginning of epoch \(k\) arrives at node \(j\) by the end of epoch \(k+\lceil\delta_{ij}\rceil\). Node \(j\) can forward a chunk it receives from node \(i\) if node \(i\) sent it \(\lceil\delta_{ij}\rceil\) ago.
Copy, by definition, violates traditional flow conservation constraints: it creates traffic where it didn't exist before. But, the node does not need to copy the chunk on the same link in the same epoch. We use this, along with \(\delta_{ij}\) to rewrite the flow conservation constraints as follows:
\[\text{Flow conservation constraints}(s,n,k,c)\triangleq\] \[B_{s,n,k,c}+\sum_{v_{j}|(j,n)\in E}F_{s,j,n,k-\lceil\delta_{ jn}\rceil,c}\geq\max_{v_{j}|(n,j)\in E}F_{s,n,j,k+1,c}\]
This constraint encodes that what the node \(n\) has in its buffer along with what it receives in epoch \(k\) has to be larger than what it sends out in the next epoch on _each_ of its outgoing links. We track the buffer contents as follows:
\[\text{Buffer constraints}(s,n,k,c)\triangleq\] \[B_{s,n,k,c}=B_{s,n,k-1,c}+\sum_{v_{j}|(j,n)\in E}F_{s,j,n,k- \lceil\delta_{jn}\rceil-1,c}\]
The buffers accumulate all traffic the GPU has received up to that point. Nodes have enough memory for this: for collective demands such as AllGather each GPU needs all the chunks that are sent over the network and stores them anyway. But it is straight-forward to model limited buffers as well if we track what we should remove from the buffer in each epoch (see Appendix B). We evaluate the benefit of buffers using an AllGather demand in \(\lx@sectionsign\)6.
\begin{table}
\begin{tabular}{l l} \hline
**Variable** & **Description** \\ \hline \hline \(N\) & Set of nodes in the graph \\ \(S\) & Set of nodes in the graph that are switches (\(S\subset N\)) \\ \(E\) & Set of edges in the graph (\(E\subseteq 2^{N\times N}\)). Edges are unidirectional. \\ \(C\) & Chunk IDs (\(C=\{0,1,2,\ldots,\mathbb{C}\}\)). Each node has \(\leq\mathbb{C}+1\) number of chunks. \\ \(D\) & Demand function (\(N\times C\times N\rightarrow\{0,1\}\)) where \(D_{s,c,d}\) is whether destination \(d\) wants chunk with id \(c\) from node \(s\) \\ \(\tau\) & Epoch duration \\ \(K\) & The set of epochs (\(K=\{0,1,2,\ldots,\mathbb{K}\}\)) \\ \(F_{s,i,j,k,(c)}\) & Amount of source \(s\) chunks that are going over link \((i,j)\in E\) at epoch \(k\in K\) \\ \(B_{s,i,k,(c)}\) & Amount of source \(s\) chunks that are in node \(i\)’s buffer at the _start_ of epoch \(k\) \\ \(T_{ij}\) & Capacity of link \((i,j)\in E\) \\ \(\alpha_{ij}\) & Fixed latency associated with link \((i,j)\in E\) \\ \(\delta_{ij}\) & Number of epochs contained within an \(\alpha_{ij}\) for each link \((i,j)\in E\) \\ \(\text{R}_{s,d,k}\) & Source \(s\) chunks that node \(d\)_read_ off of the network \(in\) epoch \(k\) \\ \(\mathcal{R}_{s,d,k,(c)}\) & Source \(s\) chunks read off the network by \(d\)_up to_ epoch \(k\). \\ \hline \end{tabular}
\end{table}
Table 1. Our notation. We put in parentheses the index (c) because we only use it when demands benefit from copy. When we model copy the values of \(F\) and \(B\) are integers. We show for some demands (where copy is not useful) we can use real variables instead in \(\lx@sectionsign\)4.1.
The first and last epoch's flow conservation constraints are slightly different from the above: a node does not receive anything in the first epoch and doesn't send anything in the last. We refer the reader to the appendix for details due to space constraints (see Appendix A).
We next need to account for demands: we need to make sure all demands are met at the end.
**Destination constraints.** These constraints ensure each node receives its full demand by the end:
\[\begin{split}&\text{Destination constraints}\big{(}s,d,k,c\big{)}\triangleq\\ &\mathcal{R}_{s,d,k,c}=\min(D_{s,d,c},B_{s,d,k+1,c})\quad\&\\ &\mathcal{R}_{s,d,k,c}=D_{s,d,c}\end{split}\]
where \(\mathcal{R}_{s,d,k,c}\) is whether \(d\) has received chunk \(c\) of source \(s\) by epoch \(k\). These destination constraints are different from their counterparts in traditional TE models. This is because of copy: \(d\) may want a chunk and also relay the chunk to others. Hence, we cannot assume \(d\) wants to consume everything in its buffers. This is why we take the minimum of \(D_{s,d,c}\) and \(B_{s,d,k+1,c}\). We ensure \(d\) eventually receives its full demand by the last epoch \(k\) by setting \(\mathcal{R}_{s,d,k,c}\) to \(D_{s,d,c}\).
**Modeling switches.** So far, we have only modeled the behavior of GPU nodes. While some topologies (_e.g.,_ within a single DGX1 node [5]) only consist of GPUs, almost all larger topologies use switches to connect GPU blocks. We have to model switches differently because they have limited memory: we cannot buffer chunks at the switch. Hence, we set the buffer at each switch to zero.
Traffic needs to pay the \(\alpha\) delay cost of two links to cross a switch: one from the node to the switch and one from the switch to the node.
Most of today's switches support copy [9], and so we model switches with this assumption (switches have the same flow conservation constraint as other nodes). But we can also model switches without this capability to support legacy hardware. One way is to replace the flow conservation constraints at the switch with the traditional TE flow conservation constraints (what comes into the switch must go out).
Another option is to use the approach from TACCL [27]: replace switches with _hyper-edges_ and allow the user to choose which hyper-edges to allow. For this second model we need to add additional constraints and due to limited space we refer the reader to Appendix C for the details.
The former two approaches are easier to use in practice: the user does not need to specify a sketch (which is a crucial in TACCL) or pick which GPU communicates with which other GPU -- when we looked at the TACCL code we found the authors used their uc-min and uc-max strategy along with the user-specified sketch to automatically find which links to enable for switches within the node, but for cross-node links they pre-identified which links perform best manually. We need to understand the topologies well to write such sketches and we found it difficult when we evaluated new topologies with TACCL. In contrast, our solution requires no human in the loop -- the user only needs to specify the topology and the demand matrix -- but the solver is slightly slower.
**The objective.** Our optimization objective is to finish the transfer as quickly as possible. We can encode this as follows:
\[\text{Objective function}\triangleq\sum_{\forall k\in K,\forall s,d\in N :s\neq d}\frac{1}{k+1}\mathcal{R}_{s,d,k}\]
Notice how the objective gives fewer rewards as \(k\) increases: the objective improves if the schedule satisfies the demand as soon as possible. If we combine the objectives with our constraints we arrive at an optimization that maximizes the objective subject to all of the above constraints.
One nuance here is that the optimization has multiple optima: the objective does not discourage solutions where we send flows that do not satisfy any demand (as long as the schedule satisfies all demands as quickly as possible the solution is optimal). Such solutions are clearly wasteful.
To avoid such _silly_ cases, we can do one of two things: (a) we can either add a term to the objective to discourage unnecessary flows; or (b) we can zero out those flows in post-processing the solutions. The first results in higher solver runtimes as it becomes harder for the solver to prove optimality.
We use the latter approach where we run an algorithm similar to a reverse DFS. We start from each destination, and track the flows from that destination to the source until we account for its entire demand. We then remove (zero-out) all remaining flows as there is no demand corresponding to them. This takes \(\mathcal{O}\big{(}|N|+|E|\big{)}\) time where \(N\) is the number of nodes in the graph and \(E\) is the number of edges.
## 4 Scaling
Our formulation is general and pushes beyond the scale boundaries of SCCL and outperforms the solution quality of TACCL. But it is slow for topologies with more than 32 chassis. We next show two methods to scale this solution. The first works in situations where copy is not useful (_e.g.,_ AllToAll) and preserves optimality. The second is general (_i.e.,_ supports copy): it solves the problem by partitioning it in time (its goal, in each time partition, is to make as much progress as it can towards finishing the transfer). This later model is sub-optimal, but outperforms the TACCL heuristic (see SS 6) as it more accurately captures the optimization incentives and constraints. Its formulation allows users to trade-off optimality and speed by changing the number of partitions (smaller partitions increase sub-optimality but improve scalability).
### Scaling by converting to a linear program
There is only one reason we needed integer variables for our model: copy! But some demands do not benefit from copy -- this is when each destination wants a unique segment of information from each source. In these scenarios we can change our formulation into a linear program (LP). LPs are convex optimization programs which we can solve in polynomial time and scale much better than MILPs.
We remove support for copy and modify the flow conservation constraints back to their traditional form. The following constraint dictates: a node either buffers a chunk it received, forwards it in the next epoch, or consumes it. Notice a node can consume a chunk it received at the end of an epoch. We do not track individual chunks since we no longer need to worry about duplicates. This reduces the number of variables.
Flow conservation constraints \[\left(s,n,k\right)\triangleq\] \[\sum_{\left\{j\right\}\left\{\left\{\left\{\left\{\left\{\left\{ \left\{\left\{\left\{\left\{\left\{\left\{\left.\right\}\}\right\}\right\}} \right\}}\right\}}\right.\right)\in E\right\}}F_{s,j,n,k-\left\{\beta_{jn}\right\} \right\}+B_{s,n,k}=\] \[B_{s,n,k+1}+R_{s,n,k}+\sum_{\left\{j\right\}\left\{\left\{\left\{ \left\{\left\{\left\{\left\{\left\{\left.n\right\}\right\}\right\}\right\} \right\}\right\}\right\}}\right.F_{s,n,j,k+1}\]
The flow conservation constraints for switches are different: a switch does not consume chunks and does not buffer them -- we remove those terms from the flow conservation equations.
Since destinations no longer need to both consume _and_ forward chunks, we can modify the destination constraints:
\[\text{Destination constraint}\left(s,d,k\right)\triangleq\] \[\mathcal{R}_{s,d,k}=\sum_{r=0}^{k}R_{s,d,r}\quad\&\] \[\mathcal{R}_{s,d,k}=\sum_{\forall e}D_{s,d,e}\]
Our LP produces a _rate allocation_ to demands that originate from each source on each link. From this we generate a schedule that we then execute in hardware (we translate these rates to paths for each chunk through the same DFS-like solution we described earlier). This is a straight-forward algorithm -- TE solutions also use similar algorithms that we can adopt (Friedman and Kemp, 2012; Friedman and Kemp, 2012) -- and we omit it due to space constraints.
### Scaling using the \(A^{*}\) technique
The LP form allows us to scale the solution to large topologies, but it does not permit copy. Copy is important for demands such as AllGather (see SS 2). We also provide a second scaling method inspired by the \(A^{*}\) technique in robotics (Grover et al., 2013).
We partition the problem into multiple rounds. In each round we no longer find a solution that satisfies all demands but instead motivate the solver to make as much progress towards this goal as it can. These optimizations have fewer variables and are faster. We sequentially solve them one after the other until we reach a round where we meet all demands.
Here we need to address two new modeling challenges:
**Encoding the right incentives.** We need to remove the constraint that required the optimization to meet all demands by the last epoch -- otherwise the optimization in each round may become infeasible. This means our objective function is no longer sufficient: it only says _if_ it is feasible to satisfy a demand do so as fast as possible, but it does not reward incremental progress -- we need to augment our objective with a term that rewards the optimization for moving data closer to the destinations in each round. But how to do this in a way that preserves the MILP format?
We augment our topology with logical links that allow us to compute this reward function: we add logical edges to the graph that connect each node to all the destinations and add weights to each of these logical edges that correspond to the minimum distance -- we compute these weights using the Floyd Warshall algorithm (Friedman and Kemp, 2012) and the \(\alpha\)-delay cost of each edge -- from the node to each destination. We can now use these edges to encode a viable cost function which we can add to our original objective. Due to space constraints we refer the reader to the Appendix D for the details.
**Modeling delay.** Chunks that we send on any link \(\left(i,j\right)\) my not reach \(j\) by the end of the round (because of the \(\alpha_{ij}\)-delay on that link) but instead arrive in a future round. We therefore need to maintain state from one round to the next and incorporate these late arrivals in our formulation.
We refer the reader to the appendix for the full formulation.
## 5. Important Considerations
Earlier we described how to formulate collective communication optimization using a TE approach. All three formulations (the general MILP form, the LP form, and \(A^{*}\)) find solutions for any input demand but only the general MILP form and the \(A^{*}\) model support copy. There are a number of parameters in these formulations we need to choose carefully:
**Epoch durations and chunk sizes.** A side-effect of using integer variables in the MILP formulation and the \(A^{*}\)-based technique is that the choice of chunk-size and epoch duration is important (the LP is not sensitive to these settings) -- smaller epochs allow for finer-grained schedules that better leverage the available network capacity. To find the best chunk size we can sweep a range of values to find the best one quickly. We can also take this as an input -- smaller chunks allow for finer grained schedules but can increase the resource usage on a node. Users can also utilize solutions such as (Friedman and Kemp, 2012) to pick what is optimum for their work-flow.
To set the epoch duration we can do one of two things: (a) to get the best schedule from the vanilla MILP formulation we can set the epoch duration to the time it takes the slowest link to transmit a chunk -- the MILP cannot send anything
if we use smaller epochs because of the capacity constraints; (b) we can set the epoch duration based on the time it takes the _fastest_ link to transmit a chunk. Option (b) enables the MILP to produce finer grained schedules but to use it we have to modify the capacity constraints and the flow conservation constraints: the capacity constraints ensure we don't exceed the capacity constraint on the slowest link and the flow conservation constraints ensure we do not forward a chunk before receiving it. Due to space constraints we refer the reader to the appendix for the details (see Appendix F). We compare the two approaches in SS 6. Option (b) produces better schedules which is why we use it for most of our evaluations.
**Number of epochs.** We need to input an upper bound on the number of epochs which estimates how many epochs it may take to fully satisfy the demand: pick too small a number and the optimization will be infeasible, pick too large of a number and the MILP will be too large and too slow. To streamline finding the right number of epochs -- and to not burden the user with having to identify what numbers to use -- we develop a simple algorithm which finds a loose upper bound on how long we need to satisfy all the demands.
To find this number, we quickly sweep a range of transmission times: for each transmission time, we use coarser grain epoch durations (very large epochs) and run the optimization. Because we use large epoch sizes, we have fewer variables, which allows us to solve the optimization quickly. The solution of these runs is not optimal (because the epochs are too coarse), but it gives us an idea of how long we need when we switch to the optimal epoch duration. We describe the process in detail in Algorithm 1 in the Appendix E. We use the output to initialize the optimization which automatically identifies if a lower number of epochs is sufficient.
**Number of epochs in a round in \(A^{*}\).** We solve round after round of \(A^{*}\) until we deliver all the demands. Users can choose how many epochs to use in each round. The smaller the number of epochs in a round, the faster the optimization and the higher the optimality gap. Picking a small number of epochs per round also impacts the state we need to maintain. In our experiments, we set the number of epochs such that chunks do not arrive later than one round in the future.
**The topology, \(\alpha\), and \(\beta\) inputs.** TE-CCL takes the topology and the values for \(\alpha\) and \(\beta\) as input. We do not provide an independent method for computing these values.
**Which switch model to use.** We provide two switch models: one that allows the switch to copy chunks (to model networks with the SHArP protocol [(9)] enabled) and one which does not (the latter is similar to TACCL's hyper-edge model). It is up to the user to decide which model to use in the optimizer.
**Modeling variable bandwidth.** Our model supports networks with variable bandwidth. To add support for this, we have to assume bandwidth only changes from one epoch to the next. We can then take the capacity matrix for each epoch and use that in our capacity constraints.
**Use in multi-tenant clusters.** TE-CCL supports multi-tenant communication optimization: all our models accept a network demand as input -- to model a multi-tenant environment we have to change the demand matrix to the sum of the demands across all collectives. The capacity constraints will ensure we do not exceed network capacity and the objective ensures we minimize the total completion time across all tenants.
We can also support priorities across tenants (_i.e.,_ prioritizing one tenant's completion time over the others) if we add a separate buffer and read variable for each tenant: we can then add the priorities the objective function. This change increases the number of variables in the MILP which slow it down -- we may have to use \(A^{*}\) in this case but this does not impact the quality of the solution compared to when we solve a single tenant problem at the same scale.
**Scaling through intermediate solutions.** The solver we use, Gurobi [(25)], often finds an optimal solution and then spends a long time proving it is optimal -- often the solution does not improve even after the solver runs for an additional 10 hours. We therefore apply a timeout and stop the solver after 2 hours and use the solution at that point. Gurobi reports its progress through the primal-dual gap [(4)].
## 6. Evaluation
We implement our solution in Python. We use Gurobi [(25)] to solve the optimizations. We convert our solution into MSCCL [(5)], which can then port it into a schedule that runs on the hardware. We plan to release our code.
The goal in this evaluation is to:
* Compare TE-CCL to state-of-the art: both in scale and in terms of solution quality.
* Show TE-CCL scales to the large topologies.
* Show the impact of each of our different design choices.
**Metrics.** We use the following metrics to evaluate TE-CCL:
_Solver time._ The time it takes -- which includes the time to setup the variables and constraints in the solver -- to solve the collective optimization problem.
_Transfer time._ The time it takes for the transfer to complete: for all the nodes to receive their full demand.
_Output buffer size._ The data each GPU receives once we satisfy the demand (we borrow this from TACCL [(27)]).
_Transfer size._ The amount of data each GPU sends to others: for example, a GPU in an AllGather demand with a transfer size of 1 GB sends 1 GB of data to _each_ other GPU.
_Algorithmic bandwidth._ The output buffer size divided by the transfer time (this metric is from TACCL [(27)]).
**Topologies and workloads.** We evaluate TE-CCL using the topologies in Table 2. We use common topologies such as
DGX1, DGX2 [23], and NDv2 [2] as well as two proprietary topologies from a public cloud provider.
**TE-CCL variants.** We use three variants of TE-CCL in our evaluations: the optimal (where we use the vanilla MILP for AllGather and LP for AllToAll), the early-stop version for AllGather (where we use Gurobi's ability to find a good solution - which is at most 30% away from optimal - quickly), and \(A^{*}\) for AllGather.
Gurobi runs into numerical issues with AllToAll on large topologies (more than 64 nodes): we need to run it with a different configuration (method = 2[11]) which causes it to produce a feasible (but not optimal) solution. In those cases, we run the solver in a loop and do a binary search (on the number of epochs) to find the optimal solution.
We set the epoch duration based on the bandwidth of the fastest link. In the cases where \(\alpha>200\times\tau\) we increase the epoch duration by 5\(\times\) to avoid large models (since \(\alpha\) dominates this does not materially impact the solution).
TE-CCL solves optimization problems to produce a schedule, and the optimization is deterministic, outputting the same number of epochs to meet the demand every time we run it. The solver times also do not vary significantly for a given optimization across runs. **Baselines.** We compare our solution to two state-of-the-art solutions: TACCL [27] and SCCL [5].
_TACCL._ We obtained the TACCL code from the authors and track and report the solver time. TE-CCL takes an additional \(\beta\) compared to TACCL to route chunks through a switch: TACCL replaces the switch with direct edges between the nodes and only pays one transmission delay to cross that link whereas TE-CCL models the switch itself and pays two transmission delays -- one from the node to the switch and one from the switch to the node. To compare fairly against TACCL we change our model of the switch to do the same when comparing with TACCL.
_SCCL._ We compare to SCCL using the public SCCL codebase [20] and also re-ran our experiments using the SCCL artifact from their submission (which the authors gave us). We verified and confirmed with the authors we used SCCL correctly and that our numbers are correct.
**Platform.** We use the solvers and the schedules they produce to compute the transfer times and algorithmic bandwidth for SCCL, TACCL, and TE-CCL. We checked using a single 8 GPU DGX1 node that these estimates match what we get from running on hardware for both TE-CCL and TACCL.
We report the capacity and delay for the public topologies in the Appendix H.
**Unexplored avenues.** We show from testing on a DGX1 that TE-CCL's estimates of collective latency match the actual runtimes on prototype hardware. We do not have access and the budget to run hardware experiments at scale on different kinds of GPUs. Thus, the effect of factors such as congestion, message batch sizes and other GPU implementation artefacts on the collective latency remains an unknown. But our results on all of the other metrics such as solver times and our ability to scale to large topologies hold regardless.
### Comparison to SCCL and TACCL
**SCCL.** SCCL has two modes: one minimizes latency (least-steps) and one produces an instance solution (instance) with the number of chunks, rounds, and steps as input.
Our solution is equivalent to the former but the SCCL least-steps command took over a day to produce a solution for AllGather demands with more than 3 chunks and AllToAll demands with more than 1 chunk on a DGX1 topology (the SCCL paper does not evaluate this mode). In contrast, we ran TE-CCL with \(\max K=K=10\) (the maximum number of epochs the optimization can use to satisfy the demand) and \(25KB\) chunks, and it finished in \(\leq 0.65\)s for all AllGather demands and \(\leq 0.97\)s for AllToAll demands with less than 5 chunks.
We used \(25KB\) chunks to capture the impact of \(\alpha\) (\(\alpha=0.7\)\(\mu\)s) on the solutions (Table 3): for all \(>1\) chunk cases TE-CCL outperforms SCCL as it models the \(\alpha\) delay better -- it ensures a node receives a chunk before forwarding it but pipelines traffic; SCCL enforces a barrier instead. SCCL performs better in the 1 chunk case as TE-CCL cannot leverage its ability to pipeline.
We also compare with SCCL's instance solution (due to space constraints, we show the results in the Appendix G). To create an apples-to-apples comparison, we use the number of rounds in SCCL for K in TE-CCL -- since SCCL is no longer running an optimization -- and use \(\alpha=0\) (this is necessary as
\begin{table}
\begin{tabular}{l c c} \hline
**Topology** & **\# of GPUs per chassis** & **\# of edges per chassis** \\ \hline Internal 1 & 4 & 8 \\ Internal 2 & 2 & 2 \\ DGX1 & 8 & 32 \\ NDv2 & 8 & 32 \\ DGX2 & 17 & 32 \\ \hline \end{tabular}
\end{table}
Table 2. Our topologies. The internal topologies are from a large public cloud and are proprietary: \(\alpha\) is \(0.6\mu s\) and \(0.75\mu s\) on their GPU to GPU and GPU to switch links.
\begin{table}
\begin{tabular}{l c c} \hline
**Collective, \# chunks** & **SCCL (\(\mu\)s)** & **TE-CCL (\(\mu\)s)** \\ \hline \hline AllGather, 1 & 3.4 & 4 \\ AllGather, 2 & 5.1 & 5 \\ AllGather, 3 & 8 & 6.1 \\ AllToAll, 1 & 3.4 & 4 \\ \end{tabular}
\end{table}
Table 3. Comparing the transfer time from SCCL least-steps with TE-CCL (\(K=10\) and chunk size = \(25\) KB). TE-CCL can better pipeline chunks and so pays less \(\alpha\) cost with larger transfers.
our model will need more epochs otherwise to account for \(\alpha\)). We use the scenarios from Table 4 in SCCL [5] and run both solvers on a desktop with 6 cores and 32 GB RAM. SCCL failed to produce a solution for AllGather workloads with more than \(1\) chunk even after 3 days. TE-CCL runs faster than SCCL in almost all cases and even improves SCCL's solution quality by 33% in the AllToAll scenario. TE-CCL is slower than SCCL in one instance (\((6,7)\)): this is because in TE-CCL we solve for the optimal number of epochs, and we use a value for K that is too tight -- we can reduce the solver time to \(11\) seconds by increasing K to \(20\) (the quality of the solution does not change). We can use the \(A^{*}\) technique to speed up the solution further.
To fully highlight our runtime advantage over SCCL, we ran an AllToAll demand with 8 chunks using both solvers: SCCL timed out after 10032.7s and did not produce a schedule, whereas ours finished in 1.88s with a valid schedule that finished the transfer in \(21\mu\)s (for 25KB chunks).
**TACCL.** We compare the solver time and algorithmic bandwidth of TE-CCL and TACCL using AllGather and AllToAll demands and on DGX2 and NDv2 based topologies with up to 34 nodes (a 2-chassis DGX2 topology has 34 nodes) and on both internal topologies with up to 128 nodes. We ran all experiments on a Linux Ubuntu 20.04 VM with two Intel Xeon(R) Platinum 8380 CPUs with a total of 80-cores/160-threads and 512 GB RAM and used Gurobi 9.5.2 version as our solver. TACCL AllToAll does not terminate for large topologies (including the 2 chassis DGX2 AllToAll) -- we use a timeout of \(2+2\) hrs or \(4+4\) hrs for their routing and scheduling phases depending on the topology size.
TACCL ran out of memory and did not produce a solution for large Internal 2 topologies (with over 64 chassis) and for almost all Internal 1 topologies (with over 4 chassis). Table 4 reports the numbers for TE-CCL on \(\geq 64\) nodes topologies.
TACCL scales better on the NDv2 topology compared to internal topologies 1 and 2. In NDv2 only 2 nodes in a chassis connect to a switch but in internal topologies 1 and 2 many nodes in a chassis are connected to a switch -- TACCL replaces the switch with direct edges; as we increase the size of internal topologies 1 and 2 the number of such edges increases exponentially. The TACCL authors recommended we use a sketch that only uses a subset of these edges. Doing so improved the runtime for smaller topologies but TACCL still failed to produce a solution after 8 hours for larger ones.
AllToAll numbers for Internal 2 separately for clarity). We report the raw algorithmic bandwidths for TE-CCL variants in the appendix (see Table 8) for NDv2 2 chassis as a sample.
We use Gurobi's early-stop for AllGather demands to improve TE-CCL's ability to scale: this does not materially impact the quality of TE-CCL's solution -- even with an aggressive optimality gap threshold of 30% -- but allows TE-CCL to solve the problem faster in the AllGather scenario (we found TACCL also uses this under the hood - our solver time matches TACCL even when TACCL uses this feature). TACCL uses this early stop mechanism in the AllToAll case as well but we run TE-CCL to completion: TE-CCL always produces schedules that match or beat those of TACCL and in many cases it produces these schedules more quickly. We compare the two solver times in Figure 5.
### Scale
TACCL often crashes on large topologies, either due to requiring more than 400 GB RAM or memory leaks and segmentation faults. TE-CCL also requires a lot of memory in some cases (around 350 GB for AllToAll on large topologies), but we can control this by changing the epoch duration to trade off the quality of the solution with the amount of memory the solver needs. Table 4 summarizes our results on large topologies and reports the scale factor (EM). We use output buffer sizes larger than 16 MB -- as the number of GPUs increases, chunks become too small beyond this point. We adjust the epoch size by a factor of, at most, 4 for these cases to limit memory usage.
### Microbenchmarks
We next evaluate our design choices:
**Copy.** In-network copy is most helpful for large transfers where there is not enough capacity to transfer multiple copies directly from the source to each destination: we see in the largest transfer size (0.21 GB) copy reduces the transfer time by 50% for DGX1, the Internal 1 with \(\alpha=0\) and \(\alpha>0\), and 12.5% for Internal 2. In-network copy does not help with small transfers as there is enough capacity between the source and the destinations to send multiple copies of the data directly from the source. We use 4 chunks to complete these transfers.
**Small vs large epochs.** We investigate how the duration of epochs impacts the solver speed and the quality of the solution (Figure 8 where we use 2 chassis for each topology). In AllGather we only allow chunks to traverse one link in a single epoch: the length of the longest path dominates the transfer time when we use large epochs because the length of the epoch is too large compared to how long it takes for the chunk actually to traverse the link (on faster links). We see this more predominantly in the NDv2 and DGX2 topology where the fast links have 4\(\times\) higher bandwidth (large epoch duration is, therefore, 4\(\times\) small epoch duration) compared to slower ones. In contrast, we do not see a difference on Internal 1, where the links are mostly homogeneous.
**Store and forward.** We find a somewhat surprising result. Buffers don't impact the solution quality but only the solver time (Figure 9)! This is because of the nature of collective demands such as AllGather and AllToAll: because each node needs the same amount of traffic as it has to forward it
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Topology** & **Collective** & **\# GPUs** & **EM** & **Solver time** \\ \hline \hline Internal 1 & AG (A*) & 64 & 1 & 3000 \(s\) \\ Internal 1 & AG (A*) & 128 & 1 & 7 \(h\) \\ Internal 2 & AG (A*) & 128 & 1 & 1300 \(s\) \\ Internal 2 & AG (A*) & 256 & 2 & 2.8 \(h\) \\ Internal 1 & AtoA & 16 & 1 & 66 \(s\) \\ Internal 1 & AtoA & 32 & 1 & 215 \(s\) \\ Internal 1 & AtoA & 64 & 1 & 500 \(s\) \\ Internal 1 & AtoA & 128 & 2 & 800 \(s\) \\ Internal 2 & AtoA & 128 & 1 & 2600 \(s\) \\ Internal 2 & AtoA & 256 & 4 & 1500 \(s\) \\ \hline \hline \end{tabular}
\end{table}
Table 4. Large Topologies for which TACCL can’t synthesize the schedule. The solver time is the average TE-CCL time to synthesize the schedule and EM is the epoch multiplier factor to change the epoch duration from the optimal duration for scalability.
Figure 6. We compare TACCL and TE-CCL for AllToAll demands on Internal 2 with different number of chassis. TE-CCL is faster than TACCL in _all_ cases and also produces higher quality solutions.
Figure 7. The benefit of copy: for large transfers, copy helps finish the transfer faster.
can interleave consuming traffic with forwarding it to compensate for the lack of buffers. But in the presence of buffers the feasible space of solutions is larger which in many cases enables the solver to find the optimal solution more quickly (the speedup is \(71\)% and \(61\)% for Internal 1 and DGX1 respectively). We believe it is possible to formally prove this result but defer this proof to future work.
\(A^{*}\) **vs OPT.** We compared the quality of the \(A^{*}\) technique to the optimal on a 16-chassis Internal 2 topology with both \(\alpha>0\) and \(\alpha=0\). We used both single chunk and 2 chunk transfers.
When \(\alpha=0\), \(A^{*}\) finished in 86.61s (263.29s for 2 chunk demands) whereas the optimal took 346s (4392s for two chunks). The optimal solution was 10% better than \(A^{*}\) (6% in the 2 chunk case) -- transfer times were 3.48s vs 3.89s.
The results are similar when \(\alpha>0\): \(A^{*}\) finished in 137.02s (901.25s for the 2 chunk case) whereas the optimal took 363.40s (3047s). The optimal solution was 20% better (8% in the 2 chunk case).
## 7. Related work
TE-CCL provides a scalable method for collective communication optimization by using a network flow-based approach. Our solution supports unsustained demands, store-and-forward, and copy. Our work builds on prior work both in network traffic engineering and in collective optimization:
**Multi-cast TE.** Prior works have looked at traffic engineering for multi-cast networks [10, 22]. Oliveira and Pardalos [24] provide a comprehensive summary of these works. Blink [29] used these techniques to optimize collective communication but does not model delay and store-and-forward.
**WAN TE.** Many prior works in networking use the network flow model to scalably route traffic in wide area networks [1, 14, 16, 21]. However, most of these works assume sustained demands, copy, and store-and-forward. Among these works, Calendaring [17] provides a solution that models unsustained demands. NetStitcher [18] adds to this the support for store and forward but assumes flows do not compete for bandwidth. Neither of these works simultaneously model copy, store-and-forward, and delay.
**Prior work on collective communication optimization.** Many prior work have tackled the collective communication optimization problem [5, 26, 27, 29, 31]. We find these solutions do not scale to the topologies and data sizes we have in production today and those we anticipate for the future. TACCL is the most scalable of these solutions, but it has trouble scaling when it sends more than 1-2 chunks, and is sub-optimal. Work such as [19, 30, 31] aims to co-optimize either topologies and parallelization strategies ([30]) or collective scheduling and execution planning [19]. These works rely on collective communication optimizers as part of their search but do not provide optimal solutions to the problem themselves -- they can use TE-CCL as part of their search. Our work is complementary to these works.
## 8. Conclusion
We presented TE-CCL: a scalable collective communication optimizer that models the problem through a TE-based approach. We provide three algorithms to solve this problem: the MILP approach which optimally solves the general collective communication optimization problem and supports multi-cast; the LP form which is also optimal and much more scalable but removes support for multi-cast; and finally the \(A^{*}\)-based approximation method which is much more scalable than the MILP technique and continues to support multi-cast but is no longer optimal. We show our solution outperforms prior, state-of-the-art, techniques such as SCCL and TACCL by over \(2\times\).
**This work does not raise any ethical concerns.**
Figure 8. We compare the impact of small vs large epochs on the solver speed (a) and solution quality (b). We use 2 chassis for all topologies. Both graphs compute \(\frac{100(\text{small-large})}{\text{large}}\). The solver finds a solution faster with large epochs but produces better quality solutions with small ones.
Figure 9. We evaluate the impact of buffers on (a) and solution quality (b) solver time. We use 2 chassis for all topologies. Both graphs compute \(100(\text{without buffers--with buffers})\). Buffers don’t impact the solution quality in most cases, but only the solver times! The average speedups in solver time are: \(61\)%, \(-28.46\)%, \(0.23\)%, \(71\)% for Internal 1 without \(\alpha\), Internal 1 with \(\alpha\), Internal 2, and DGX1 respectively. |
2305.04105 | "When Words Fail, Emojis Prevail": Generating Sarcastic Utterances with
Emoji Using Valence Reversal and Semantic Incongruity | Sarcasm is a form of figurative language that serves as a humorous tool for
mockery and ridicule. We present a novel architecture for sarcasm generation
with emoji from a non-sarcastic input sentence in English. We divide the
generation task into two sub tasks: one for generating textual sarcasm and
another for collecting emojis associated with those sarcastic sentences. Two
key elements of sarcasm are incorporated into the textual sarcasm generation
task: valence reversal and semantic incongruity with context, where the context
may involve shared commonsense or general knowledge between the speaker and
their audience. The majority of existing sarcasm generation works have focused
on this textual form. However, in the real world, when written texts fall short
of effectively capturing the emotional cues of spoken and face-to-face
communication, people often opt for emojis to accurately express their
emotions. Due to the wide range of applications of emojis, incorporating
appropriate emojis to generate textual sarcastic sentences helps advance
sarcasm generation. We conclude our study by evaluating the generated sarcastic
sentences using human judgement. All the codes and data used in this study has
been made publicly available. | Faria Binte Kader, Nafisa Hossain Nujat, Tasmia Binte Sogir, Mohsinul Kabir, Hasan Mahmud, Kamrul Hasan | 2023-05-06T17:49:41Z | http://arxiv.org/abs/2305.04105v2 | When Words Fail, Emojis Prevail": Generating Sarcastic Utterances with Emoji Using Valence Reversal and Semantic Incongruity
###### Abstract
Sarcasm is a form of figurative language that serves as a humorous tool for mockery and ridicule. We present a novel architecture for sarcasm generation with emoji from a non-sarcastic input sentence in English. We divide the generation task into two sub tasks: one for generating textual sarcasm and another for collecting emojis associated with those sarcastic sentences. Two key elements of sarcasm are incorporated into the textual sarcasm generation task: valence reversal and semantic incongruity with context, where the context may involve shared commonsense or general knowledge between the speaker and their audience. The majority of existing sarcasm generation works have focused on this textual form. However, in the real world, when written texts fall short of effectively capturing the emotional cues of spoken and face-to-face communication, people often opt for emojis to accurately express their emotions. Due to the wide range of applications of emojis, incorporating appropriate emojis to generate textual sarcastic sentences helps advance sarcasm generation. We conclude our study by evaluating the generated sarcastic sentences using human judgement. All the codes and data used in this study has been made publicly available1.
Footnote 1: [https://github.com/Wrightly&Rong/Sarcasm-Generation-with-Emoji](https://github.com/Wrightly&Rong/Sarcasm-Generation-with-Emoji)
## 1 Introduction
Sarcasm is defined as the use of remarks that often mean the opposite of what is said in order to hurt someone's feelings or to criticize something in a humorous way2. Sarcastic remarks are often challenging to interpret considering their literal meaning differs greatly from the speaker's actual intent.
Footnote 2: [https://dictionary.cambridge.org/](https://dictionary.cambridge.org/)
Compared to verbal or in-person conversations, textual sarcasm presents additional challenges due to the absence of visual cues, vocal tone etc.
The presence of sarcasm makes it significantly harder for machines to understand the actual meaning of the textual data. This has motivated research in detecting sarcasm in textual data. In order to train machines to detect sarcasm, we need quality datasets that represent different aspects of sarcasm in text. Even though we have an abundance of social media data and resources, it can be difficult to collect correctly labeled sarcastic texts. Instead, many research have tried to generate texts that can accurately express sarcastic notions Joshi et al. (2015); Mishra et al. (2019); Chakrabarty et al. (2020). Many studies have also investigated strategies in incorporating sarcasm generation into chatbots Joshi et al. (2015, 2017).
Emojis, small ideograms that represent objects, people, and scenes Cappallo et al. (2015), are one of the key elements of a novel form of communication due to the advent of social media. Using emojis within texts can give us additional cues on sarcasm, replicating facial expressions and body language, etc. Incorporating emojis with texts for training will let the machines catch these cues easily Bharti et al. (2016). Subramanian et al. (2019)
\begin{table}
\begin{tabular}{l l} \hline \hline
**Non-Sarcastic Input** & **Sarcastic Output with Emoji** \\ \hline \hline \begin{tabular}{l} I really hate walking \\ in the rain. \\ \end{tabular} & \begin{tabular}{l} I really love the outdoors walking in the rain. I sat feeling \\ thoroughly miserable. \\ \end{tabular} \\ \hline \begin{tabular}{l} Mom is in a bad \\ mood today. \\ \end{tabular} & \begin{tabular}{l} Happy mothers day mom is \\ a well mood today. She \\ sounded tense and angry. \\ \end{tabular} \\ \hline \begin{tabular}{l} That movie was bad. \\ \end{tabular} &
\begin{tabular}{l} That movie was awesome. \\ Bad intelligence and political \\ incompetence. \\ \end{tabular} \\ \hline \hline \end{tabular}
\end{table}
Table 1: Sample sarcastic outputs with emoji generated from non-sarcastic inputs
observed that when emojis were included in the sentence, their emoji-based sarcasm detection model performed noticeably better.
In this study, we propose a new framework in which when given a non-sarcastic text as input, the text is converted into a sarcastic one with emoji where the emoji will specifically help to identify the sarcastic intent of the text. Table 1 shows a few sample non-sarcastic input and sarcastic output pairs with emoji. In order to implement the architecture, we have focused on two major components: Sarcastic text generation and Emoji prediction for the text. For textual sarcasm generation, we are incorporating the works of Chakrabarty et al. (2020) and Mishra et al. (2019) and for Emoji prediction, a deep learning model fine tuned on OpenAI's CLIP (Contrastive Language-Image Pre-training)3Radford et al. (2021) is used. The emoji prediction module along with the sarcasm generation module generates the final sarcastic text including emoji. This work provides two major contributions:
Footnote 3: [https://openai.com/research/clip](https://openai.com/research/clip)
1. Propose a novel multi-modular framework for sarcasm generation incorporating the reversal of valence and semantic incongruity characteristics of sarcasm while also including appropriate emojis.
2. Create and publish a sarcastic corpora which can serve as valuable training data for sarcasm detection models.
As far as our understanding goes, there has been no previous framework proposed on textual sarcasm generation that also incorporates emojis. This framework can aid downstream tasks by allowing a deeper understanding of sarcasm to produce more contextually relevant responses.
## 2 Related Work
Research on sarcasm have been a subject of interest for several decades. The following sub sections provide a brief overview of the past work done on different aspects of sarcasm.
### Studies on Sarcasm Detection
Sarcasm detection is a classification task in its most typical form. From a given text, the task includes classifying the text as sarcastic or non-sarcastic. Sarcasm detection is a fairly recent but promising research field in the domain of Natural Language Processing. Nonetheless, it serves as a crucial part to sentiment analysis Maynard and Greenwood (2014).
Most of these studies on sarcasm detection train and test on already available popular datasets such as the datasets used by Riloff et al. (2013), Khodak et al. (2017) and Cai et al. (2019). We observed that Twitter is predominantly the most popular social media platform used for sarcasm detection datasets although Reddit, Amazon and a few discussion forums were also seen being used. We also saw a shift in Sarcasm detection methodologies from rule-based approaches Riloff et al. (2013); Bharti et al. (2015), machine learning and deep learning approaches Bharti et al. (2017); Poria et al. (2016); Ghosh and Veale (2016) to transformed based approaches Dadu and Pant (2020); Kumar et al. (2021). We include two tables Table 9 and Table 10 summarizing the datasets and methodologies used in sarcasm detection in the appendix (Section A).
Recent works on sarcasm detection include frequent use of BERT Savini and Caragea (2022); Zhang et al. (2023); Pandey and Singh (2023), multi-modal and cross-modal detection tasks Liang et al. (2022); Chauhan et al. (2022); Ding et al. (2022), enhancement of sarcasm detection in complex expressions with sememe knowledge Wen et al. (2022), study on the effect of foreign accent Puhacheuskaya and Jarvikvi (2022), use of vocal and facial cues Aguert (2022) etc. Sarcasm and irony detection from languages other than English i.e. Chinese, Dutch, Spanish, Arabic, Romanian etc. have also been studied in recent works Farha and Magdy (2020); Muaad et al. (2022); Maladry et al. (2022); Wen et al. (2022); Ortega-Bueno et al. (2022); Buzea et al. (2022).
### Characteristics of Sarcasm
Studies have identified a variety of potential sources for sarcasm. According to Gerrig and Goldsvarg (2000), sarcasm stems from a situational disparity between what the speaker desires, believes, or expects and what actually happens. Incongruity between text and a contextual information is mentioned as a factor by Wilson (2006). Context In-congruity Campbell and Katz (2012) is addressed in the works of Riloff et al. (2013) who suggests that sarcasm arises from a contrast between positive verbs and negative situation phrases. Burgers et al. (2012) formulates that for an utterance to be
sarcastic, it needs to have one or more of these five characteristics:
1. the sentence has to be evaluative,
2. it should be based on the reversal of valence of the literal and intended meanings,
3. it should have a semantic incongruuity with the context, which may consist of common sense or general information that the speaker and the addressee share,
4. should be aimed at some target,
5. should be in some manner relevant to the communication scenario. Many studies focused on one or more of these characteristics.
### Sarcasm Generation
Compared to sarcasm detection, research on sarcasm generation is still in its early stages. Joshi et al. (2015) introduced SarcasmBot4, a chatbot that caters to user input with sarcastic responses. SarcasmBot is a sarcasm generation module with eight rule-based sarcasm generators where each of the generators produces a different type of sarcastic expression. During the execution phase, one of these generators is selected based on user input properties. Essentially, it yields sarcastic responses rather than converting a literal input text into a sarcastic one, the latter one being a common practice in future research. This method was later utilized in the author's subsequent work Joshi et al. (2017) where they built SarcasmSuite, a web-based interface for sarcasm detection and generation.
The first work on automatic sarcasm generation conditioned from literal input was performed by Mishra et al. (2019). The authors relied on the Context Incongruity characteristic of sarcasm mentioned by Riloff et al. (2013) and employed information retrieval-based techniques and reinforced neural seq2seq learning to generate sarcasm. They used unlabeled non-sarcastic and sarcastic opinions to train their models, where sarcasm was formed as a result of a disparity between a situation's positive sentiment context and negative situational context. A thorough evaluation of the proposed system's performance against popular unsupervised statistical, neural, and style transfer techniques showed that it significantly outperformed the baselines taken into account.
Footnote 4: [https://github.com/adityajo/sarcasmbot/](https://github.com/adityajo/sarcasmbot/)
Chakrabarty et al. (2020) introduced a new framework by incorporating context in the forms of shared commonsense or world knowledge to model semantic incongruuity. They based their research on the factors addressed by Burgers et al. (2012). Their architecture is structured into three modules: Reversal of Valence, Retrieval of Commonsense Context, and Ranking of Semantic Incongruity. With this framework they were able to simulate two fundamental features of sarcasm: reversal of valence and semantic incongruuity with the context. However, they opted for a rule-based system to reverse the sentiments. The authors also noticed that in a few cases, the simple reversal of valence strategy was enough to generate sarcasm which meant the addition of context was redundant.
Recent similar works in the field include that of Oprea et al. (2021) where they developed a sarcastic response generator, Chandler, that also provides explanations as to why they are sarcastic. Das et al. (2022) manually extracted the features of a
Figure 1: Model Architecture of the proposed system
benchmark pop culture sarcasm corpus and built padding sequences from the vector representations' matrices. They proposed a hybrid of four Parallel LSTM Networks, each with its own activation classifier which achieved 98.31% accuracy among the test cases on open-source English literature. A new problem of cross-modal sarcasm generation (CMSG) that creates sarcastic descriptions of a given image was introduced by Ruan et al. (2022). However, these studies have only focused on generating textual sarcastic sentences, but as described by Subramanian et al. (2019), incorporating emojis improved the overall performance of sarcasm detection and thus can be a potential research scope.
## 3 Methodology
Our model architecture consists of 3 modules which are as follows: Reversal of Valence, Retrieval of Commonsense and Emoji Prediction. The Reversal of Valence module takes in a negative utterance and generates an utterance with positive sentiment. The Retrieval of Commonsense module outputs relevant commonsense context sentence which helps in creating a sarcastic situation. Lastly, the Emoji Prediction module generates an emoji which makes the overall output more sarcastic. With these three modules, we have incorporated two of the fundamental features of sarcasm: reversal of valence and semantic incongruity with the context. A diagram of the overall pipeline is demonstrated in Figure 1. We describe the modules in details in the next few sub sections.
### Reversal of Valence
In the work of Chakrabarty et al. (2020), for the reversal of valence module, they have used a rule-based approach to manually reverse the sentiment of the negative sentence. But a rule-based model cannot reverse sentences that do not follow the traditional structure of sentences such as those used in social media. We have worked on this limitation of this current state-of-the-art sarcasm generation model where we replace their rule-based reversal module with a deep-learning reversal module inspired by the work of Mishra et al. (2019). This module is divided into two parts: Sentiment Neutralization and Positive Sentiment Induction.
#### 3.1.1 Sentiment Neutralization
We implement the Sentiment Neutralization module to filter out the sentiment words from the input utterance, which results into a neutral sentence from a negative one. An example is shown in table 2.
The neutralization model is essentially a sentiment classification model which first detects the sentiment of the given utterance (positive/negative). This model consists of several LSTM layers and a self-attention layer. During testing, the self-attention vector is extracted as done by Xu et al. (2018) which is then inversed and discretized as follows:
\[\hat{a_{i}}=\begin{cases}0,&\text{if }a_{i}>0.95*max(a)\\ 1,&\text{otherwise}\end{cases} \tag{1}\]
where \(a_{i}\) is the attention weight for the \(i^{th}\) word, and \(max(a)\) gives the highest attention value from the current utterance. A word is filtered out if the discretized attention weight for that word is 0. The sentiment detection model architecture is shown in figure 2.
#### 3.1.2 Positive Sentiment Induction
The output from the Sentiment Neutralization module is fed to the Positive Induction module as input. The module takes in a neutral utterance and incorporates positive sentiment into the utterance and returns a sentence with positive sentiment. An example is shown in table 3. For this, we use Neural Machine Translation method built on OpenNMT
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt}} \hline
**Negative Input** & **Neutral Output** \\ \hline Is feeling absolutely bloated and fat from lack of a proper workout & Is feeling absolutely and from a proper workout \\ \hline \end{tabular}
\end{table}
Table 2: Example of sentiment neutralization from input sentence
Figure 2: Sentiment detection model architecture for the Sentiment neutralization module
framework (Klein et al., 2017) where we first train our model with a set of \(<source,target>\) pairs where the source is a neutral sentence and target is its positive counter part. We use the Positive dataset provided by Mishra et al. (2019) which includes a set of positive sentences. We pass this dataset through the sentiment neutralization module to get the neutral source sentence to its positive target sentence and use these \(<source,target>\) pairs to train the positive induction module. The input sentences are transformed into embeddings that go through the translation encoders and decoders. The encoders and decoders are both built with LSTM layers.
### Retrieval of Commonsense
This module is used to retrieve additional context for the sarcastic sentence based on commonsense knowledge. Figure 3 demonstrates a schematic view of this module. We discuss the detailed process in the following sections. Additionally, we show an example input-output pair for this module in table 4.
#### 3.2.1 Generation of Commonsense Knowledge
For generating commonsense knowledge context, \(\text{COMET}^{\text{DIS}}_{\text{TIL}}\)(West et al., 2021) is used. First, we feed the input sentence to \(\text{COMET}^{\text{DIS}}_{\text{TIL}}\). \(\text{COMET}^{\text{DIS}}_{\text{TIL}}\) is a machine trained 1.5B parameters commonsense model generated by applying knowledge distillation (Hinton et al., 2015) on a general language model, GPT-3. It offers 23 commonsense relation types. For our study, we use the **xEffect** relation. From the three variants of \(\text{COMET}^{\text{DIS}}_{\text{TIL}}\) (\(\text{COMET}^{\text{DIS}}_{\text{TIL}}\), \(\text{COMET}^{\text{DIS}}_{\text{TIL}}\) + \(\text{critic}_{\text{low}}\) and \(\text{COMET}^{\text{DIS}}_{\text{TIL}}\) + \(\text{critic}_{\text{high}}\)), we have chosen \(\text{COMET}^{\text{DIS}}_{\text{TIL}}\) + \(\text{critic}_{\text{high}}\) for our work. The model returns a contextual phrase pertaining to the **xEffect** relation with the extracted words of the non-sarcastic sentence. For a non-sarcastic sentence "His presentation was bad", \(\text{COMET}^{\text{DIS}}_{\text{TIL}}\) predicts the contextual phrase with **xEffect** relation - 'is criticized by his boss'.
#### 3.2.2 Retrieval of Relevant Sentences
Once we have the inferred contextual phrase, we retrieve relevant sentences. For doing so, we imply 2 methods - 1. Retrieval from corpus and 2. Generation from the inferred phrase.
* (a) the commonsense concept should appear at the beginning or at the end of the retrieved sentences; (b) to maintain consistency between the length of the non-sarcastic input and its sarcastic variant, sentence length should be less than twice the number of tokens in the non-sarcastic input. Next, we check the consistency of the pronoun in the retrieved sentence and the pronoun in the input sentence. If the pronoun does not match, we modify it to match the non-sarcastic text input. If the non-sarcastic input lacks a pronoun while the retrieved sentence does not, it is simply changed to "I". These constraints for retrieving the sentences and the assessment of grammatical consistency are done following the
\begin{table}
\begin{tabular}{l l} \hline
**Input** & **Commonsense Sentence** \\ \hline His presentation was bad & The manager is criticized by his boss after a presentation \\ \hline \hline \end{tabular}
\end{table}
Table 4: Example of commonsense sentence generation from input sentence
Figure 3: Model Architecture for Retrieval of Commonsense module
work of Chakrabarty et al. (2020).
* the _Subject-inference_ pair for the input "His presentation was bad" becomes ['His', 'is criticized by his boss'], and from this collection of words, the sentence "The manager is criticized by his boss after a presentation." is generated. Footnote 6: [https://huggingface.co/mrm8488/t5-base-finetuned-common_gen](https://huggingface.co/mrm8488/t5-base-finetuned-common_gen)
#### 3.2.3 Selection based on Semantic Incongruity
The module in section 3.2.2 returns several sentences containing the context. Among them, we choose the sentence having the highest semantic incongruuity with the sentence generated after the Reversal of Valence module. For calculating the semantic incongruuity, following Chakrabarty et al. (2020), we have used the RoBERTa-large Liu et al. (2019) model fine-tuned on the Multi-Genre NLI dataset Williams et al. (2017). Considering the non-sarcastic input "His presentation was bad", the Retrieval of Relevant Sentences module yields a list of sentences such as - "The manager is criticized by his boss after a presentation", "He openly criticized the plan as impracticable", and "My boss criticized my sloppy personal appearance". From these sentences, the highest ranked sentence, "The manager is criticized by his boss after a presentation", is returned as the final output to this module as it contains the most semantic incongruuity with the reversed sentence.
### Emoji Prediction
In this module, we use a pre-trained emoji prediction model which is fine tuned on the CLIP Radford et al. (2021)) deep learning model by OpenAI to predict an emoji from a given input. After concatenating the non-sarcastic input and the context retrieved from the Retrieval of Commonsense module, we predict an emoji based on this concatenated sentence. The model employs a masked self-attention Transformer as a text encoder and a ViT-B/32 Transformer architecture as an image encoder. By using a contrastive loss, these encoders are trained to optimize the similarity of (image, text) pairs. One version of the implementation used a Vision Transformer and the other a ResNet image encoder. The variation with the Vision Transformer is used in this case. The dataset7 used for fine-tuning the model consists of two columns: raw tweets and emoji labels. The emoji labels correspond to the appropriate one among a set of 32 emojis shown in figure 4.
Footnote 7: [https://huggingface.co/datasets/vincentclaes/emoji-predictor](https://huggingface.co/datasets/vincentclaes/emoji-predictor)
## 4 Experimental Setup
The dataset, model configurations for the different modules, and the evaluation criteria for our work are all discussed in the following sub sections.
### Dataset
For our experiments, we utilize the Positive and Negative sentiment corpora by Mishra et al. (2019) which contains tweets and short snippets. Tweets have been normalized by eliminating hashtags, usernames, and conducting spell checking and lexical normalization using NLTK Loper and Bird (2002). After filtering out sentences longer than 30 words and running them through all three modules, we get the final dataset of 2k sarcastic sentences from the Mishra et al. (2019) dataset. We have made our dataset8 publicly available.
Footnote 8: [https://github.com/WrightlyRong/Sarcasm-Generation-with-Emoji](https://github.com/WrightlyRong/Sarcasm-Generation-with-Emoji)
### Model Configurations
The sentiment classification model of the neutralization module is trained on the sentiment dataset
Figure 4: Set of 32 emojis
given by Mishra et al. (2019) where the negative sentences are labeled as 1 and the positive sentences are labeled as 0. Each word in the input sentence is first encoded with one-hot encoding and turned into a K-dimensional embedding. Then, these embeddings go through an LSTM layer with 200 hidden units, a self-attention layer, an LSTM layer with 150 hidden units and finally a softmax layer. The classifier is trained for 10 epochs with a batch size of 32, and achieves a validation accuracy of 96% and a test accuracy of 95.7%.
The positive sentiment induction module is built on top of the OpenNMT 3.0 framework, and following Mishra et al. (2019), the embedding dimensions of the encoder and decoder is set to 500, with 2 LSTM layers each consisting of 500 hidden units. Training iteration is set to 100000 and early stopping is incorporated to prevent overfitting. After training, the model produced a corpus-BLEU score of 51.3%.
### Evaluation Criteria
For evaluating the performance of our proposed architecture we incorporate Human judgement. To assess the quality of the generated dataset we compare among 4 systems.
1. **Full Model** contains all the proposed modules of the framework and generates the final dataset.
2. **Without Emoji** system includes the context sentences along with the outputs from the reversal of valence module but does not contain any emoji that goes with each sarcastic sentence.
3. **Without Context** system consists of generations from the reversal of valence module as well as emoji. It does not include any context.
4. \(\mathbf{R}^{3}\) is the state-of-the-art sarcasm generation system proposed by Chakrabarty et al. (2020).
To assess each of the four systems, we randomly choose 100 samples from our sarcastic dataset which totals to 400 output from the four systems. We evaluate these 400 generated sentences for comparing on the basis of the 4 above mentioned systems.
Following the evaluation approach proposed by Chakrabarty et al. (2020), we evaluate the generated sentences on these criteria:
1. Sarcasticness ("How sarcastic is the output?"),
2. Creativity ("How creative is the output?"),
3. Humour ("How funny is the output?"),
4. Grammaticality ("How grammatically correct is the output?").
Previous studies on sarcasm generation have employed sarcasticness as a criterion for evaluating the effectiveness of the generated outputs Mishra et al. (2019); Chakrabarty et al. (2020); Das et al. (2022). As sarcasm exemplifies linguistic creativity Gerrig and Gibbs Jr (1988), creativity has been proposed as a method for operationalizing the quality of sarcastic sentences by Skalicky and Crossley (2018). The association between humor and sarcasm is frequently mentioned in literature as well Dress et al. (2008); Lampert and Ervin-Tripp (2006); Leggitt and Gibbs (2000); Bowes and Katz (2011). The grammaticality criterion assesses the syntactic accuracy and conformity of the generated sentences.
Three human judges have been chosen to rate the outputs from the 4 systems on the 4 criteria mentioned. The label indicates a rating on a scale of 1 (not at all) to 5 (very). All 3 judges label each of the 400 sentences from the 4 systems. The human judges have been chosen based on their high efficiency in English, good grasp in understanding and differentiating between Creativity, Humor and Sarcasticness in English sentences.
To assess the inter-annotator agreement for the ratings, we incorporated the Intraclass Correlation Coefficient (ICC). ICC is a statistical measure used to assess the degree of agreement or correlation among the ratings given by different evaluators or raters for a certain category or metric. The agreement scores are shown in table 6. The ICC score ranges between 0 and 1 where a higher score indicates a greater agreement among the raters. For all the four systems evaluated in our work, the ratings by 3 judges for the 4 evaluation criteria yield ICC scores above 0.9 in each case. A score above 0.9 indicates highly consistent observations and excellent agreement among the 3 judges.
Besides, human evaluation, we also evaluate our generated data against an emoji-based sarcasm detection model trained with existing emoji-based sarcastic dataset. For this, we utilize the work of Subramanian et al. (2019) and use their proposed sarcasm detection model trained with their dataset. Their data samples were tweets with emojis scraped from Twitter and were labeled either 1 (sarcastic) or 0 (non-sarcastic). The model consists of a BiGRU with a text encoder and an emoji encoder. We add 2k non-sarcastic texts with our generated 2k sarcastic texts and test the model with these data. The model's performance is discussed in section 5.
## 5 Experimental Results & Analysis
Table 5 shows the comparison between a few sample sarcastic outputs across the various systems (our full model, output without the context, output without any emoji and lastly the state-of-the-art model Chakrabarty et al. (2020) on different measures (Sarcasticness, Creativity, Humor and Grammaticality). Each score is the average rating given by the three human judges. Table 7 shows the variances among each evaluation criterion for each of the four systems. The variances among the four criteria for the system R\({}^{3}\) are higher than all the other systems.
Table 8 shows the average ratings on 100 samples by human judges for generated sarcastic sentences from the four systems based on the four categories. Our full model achieves the highest average score among all the systems including the state-of-the-art sarcasm generation model by Chakrabarty et al. (2020) on three of the four categories except Grammaticality. Besides the full model, the without
\begin{table}
\begin{tabular}{l l l l l} \hline \multirow{2}{*}{**System**} & \multicolumn{4}{c}{**Intraclass Correlation Coefficient (ICC)**} \\ \cline{2-5} & **S** & **C** & **H** & **G** \\ \hline Full Model & 0.90 & 0.92 & 0.92 & 0.94 \\ \hline Without & 0.95 & 0.96 & 0.95 & 0.92 \\ Emoji & & & & \\ \hline Without & 0.93 & 0.94 & 0.94 & 0.93 \\ Context & & & & \\ \hline R\({}^{3}\) & 0.97 & 0.97 & 0.97 & 0.97 \\ Chakrabarty & & & & \\ \hline \end{tabular}
\end{table}
Table 6: Intraclass Correlation Coefficient (ICC) scores on different metrics for the four systems. Here, S=Sarcasticness, C=Creativity, H=Humor, G=Grammaticality are the 4 evaluation criteria.
\begin{table}
\begin{tabular}{l l l l l} \hline \multirow{2}{*}{**System**} & \multicolumn{4}{c}{**Variance\({}_{\text{eval}}\)**} \\ \cline{2-5} & **S** & **C** & **H** & **G** \\ \hline Full Model & 0.62 & 0.59 & 0.60 & 0.96 \\ \hline Without Emoji & 0.74 & 0.73 & 0.65 & 0.96 \\ \hline Without Context & 0.57 & 0.43 & 0.44 & 1.02 \\ \hline R\({}^{3}\) (Chakrabarty et al., 2020) & 1.48 & 1.17 & 1.16 & 0.99 \\ \hline \end{tabular}
\end{table}
Table 7: Variances among each evaluation criterion for each system. Here, S=Sarcasticness, C=Creativity, H=Humor, G=Grammaticality are the 4 evaluation criteria.
emoji system and without context system also outperform the state-of-the-art on Sarcasticness, Creativity and Humor. Our system lacks in Grammaticality due to the fact that we replace the rule based approach of the reversal of valence module by Chakrabarty et al. (2020) with a deep learning approach which results in a slightly more significant information loss. However, the rule based model performs worse in case of the other three categories as it fails to generalize on all types of sentence structures. It is apparent from the scores that context plays an important role in recognising a sarcastic sentence. Additionally, the notable improvement in the score for full model compared to the without emoji model suggests that emojis obviously help better detect the incongruity that exist in sarcastic utterances.
The emoji based sarcasm detection model by Subramanian et al. (2019) gives an F1-score of 67.28% and an ROC AUC score of 53.33% on our generated data samples. It is to be noted that the model's training data samples have significantly different sentence structure than the test samples.
## Conclusion
We propose a novel multi-modular framework for sarcasm generation with emoji considering two key characteristics of sarcasm: reversal of valence and semantic incongruity between the sarcastic remark and the context. To generate sarcastic sentences, we first neutralize the input sentence's sentiment and then add positive sentiment to the sentence to reverse its meaning. We also incorporate a relevant emoji and its contextual information to enhance the sarcastic effect. We conclude by evaluating our model using human judgement.
## Limitations
Although our proposed architecture successfully generates emoji-based sarcastic sentences from non-sarcastic texts, in some cases, particularly longer sentences, adding commonsense context does not add much to make it more sarcastic as in such cases, the longer sentences already contain the contextual information. In future, we plan to modify our architecture in a way such that it can identify whether or not adding commonsense context would be necessary.
In our work, we have used COMET\({}_{\mathrm{TIL}}^{\mathrm{DIS}}\) to generate additional commonsense context. So the performance of our proposed architecture heavily depends on the accuracy of COMET\({}_{\mathrm{TIL}}^{\mathrm{DIS}}\). In future, we would like to find and incorporate better models for generating commonsense context.
The low grammaticality score by our final model is likely to be caused by the insufficient training data for the Positive Sentiment Induction module for which the model could not generalize properly. We believe that there is still room for improvement here by collecting and adding more training samples to improve the model's performance. To further fix the grammatical errors we plan to add another module after the Positive Induction module where the module will use a Transformer based grammar correction model which will take a sentence with bad grammar and output a grammatically correct sentence.
Lastly, our emoji prediction module only predicts one emoji per sentence. However, to make a sentence sarcastic, it is not uncommon to use more than one emoji. Hence, we plan to explore multi-label emoji prediction in the future.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**System** & **Sarcasticness** & **Creativity** & **Humor** & **Grammaticality** \\ \hline Full Model & **3.44** & **3.29** & **3.16** & 3.72 \\ \hline Without Emoji & 2.77 & 2.83 & 2.69 & 3.7 \\ \hline Without Context & 3.1 & 2.99 & 2.88 & 3.72 \\ \hline R\({}^{*}\)(Chakrabarty et al., 2020) & 2.32 & 2.2 & 2.1 & **4.29** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Average ratings by human judges for outputs from the four systems |
2304.13750 | Sodium enhancement in evolved cataclysmic variables | We present follow-up spectroscopy of 21 cataclysmic variables (CVs) with
evolved secondaries and ongoing or recently-terminated mass transfer.
Evolutionary models predict that the secondaries should have anomalous surface
abundances owing to nuclear burning in their cores during their main-sequence
evolution and subsequent envelope stripping by their companion white dwarfs. To
test these models, we measure sodium (Na) abundances of the donors from the
Fraunhofer "D" doublet. Accounting for interstellar absorption, we find that
{\it all} objects in our sample have enhanced Na abundances. We measure 0.3
$\lesssim$ [Na/H] $\lesssim$ 1.5 dex across the sample, with a median [Na/H] =
0.956 dex, i.e., about an order of magnitude enhancement over solar values. To
interpret these values, we run MESA binary evolution models of CVs in which
mass transfer begins just as the donor leaves the main sequence. These
generically predict Na enhancement in donors with initial donor masses $\gtrsim
1\,M_{\odot}$, consistent with our observations. In the models, Na enrichment
occurs in the donors' cores via the NeNa cycle near the end of their
main-sequence evolution. Na-enhanced material is exposed when the binaries
reach orbital periods of a few hours. Donors with higher initial masses are
predicted to have higher Na abundances at fixed orbital period owing to their
higher core temperatures during main-sequence evolution. The observed [Na/H]
values are on average $\approx$0.3 dex higher than predicted by the models.
Surface abundances of evolved CV donors provide a unique opportunity to study
nuclear burning products in the cores of intermediate-mass stars. | Natsuko Yamaguchi, Kareem El-Badry, Antonio C. Rodriguez, Maude Gull, Benjamin R. Roulston, Zachary P. Vanderbosch | 2023-04-26T18:00:03Z | http://arxiv.org/abs/2304.13750v2 | # Sodium enhancement in evolved cataclysmic variables
###### Abstract
We present follow-up spectroscopy of 21 cataclysmic variables (CVs) with evolved secondaries and ongoing or recently-terminated mass transfer. Evolutionary models predict that the secondaries should have anomalous surface abundances owing to nuclear burning in their cores during their main-sequence evolution and subsequent envelope stripping by their companion white dwarfs. To test these models, we measure sodium (Na) abundances of the donors from the Fraunhofer "D" doublet. Accounting for interstellar absorption, we find that _all_ objects in our sample have enhanced Na abundances. We measure \(0.3\lesssim\) [Na/H] \(\lesssim\) 1.5 dex across the sample, with a median [Na/H] = 0.956 dex, i.e., about an order of magnitude enhancement over solar values. To interpret these values, we run MESA binary evolution models of CVs in which mass transfer begins just as the donor leaves the main sequence. These generically predict Na enhancement in donors with initial donor masses \(\gtrsim 1\,M_{\odot}\), consistent with our observations. In the models, Na enrichment occurs in the donors' cores via the NeNa cycle near the end of their main-sequence evolution. Na-enhanced material is exposed when the binaries reach orbital periods of a few hours. Donors with higher initial masses are predicted to have higher Na abundances at fixed orbital period owing to their higher core temperatures during main-sequence evolution. The observed [Na/H] values are on average \(\approx\)0.3 dex higher than predicted by the models. Surface abundances of evolved CV donors provide a unique opportunity to study nuclear burning products in the cores of intermediate-mass stars.
keywords: binaries: close - white dwarfs - novae, cataclysmic variables - binaries: spectroscopic
## 1 Introduction
Cataclysmic variables (CVs) are binary systems in which a non-degenerate star (the "donor" or "secondary") transfers mass to a white dwarf (WD) companion through stable Roche lobe overflow (see Warner, 1995, for a review). CV evolution is fundamentally governed by angular momentum loss (AML), which shrinks CV orbits and drives mass transfer. The strength and dominant mechanism of AML in CVs is still uncertain, and varies with orbital period. In the standard evolutionary model (e.g. Knigge et al., 2011), AML at orbital periods \(P_{\rm orb}\gtrsim 3\) h is driven primarily by magnetic braking, while gravitational wave radiation dominates at shorter periods. Weakening of magnetic braking at \(P_{\rm orb}\lesssim 3\) h is proposed to give rise to the CV "period gap"; i.e. the observed lack of CVs with orbital periods between 2 and 3 hours (e.g. Rappaport et al., 1983; Knigge, 2006; Knigge et al., 2011; Inight et al., 2021; Pala et al., 2022).
The vast majority of CV donors fall on a tight "donor sequence", in which physical parameters such as mass, radius, temperature, and spectral type depend primarily on orbital period (e.g. Beuermann et al., 1998; Smith & Dhillon, 1998; Knigge, 2006; Abrahams et al., 2020). In the standard evolutionary model, CV donors evolve along this sequence from long to short periods as their donors are whitlted down by mass transfer. Objects on the donor sequence have temperatures and radii similar to main-sequence stars of the same mass. Although mass transfer causes some radius inflation relative to main-sequence stars, this inflation is modest because objects on the donor sequence are only mildly out of thermal equilibrium (e.g. Knigge, 2006; Knigge et al., 2011).
A number of CVs have been discovered over the years with physical parameters that differ significantly from the standard donor sequence. Observationally, these systems stood out from the bulk of the CV population for having unusually warm and luminous donors compared to normal CVs at the same orbital period (e.g. Thorstensen et al., 2002, 2002; Thorstensen, 2013; Rebassa-Mansergas et al., 2014). Several of the same systems also have infrequent outbursts and faint disks, suggesting low mass transfer rates. Evolutionary models predict that such "evolved CVs" can form if mass transfer began when the secondary star was near the end of its main sequence lifetime and thus had a core enhanced in helium (e.g. Podsiadlowski et al., 2003). At the short periods where they are observed, the donors in these systems have semi-degenerate helium cores and thick hydrogen-burning envelopes. This leads them to have quite different physical parameters from ordinary CV donors, which consist mostly of hydrogen.
Evolved CVs are predicted to follow qualitatively different evolutionary pathways from ordinary CVs. Their donor's cores remain radiative (as opposed to donors in normal CVs which become fully convective) so they are not expected to detach at \(P_{\rm orb}\approx 3\) hours as a result of weakened magnetic braking. Instead, this can occur at a
wide range of orbital periods when the donors lose enough of their convective envelopes at \(T_{\rm eff}\gtrsim 6500\) K. These donors are predicted to detach from their Roche lobes and be observable as extremely low mass (ELM) WDs in close binaries (Sun and Arras, 2018; Li et al., 2019; El-Badry et al., 2021).
Evolved CVs provide one possible formation channel for the AM Canum Venaticorum binaries (AM CVns; Tutukov et al., 1985; Podsiadlowski et al., 2003; Kalomeni et al., 2016), which are ultra-compact binaries with helium donors and \(P_{\rm orb}\lesssim 70\) minutes, below the minimum period of normal CVs. AM CVns are especially interesting outcome of evolved CV evolution since they will be among the loudest sources of gravitational waves for LISA (Breivik et al., 2018; Kupfer et al., 2018). At longer periods, CVs with evolved donors can have higher mass transfer rates than ordinary CVs and can be observed as supersoft X-ray sources (e.g. Li and van den Heuvel, 1997; Schenker et al., 2002). These high accretion rates can lead to stable burning on the surface of the accretion WD and may provide one channel for Type Ia (or.Ia) supernovae (Livne, 1990; Bildsten et al., 2007; Nomoto et al., 2007; Wolf et al., 2013; Brooks et al., 2015).
Because evolved CV donors completed most of their main-sequence evolution before the onset of mass transfer, their photospheres - which contain material previously inside their donors' convective cores - are expected to have unusual surface abundances. The first observed evolved CVs, EI Psc and QZ Ser, bore out this prediction, with optical spectra that hinted at helium (He) and sodium (Na) enhancement (Thorstensen et al., 2002, 2020). Ultraviolet spectroscopy soon showed that the same systems were enhanced in nitrogen and deficient in carbon, as expected if material on the donors' surface was previously processed by CNO burning (Haswell et al., 2002; Gansicke et al., 2003; Toloza et al., 2022). Studies based on infrared spectroscopy have similarly reported carbon deficits (Harrison, 2016, 2018).
A large sample of CVs with evolved donors was presented by El-Badry et al. (2021, hereafter E21). Using light curves from the Zwicky Transient Facility (ZTF; Bellm et al., 2019), they identified 21 binaries with \(P_{\rm orb}<6\) h exhibiting large-amplitude ellipsoidal variability and falling below the main-sequence in the color-magnitude diagram. The donors in these systems have comparable temperatures and surface gravities to main-sequence A and F stars, but significantly lower masses and smaller radii. Using low-resolution follow-up spectroscopy, they showed that some objects in the sample have ongoing mass transfer, while others likely just recently became detached. They noted hints of excess Na absorption for all donors with \(T_{\rm eff}\lesssim 7000\) K, suggesting potential Na enhancement (at higher temperatures, more of the Na is ionised so neutral lines in the optical become weaker).
A literature search indeed reveals several other scattered reports of Na enhancement in the donors of evolved CVs (e.g. Thorstensen et al., 2002; Thorstensen, 2013; Harrison, 2018; Green et al., 2020; Zhang et al., 2022). There has, however, been little systematic study or quantitative evolutionary predictions of the expected enhancement. In contrast to CNO lines, Na has strong optical lines, namely the 5900A doublet (i.e. Fraunhofer D-lines, at 5890 and 5896A) and the 8200A doublet (at 8183 and 8195A), making it readily accessible in evolved CVs (where the donor, rather than the disk, dominates in the optical). Na is produced in advanced H burning processes which can only take place in the high-temperature interiors of stars with masses \(M\gtrsim 1~{}M_{\odot}\), making it a potential diagnostic of the initial masses of the donors in evolved CVs.
In this paper, we follow up on the work of E21 and obtain higher-resolution and higher-SNR spectra of their targets to look for and quantify potential excesses in Na absorption lines compared to model spectra. We also use the Modules for Experiments in Stellar Astrophysics (MESA) to model binary evolutionary scenarios that give rise to evolved CVs and test whether they are able to predict the observed abundances. In Section 2, we describe our spectroscopic observations as well as the subsequent data reduction and calculations of several parameters of our objects using the spectra. In Section 3, we explain the process of generating a grid of model stellar spectra with different chemical abundances. By comparing equivalent widths of Na lines in the model spectra to our data, we infer the Na abundances of our objects. We then describe and present the results of our MESA models in Section 4.
## 2 Spectroscopic Observations
We observed all 21 evolved CVs from E21 across 4 nights. A detailed observing log can be found in Appendix A.
We observed 20 targets with the Echellette Spectrograph and Imager (ESI; Sheinis et al., 2002) on the Keck II telescope in echellette mode. For the majority of these observations, we used the 0.5" slit with a 600s exposure which yielded spectral resolution \(R\sim 9300\) and a typical SNR \(\sim 20\) per pixel. A few targets were observed with other ESI slits; see Table A1 for details. All ESI spectra were reduced using the MAuna Kea Echelle Extraction (MAKEE) pipeline, which performs bias-subtraction, flat fielding, wavelength calibration, sky subtraction, and a flexure correction to the wavelength solution using sky lines.
Due to poor weather conditions affecting several observing runs and observability constraints during the year, we observed one object, P_2.74a, with the Low Resolution Imaging Spectrometer (LRIS; Oke et al., 1995) on the Keck I telescope. We observed the binary over a full orbit, with 63 (red) and 78 (blue) \(\times\) 90 second exposures and a 1.0" slit (see Appendix A), but most of the abundance analysis focuses on a single exposure, taken at quadrature. The red and blue arms covered a wavelength range of \(\sim 3150\) to 5600A and 5400 to 10290A, respectively. The data was reduced using LPipe (Perley, 2019). This provided a significantly lower resolution of \(R\sim 1540\) compared to the ESI spectra but as we are primarily concerned with the calculation of equivalent widths, this should be acceptable to complete the sample.
Cutous of the spectra for all objects can be found in Figures 1, 3, and B1, which respectively show spectral regions containing the Mg I triplet, the Na D doublet, and the H\(\alpha\) line.
The phases at which the spectra were taken could be deduced from the recorded mid-exposure time and the orbital ephemeris obtained from light curve fitting, summarized in Table 4 of E21.
E21 also inferred the effective temperatures of all the targets from SED fitting of UV-to-IR photometry from Pan-STARRS, WISE, and 2MASS, using the results from fitting their low-resolution spectra as a weak prior. These temperatures, as well as several other relevant parameters from the same paper, are summarized in Table 1.
### Spectra and radial velocities
For each spectrum, the radial velocity was obtained using the cross-correlation function (CCF) method (Tonry and Davis, 1979), which we implemented as described in Appendix A of Zhang et al. (2021). This involved shifting the spectra in log space of wavelength with a range of radial velocities corresponding to -600 and 600 km/s and finding the shift that maximized the CCF when compared to a standard stellar model spectrum. For measuring RVs, we used the wavelength range 4900-5400A, which includes the MgI triplet (5167, 5173, 5184A), shown for all objects in Figure 1. For the model spectra, we used
BOSZ Kurucz models (Bohlin et al., 2017) with solar abundances and log(g) and \(T_{\rm eff}\) values close to those of the target. We retrieved models with \(R=50,000\) and broadened them to the resolution of each observed spectrum. Instrumental broadening was carried out using convolution with a 1D Gaussian filter kernel (Astropy Collaboration et al., 2022) with FWHM obtained from a gaussian fit to the [OI]5577A line in the sky spectra.
We inferred projected rotation velocities \(v\sin i\) for the donor in each system from the observed spectra (Section 2.2), but at this initial stage, we simply set \(v\sin i=(2\pi R/P_{\rm orb})\sin(i)\) using values from Table 1. Rotational broadening was implemented using the rrotBroad function (as originally described by Gray, 1992) from PyAstronomy (Czesla et al., 2019),with linear limb-darkening coefficients taken from Claret & Bloemen (2011). RV errors were obtained by calculating chi-square (\(\chi^{2}=\sum\left(F_{\rm obs}-F_{\rm model}\right)^{2}/\sigma_{\rm obs}^{2}\)) values between the observed and model spectra at each shift (for the wavelength range considered) and identifying the points at which \(\chi^{2}\) increased by 1 from the minimum value. We note that while the radial velocity that minimizes \(\chi^{2}\) is not precisely the same as that which maximizes the CCF, they typically only differ by \(\lesssim 1\) km/s and the calculated errors should still provide a reasonable estimate of the statistical errors.
Using the RV ephemerides from E21, we can also calculate the expected RV of the donor at the orbital phase of our observed spectra. We compare these to the measured RVs in Table 2. The typical disagreement between measured and predicted RVs is about 30 km/s: small compared to the RV semi-amplitudes, but large compared to the formal uncertainties. We suspect that the main reason for disagreement between the measured and predicted RVs is that the RV zeropoint of the low-resolution spectra obtained by E21 was only stable at the \(\sim 50\) km/s level (see their paper for details), leading to a systematic uncertainty in the binaries' center-of-mass RVs. The RV zeropoint for the ESI spectra analyzed in this work is stable at the few km/s level, so we expect our RVs to be more reliable than the predictions from E21.
### Projected Rotational Velocity
As mentioned in 2.1, we can get an expected value for the projected rotational velocity by using the values of radius, period, and inclination from E21. However, this is only an estimate. In particular, the radii are obtained using fits to the spectral energy distributions (SEDs) while neglecting possible contamination from the companion or the accretion disk. Though comparisons to model spectra do not suggest significant contamination for most targets, strictly speaking, this makes the derived radii upper limits.
Therefore, we also measured best-fit \(v\sin i\) values directly from the rotational broadening of the observed spectra. For each target, this was done by first calculating radial velocities using model spectra rotationally broadened with a range of \(v\sin i\) values from 50 to 150 km/s and shifting the target spectra with the derived RV. Then, the \(\chi^{2}\) value was calculated by comparison with each model and the \(v\sin i\) of the corresponding model that minimized the \(\chi^{2}\) was identified. The errors were obtained by finding the location for which \(\chi^{2}\) increased by one. Note that some of the parameters from E21 used to calculate the \(v\sin i\) values have asymmetric errors. The errors on these have been approximated through standard error propagation using averages of the upper and lower values. The differences in the \(v\sin i\) values from the two calculations are shown in Figure 2.
We find that for three objects, the chi-square method did not result in clear local minima, instead monotonically decreasing towards larger \(v\sin i\). These are marked in red in the figure, placed at the zero line (as the calculated differences have little meaning), and given arbitrarily large error bars for emphasis. This result is not unexpected
\begin{table}
\begin{tabular}{c c c c c c} \hline ID & Gaia eDR3 ID & \(P_{\rm orb}\) [days] & \(T_{\rm eff}\) [K] & \(R_{\rm donor}\) [\(R_{\odot}\)] & \(i\) [deg] \\ \hline P\_2.00a & 4393660804037754752 & 0.08331301(2) & 7734 \(\pm\) 73.0 & 0.17\({}^{+0.01}_{-0.01}\) & 55.49\({}^{+14.14}_{-5.33}\) \\ P\_2.74a & 86154020730947776 & 0.11414278(4) & 4726 \(\pm\) 52.0 & 0.23\({}^{+0.01}_{-0.01}\) & 79.76\({}^{+7.0}_{-7.75}\) \\ P\_3.03a & 1030236970683510784 & 0.12642666(6) & 5910 \(\pm\) 77.0 & 0.27\({}^{+0.03}_{-0.03}\) & 73.10\({}^{+1.29}_{-1.29}\) \\ P\_3.06a & 21339380770763469696 & 0.1274832(2) & 5862 \(\pm\) 70.0 & 0.27\({}^{+0.03}_{-0.02}\) & 79.49\({}^{+7.09}_{-6.61}\) \\ P\_3.13a & 4228735155086295552 & 0.13056512(8) & 66662 \(\pm\) 54.0 & 0.28\({}^{+0.01}_{-0.01}\) & 83.66\({}^{+4.65}_{-7.25}\) \\ P\_3.21a & 45968897530496384 & 0.13375228(7) & 7022 \(\pm\) 72.0 & 0.31\({}^{+0.02}_{-0.02}\) & 71.56\({}^{+2.19}_{-2.38}\) \\ P\_3.43a & 184047622957533440 & 0.1429875(2) & 6193 \(\pm\) 83.0 & 0.26\({}^{+0.01}_{-0.01}\) & 79.21\({}^{+6.44}_{-6.44}\) \\ P\_3.48a & 40023594911602884 & 0.14479504(6) & 6444 \(\pm\) 62.0 & 0.26\({}^{+0.02}_{-0.02}\) & 73.90\({}^{+1.05}_{-1.05}\) \\ P\_3.53a & 1965375973804679296 & 0.146883(3) & 5324 \(\pm\) 66.0 & 0.32\({}^{+0.01}_{-0.01}\) & 79.76\({}^{+6.96}_{-6.67}\) \\ P\_3.81a & 3738738786785825408 & 0.1586339(7) & 6689 \(\pm\) 72.0 & 0.35\({}^{+0.05}_{-0.05}\) & 78.34\({}^{+7.71}_{-7.71}\) \\ P\_3.88a & 1077511538271752192 & 0.1651714(3) & 5875 \(\pm\) 102.0 & 0.28\({}^{+0.02}_{-0.01}\) & 79.64\({}^{+6.96}_{-6.67}\) \\ P\_3.90a & 30535718402222008192 & 0.1624549(1) & 7442 \(\pm\) 87.0 & 0.33\({}^{+0.01}_{-0.01}\) & 64.85\({}^{+0.19}_{-4.42}\) \\ P\_3.98a & 89643828413086336 & 0.165822949(5) & 5122 \(\pm\) 53.0 & 0.40\({}^{+0.03}_{-0.03}\) & 70.36\({}^{+12.0}_{-1.01}\) \\ P\_4.06a & 43582506408140235484 & 0.169898413(4) & 7587 \(\pm\) 72.0 & 0.28\({}^{+0.01}_{-0.01}\) & 59.48\({}^{+8.12.9}_{-8.3}\) \\ P\_4.10a & 306476637617338808 & 0.170810(2) & 4846 \(\pm\) 38.0 & 0.37\({}^{+0.01}_{-0.01}\) & 79.51\({}^{+7.05}_{-4.88}\) \\ P\_4.36a & 212636106765220030 & 0.181743(2) & 5998 \(\pm\) 80.0 & 0.39\({}^{+0.02}_{-0.01}\) & 74.15\({}^{+10.16}_{-1.04}\) \\ P\_4.41a & 2171644870571247872 & 0.1838691(2) & 7013 \(\pm\) 105.0 & 0.35\({}^{+0.01}_{-0.01}\) & 64.02\({}^{+0.16}_{-3.9}\) \\ P\_4.47a & 1315840437462118400 & 0.18604299(9) & 7171 \(\pm\) 108.0 & 0.36\({}^{+0.07}_{-0.05}\) & 73.03\({}^{+6.33}_{-6.3}\) \\ P\_4.73a & 200623282792027904 & 0.1969063(2) & 7619 \(\pm\) 85.0 & 0.38\({}^{+0.01}_{-0.01}\) & 57.45\({}^{+12.3}_{-1.32}\) \\ P\_5.17a & 43829578
given the particularly weak lines of P_2.00a and P_4.47a as well as the broad lines of P_2.74a due to a lower resolution spectrum, all of which can be seen in Figure 1. Meanwhile, the rest of the objects have \(v\sin i\) values that are in agreement to within about \(\pm\) 10 km/s on average, suggesting that the radius and inclination constraints from from E21 are reliable.
### Equivalent Widths
In order to measure the Na abundance in a donor, we first calculate the equivalent width (EW) of its absorption lines in the spectrum, in particular the 5900A doublet. Later, we repeat this process for many model spectra generated over a grid of abundances and make comparisons between the EWs of the models and the data (see Section 3).
To estimate the local continuum, the lines themselves were masked out, then a range 60A above and below the center of the doublet (120A for the one LRIS spectrum) was selected and fitted with a first order polynomial. The integral under the lines, \(\pm\)15A around the center (\(\pm\)30A for LRIS), was calculated numerically using the trapezoidal rule and subtracted from the area under the continuum over the same wavelength range to get the equivalent width (EW). Figure 3 shows these regions around the doublet for all of our objects. It is already clear by eye that for most targets, the observed line is significantly deeper and broader than that of the corresponding model spectrum with solar abundance.
The error on the EW for each target was obtained by calculating the EWs of the same line in 100 Monte Carlo realizations of the observed spectra and taking their standard deviation. These were generated using the normalized flux of the original spectrum and its error as the standard deviation of a Gaussian distribution from which a random sample was drawn at each wavelength.
There are overlaps in the wavelength coverage towards the edge of each order for the ESI spectra. This means that some lines may be found in two orders, which is the case for the 5900A doublet. This allowed us to make the same calculations for the two orders and take their inverse-variance weighted average. For the one LRIS spectra (P_2.74a), the EW was calculated just using the blue side where the line is located and no average was taken. Figure 4 plots the EWs of the 5900A doublet for all objects against their IDs and temperatures, for each of the two orders as well as their average. We see that there is a clear downward trend with increasing temperature: the lines become weaker at high \(T_{\rm eff}\), where most of the Na is ionized.
Figure 1: The Mg I triplet for all objects, plotted with their corresponding model spectra, used in the cross-correlation to obtain RVs. For four objects (P_3.13a, P_3.21a, P_3.43a, P_3.90a), the triplet is noticeably deeper than the standard solar abundance models and fits better to those with He mass fraction \(Y=0.868\). These are used later in the calculation of abundances.
The EW values are also listed on Table 3, along with the derived Na abundances from comparison with models, described in Section 3.3.
#### 2.3.1 Accounting for interstellar or circumbinary absorption
For many of the spectra, there were narrow and deep interstellar lines superimposed on top of the broad Na lines from the donors. Though they do not occupy a large area in most cases, to minimize their effect on our EW calculations, we cut out any data points \(\pm\) 0.6A around the rest wavelengths of the doublet (as the interstellar lines have very small RV shifts) and replaced them with a median of several points above and below this range. This process effectively blocks out any obvious interstellar Na contribution which are present for a few objects (See Appendix C). This was not done for the LRIS spectrum because the resolution was too low that the doublet itself, and the interstellar lines, were not resolved and there were no distinguishable features on the single broad absorption line.
In addition to absorption from the interstellar medium (ISM), it is in principle possible that there is additional Na absorption due to circumbinary material in the immediate vicinity of the targets. We explore this possibility - and conclude that it is unlikely, because all the observed Na lines are broad and RV variable, tracking the donor - in Appendix D.
## 3 Analysis
### Kurucz model spectra
To infer abundances from the observed spectra, we calculated a grid of 1D LTE model atmospheres and synthetic spectra for a range of effective temperatures and Na abundances, which we subsequently compared to the observed spectra. We used ATLAS 12 (Kurucz, 1970; Kurucz, 1979; Kurucz, 1992) to compute the atmosphere structure and SYNTHE (Kurucz, 1993) for the radiative transfer calculations, self-consistently re-computing the atmosphere structure for each model spectra. We use the linelist maintained by R. Kurucz1 and assumed a microturbulent velocity of 2 km s\({}^{-1}\). We generated spectra at resolution \(R=300,000\) and applied instrumental and rotational broadening to match the observed data.
Footnote 1: [http://kurucz.harvard.edu/linelists.html](http://kurucz.harvard.edu/linelists.html)
We assumed a surface gravity \(\log\left[g/\left(\mathrm{cm\,s}^{-2}\right)\right]=4.8\) dex for all models, as the expected value for all the binaries in our sample is within 0.2 dex of this value. We generated models with a spacing of 100 K in \(T_{\mathrm{eff}}\), ranging from 4500 to 7700 K, and a spacing of 0.1 dex in [Na/H], ranging from -0.1 to 1.3 dex. Our fiducial calculations assumed the solar abundance pattern for elements besides Na. As described below, we also generate several grid with He enrichment to explore its effect on the Na lines.
### Helium enhanced models
From Figure 1, we see that for four objects in particular - P_3.13a, P_3.21a, P_3.43a, P_3.90a - the observed Mg lines are noticeably deeper than those of the model with solar abundances. The same is true for other metal lines in these objects.
One possible explanation for this is that the T\({}_{\mathrm{eff}}\) values inferred from SED fitting are overestimated. A higher T\({}_{\mathrm{eff}}\) means more of the Mg will be ionized, thus weakening its neutral lines. To explore this possibility, we compared the observed spectra with several models with lower T\({}_{\mathrm{eff}}\) and found that models cooler by \(\approx 750\) K better matched the depth of the observed Mg triplets. However, these lower temperatures (with a corresponding increase in radius) resulted in a much worse fit to the broadband SEDs (Figure 9 in E21), even when plausible light contributions from the accreting WD and/or disk were taken into account. We thus consider it unlikely that a
\begin{table}
\begin{tabular}{c c c} \hline \hline ID & RV\({}_{\mathrm{spec}}\) [km/s] & RV\({}_{\mathrm{phase,E21}}\) [km/s] \\ \hline P\_2.00a & -326.9\({}^{+5.2}_{-3.1}\) & -397.4 \(\pm\) 10.8 \\ P\_2.74a & -28.17\({}^{+3.3}_{-2.8}\) & -395.9 \(\pm\) 15.6 \\ P\_3.03a & -102.0\({}^{+2.0}_{-2.0}\) & -64.7 \(\pm\) 10.4 \\ P\_3.06a & 9.0\({}^{+1.4}_{-1.3}\) & 14.8 \(\pm\) 9.0 \\ P\_3.13a & -358.65\({}^{+5.7}_{-8.8}\) & -318.4 \(\pm\) 18.4 \\ P\_3.21a & -216.8\({}^{+8.1}_{-1.1}\) & -294.8 \(\pm\) 12.5 \\ P\_3.43a & -279.7\({}^{+2.1}_{-2.0}\) & -236.5 \(\pm\) 9.2 \\ P\_3.48a & 273.0\({}^{+1.8}_{-1.9}\) & 244.4 \(\pm\) 5.4 \\ P\_3.53a & 216.0\({}^{+8.1}_{-8.1}\) & 160.0 \(\pm\) 92.2 \\ P\_3.81a & -201.9\({}^{+2.0}_{-2.0}\) & -229.1 \(\pm\) 5.1 \\ P\_3.88a & 134.0\({}^{+1.7}_{-1.2}\) & 174.2 \(\pm\) 8.0 \\ P\_3.90a & -85.0\({}^{+1.8}_{-2.5}\) & -133.6 \(\pm\) 3.4 \\ P\_3.98a & 35.0\({}^{+3.4}_{-3.4}\) & -17.6 \(\pm\) 8.3 \\ P\_4.06a & -205.9\({}^{+0.7}_{-1.5}\) & -225.9 \(\pm\) 8.6 \\ P\_4.10a & 208.0\({}^{+4.1}_{-4.6}\) & 197.9 \(\pm\) 24.9 \\ P\_4.36a & 235.0\({}^{+2.3}_{-2.3}\) & 196.9 \(\pm\) 7.5 \\ P\_4.41a & -34.0\({}^{+2.6}_{-2.3}\) & -101.8 \(\pm\) 3.0 \\ P\_4.47a & 214.3\({}^{+30.3}_{-2.1}\) & 199.5 \(\pm\) 17.5 \\ P\_4.73a & -313.7\({}^{+1.9}_{-1.7}\) & -329.6 \(\pm\) 6.7 \\ P\_5.17a & -165.9\({}^{+2.7}_{-1.5}\) & -193.3 \(\pm\) 11.0 \\ P\_5.42a & 107.0\({}^{+1.1}_{-3.7}\) & 112.2 \(\pm\) 6.3 \\ \hline \end{tabular}
\end{table}
Table 2: Radial velocities of all objects. RV\({}_{\mathrm{spec}}\) are the results from CCF maximization using the spectra obtained in this work; RV\({}_{\mathrm{phase,E21}}\) are those calculated using the best fit orbital solutions from Table 3 of E21. We find typical disagreements between the two RVs of \(\sim 30\) km/s which can likely be attributed to the improved stability in the RV zeropoints of the ESI spectra compared to the lower resolution spectra from E21.
Figure 2: Difference in the \(v\sin i\) calculated from \(2\pi R/Psin(i)\) using the values derived in E21 and that calculated with \(\chi^{2}\) minimization. The points in red indicate objects for which no good solution was found using the \(\chi^{2}\) method - they have been placed at 0 and their error bars have been arbitrarily extended to emphasize this. We see that most objects show an agreement of the two values to \(\sim 30\) km/s, suggesting that orbital parameters from E21 are reasonable estimates.
too-cool assumed \(T_{\rm eff}\) is the reason for the deeper metal lines in these objects.
An alternative explanation for the deeper-than-expected Mg lines in these sources is surface helium enhancement of the donors. Evidence of enhancement has been observed in the donors of some other evolved CVs (e.g. Harrison, 2018), and some enhancement is also predicted by evolutionary models (Section 4). Letting support to the helium enhancement hypothesis, the same objects that have deeper-than-expected Mg lines also have deeper He I lines at 5876A than predicted by the fiducial spectral models (Figure 3).
To analyze these objects and assess the sensitivity of Na abundance measurements to the assumed helium abundance, we generated several grids of models (with the same spacing in \([Na/H]\) and \(T_{\rm eff}\) as before) with surface He mass fractions, \(Y\), of 0.569 and 0.868 (compared to the solar value of 0.254). We plot the models with \(Y=0.868\) in Figure 1 for the four objects and we see that these have deeper Mg I triplet lines than the model with \(Y=0.254\), in better agreement with the observed spectra. Thus, we carry out the calculation of Na abundances for these objects in Section 3.3 using these He enhanced models.
For the remaining objects, the good agreement of the observed spectra and models with solar helium abundances rules out strong helium enhancement, so we use the solar models. However, it is possible that some modest enhancement could go unrecognized. We explore how unaccounted-for He enhancement would affect our inferred Na abundances in Appendix E, where we conclude that the uncertainty in our inferred Na abundances associated with possible He enhancement is typically about 0.1 dex.
### Abundance calculations
We calculate the EWs of the same Na lines for all model spectra with different abundances at each \(T_{\rm eff}\). This is plotted on Figure 5. At all \(T_{\rm eff}\), we find that with more Na (i.e. higher [Na/H]), the EW of its line is greater. However, the sensitivity of this relation depends strongly on the \(T_{\rm eff}\) whereby the overall slope gets less steep at higher temperatures as more of the Na is ionized.
For each object, we derive the relation between EW and abundance at its temperature \(T_{\rm eff,0}\) using interpolation between the relations for two models lying above and below \(T_{\rm eff,0}\). The corresponding abundance is then obtained at the EW of the line measured from its spectra. The error is calculated by propagating the errors in the EW and \(T_{\rm eff}\). We note that the uncertainty in \(T_{\rm eff}\) obtained from SED fitting (listed on Table 2 of E21, ranging from \(\sim 50-100\) K) is likely
Figure 3: The Na \(5900\AA\) doublet for all targets (for one of the two orders in which it is found). The region of the spectra over which the continuum is calculated is in gray, with the resulting first order polynomial fit in the black dashed line. The orange is the region considered to be occupied by the doublet and that under which the integrals, and thus EW, is calculated. A model spectra with \(T_{\rm eff}\) matching each target (with [Na/H] = 0.0 dex) is also shown in blue for reference. The main takeaway from this figure is that the observed lines are noticeably deeper than those of the solar abundance model, indicating enhancement.
underestimated due to various systematics so we instead take it to be 100 K for all objects.
The resulting plots are shown for all objects in Figure 6. Green plots indicate that that the objects are being compared to He enhanced models with \(Y=0.868\), as discussed in Section 3.2. The abundances with their errors are also summarized in Table 3. We see that all objects show some level of Na enhancement with [Na/H] values ranging from \(\sim\) 0.4 - 1.6 dex, with a median value at a significant 1.024 dex i.e. about an order of magnitude enhancement compared to solar value.
### Non-LTE corrections
In generating the curves of growth (Figures 5 and 6) used to calculate the Na abundances of our objects, we neglected deviations from the standard assumption of local thermodynamic equilibrium (LTE) used in the model atmospheres. It has been found that the inclusion of non-LTE (NLTE) effects can increase the EWs of Na lines at fixed [Na/H] (e.g. Mashonkina et al., 2000; Lind et al., 2011, 2022). To avoid overestimating the abundances, we apply NTLE corrections to our abundances using the results from Lind et al. (2022).
At the measured EWs of all objects (sum of the 5890 and 5896 A lines), we calculate the difference between the predicted NLTE and LTE abundances, interpolated to their effective temperatures. We used values for [Fe/H] = 0.0 dex, log(g) = 5.0, and microturbulence of 2 km/s. The thus-inferred NLTE corrections are shown as a function of temperature in Figure 7. The mean correction is \(-\)0.126 dex, and the correction is more important at higher temperatures. We add these corrections to our previously measured abundances in Section 3.3. These "NLTE-corrected" abundances can be found on Table 3. These values range from \(\sim\) 0.3 - 1.5 dex, with a median value of 0.956 dex, and will be used when referring to [Na/H] in the rest of the paper.
We emphasize that these are approximate corrections, done to avoid overstating the Na enhancements. The LTE curves of growth from Lind et al. (2022) are similar but not identical to those shown in Figure 5 as the synthetic spectra were not calculated with the same code. For several objects, indicated with blue stars on Figure 7, our measured EWs exceeded the maximum value calculated by Lind et al. (2022) (which corresponded to [Na/H]=0.7) so we simply used the NLTE correction at the maximum value. However, as the absolute differences get smaller towards larger EWs, this only results in more conservative values for the final abundances. Lastly, it should be noted that as discussed in Section 3.2, some of our objects were modelled with He enhancement but this was not taken into account in calculating the NLTE corrections.
Figure 4: EWs of the Na 5900Å doublet for all objects, plotted against their IDs (i.e. their orbital periods, _left_) and their temperatures (_right_). The gray points with the circle and triangle markers are the EWs of the line in each of the two orders while the red points with the star markers takes their weighted average. The solid, dash-dot, and dotted blues lines on the figure to the right plots the EW against temperature of model spectra with [Na/H] = 0.0, 1.0, and 1.3 dex respectively. All objects lie above the solar abundance line, meaning that they all show some level of enhancement, while more than half show \(\gtrsim\) an order-of-magnitude enhancement.
Figure 5: EWs of the Na 5900Å doublet against the Na abundances for model spectra with a range of effective temperatures. At a given \(T_{\rm eff}\), we see that EW increases monotonically with [Na/H]. The slope gets shallower at higher \(T_{\rm eff}\) as more Na gets ionized.
## 4 MESA Evolutionary Models
We ran MESA models (Paxton et al., 2011, 2013, 2015, 2018, 2019) to study the evolutionary history of the evolved CVs and to test whether they predict the observed overabundance of Na. Similar to the models used in E21 (described in greater detail in El-Badry et al., 2021a and summarized here), we only follow the evolution of the donor/secondary and model the WD companion/primary as a point mass. For the mass transfer rates, we use those of optically thick Roche lobe overflow described by Kolb & Ritter (1990). We consider fully non-conservative mass transfer meaning that all of the mass transferred to the primary is instantaneously ejected from its vicinity as fast winds, described by the \(\beta\) parameter being set to 1 (note that MESA uses \(\beta\) as defined by Tauris & van den Heuvel 2006 which is 1-\(\beta\) as defined earlier by Rappaport et al. 1982). This is a reasonable assumption in the case of CVs where WDs periodically lose accreted mass by classical novae explosions on timescales short to those over which the orbit evolves. We include the torque applied by magnetic braking as described by equation 36 of Rappaport et al. (1983) and set the gamma exponent to 3. The strength of magnetic braking in CVs at the periods of our sample are uncertain e.g. (e.g. El-Badry et al., 2022), but the adopted magnetic braking law primarily affects how quickly CVs evolve, not their abundances at a fixed evolutionary state.
We made several modifications compared to the MESA models described by E21. Firstly, we used a newer version of MESA (r22.05.1) which called for changes to the options in the'star' module of the secondary. To initialize it as a ZAMS star with solar abundances from Grevesse & Sauval (1998), we used the set_uniform_initial_composition option and manually defined the initial mass fractions of H, He, and metals (0.70, 0.28, 0.02 respectively). We also turned on rotation with the surface velocity set by tidal synchronization (which applies at the short orbital periods of our objects). Most importantly, MESA's default nuclear reaction network (basic.net) does not include Na and thus we instead use the network sagb_NeNa_MgAl.net which follows 22 isotopes from \({}^{1}\)H to \({}^{27}\)Al, including \({}^{21-23}\)Na. It uses rates provided by the JINAA credible database V2.0 (Farmer et al., 2015). The transitions between the CNO, NeNa, and MgAl cycles are depicted in Figure 1 of Boeltzig et al. (2016). The step producing \({}^{23}\)Na is proton capture by \({}^{22}\)Ne or in shorthand notation, \({}^{22}\)Ne(p,\(\gamma\))\({}^{23}\)Na. The NeNa and MgAl cycles are often discussed in the case of AGB stars undergoing Hot Bottom Burning (HBB) where they are activated at the high temperatures reached at the base of the convective layer (Izzard et al., 2007) as well as in novae outbursts (Jose et al., 1999), but \({}^{23}\)Na is generically
Figure 6: Plots of EW against [Na/H] obtained by interpolating those in Figure 5 to the temperature of each object. The red lines indicate extrapolation, where the calculated EW of the object is above the maximum value that was generated. The blue and green lines corresponds to models with solar and enhanced He mass fractions. Most objects have [Na/H] ranging from \(\sim\) 0.4 - 1.3 dex, with the exception of P_3.88 with a highly extrapolated value of 1.576 dex.
produced as long as sufficiently high temperatures are reached. If the NeNa cycle produces excess Na in the cores of CV donors during their main-sequence evolution, the resulting Na enhancement can be observed on the stars' surfaces after their outer layers are stripped.
We calculated models with initial donor masses of \(1.0\,M_{\odot}\) and \(1.5\,M_{\odot}\), assuming a WD mass \(M_{\rm WD}=0.7\,M_{\odot}\) in all cases. For each \(M_{\rm donor}\), we ran models over a grid of initial periods, \(p_{0}\) (ranging from \(\sim 0.5-5\) days), to find the initial periods which would produce systems with properties close to those of the evolved CVs in our sample. We did not consider lower initial donor masses here because such donors would take more than a Hubble time to complete their main-sequence evolution and become evolved CVs. Meanwhile, donors with higher initial masses would undergo a phase of thermal-timescale mass transfer shortly after initial Roche lobe overflow, resulting in a high mass transfer rate and stable H burning on the surface of the accreting WD (e.g. Schenker et al., 2002). This would lead to an increase in the WD's mass that would not be reliably followed by our calculations, which assume mass transfer is fully non-conservative on long timescales.
Figure 8 shows the evolution of several basic parameters of the donor - the mass, effective temperature, Roche lobe filling factor, mass loss rate, and total hydrogen mass - as a function of the orbital period for a range of initial periods. For reference, we also add tracks for a "normal" (i.e. not evolved) CV (black dashed lines) with initial with \(M_{\rm donor}=0.4\,M_{\odot}\) and \(p_{0}=0.3\) days. With increasing \(p_{0}\), the donor takes longer to become Roche lobe filling so that it is more evolved at the onset of mass transfer (although this figure is restricted to the short orbital periods of our objects where most models are already mass-transferring i.e. R/\(R_{\rm Roche,blue}\sim 1\) and log \(\dot{M}_{\rm donor}<0\), plotted on the third and fourth panels respectively). Therefore, systems with a longer \(p_{0}\) end up reaching higher effective temperatures at short periods \(\lesssim 5\) h, as seen on the second panels. The last panels show that they are also left with smaller total masses of remaining hydrogen as they have undergone more H core burning while on the main sequence.
For initial \(M_{\rm donor}=1.0\,M_{\odot}\), models with \(p_{0}\gtrsim 2.5\) days remain mass transferring across the period gap (\(P_{\rm orb}\approx 2-3\) hours, where the normal CV (black, dashed) detach due to the donor becoming fully convective and magnetic braking weakening). This reflects the fact that these donors are sufficiently evolved for their helium-enriched cores to remain radiative so that they do not experience a disruption of magnetic braking at \(P_{\rm orb}\approx 3\) hours and thus have continuous mass transfer. These objects also reach very short periods \(\lesssim 1\) h where they would be observed as AM CVn binaries (e.g. Podsiadlowski et al., 2003). The model with \(p_{0}=2.25\) days has a somewhat less evolved donor that does detach from its Roche lobe when it becomes fully convective, but this only occurs at \(P_{\rm orb}\sim 2\) h - shorter than the 3 hour period where normal CVs with fully unevolved donors detach. We see that while a larger range of initial periods remain in contact across the gap, only the models with \(3.0\lesssim p_{0}\lesssim 3.2\) days reach the range of effective temperatures at the observed periods of our sample, also plotted on the figure (\(T_{\rm eff}\sim 4500-7500\) K at \(P_{\rm orb}\sim 2-6\)). Beyond \(p_{0}\approx 3.2\) days, the donor leaves the main sequence before mass transfer starts (described further below for the \(1.5\,M_{\odot}\), \(p_{0}=0.85\) case).
For initial \(M_{\rm donor}=1.5\,M_{\odot}\), the range of initial periods forming evolved CVs that reach the temperatures of our objects is shorter and narrower, from \(\sim 0.7-0.8\) days. This is expected because more massive stars are hotter and have thinner convective envelopes. This means that magnetic braking is weaker so the stars must be initially closer together to begin mass transfer as the donors leave the main sequence. Furthermore, they have shorter lifetimes and evolve more
\begin{table}
\begin{tabular}{c c c c} \hline ID & EW [Å] & [Na/H] & [Na/H]\({}_{\rm NLTE}\) \\ \hline P\_2.00a & \(0.808\pm 0.429\) & \(1.029\pm 0.526\) & \(0.777\pm 0.526\) \\ P\_2.74a & \(16.176\pm 0.251\) & \(1.335\pm 0.197\) & \(1.296\pm 0.197\) \\ P\_3.03a & \(4.043\pm 0.231\) & \(1.090\pm 0.105\) & \(1.016\pm 0.105\) \\ P\_3.06a & \(5.014\pm 0.174\) & \(1.275\pm 0.104\) & \(1.201\pm 0.104\) \\ P\_3.13a & \(4.177\pm 0.078\) & \(1.136\pm 0.076\)\({}^{*}\) & \(1.016\pm 0.076\)\({}^{*}\) \\ P\_3.21a & \(3.074\pm 0.258\) & \(1.114\pm 0.108\)\({}^{*}\) & \(0.964\pm 0.108\)\({}^{*}\) \\ P\_3.43a & \(3.723\pm 0.138\) & \(0.676\pm 0.085\) & \(0.583\pm 0.085\)\({}^{*}\) \\ P\_3.48a & \(1.936\pm 0.111\) & \(0.843\pm 0.096\) & \(0.736\pm 0.096\) \\ P\_3.53a & \(6.566\pm 0.098\) & \(1.014\pm 0.127\) & \(0.970\pm 0.127\) \\ P\_3.81a & \(2.102\pm 0.366\) & \(1.118\pm 0.177\) & \(0.995\pm 0.177\) \\ P\_3.88a & \(6.251\pm 0.166\) & \(1.576\pm 0.116\) & \(1.502\pm 0.116\) \\ P\_3.90a & \(1.941\pm 0.055\) & \(1.024\pm 0.078\)\({}^{*}\) & \(0.822\pm 0.078\)\({}^{*}\) \\ P\_3.98a & \(6.298\pm 0.168\) & \(0.699\pm 0.142\) & \(0.653\pm 0.142\) \\ P\_4.06a & \(0.581\pm 0.142\) & \(0.608\pm 0.265\) & \(0.328\pm 0.265\) \\ P\_4.10a & \(7.424\pm 0.073\) & \(0.475\pm 0.162\) & \(0.442\pm 0.162\) \\ P\_4.36a & \(4.260\pm 0.169\) & \(1.220\pm 0.098\) & \(1.143\pm 0.098\) \\ P\_4.41a & \(0.834\pm 0.111\) & \(0.565\pm 0.141\) & \(0.397\pm 0.141\) \\ P\_4.47a & \(1.692\pm 0.990\) & \(1.316\pm 0.506\) & \(1.152\pm 0.506\) \\ P\_4.73a & \(0.833\pm 0.110\) & \(0.987\pm 0.143\) & \(0.756\pm 0.143\) \\ P\_5.17a & \(3.054\pm 0.259\) & \(1.048\pm 0.115\) & \(0.956\pm 0.115\) \\ P\_5.42a & \(0.908\pm 0.057\) & \(0.869\pm 0.091\) & \(0.668\pm 0.091\) \\ \hline \end{tabular}
\end{table}
Table 3: Table of the EWs of the 5900Å doublet with the derived [Na/H] from Figure 6. The [Na/H]\({}_{\rm NLTE}\) has been corrected for NTLE effects using values from Figure 7, calculated using curves of growth provided in Lind et al. (2022). [Na/H] values were calculated using solar He abundance (Y = 0.254) models, except those marked with an asterisk (*) which were calculated using He enhanced (Y = 0.868) models. [Na/H] of objects range from \(\sim 0.4\) - 1.6 dex, with a median value of 1.026 dex. Meanwhile, [Na/H]\({}_{\rm NLTE}\) range from \(\sim 0.3\) - 1.5 dex, with the median at 0.956 dex.
Figure 7: The difference in the Na abundances implied from EWs of the Na 5900 Å doublet of synthetic spectra with and without NLTE effects, as a function of effective temperature for all objects. The blue stars indicate objects whose EWs were greater than the maximum value provided by Lind et al. (2022) for which the difference plotted is that at the maximum value. In all cases, we see that inclusion of NTLE effects leads to a reduction in the [Na/H], with a mean value of -0.126 dex, so neglecting them will result in an overestimate of the Na enhancements.
quickly off the main sequence. The \(p_{0}=0.6\) model is a case where the donor is hardly evolved and thus its evolution closely follows that of a normal CV. Meanwhile, the \(p_{0}=0.85\) model represents a system that is above the"bifurcation limit" where the donor has had enough time for it to ascend the giant branch by the time it begins mass transfer (Podsiadlowski et al., 2003). By the time the system reaches the period range of our objects, the donor has evolved into a WD and is on the cooling track, with gravitational radiation slowing bringing it closer to its companion until they are brought back in contact at very short periods. E21 also tested the effect of altering other initial parameters including the metallicity and the exponential overshooting parameter but found that, given the same degree of donor evolution at the onset of mass transfer, these parameters only modestly change the mass of the donor and qualitative evolutionary tracks at short periods.
Figure 9 shows surface abundances of the donor for several elements against the orbital period. Once again, we add the plots for a normal CV in all panels which remain approximately constant at 0.0 dex (i.e. solar) for all elements. For the [Na/H] panel, we also plot points for the measured abundances of our objects on top of the models.
We see that \({}^{4}\)He, \({}^{14}\)N, and \({}^{23}\)Na are enhanced while \({}^{12}\)C and \({}^{16}\)O are depleted at the observed periods. The He enhancement can be explained simply by the fact that we are seeing the core of an evolved donor that has undergone significant H burning and whose envelope has been stripped away as a result of the mass transfer. This also supports the possibility that enhanced He may be what is responsible in deepening of the magnesium lines (Section 3.2; we note that \({}^{24}\)Mg is one stable element in the MgAl cycle - while its abundance was traced in several models, there were no significant changes from the initial solar values as a result of this process). The abundances of C, N, and O can be explained by looking at the reactions in the CNO cycle of which \({}^{14}\)N(p,\(\gamma\))\({}^{15}\)O is the slowest. This results in a build up of \({}^{14}\)N and a corresponding exhaustion of C (and O to a lesser extent). As described earlier, the Na enhancement is likely the result of advanced H burning which only occur in the high temperatures of evolved stars. From the plots of [Na/H], we see that the abundance at \(P_{\rm orb}\sim 2-6\) h is sensitive to the initial period. Thus, for a given initial
Figure 8: MESA calculations for initial donor masses of \(1.0\,M_{\odot}\) (Left) and \(1.5\,M_{\odot}\) (Right), for a range of initial periods, \(p_{0}\). The figures plot the mass, effective temperature, Roche-Lobe filling factor, mass transfer rate, and remaining hydrogen mass of the donor/secondary star against the orbital period. We also add the tracks for a normal CV with an initial donor mass of \(0.4\,M_{\odot}\) and \(p_{0}=0.3\) days (black dashed line). The effective temperatures of our objects are also plotted over the tracks for reference (blue and green points). We see that these temperatures are significantly higher than that of a normal CV. For the \(1.0M_{\odot}\) model, the \(p_{0}\) range which result in the temperatures of our objects (\(\sim 4500-7500\) K) at their orbital periods (\(\sim 2-6\) h) is \(\sim 3.0-3.2\) days. For the \(1.5\,M_{\odot}\) model, this range is \(\sim 0.7-0.8\) days.
donor mass, we are only able to obtain a maximum predicted value of [Na/H] (corresponding to the longest \(p_{0}\) i.e. the most evolved model) at a specified orbital period. For \(M_{\rm donor}=1\ M_{\odot}\), this is at [Na/H] \(\sim 0.6\) dex and for 1.5 \(M_{\odot}\), this is greater at [Na/H] \(\sim 0.7\) dex at \(P_{\rm orb}\sim 4\) h. Thus, a system with an initially more massive secondary (within the mass range that leads to stable mass transfer) results in a greater Na enhancement at the observed periods as it reaches higher temperatures and thus undergoes more nuclear burning which produce more Na.
We see that a majority of the measured abundances of our objects lie above the maximum values, so while we do expect a significant Na enhancement based on the evolved CV models, they under-predict its magnitude compared to observations. There are several possible reasons for this discrepancy. Firstly, the rate of the \({}^{22}\)Ne(p,\(\gamma\))\({}^{23}\)Na reaction have particularly large uncertainties compared to others in the NeNa cycle, stemming from contributions from possible low-energy resonances whose existence and properties have been debated in the literature (for further discussions on this topic, refer to Izzard et al., 2007; Iliadis et al., 2010; Kelly et al., 2017). JINA reachib V2.0 uses rates from Iliadis et al. (2010) (see their Table 2.20). Fortunately, since the dependence of the yield of \({}^{23}\)Na on the reaction rate is not linear and becomes relatively flat at fast rates as the \({}^{22}\)Ne fuel gets used up, the magnitude of the uncertainty in the rate is not directly carried over to the yield (see Izzard et al., 2007 for a more detailed explanation). Nevertheless, the \({}^{23}\)Na abundance calculated by MESA and discussed throughout should only be taken to be accurate to within a factor of a few.
Also, it is seen that irrespective of initial mass, the Na abundance plateaus at \(P_{\rm orb}\lesssim 2\) h. This is likely to be the result of a nuclear statistical equilibrium being reached which prevents more Na from being produced in the outer layers than is initially produced at the core when it reaches the necessary temperature for the NeNa cycle to begin. This is further discussed in Appendix F by looking at radial profiles of the abundances at several points in the evolution.
Lastly, it is possible that our measured abundances are overestimated, which can occur if the donor temperatures are underestimated or if there is He enhancement that was unaccounted for in some objects, as discussed in Section 3.2 and Appendix E. Another possibility is that there are other physics that we have neglected that further
Figure 9: Surface abundances of \({}^{4}\)He, \({}^{12}\)C, \({}^{16}\)O, \({}^{14}\)N, and \({}^{23}\)Na against orbital period for the MESA models in Figure 8. The points in the plots of \({}^{23}\)Na abundance are those of the objects from this paper ([Na/H]\({}_{\rm NLTE}\) from Table 3) to serve as comparison to those predicted by the models (the blue and green points indicate abundances calculating using solar He abundance (Y = 0.254) and He enhanced (Y = 0.868) models, respectively). It is seen that while the MESA models do predict significant Na enhancements, they generally underpredict them compared to observed values.
the Na enhancement of evolved CVs. We explore one such process, namely rotational mixing, in the following section.
### Rotational Mixing
Rotation effects in massive stars have been studied extensively in the literature (see Maeder 1998 for an overview of the various physical processes considered in models of rotating massive stars). Meanwhile, there has been comparatively less work done on this topic in lower mass stars. One reason for this is that the original treatment of mixing in rotating stars, the so-called Eddington-Sweet circulation, has a characteristic timescale \(\propto\tau_{\rm KH}/\epsilon\) where \(\tau_{\rm KH}\) is the Kelvin-Helmholtz timescale and \(\epsilon\) is the ratio of centrifugal to gravitational acceleration (Eddington 1925; Mestel & Moss 1986). This timescale is short and thus this process is very important for fast-rotating massive stars but less so for solar-type stars. However, more recent studies have shown that its effects may be significant even for lower mass stars (Palacios et al. 2003; Chatzopoulos et al. 2012; Istrate et al. 2016). In particular, mixing may change stellar abundances by allowing for advanced nuclear processing in a greater portion of the star, or by counteracting the effect of gravitational settling so that more heavier elements can be found in the outer layers (e.g. Istrate et al. 2016 used MESA models to study this effect in proto-ELM WD binaries and found that it is important in their early stages). Furthermore, there can be contributions from other types of mixing such as the Spruit-Taylor magnetic diffusion (fluid motion resulting from toroidal magnetic field instabilities in differentially rotating stars (Tayler 1973; Spruit 1999); e.g. Chatzopoulos et al. 2012 considered this mechanism in explaining carbon deficits observed in some compact object - solar-type binaries).
Therefore, we also ran a few models including rotational mixing. This is implemented in MESA with am_D_mix_factor (i.e. the rotational diffusion coefficient) which we set to the commonly used value of 1/30 from Heger et al. (2000). We also set D_ES_factor and D_ST_factor to one, corresponding to the inclusion of both the Eddington-Sweet and Spruit-Taylor mixing processes. As an in-depth exploration of rotational effects is beyond the scope of this paper, we just focus on a single comparison of models with mixing turned on and off. Because the timescale of main sequence evolution is highly sensitive to any changes in the internal structure of the stars, the inclusion of rotational mixing has a significant effect on how evolved the donor is when mass transfer begins. Thus, rather than comparing models with the same initial periods, with and without mixing, we should compare models that have similar trends in the effective temperature (i.e. are at similar evolutionary stages at fixed orbital period).
Figure 10 plots the evolutionary tracks of reference (i.e. no mixing) models with initial periods \(p_{0}=2.9948\) and 3.15 (also plotted on Figures 8 and 9), as well as a model again with \(p_{0}=2.9948\) but with mixing. From the panels on the left showing several basic donor parameters, we find that for a given initial period, the inclusion of mixing pushes the donor towards becoming more evolved in a shorter amount of time. In other words, the \(T_{\rm eff}\) curves of the \(p_{0}=2.9948\) model is raised upwards to almost overlap that of the \(p_{0}=3.15\) model.
Now comparing the plots of surface abundances to the right of the reference \(p_{0}=3.15\) model against the \(p_{0}=2.9948\) model with mixing, we see that there is little difference between them across all elements. In particular, there is no consistent rise in the Na abundance. Any small differences can be explained by the fact that they are not identically evolved and as they are smaller than most of the error bars in the measured abundances of our objects, such differences are not enough to account for the discrepancy between the models and observations. Thus, we find that given donors at a similar evolutionary stage, mixing does not have a significant effect on the surface abundances at the orbital periods of our interest.
Moreover, we note that we did not include element diffusion processes (e.g. gravitational settling) which, as explored by Istrate et al. (2016), work against rotational mixing, meaning that any impact on the abundances found here are likely optimistic. For completeness, we did run several models with element diffusion (implemented on MESA with do_element_diffusion) but found that once again, given donors that are similarly evolved, it has little effect on the predicted surface abundances.
### Conclusions
We have obtained follow-up high resolution spectroscopy of the 21 CVs with evolved donors first studied by E21. The donors in these systems first overflowed their Roche lobes near the end of their main-sequence evolution and are thus predicted to have helium-enriched cores and anomalous surface abundances. Our goal was to measure the surface Na abundances of the donors to test for enhancement in Na from nuclear burning, which occured when the material now in the donors' photospheres was inside their convective cores. We also ran evolutionary models to find the range of initial parameters that lead to the formation of evolved CVs and test if they are able to explain the observed abundances. Our main findings are as follows:
1. _Na enhancement_: By measuring equivalent widths (EWs) of the Na 5900 A doublet from the observed spectra and comparing them to those of synthetic spectra generated over a grid of Na abundances (Figure 3), we calculated [Na/H] values for the objects. As described in Section 3.4, we also corrected for non-LTE effects. We find \(0.3\lesssim\) [Na/H] \(\lesssim 1.5\) dex with a median value of 0.95 dex (i.e. nearly an order of magnitude enhancement compared to solar; Table 3). Our study thus shows that surface Na enhancement, which has previously been detected in several CVs, is generically found in CVs with evolved donors.
2. _Evolutionary models for evolved CVs_: We ran binary models using MESA to investigate the initial binary parameters that produce evolved CVs similar to those in the observed sample. We modelled the WD as a point mass of 0.7 \(M_{\odot}\) and tested initial donor masses of 1.0 and 1.5 \(M_{\odot}\) over a range of initial periods (Section 4). We found that for initial \(M_{\rm donor}=1.0\)\(M_{\odot}\), initial periods of \(3.0\lesssim p_{0}<3.2\) days produced evolved donors with effective temperatures similar to the observed systems. For \(M_{\rm donor}=1.5M_{\odot}\), the required period range was \(0.7\lesssim p_{0}<0.9\) days, which is shorter and narrower (Figure 8).
3. _Na enhancement predicted by MESA models_: Models of evolved CVs predict a significant enhancement in the surface Na abundance, consistent with our observations. At the orbital periods of the observed systems, the predicted amount of enhancement is sensitive to both the initial period and initial donor mass. The ubiquitous Na enhancement of evolved CV donors can thus likely be understood as a consequence of nuclear processing in the donors during their main-sequence evolution. Models with higher initial donor masses generally predict stronger Na enhancement, while models for "normal" unevolved CVs predict no enhancement (Figure 9).
4. _Connection to Future Modeling_: A majority of the observed donors have higher inferred Na abundances than predicted by the MESA models. Possible sources of this discrepancy include the large uncertainty in the reaction rate of the step producing \({}^{23}\)Na in the NeNa cycle, or missing physical processes (mixing and diffusion are
described below). These may be worth further investigation when constructing future models. The discrepancy could also be due to an overestimation of the calculated abundances which could result from an underestimation of the donor effective temperature or unaccounted for He enhancement (see Section 3.2 and Appendix E).
5. _Effects of rotational mixing_: We explored the possibility of additional enhancement through rotational mixing by turning on Eddington-Sweet circulation and Spruit-Taylor instabilities in our MESA models. We found that at a given initial donor mass, mixing pushed the donor to become evolved more quickly. However, given donors at the same evolutionary stage (i.e. similar curves of effective temperature at a given period), mixing had very little effect on the predicted Na abundances (Section 4.1). Therefore, while mixing does have an effect on the evolutionary path of a system given the same initial conditions, it is unlikely to be able to account for the extra Na enhancement seen in our objects.
## Acknowledgements
We thank Shrinivas Kulkarni for help acquiring observational data, and John Thorstensen, Tom Marsh, and the ZTF stellar variables group for helpful discussions. We also thank the anonymous referee for the detailed feedback.
The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
Based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grant No. AST-2034437 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of
Figure 10: Results of MESA models for initial \(M_{\rm donor}=1.0M_{\odot}\) with (blue dashed) and without (blue solid) rotational mixing for \(p_{0}=2.9948\), as well as for \(p_{0}=3.15\) without mixing (black solid). We see that the blue dashed and black solid lines have almost overlapping \(\rm T_{\rm off}\) curves meaning that they are similarly evolved. Thus, at a given initial period, mixing causes the donor to become more evolved in a shorter period of time. We also find that there is very little difference in the black solid and blue dashed lines on the plot of [Na/H] on the right. Therefore, at a given evolutionary stage of the donor, mixing does not have a significant effect on the predicted Na abundance.
Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, and IN2P3, France. Operations are conducted by COO, IPAC, and UW.
## Data Availability
The data underlying this article are available upon reasonable request to the corresponding author.
|
2308.02588 | Unmasking Parkinson's Disease with Smile: An AI-enabled Screening
Framework | Parkinson's disease (PD) diagnosis remains challenging due to lacking a
reliable biomarker and limited access to clinical care. In this study, we
present an analysis of the largest video dataset containing micro-expressions
to screen for PD. We collected 3,871 videos from 1,059 unique participants,
including 256 self-reported PD patients. The recordings are from diverse
sources encompassing participants' homes across multiple countries, a clinic,
and a PD care facility in the US. Leveraging facial landmarks and action units,
we extracted features relevant to Hypomimia, a prominent symptom of PD
characterized by reduced facial expressions. An ensemble of AI models trained
on these features achieved an accuracy of 89.7% and an Area Under the Receiver
Operating Characteristic (AUROC) of 89.3% while being free from detectable bias
across population subgroups based on sex and ethnicity on held-out data.
Further analysis reveals that features from the smiling videos alone lead to
comparable performance, even on two external test sets the model has never seen
during training, suggesting the potential for PD risk assessment from smiling
selfie videos. | Tariq Adnan, Md Saiful Islam, Wasifur Rahman, Sangwu Lee, Sutapa Dey Tithi, Kazi Noshin, Imran Sarker, M Saifur Rahman, Ehsan Hoque | 2023-08-03T18:23:37Z | http://arxiv.org/abs/2308.02588v1 | # Unmasking Parkinson's Disease with Smile: An AI-enabled Screening Framework
###### Abstract
Parkinson's disease (PD) diagnosis remains challenging due to lacking a reliable biomarker and limited access to clinical care. In this study, we present an analysis of the largest video dataset containing micro-expressions to screen for PD. We collected \(3,871\) videos from \(1,059\) unique participants, including \(256\) self-reported PD patients. The recordings are from diverse sources encompassing participants' homes across multiple countries, a clinic, and a PD care facility in the US. Leveraging facial landmarks and action units, we extracted features relevant to Hypomimia, a prominent symptom of PD characterized by reduced facial expressions. An ensemble of AI models trained on these features achieved an accuracy of \(89.7\%\) and an Area Under the Receiver Operating Characteristic (AUROC) of \(89.3\%\) while being free from detectable bias across population subgroups based on sex and ethnicity on held-out data. Further analysis reveals that features from the smiling videos alone lead to comparable performance, even on two external test sets the model has never seen during training, suggesting the potential for PD risk assessment from smiling selfie videos.
\({}^{1}\) Department of Computer Science, University of Rochester, United States
\({}^{2}\) Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Bangladesh
\({}^{3}\) Department of Neurology, National Institute of Neurosciences & Hospital, Bangladesh
## 1 Introduction
There is no reliable biomarker for diagnosing Parkinson's disease [23], the fastest-growing neurological disorder in the world [10]. In 2017, Parkinson's disease placed a significant economic burden of $52 billion in the United States [23]. Given the projected doubling of PD patients by 2040, the scenario will worsen significantly, surpassing $79 billion even without accounting for inflation [23]. The significance of timely diagnosis and regular checkups for PD cannot be overstated, as they can improve the quality of a patient's life and reduce the burden of neurological care [13]. However, there are challenges, particularly for the elderly population who are more likely to be diagnosed with PD [10]. Consider the scenario of an individual in their sixties or seventies experiencing impaired cognitive and physical ability, leading to immobility. Accessing timely clinical diagnosis becomes challenging if they reside in a remote area where the nearest clinic is not within reasonable driving distance. Furthermore, in many developing or underdeveloped countries, there is a significant scarcity of neurologists (e.g., in India, there were only 1200 neurologists available to serve over 1.3 billion people in 2013 [1].) This limited accessibility to clinical care results in many individuals remaining undiagnosed until the disease has progressed considerably, and the usefulness of medications available to manage involuntary tremors caused by PD becomes very limited.
At-home PD assessment has received significant attention from researchers in recent times [14, 15, 23]. One promising method involves analyzing nocturnal breathing signals obtained from radio waves reflected by the body. This approach has shown potential in detecting PD and could be a non-invasive and convenient way to monitor individuals at home for signs of the disease [23]. Another characteristic often associated with PD is Hypomimia, which refers to a reduction in facial expressions. People with PD may experience Hypomimia due to a decrease in dopamine synthesis critical for facial expression, caused by the loss of certain neurons [12, 13]. While Hypomimia is not exclusive to PD, it is often considered an early and sensitive biomarker for the disease and can be utilized for early screening [24, 15, 16, 17, 18, 19]. One notable advantage of assessing Hypomimia is that it can potentially be done with a computer or laptop that a participant may already have at home. Compared to measuring breathing signals which needs installation of sensors in people's homes, checking for Hypomimia through facial expressions is easier, more accessible, and can be used anywhere in the world.
In this study, we present an AI-based system capable of objectively quantifying signs of Hypomimia, and screening individuals for Parkinson's disease. We collected 3871 videos involving micro-expressions from 1059 global participants, where each participant was asked to mimic three different facial expressions: disgust, smile, and surprise on a web interface. Leveraging advanced computer vision tools like MediaPipe [19] and OpenFace [17], we analyze the facial landmarks and action units [1] to objectively quantify features of Hypomimia following the Movement Disorder Society-Sponsored Revision of the Uni
## 1 Introduction
Figure 1: **A brief overview of the AI-based system for classifying individuals with and without Parkinson’s disease.** Anyone can record three facial expressions (smile, disgust, and surprise) in front of a computer webcam. The system extracts facial keypoints for each frame using Openface and MediaPipe tools, and then computes features by summarizing the temporal dimensions with statistical aggregates. Top-\(n\) features (\(n\) is a hyperparameter) are then fed to an ensemble of support vector machines which finally decides whether the participant has PD or not.
fied Parkinson's disease Rating Scale (MDS-UPDRS). The large video dataset collected from a diverse study population aligns with our effort to make the benefits of the PD screening system available to everyone, irrespective of age, ethnicity, and sex. Testing the system on external cohorts in Bangladesh (representing South-East Asia, 8.5% of global population) and the United States (representing Northern America, 4.7% of global population) further validates its effectiveness across diverse demographics. The majority of our data are collected from participants' homes encompassing a diverse range of environments and recording devices, thus eliminating the need for inconvenient travel and reducing the possibility of data shift when deployed to be used from participants' homes. Using an ensemble of multiple AI models (i.e., support vector machine) that incorporate features from all three micro-expressions, our system aims to deliver a reliable, interpretable screening for Parkinson's disease from the comfort of home and empower potential patients to seek clinical care and effectively manage their PD symptoms. Further analysis suggests that comparable performance is obtainable by merely using the smile videos. In Figure 1, we present an overview of our proposed framework.
## 2 Results
### Data
In this research, we gathered video data from four distinct settings, comprising home-recorded videos from i) global participants (Home-Global)-primarily from North America; ii) a clinic at the University of Rochester Medical Center (Clinic); iii) a Parkinson's disease care facility in Ohio (PD Care Facility); and iv) home-recorded videos from Bangladesh, a country in Southeast Asia (Home-BD). Our study involved \(1059\) participants, with \(256\) identifying themselves as PD patients. Each participant was instructed to mimic three facial expressions (disgust, smile, and surprise) and return to a neutral face, repeating this process three times for each expression. Notably, some participants from the U.S. clinic contributed data on multiple occasions, resulting in \(1236\) videos from participants with PD and \(2665\) videos from participants without it. Table 1 provides details regarding the demographic characteristics of the entire participant set. Data collection was facilitated through a web-based platform named PARK [1]1, allowing participants to conveniently record themselves using their personal laptop's webcam. PARK was translated into the Bengali language to better instruct participants from Bangladesh. Data from Home-Global and PD Care Facility settings were used to train and evaluate the model on held-out data. Data collected in Clinic and Home-BD were solely used for external testing of the predictive model, and were never used during training. Supplementary information provides further specifics on the demographic properties of the data collected from each setting.
Footnote 1: [https://parktest.net/](https://parktest.net/)
### Facial Features as Potential Digital Biomarkers
The loss of facial expressivity, abnormal eye blinking frequency, and lack of spontaneity in smiles are key indicators associated with Parkinson's disease (PD) and are included in the MDS-UPDRS, a clinical guideline for PD assessment [14]. In this study, we aim to objectively and quantitatively capture these clinically relevant features using advanced facial landmark detection tools such as Mediapipe [10] and OpenFace [1]. We extract the intensity of specific facial action units [1] associated with each facial expression task [15]. For instance, when someone smiles, various action units 2 including inner brow raiser, cheek raiser, lip corner puller, dimpler, lips part, jaw drop, and eye blink may be activated. Moreover, certain action units like lips part, jaw drop, and eye blink can be activated by any of the three facial expressions used in this study. Additionally, we measure the degree of eye openness, raised eyebrow height for both the right and left eyes, mouth width and openness, and jaw openness using Mediapipe. These features have shown promise in previous literature for PD detection [14]. All of these features are computed for each frame of the recorded videos, and statistical aggregates (mean, variance, and entropy) are used to summarize the features across all frames. This allows us to train simple machine-learning models to assess PD. For
\begin{table}
\begin{tabular}{|p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}|} \hline
**Characteristics** & **With PD** & **Without PD** & **Total** \\ \hline \multicolumn{4}{|p{113.8pt}|}{Number of Participants, \(n\) (\%)} \\ \hline Sex, n (\%) & & & \\ Male & 150 (58.59\%) & 361 (44.96\%) & **511 (48.25\%)** \\ Female & 106 (41.41\%) & 442 (55.04\%) & **548 (51.75\%)** \\ \hline Age in years (range: 18 - 93.0, mean: 58.9) & n (\%) & & \\ \(<\)20 & 0 (0.06) & 43 (5.35\%) & **43 (4.06\%)** \\
20-39 & 4 (1.56\%) & 96 (11.96\%) & **100 (9.44\%)** \\
40-59 & 39 (15.23\%) & 216 (26.9\%) & **255 (24.08\%)** \\
60-79 & 198 (77.34\%) & 438 (54.55\%) & **636 (60.06\%)** \\ \(>\)=80 & 15 (5.86\%) & 10 (1.25\%) & **25 (2.36\%)** \\ \hline Race, n (\%) & & & \\ White & 116 (45.31\%) & 531 (66.13\%) & **647 (61.1\%)** \\ Asian & 15 (5.86\%) & 187 (23.29\%) & **202 (19.07\%)** \\ Black or African & & & \\ African & 3 (1.17\%) & 45 (5.6\%) & **48 (4.53\%)** \\ American & & & \\ Indian or Alaska & & & \\ Native & & & \\ Others & 3 (1.17\%) & 12 (1.49\%) & **15 (1.42\%)** \\ Not Mentioned & 118 (46.09\%) & 24 (2.99\%) & **142 (13.41\%)** \\ \hline Recording Environment, n (\%) & & & \\ Home-Global & 77 (30.08\%) & 616 (76.71\%) & **693 (65.44\%)** \\ PD Care Facility & 118 (46.09\%) & 24 (2.99\%) & **142 (13.41\%)** \\ Clinic & 47 (18.36\%) & 28 (3.49\%) & **75 (7.08\%)** \\ Home-BD & 14 (54.74\%) & 135 (16.81\%) & **149 (140.47\%)** \\ \hline \end{tabular}
\end{table}
Table 1: **Demographic information of the participants.**
each participant, we combine the task-specific statistical aggregates for all three tasks, resulting in \(126\) comprehensive features. These features are then utilized as inputs for the machine learning models designed to differentiate between individuals with and without PD.
Out of the \(126\) features, \(43\) features were significantly different across participants with and without PD (at significance level, \(\alpha=0.01\)). To find out the most powerful features for discriminating between participants with and without PD, we ran a logistic regression analysis on the entire dataset after normalizing the feature values and ranked the features based on the coefficient of logistic regression. Table 2 identifies the top-\(10\) most significant features.
### Predictive Performance on Held-out Data
With all three facial expressions:Among the videos collected in four different settings, we utilized the datasets from Home-Global and PD Care Facility to train and validate the PD prediction model. To train the model, we combined features from all three facial expression tasks so that each unique participant corresponds to a single data point, ensuring that training and test subjects remain separated. Any participant who did not provide video recordings for all three facial expressions was excluded from the analysis, resulting in \(827\) data points for training and testing. To assess model performance, we ran a \(k\)-fold cross-validation (\(k=10\)) with stratified sampling (i.e., maintaining the same ratio of participants with and without PD across the train and test sets). This approach is also known as stratified \(k\)-fold cross-validation and is preferred over traditional \(k\)-fold cross-validation when the dataset is imbalanced (Kohavici et al., 1995).
Out of several model choices, the best-performing model was an ensemble of \(21\) support vector machines (SVM). Note that these high-performing SVM models differ from each other by the hyper-parameters (including the set of features) they use and they were selected using a meticulous optimization via Bayesian hyper-parameter tuning (details in the Methods section). At each iteration of \(k\)-fold cross-validation, the entire ensemble model was trained on (\(k-1\)) folds out of \(k\) folds, while the remaining fold was held out for testing. \(k\) different iterations selected different folds as the test set, ensuring that the model was evaluated on all the data samples. Given the imbalanced nature of the dataset, we used minority oversampling (Chawla et al., 2002) on the training sets, while the test sets remained unchanged.
Our model exhibited robust performance based on numerous widely accepted metrics when tested on the held-out data (Figure 2). In differentiating between participants with and without Parkinson's disease, our ensemble model achieved an accuracy of \(89.72\%\), with an Area Under the Receiver Operating Characteristic Curve (AUROC) of \(89.31\%\). The model displayed a specificity of \(93.67\%\) and a sensitivity of \(76.92\%\). Additionally, the positive predictive value (PPV) of \(78.95\%\) and the negative predictive value (NPV) of \(92.94\%\) further reinforce the model's reliability in predictive performance.
With only the smile expressionWe also explored whether we can differentiate individuals with and without PD by looking at a single facial expression. Although the performance of the predictive model notably degraded while using only one of the three facial expressions, the performance remained competitive for the smile expression. We followed the same training and testing strategy as previously described but with a slight modification. Rather than utilizing features from all three facial expressions (disgust, smile, and surprise), we focused on using features from a specific facial expression in each experiment. For the smile expression, an ensemble of \(17\) SVM models performed the best, achieving an accuracy of \(87.30\%\) and an AUROC score of \(83.04\%\). In addition, the specificity, sensitivity, PPV, and NPV of the predictive model were \(91.93\%\), \(72.31\%\), \(73.44\%\), and \(91.50\%\), respectively. Unlike the smile expression, the performance of the predictive model was much worse when features from only disgust or surprise expressions were used. The accuracy and AUROC of the best predictive model were \(77.51\%\) and \(76.24\%\) respectively, when only disgust features were used. Similarly, the best model using only surprise features achieved an accuracy of \(77.03\%\) and an AUROC of \(72.45\%\).
Using principal component analysis (Bro and Smilde, 2014), we further observed that the features extracted from the smile task are relatively more separable (silhouette score (Rousseeuw, 1987) = \(0.18\)) between subgroups of participants with and without PD compared to the disgust (silhouette score = \(0.11\)) and surprise tasks (silhouette score = \(0.14\)), as shown in Figure 4. This may potentially explain why the model trained on smile expression features performed better than the other expressions.
### Generalization to External Test Data
We excluded data obtained from two settings, namely Clinic and Home-BD, from the training process of the model. Instead, we specifically reserved this data for external validation. Conducting external validation on data
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
**Expression** & **Feature** & **Statistic** & **p-value** & **Coefficient** & **Rank** \\ \hline \multirow{4}{*}{Smile} & \multirow{2}{*}{lip corner puller} & mean & \(10^{-24}\) & 203.99 & 1 \\ \cline{3-5} & & variance & \(10^{-7}\) & 65.42 & 7 \\ \cline{2-5} & mouth width & mean & \(10^{-8}\) & 102.67 & 4 \\ \hline \multirow{6}{*}{Digust} & check raiser & variance & \(10^{-4}\) & 42.62 & 8 \\ \cline{2-5} & upper lip raiser & mean & \(10^{-6}\) & 34.64 & 9 \\ \cline{1-1} \cline{2-5} & lid tightener & mean & \(10^{-6}\) & 27.17 & 10 \\ \cline{1-1} \cline{2-5} & lips part & variance & \(10^{-4}\) & 114.01 & 2 \\ \cline{1-1} \cline{2-5} & & mean & \(10^{-4}\) & 94.41 & 5 \\ \hline \multirow{2}{*}{Surprise} & \multirow{2}{*}{eye blink} & variance & \(10^{-6}\) & 108.80 & 3 \\ \cline{1-1} \cline{2-5} & & mean & \(10^{-5}\) & 85.13 & 6 \\ \hline \end{tabular}
\end{table}
Table 2: **Top-10 significant features for discriminating between individuals with and without PD. In the logistic regression model fitted with the complete dataset to classify individuals with or without Parkinson’s disease (PD), the coefficient represents the absolute contribution of a feature (when all the features are normalized), while the p-value indicates its statistical significance. Features are ranked based on the logistic regression coefficients.**
Figure 2: **PD screening from recorded facial expression videos.** ROC curves for differentiating between participants with and without PD (a) on held-out data with k-fold cross validation (\(n=827\)) using features from all three facial expressions, (b) on external test data collected at a U.S. clinic (\(n=75\)) using features from only the smile expression, and (c) on external home-recorded test data collected from Bangladesh (\(n=149\)) using features from only the smile expression. Similarly, the confusion matrix for differentiating between participants with and without PD (d) on held-out data with k-fold cross validation (\(n=827\)), (e) on external test data collected at a U.S. clinic (\(n=75\)), and (f) on external home-recorded test data collected from Bangladesh (\(n=149\)).
obtained from a clinic (located in New York, US) offers preliminary insights into the model's performance when implemented in a clinical environment. Additionally, testing the model on data collected from a different country (Banglades) enables us to gain a further understanding of how the features and potential PD symptoms may differ among individuals from a distinct cultural backgrounds compared to the United States. In general, achieving robust performance on data collected from previously unseen settings and diverse geographical locations, ranging from New York to Ohio to Bangladesh, serves as evidence of the model's reliability and suggests that it is less susceptible to data shift (Zhang et al., 2022).
While the combination of features from all three facial expressions resulted in the highest performance on the held-out data, we observed a significant decrease in performance when testing the model on external test sets. Specifically, when the ensemble model trained on features from all three facial expressions was evaluated on data collected in the Clinic setting, the accuracy dropped to \(72.0\%\) (an absolute decrease of \(17.73\%\)) and the AUROC score dropped to \(72.26\%\) (an absolute decrease of \(17.05\%\)). Similarly, when tested on data collected from Bangladesh in the Home-BD setting, the predictive model achieved an accuracy of \(79.87\%\) and an AUROC of \(77.38\%\). These results highlight a similar decline in performance when the model was applied to external data from different settings, demonstrating the challenges of generalization beyond the training data.
However, when the model was trained solely on smile expression features, the decline in performance was significantly less pronounced. The ensemble of support vector machine (SVM) models trained on smile expression features achieved an accuracy of \(81.33\%\) and an AUROC score of \(83.93\%\) when evaluated on the Clinic setting. Although we observed a decrease of \(5.98\%\) in accuracy, the AUROC score even increased by a small margin of \(0.89\%\) compared to the performance on held-out data. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) on this external test data were \(78.57\%\), \(82.98\%\), \(86.67\%\), and \(73.33\%\) respectively. For the external Home-BD test set, the model performed similarly to the one trained on all three facial expressions, achieving an accuracy of \(78.52\%\) and an AUROC score of \(78.54\%\). It is worth noting that while the specificity (\(78.52\%\)), sensitivity (\(78.57\%\)), and NPV (\(97.25\%\)) remained competitive, we observed a sharp decline in PPV (\(27.5\%\) on the Home-BD setting compared to \(73.44\%\) on held-out data). This sharp drop in PPV (\(27.78\%\) compared to \(78.95\%\) on held-out data) was also observed when all three facial expressions were used.
### Bias Analysis on Held-out Data
In order to make the screening framework proposed in this study accessible to individuals worldwide, it is crucial to assess the model's performance across different subgroups based on factors such as sex, ethnicity, and age. As mentioned earlier, we performed \(k\)-fold cross-validation (\(k=10\)) to evaluate the model using the data samples from Home-Global and PD Care Facility settings. For each of the test data, we log the demographic information, the true label of PD diagnosis, and the model-predicted score. These logs are used to analyze whether the model demonstrated any systematic bias across subgroups. Participants were left out of a subgroup-specific analysis if the sub-grouping attribute (e.g., ethnicity) was missing. For all significance testing, we used a \(95\%\) confidence level. Figure 3 provides a visual overview of the subgroup-based analyses.
Miss-classification.One criterion for bias assessment is whether the model has a higher miss-classification rate for a certain subgroup. To assess this, we performed a two-sampled \(Z\)-test (for proportions) across population subgroups based on sex and ethnicity. On average, the predictive model (ensemble of SVM) exhibited a miss-classification rate of \(11.91\%\) for male subjects (\(n=361\)) and a rate of \(9.01\%\) for female subjects (\(n=466\)). However, the difference in miss-classification rates between the two groups was not statistically significant (test statistic, \(z=1.36\), p-value \(=0.17\)). Similarly, there was no detectable bias when comparing white subjects (\(n=574\)) with non-white subjects (\(n=119\)), as the average miss-classification rates were \(10.45\%\) and \(5.88\%\) respectively (\(z=1.54\), p-value \(=0.12\)). As age is a continuous variable, we performed a Spearman's correlation test to evaluate the relationship between age and the average miss-classification rate among subjects of that age. Age was found to be positively correlated with the miss-classification rate (Spearman's rank correlation, \(\rho=0.26\), p-value \(=0.03\)) meaning the model demonstrated lower accuracy for older subjects.
Underdiagnosis.Many AI models tend to selectively underdiagnose under-served patient populations which can lead to unequal access to clinical care (Seyped-Kalantari et al., 2021). To investigate whether our predictive model demonstrated similar bias, we ran the subgroup analysis of miss-classification only for the subjects who actually had PD. We used a two-sampled \(Z\)-test (for proportions) to assess underdiagnosis bias based on sex. However, we had a small number of samples (\(n=7\)) for non-white subjects with PD. Therefore, we used Fisher's exact test to investigate underdiagnosis bias based on ethnicity. To assess whether age is correlated with the underdiagnosis rate, we used Spearman's correlation test. We did not observe any underdiagnosis bias of the predictive model at a statistically significant level based on sex and gender. Specifically, the underdiagnosis rates for the male (\(n=108\)) and female (\(n=87\)) subjects were \(22.22\%\) and \(24.13\%\) respectively, showing no significant difference (\(z=0.32\), p-value \(=0.75\)). Again, \(37.14\%\) (\(n=70\)) white subjects were underdiagnosed by the model, while the rate was \(28.6\%\) (\(n=7\)) for non-white subjects. Based on Fisher's exact test, this difference was also insignificant (Fisher's odd ratio \(=0.68\), p-value \(=1\)). Although there was a slight negative correlation (\(\rho=-0.28\)) between age and underdiagnosis rate, the correlation was not statistically significant (p-value \(=0.07\)). However, this means that younger subjects were slightly more underdiagnosed to have PD compared to the elderly.
Overdiagnosis.Overdiagnosis is when the model predicts someone to have a condition, although they do not actually have that condition. To test whether our predictive model overdiagnosed any of the subgroups, we used the same methods used for analyzing underdiagnosis bias. However, instead of analyzing the subjects who had PD, we analyzed the subjects who did not have PD. We did not detect any overdiagnosis bias based on sex (\(253\) male subjects, \(379\) female subjects, \(z\)-score \(=1.00\), p-value \(=0.32\)) and ethnicity (\(504\) white subjects, \(112\) non-white subjects, Fisher's odd ratio \(=0.65\), p-value \(=0.52\)). However, older participants were significantly more overdiagnosed by the predictive model, as we observed a positive correlation between age and overdiagnosis rate (\(\rho=0.26\), p-value \(=0.04\)).
### Bias Analysis on External Data
We also conducted an evaluation of our predictive model using two external test datasets, namely Clinic and Home-BD. However, our bias analysis for these datasets was limited due to a lack of representative subgroups. For instance, in the Home-BD dataset, all participants self-identified themselves as Asian, while in the Clinic dataset, all but two participants identified themselves as white. Consequently, we were unable to assess bias based on ethnicity. Furthermore, conducting underdiagnosis or overdiagnosis analyses required subgrouping participants with and without Parkinson's disease, respectively. Unfortunately, many of these subgroups contained fewer than five participants, rendering statistical inference highly unreliable. Therefore, we solely performed bias analysis based on miss-classification rates for these external test sets.
In the Clinic setting, the model exhibited a miss-classification rate of \(28.1\%\) (\(n=32\)) for female participants, compared to only \(11.6\%\) for male subjects (\(n=43\)). However, this difference in performance between genders was not statistically significant according to a two-sample \(Z\)-test (for proportions) (\(z=-1.81\), p-value \(=0.07\)). On the other hand, in the Home-BD setting, the situation was reversed. The model miss-classified male participants (\(n=103\)) \(25.2\%\) times while the rate was only \(13.0\%\) for the females (\(n=46\)). Again, the two-sampled \(Z\)-test (for proportions) yielded a test statistic of \(1.68\) and a p-value of \(0.09\), indicating no significant difference in the miss-classification rates. Additionally, for both settings, the miss-classification rate was not significantly correlated with age (Spearman's rank correlation coefficients were \(-0.10\) and \(-0.10\) for the Clinic and Home-BD settings, respectively, with corresponding p-values of \(0.60\) and \(0.87\)).
### Ablation Studies
We experimented with different setups, including several machine learning baselines to find the best predictive model (please see Table 3). As machine learning baselines, we tried XGBoost (Chen and Guestrin 2016), LightGBM (Ke et al. 2017), Random Forest, AdaBoost, Histogram-based Gradient Boosting (HistGradientBoosting) (Guryanov 2019), and Support Vector Machine (SVM) (Cortes and Vapnik 1995). Rather than relying on a single metric, we employed a holis
Figure 3: **Subgroup analysis of model error.** The rate of (a) miss-classification, (b) underdiagnosis, and (c) overdiagnosis across population subgroups based on sex, ethnicity, and age. The error bars represent \(95\%\) confidence interval. Note that this bias analysis is performed on the predictions made by the ensemble of SVM models trained with features extracted from all three facial expressions and tested on held-out data.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Experimental Setup** & **AUROC** & **Accuracy** & **Sensitivity** & **Specificity** & **PPV** & **NPV** \\ \hline \hline
**Machine Learning Baselines** & & & & & & \\ \hline XGBoost & 87.94 & 80.77 & 77.44 & 81.80 & 56.77 & 92.16 \\ LightGBM & 87.49 & 81.50 & 78.97 & 82.28 & 57.89 & 92.69 \\ Random Forest & 86.33 & 84.04 & 71.28 & 87.97 & 64.65 & 90.85 \\ AdaBoost & 88.17 & 86.58 & 74.36 & 90.35 & 70.39 & 91.95 \\ HistGradientBoosting & 90.27 & 87.42 & 72.82 & 91.93 & 73.58 & 91.64 \\ SVM & 90.11 & 87.30 & 77.44 & 90.35 & 71.23 & 92.85 \\ \hline Ensemble of best models with SVM & 86.21 & 88.63 & 77.44 & 92.09 & 75.12 & 92.97 \\
**Ensemble of best models with LR** & **89.31** & **89.72** & **76.92** & **93.67** & **78.95** & **92.94** \\ \hline
**Feature Scaling Methods** & & & & & \\ \hline No Scaling & 87.23 & 88.15 & 71.28 & 93.35 & 76.80 & 91.33 \\ Standard Scaler & 86.93 & 87.06 & 69.23 & 92.56 & 74.18 & 90.70 \\
**MinMax Scaler** & **89.31** & **89.72** & **76.92** & **93.67** & **78.95** & **92.94** \\ \hline
**Feature Selection Methods** & & & & & \\ \hline No Feature Selection & 87.11 & 87.91 & 70.77 & 93.20 & 76.24 & 91.18 \\ BoostRFE & 86.78 & 89.36 & 71.28 & 94.94 & 81.29 & 91.46 \\ BoostRFA & 88.37 & 90.33 & 73.33 & 95.57 & 83.63 & 92.07 \\
**Logistic Regression** & **89.31** & **89.72** & **76.92** & **93.67** & **78.95** & **92.94** \\ \hline
**Combination of Facial Expressions** & & & & & \\ \hline Smile & 83.04 & 87.30 & 72.31 & 91.93 & 73.44 & 91.50 \\ Disgust & 76.24 & 77.51 & 69.74 & 79.91 & 51.71 & 89.54 \\ Surprise & 72.45 & 77.03 & 61.03 & 81.96 & 51.07 & 87.21 \\ Smile + Disgust & 87.60 & 86.34 & 77.95 & 88.92 & 68.47 & 92.89 \\ Smile + Surprise & 87.42 & 88.03 & 78.46 & 90.98 & 72.86 & 93.19 \\ Disgust + Surprise & 72.51 & 76.06 & 66.15 & 79.11 & 49.43 & 88.34 \\
**Smile + Disgust + Surprise** & **89.31** & **89.72** & **76.92** & **93.67** & **78.95** & **92.94** \\ \hline
**Combination of AU and Landmark Features** & & & & & \\ \hline AU Features & 84.59 & 86.22 & 73.85 & 90.03 & 69.57 & 91.77 \\ Landmark Features & 76.11 & 74.73 & 70.77 & 75.95 & 47.59 & 89.39 \\
**AU + Landmark Features** & **89.31** & **89.72** & **76.92** & **93.67** & **78.95** & **92.94** \\ \hline
**Impact of Minority Oversampling** & & & & & \\ \hline Not Used & 83.59 & 89.60 & 69.23 & 95.89 & 83.85 & 90.99 \\
**Used** & **89.31** & **89.72** & **76.92** & **93.67** & **78.95** & **92.94** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Performance reporting for the ablation studies.**
tic evaluation approach by considering all the reported metrics (i.e., AUROC, Accuracy, Sensitivity, Specificity, Positive Predictive Value (PPV), and Negative Predictive Value (NPV)). Overall, when evaluated holistically, SVM outperformed all other baselines in terms of predictive performance.
We observed that an ensemble approach, incorporating the highest-performing models, delivered improved performance compared to a single model. This method particularly boosted the specificity, PPV, and accuracy, with other evaluation metrics demonstrating comparable performance. In order to consolidate the decisions from the top-performing models, we employed a secondary model, which operated as the final classifier in our ensemble system. The top-\(k\) models' outputs served as input to this secondary model (with \(k\) being a hyper-parameter), which then generated a single binary outcome, denoting the presence or absence of Parkinson's disease (PD) in an individual. We experimented with both Support Vector Machine (SVM) and logistic regression as our final classifier. Between the two, logistic regression demonstrated superior performance, making it our choice for the final stage of decision-making in the ensemble model. In addition, selecting top-\(n\) features (\(n\) is a hyper-parameter and the features are ranked based on the weights of a logistic regression model) and applying MinMax Scalar to ensure all the features are scaled into similar ranges helped boost the performance.
When features from a single expression were used, the smile performed better than the other two expressions. We also experimented with combining features from all possible mixing of three facial expressions. The competitive performances were achieved by adding features from other expressions with the smile features. Moreover, it is noteworthy that all the competitive models included features from the smile task, underscoring the importance of utilizing smile expression as a promising biomarker for PD detection.
We used both Openface and MediaPipe tools to extract facial keypoints. However, only using one of them yielded a relatively poor performance as shown in Table 3. Finally, since the dataset was imbalanced (i.e., the ratio of participants without and with PD was more than \(3:1\)), we applied a synthetic minority oversampling technique named SMOTE [14] on the training data, leading to a better performance on the test data.
## 3 Discussion
This study presents an opportunity to revolutionize the future of technology in the diagnosis and treatment of facial muscle-related diseases. Similar to the advancements in home glucose monitoring devices that have greatly benefited individuals with diabetes, the utilization of AI models and home-recorded micro-expressions for screening Parkinson's disease (PD) holds immense potential. The current methods for diagnosing PD rely on subjective clinical assessments and trained healthcare professionals' observations, which can be time-consuming, costly, and prone to inter-observer variability. In contrast, our proposed non-invasive and scalable method offers a convenient approach for PD screening from the comfort of individuals' homes. The remarkable progress of AI in healthcare demonstrated in areas such as skin cancer detection from images [13] and predicting heart failure risk using electronic health records [10], further emphasizes the transformative power of AI in disease diagnosis. Neurological disorders like Blepharospasm, Meige syndrome, and facial nerve disorders such as Bell's palsy share similarities with Parkinson's disease, as all of these diseases profoundly affect facial muscle function and expressions. Leveraging videos of smiling selfies for diagnosing uncomplicated Bell's palsy could save a significant amount of resources currently spent on imaging [15]. In addition, by allowing individuals to perform initial assessments at home, our proposed invention has the potential to expedite the evaluation of these disorders.
As we explore the potential integration of AI-driven predictive models into clinical care, it is imperative to evaluate the reliability of these models using data generated in environments different from those used for training. This step is crucial to ensure that the models can generalize effectively and maintain their performance when faced with real-world variations and diverse patient populations. To this end, we employed two distinct cohorts: a local clinical cohort consisting of \(75\) subjects (\(47\) diagnosed with PD) from the United States, and a cohort from Bangladesh comprising \(149\) subjects (\(14\) with PD). Bangladesh, being a developing country with a distinct culture compared to the United States, provides a valuable contrast in terms of environmental and socio-cultural factors.
On the clinical cohort, the predictive model achieved \(81.33\%\) accuracy and \(83.93\%\) AUROC score. The PPV of the model was \(86.67\%\), meaning for every \(100\) participant classified by our model to have PD, on average, \(87\) of them actually had PD. Such high PPV is crucial for a screening tool, as it helps minimize false positives and prevents unnecessary distress for the individuals or burden on the already strained clinical care system. By ensuring that individuals with positive screening results are more likely to have PD, we can mitigate the risk of overwhelming the healthcare system with unnecessary clinic visits. However, it is important to note that the sensitivity of the model currently stands at \(78.57\%\) and requires further improvement. Currently, the model may miss about \(21\%\) of participants with PD, and caution should be exercised when interpreting the model's output. Individuals should be informed about the potential errors of the model, and it is recommended that those with access to neurological care and suspicion of having PD symptoms should schedule a clinical visit regardless of the model's output. Nevertheless, our model remains particularly suitable for individuals with limited access to clinical care, offering a valuable screening tool for such populations.
The cohort from Bangladesh presents a unique dataset due to its participants belonging to a culture distinct from that of the United States. The predictive model achieved \(78.52\%\) accuracy and \(79.87\%\) AUROC score when evaluated on the external Bangladesh cohort. While the specificity and sensitivity of the model aligned with those of the clinical cohort,
Figure 4: **Feature visualization and model performance for each facial expression.** All the extracted features from the video of (a) smile, (b) disgust, and (c) surprise expression are reduced into two principal components for visualization. The orange and blue dots separate subjects with and without PD. The ROC curve for the predictive model trained on features extracted only from the (d) smile, (e) disgust, and (f) surprise expression and evaluated on held-out data demonstrates how well the model can separate participants with and without PD using features from a single expression.
a significant discrepancy emerged in the positive predictive value (PPV) at \(27.5\%\). This finding suggests the presence of cultural variances in the expression of smiles, warranting further investigation. It is essential to recognize that for every successful PD diagnosis prompted by our model, there may be approximately four instances in which individuals seek clinical diagnosis but are ultimately found not to have PD. While this may introduce inconvenience and potentially result in unnecessary clinical visits for some individuals, it is crucial to consider the potential impact of the model on improving the quality of life for those who do receive an accurate PD diagnosis. This consideration is particularly important in a resource-constrained country like Bangladesh, where the model can still hold utility within a clinical setting by improving access to care.
In our study, we have identified that features extracted from the smile expression demonstrated superior discrimination between participants with and without PD (see Figure 4). When tested on external data that the predictive models had never seen during training, models trained entirely on smile features displayed greater generalizability (see Figure 2). These models focusing solely on smile features were relatively simpler, requiring fewer parameters, yet outperformed models considering combinations of disgust, smile, and surprise features. This observation aligns with the philosophical principle of Occam's razor (Blumer et al., 1987), suggesting that simpler theories are often better. Furthermore, during the analysis of facial expressions in this study, which included disgust, smile, and surprise, we observed that the smile expression exhibited the least inter-person variability and was easily replicated compared to the other expressions. While individuals may express disgust or surprise in various ways, smiles were more universally recognizable and required fewer instructions for participants, making them highly suitable for our home deployable ubiquitous tool. Furthermore, the predictive model demonstrated its utility across different geographical locations, including the east and midwest regions of the United States, as well as a South Asian country. This observation raises the possibility of investigating the model's potential for screening individuals with PD in regions where access to neurologists is extremely limited (e.g., African countries). It is notable that we deployed a translated version of the PARK tool where the instructional videos were recorded by native speakers, which might be an important criterion to consider to investigate similar technology for countries with different languages and cultures.
It is imperative to enhance data diversity and representation within the training set before deploying the model in different socio-cultural settings to minimize potential risks and ensure its optimal performance. Although the web-based PARK tool we used for collecting training data is intended to be used from anywhere in the world, the participants were dominated by U.S. residents (\(754\) out of \(827\), \(91.2\%\)). This lack of representation from diverse geographic regions is a limitation that should be addressed to improve the generalizability of the model. Additionally, it is worth mentioning that our evaluation of the predictive model was conducted on a single external clinical cohort and one cohort from a developing country. To draw more robust and conclusive results regarding the effectiveness of the proposed model, it may have been advantageous to have test data from multiple clinics, PD care facilities, and countries. Targeted advertisement and cross-country collaboration could help us obtain data from other countries, which remains a future goal of this study.
Ensuring that the benefits of a predictive tool are accessible to individuals regardless of ethnicity, sex, and age requires a critical examination of potential biases within the model. To this end, we performed extensive bias analysis across held-out and external test data, although some analyses on the external test data were limited due to small subgroup sizes. On held-out data, our model did not demonstrate any detectable bias based on protected attributes such as ethnicity and sex. However, the model was less accurate for older participants, the underdiagnosis rate was slightly higher for the younger subjects, and the overdiagnosis rate was higher for the older subjects. Similar findings also apply to a clinical setting where young PD patients may have a longer journey to diagnosis because they do not fit the profile of a typical patient (Post et al., 2020). Propagation of existing bias in AI models is undesired and remains a limitation of this study.
To minimize the potential harm of mispredictions, it is indeed crucial to evaluate the reliability of model predictions. One approach to enhance the clinical utility of the model is to reframe the PD screening problem as a three-way classification instead of a binary (individuals with PD vs without it) classification. This revised approach would involve categorizing individuals into three groups: (i) likely to have PD, (ii) unlikely to have PD, and (iii) uncertain. An ideal screening framework should indeed demonstrate high performance in correctly classifying individuals into the likely and unlikely to have PD categories. These categories would encompass the majority of the data, where the model's predictions are expected to be reliable and accurate. This high-performance level ensures that individuals who likely have PD are appropriately identified for further evaluation and necessary care, while those without PD are accurately classified, minimizing false positives. However, it is important to acknowledge that no screening model is perfect, and there will always be a portion of data where the model's predictions are uncertain. This uncertainty category accounts for cases where the model's confidence falls below a certain threshold or where the features extracted from the data are inconclusive. This small subset of uncertain predictions serves as a reminder that further clinical evaluation and assessment are necessary to determine the true status of these individuals. However, reliably assessing uncertainty in a predictive model that follows a frequentist approach is indeed challenging (King et al., 2019). In the future, we plan to explore Bayesian approaches which can provide more direct and intuitive measures of uncertainty to investigate screening as a three-way classification problem.
Methods
### Data Collection
#### Data Collection Framework.
Our study employed a video dataset sourced from \(1059\) participants, comprising \(256\) of them diagnosed with Parkinson's disease (PD) and \(803\) without the condition. We used PARK3(Langevin et al., 2019), a web-based framework for data collection. PARK is a web-based tool designed to collect videos of participants performing a series of tasks. For each task, participants are presented with a brief instructional video illustrating how to accurately perform it. Among the recorded videos of more than \(20\) tasks, we collected the videos corresponding to three facial expressions (disgust, smile, and surprise) for this study. In each of the facial expression tasks, participants were asked to portray that particular expression as expressively as possible, sustain the expression for a few seconds, and then revert to a neutral facial expression. This cycle was to be repeated three times in total for each of the facial expressions, resulting in videos that typically spanned \(8-12\) seconds. In addition to the video recordings, PARK also collected demographic information about the participants such as their age, sex, ethnicity, and country of origin. Besides, participants self-reported whether they were diagnosed with Parkinson's disease or not, which was used as ground truth for training and evaluating the models.
Footnote 3: [https://parktest.net/](https://parktest.net/)
#### Dataset Details.
We collected data from four different settings:
* **Home recorded videos from global participants (Home-Global):** We advertised the PARK tool using social media, emailed participants who expressed interest in PD research recorded in a clinical study registry, and verbally reached out to PD patients interested to contribute in PD research to collect home-recorded videos from global participants. This way, we collected data from \(693\) global participants who recorded themselves using a computer webcam PARK website. However, this cohort was mostly dominated by the US residents (\(620\) out of \(693\)), and participants who did not have PD (\(616\) out of \(693\)).
* **Videos recorded at a PD care facility (PD Care Facility):** We deployed the PARK tool at InMotion4, a PD care facility located in Ohio, United States. The facility provides comprehensive support to PD patients, offering activities from exercises to educational sessions and counseling on maintaining motivation and leading a fulfilling life. We were able to collect video data (recorded by a laptop webcam) from both their clients (\(118\) participants with PD) and their caregivers (\(24\) participants without PD). Note that, in many cases, during data collection, the participants were assisted by caregivers at the InMotion facility. Footnote 4: [https://beinmotion.org/](https://beinmotion.org/)
* **Videos recorded in a US Clinic (Clinic):** As part of a clinical study conducted by the University of Rochester Medical Center (URMC) located in New York, United States, willing participants recorded themselves using the PARK tool. Some of the participants were supervised and/or assisted by clinical study team members during the recording process. We collected data from \(75\) participants (\(47\) with PD) in this setting.
* **Home recorded videos from Bangladesh (Home-BD):** In this setting, data collection took place in Bangladesh, a Southeast Asian country. This was specifically facilitated by the Bangladesh University of Engineering & Technology (BUET). In the previous three settings, the PARK tool was administered in English, which could present a language barrier for the elderly population whose native language is not English. To address this issue, we undertook the task of translating the entire PARK website into Bengali, the local language, and re-recording the instructional videos (in Bengali) with native speakers. A portion of data collection was conducted under the direct supervision of our study team members at BUET, and the rest of the participants recorded their videos at home. This setting contributed a valuable dataset from \(149\) participants, including \(14\) with Parkinson's disease (PD), significantly enhancing the geographical and cultural diversity of our data.
#### Data Quality Assurance.
The performance of any machine learning model largely depends on the quality of the dataset. Therefore, to ensure the utmost quality of our data, we took extensive measures to facilitate clear and straightforward instruction videos for the participants. This was supplemented by guidelines encouraging participants to record themselves in well-lit environments, against distinguishable backgrounds, and to position their full faces within the video frame. Despite these guidelines, we recognized an inherent trade-off between the optimal quality of recorded videos and the home accessibility of our video recording framework. Notably, many of our participants were elderly individuals, and expressions of disgust and surprise could be nontrivial for them to mimic accurately. Additionally, in the case of Home-BD setting, some of the recordings took place under the direct supervision of our collaborative research group at BUET. While this offered greater control over the recording conditions, it occasionally made the participants feel uncomfortable or embarrassed, which may have influenced their ability to mimic the guided facial expressions authentically. Consequently, the data quality from Home-BD may not be as high as that from other settings.
### Feature Extraction
In our study, we extracted two types of features from the facial expression videos collected from the participants - facial action units and facial landmarks. To extract these features, we used two very popular feature extraction tools OpenFace (Baltrusaitis et al., 2018) and MediaPipe (Lugaresi et al., 2019). Both OpenFace and MediaPipe are widely used due to their efficiency, accuracy, and the ability to process video streams in real-time. They also benefit from being open-source, allowing researchers and developers to use and extend their functionality to suit a wide range of applications.
Facial Action Unit Features.We leveraged the Facial Action Coding System (FACS), developed by Ekman et al. (Ekman and Friesen, 1978) to taxonomize human facial expressions. FACS describes facial expressions in terms of individual components of facial movement, or Facial Action Units (AUs). The AUs are associated with the muscle movements of the face and activation of a particular AU indicates the movement of a fixed set of facial muscles. In the literature, OpenFace has been proven to be very accurate in detecting the presence and intensity of these following AUs (Amos et al., 2016; Baltrusaitis et al., 2018).
These action units are contingent on the contraction or relaxation of one or more muscles. For instance, AU06 and AU12, provided by OpenFace, are crucial when examining facial expressions, particularly smiles. AU06, or "Cheek Raiser", involves the raising of the cheeks due to the contraction of the circularis oculi, pars orbitals muscle around the eye socket. This results in a tightening of the skin around the eyes, often leading to the formation of 'crow's feet' wrinkles at the outer corners of the eyes. AU12, also known as "Lip Corner Puller", represents the movement caused by the zygomaticus major muscle, which pulls the corners of the lips upwards and outwards, creating a smile. These two action units, when combined, are key indicators of what is referred to as a Duchenne smile (Ekman, Davidson, and Friesen, 1990), a sincere and genuine smile associated with spontaneous joy and happiness.
Utilizing Openface, we extracted the AU values for each frame of the three distinct facial expression videos from each participant. OpenFace software gives a binary activation (\(0\) or \(1\)) and a raw magnitude (ranging \(0\) to \(5\)) of each AU for each frame of a video that contains a human face. We evaluated three distinct statistical measures - mean, variance, and entropy - of the raw action unit when the corresponding action unit is active (i.e., the activation value is \(1\)). The variance signals the extent of facial muscle movement during a facial expression, while the mean indicates the average intensity of muscle engagement for each facial expression throughout the video frames. Conversely, entropy provides a measurement of unpredictability or randomness in the activation of facial muscles during expressions. Higher entropy corresponds to more complex or varied facial movements.
While assessing signs of Parkinson's disease from a facial expression task, The Movement Disorder Society-Sponsored Revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) instructs the clinician to observe the patient's _eye-blinking frequency_, _masked facies or loss of facial expression_, _spontaneous smiling_, and the _parting of lips_. We engineered the facial features so that they reflect these four criteria of the MDS-UPDRS. The expressiveness (and spontaneity for smiles) of each of our three facial expressions is linked with four AUs. For example, AU01 (Inner Brow Raiser), AU06 (Cheek Raiser), AU12 (Lip Corner Puller), and AU14 (Dimpler) were observed to have three distinct peaks in smiling facial expression videos that ask participants to repeat the smile expression three times. Similarly, the expressiveness of a disgusted face is linked with AU04 (Brow Lowerer), AU07 (Eye Lid Tightener), AU09 (Nose Wrinkler), and AU10 (Upper Lip Raiser); and a surprised face is linked with AU01 (Inner Brow Raiser), AU02 (Outer Brow Raiser), AU04 (Brow Lowerer), and AU05 (Upper Lip Raiser). In addition, to objectively gauge a participant's ability to keep their lips together while their mouths are at rest, we recorded the values of AU25 (Lips Part) and AU26 (Jaw Drop). These two action units are consistently selected across all three facial expressions. Finally, AU45 (Blink) offers an objective measurement for the frequency of eye blinking during all three facial expression videos. This way, we extracted frame-by-frame values of seven facial action units for each of the facial expression videos. We then summarized the frame-by-frame values to represent the entire video using three statistical aggregates - mean, variance, and entropy. Therefore, for each participant, the total number of digital features prepared from AUs using OpenFace is \(63\) (\(3\) facial expressions \(\times\)\(7\) AUs for each expression \(\times\)\(3\) statistical aggregates for each AU).
Facial Landmark Features.In addition to the action unit features, we extracted facial attributes that simulate clinical assessments typically carried out in person as suggested in prior literature (Gomez et al., 2021):
* **Opening of the left and right eye**: Patients with PD may experience a decreased frequency of eye blinking. This can lead to more prolonged openings of the eyes, subtly altering the natural dynamics of facial expressions. By including the measurement of the degree of eye openings in our feature list, we aim to capture this nuanced change.
* **Rising of the left and right eyebrows**: As a part of mimicking the surprise facial expression, the participants in our study were instructed to raise their eyebrows. However, in the case of Parkinson's patients, who may exhibit reduced facial expressiveness due to hypomimia, this eyebrow movement can be less pronounced. By taking into account the extent of eyebrow-raising, our analysis considers these subtle variations.
* **Opening of the mouth**: As we have discussed earlier, one of the visible manifestations of PD can be an inability to fully close the mouth when the face is at rest, especially noticeable in the moderate to severe stages of the disease. We thus selected the extent of mouth opening as one of our target features as it can provide valuable insights into the degree of facial muscle control loss, a critical indicator of PD progression.
* known as the "Parkinson's mask" (Tickle-Degnen and Lyons, 2004)
- the width of the mouth during smiling can play a significant role in identifying the presence of the disease.
* **Opening of the jaw**: The degree of jaw opening, often correlated with the symptom of masked faces, is another distinctive facial feature in Parkinson's disease. Patients may have difficulties fully controlling their jaw movement due to muscular rigidity, one of the primary motor
symptoms of Parkinson's. Therefore, we chose to include the measurement of jaw opening in our feature set.
To compute these attributes, we used the face mesh solution of MediaPipe. MediaPipe, a product developed by Google, provides \(478\) 3D facial landmarks from a video frame containing a face, using a lightweight machine learning model. It first detects a face in the image or video stream and then applies the face mesh model to estimate the facial landmarks. Although distinct Action Units (AUs) with some overlaps were extracted for different facial expression videos using the OpenFace tool, we consistently extracted the same seven facial attribute features for each of the facial expressions using Mediapipe. This systematic approach ensures that each facial expression contributes an equivalent amount of information to the final feature space. Following the extraction, we performed a statistical aggregate on these features extracted at the frame level, computing mean, variance, and entropy. This process results in another set of \(63\) digital features (\(3\) facial expressions \(\times\) \(7\) facial attributes per expression \(\times\) \(3\) statistical aggregates per attribute), further enriching the feature space.
### Model Training, Inference, and Evaluation
After combining the features derived from facial action units and landmarks extracted by MediaPipe, we had a \(126\)-dimensional feature space (\(63\) from AU features and \(63\) from facial landmark features) to represent all three facial expressions for each participant. When all three facial expressions were used to train predictive models, all of these \(126\) features were considered. However, we also investigated using only one facial expression or a different combination of two facial expressions. In those cases, we only used the corresponding features related to the facial expressions we are using. For example, only \(42\) relevant features (i.e., one-third of all features) were considered when we trained the predictive model with merely the smile expression. The model training phase includes feature selection, feature scaling, creating synthetic samples for the minority class to account for class imbalance, and a training loop where the model gradually learned from data. During inference, we applied the same feature selection and scaling techniques and then used the learned model to make a prediction.
Feature Selection.Feature selection is a widely adopted approach aimed at reducing the dimensionality of data. By selecting a subset of relevant features, this technique helps prevent overfitting and enables the training of simpler and more generalizable models. We tried three different feature selection techniques using (i) logistic regression coefficients, (ii) Boosted Recursive Feature Elimination (BoostRFE), and (iii) Boosted Recursive Feature Addition (BoostRFA). Each of these methods ranks the features based on their importance in constructing a predictive model that can effectively differentiate participants with PD from those without the condition. After the features are ranked, we select top-\(n\) features as input to the predictive model. \(n\) is considered a hyperparameter and tuned with other parameters in order to derive the best-performing model.
For the logistic regression-based feature ranking, we ran a logistic regression on the full dataset (after scaling the feature values), and use the absolute coefficient of each feature as a proxy of its importance. Both BoostRFE and BoostRFA approach build an estimator (i.e., gradient boosting) with all available features and rank them based on feature importance obtained by SHapley Additive exPlanations (SHAP) (Lundberg and Lee, 2017). BoostRFE iteratively removes the least important feature one by one, as long as the model's performance improves, until only the top-n features remained. On the other hand, BoostRFA starts with the most important feature and incrementally adds the next most important feature if it led to performance improvement. The process is continued until \(n\) features are added to the model.
In our study, we found that feature selection, in general, led to performance improvement, and the logistic regression-based feature ranking outperformed the other two approaches (Table 3). Therefore, the logistic-regression-based feature selection was used as the default for other experiments.
Feature Scaling.Feature scaling is a crucial preprocessing step in machine learning. It ensures that all features are on a similar scale or range, and often leads to performance improvement. We have tried two different feature scaling methods mentioned below:
* **MinMax Scaling** scales the values of each feature between 0 and 1 using the following formula: \[X_{\text{scaled}}=\frac{(X-X_{\min})}{(X_{\max}-X_{\min})}\] where \(X\) is the original feature value, and \(X_{\min},X_{\max}\) are the minimum and the maximum value of the feature, respectively, in the entire dataset.
* **Standard Scaling** ensures the values of the input features are scaled to have a mean of 0 and a standard deviation of 1. The formula for Standard Scaling is: \[X_{\text{scaled}}=\frac{X-\mu}{\sigma}\] where \(X\) is the original feature vector, and \(\mu,\sigma\) are the mean and standard deviation of the feature, respectively, in the entire dataset.
Based on ablation studies (Table 3), feature scaling helped boost the performance of the predictive model. Among the two methods, we select the MinMax Scaler as the default scaling method as it yielded the best performance.
Minority Oversampling.The dataset we studied was imbalanced, as it contained \(803\) participants without PD compared to \(256\) with the condition. Such class imbalance poses a challenge for training machine learning models as it can lead to a biased model that might overfit the majority class and under-represent the minority class, resulting in poor generalization. To address this issue, we employed the Synthetic Minority Oversampling Technique (SMOTE) (Chawla et al., 2002). SMOTE is a widely-used approach to balance
class distribution through the generation of synthetic instances of the minority class. As shown in Table 3, incorporating SMOTE in model training helped improve the performance of the predictive model. However, please note that the entire dataset was split into train and test sets first, and then SMOTE was applied only on the train set. This ensures that the test set only contained real data, and none of the synthetic data in the training set was derived from any test data.
Evaluation.We used \(k\)-fold cross-validation with stratified sampling to assess the performance of the predictive model on held-out data. This approach involves dividing the dataset into \(k\) different folds, with each fold containing \(\frac{1}{k}\) portion of the dataset. The key distinction of stratified sampling is that it ensures each fold maintains a proportional representation of samples from different classes, mirroring the distribution in the entire dataset. To illustrate, consider an example where the dataset consists of \(100\) samples, with \(10\) samples belonging to class 1 and \(90\) samples belonging to class 2 (\(1:9\) ratio). In \(10\)-fold stratified cross-validation, each fold would include \(1\) samples from class 1 and \(9\) samples from class 2, preserving the ratio of samples per class observed in the full dataset. This ensures that the model is trained and evaluated on diverse samples from all classes. The evaluation process encompasses \(k\) iterations, with each iteration reserving one of the folds as the test set, while the model is trained on the remaining data. This procedure allows for a comprehensive assessment of the model's performance across the entire dataset while addressing potential biases introduced by class imbalances.
In addition, we used two different external test sets collected in (i) Clinic and (ii) Home-BD settings. For evaluating our model on these test sets, the model was solely trained on all data collected in the other two settings (i.e., Home-Global and PD Care Facility). As a result, the model has never seen any data from either Clinic or Home-BD setting. For each of the external test sets, we selected and scaled the features according to the training process, and ran inference using the trained model.
We used a comprehensive set of evaluation metrics widely adopted by machine learning for the healthcare community: (i) Area Under the Receiver Operating Characteristic (AUROC), (ii) binary Accuracy, (iii) Sensitivity, (iv) Specificity, (v) Positive Predictive Value (PPV), (vi) Negative Predictive Value (NPV). Instead of relying on a single metric, we selected the best model based on a holistic evaluation of all of the above metrics.
Hyper Parameter Tuning.Parameter tuning [21, 14] plays a crucial role in enhancing the performance of machine learning models. It involves adjusting the hyperparameters of these models to optimize their predictive performance. For this study, we employed a range of classifiers, including Support Vector Machine (SVM), AdaBoost, HistBoost, XGBoost, and Random Forest. The hyperparameters of these classifiers were tuned using Weights & Biases (_WandB_) [1], a machine learning tool designed for this specific purpose. _WandB_ helped us identify the optimal combination of parameters that yielded the best model performance. The core principle underlying hyperparameter tuning is to traverse the search space of various parameter combinations with the goal of maximizing or minimizing a specific performance metric, for which, we used a Bayesian approach enabled by WandB.
For example, in our study, the parameters tuned for the SVM model included the number of top significant features (\(n\)), the penalty parameter of the error term (\(C\)), and the kernel coefficient for 'rbf', 'poly' and'sigmoid' kernels (_gamma_). For AdaBoost, apart from \(n\), the number of weak learners (\(n\_estimators\)) and learning rate were tuned. Similarly, for the XGBoost and HistBoost models, we tuned parameters such as maximum depth of a tree (_max_depth_), minimum sum of instance weight needed in a child (_min_child_weight_), and boosting learning rate (_eta_), among others. In the Random Forest classifier, the number of trees in the forest (\(n\_estimators\)) and the maximum depth of the tree (_max_depth_) were the key parameters tuned. These tuned models were then used in the subsequent steps of our experiment to achieve more robust and accurate results. The scripts for hyper-parameter tuning will be released alongside the code upon acceptance of the manuscript.
### Use of Large Language Models
ChatGPT5, a sophisticated language model developed by OpenAI6, was employed as an editorial aid during the preparation of this manuscript. It facilitated the refinement of the manuscript's language, grammar, and stylistic elements through its intelligent suggestions. It's noteworthy that all recommendations proposed by ChatGPT underwent thorough examination by an author prior to their final incorporation into the text. Importantly, the function of ChatGPT was strictly limited to proposing amendments to the existing content and it was not utilized to generate any new material for this manuscript.
Footnote 5: [https://chat.openai.com/chat](https://chat.openai.com/chat)
Footnote 6: [https://openai.com/](https://openai.com/)
### Ethics
This study received the necessary approval from the Institutional Review Board (IRB) of the University of Rochester, and all the experimental procedures were conducted in compliance with the approved study protocol. For data collected from Bangladesh, we received IRB approval from the Bangladesh University of Engineering & Technology (BUET). Given the predominantly remote administration of this study, written consent from participants was not obtained. However, informed consent was duly collected electronically from the participants, authorizing the use of their data for analysis and the inclusion of their photos in the figures presented within this study.
## Code and Data Availability
The recorded videos were collected using a web-based tool. The tool is publicly accessible at [https://parktest.net](https://parktest.net). The codes for video processing, feature extraction, and model
training will be made publicly available upon the acceptance of this paper. We will provide a link to the repository containing the codes.
Unfortunately, we are unable to share the raw videos due to the Health Insurance Portability and Accountability Act (HIPAA) compliance. However, we are committed to sharing the extracted features upon receiving an email request at [email protected]. The features will be provided in a structured format that can be easily integrated with existing machine-learning workflows.
|
2306.13129 | A proposal to demonstrate non-abelian anyons on a NISQ device | In this work we present a proposal for realising non-Abelian anyons on a NISQ
device. In particular we explore the feasibility of implementing the quantum
double model $D(D_4)$. We propose techniques to drastically simplify the
circuits for the manipulation and measurements of anyons. Numerical simulations
with realistic noise models suggest that current NISQ technology is capable of
probing signatures of non-Abelian anyons far beyond elemental properties such
as the non-commutativity of braids. In particular, we conclude that
experimentally measuring the full modular data of the model is feasible. | Jovan Jovanović, Carolin Wille, Daan Timmers, Steven H. Simon | 2023-06-22T18:00:01Z | http://arxiv.org/abs/2306.13129v4 | # A proposal to demonstrate non-abelian anyons on a NISQ device
###### Abstract
In this work we present a proposal for realising non-Abelian anyons on a NISQ device. In particular we explore the feasibility of implementing the quantum double model \(D(D_{4})\). We propose techniques to drastically simplify the circuits for the manipulation and measurements of anyons. Numerical simulations with realistic noise models suggest that current NISQ technology is capable of probing signatures of non-Abelian anyons far beyond elemental properties such as the non-commutativity of braids. In particular, we conclude that experimentally measuring the full modular data of the model is feasible.
###### Contents
* 1 Introduction
* 2 Summary of results
* 2.1 A suitable topological phase
* 2.2 Ground state preparation
* 2.3 Manipulating anyons
* 2.4 Charge measurements
* 2.5 Probing non-abelian signatures
* 3 Quantum double models
* 3.1 Anyon content
* 3.2 Ribbon operators
* 3.3 Charge measurements
* 3.4 Quantum double of \(D_{4}\)
* 4 Probing non-abelian anyons
* 4.1 Achieving low circuit depth
* 4.2 Elemental protocols
* 4.3 Anyon interferometry
* 5 Numerical experiments
* 5.1 Elemental protocols
### Linking and twist matrices
* 6 Extension to \(S_{3}\)
* 7 Conclusions and outlook
* A Ribbon types
* B Representation theory of \(D(D_{4})\)
* C Elementary circuits for the case of \(D(D_{4})\)
* additional data
* E Uncertainty estimation and measurement bias
* E.1 Polarisation uncertainty
* E.2 Measurement bias and post selection
## 1 Introduction
In 1977 Leinaas and Myrheim [1] first proposed the idea of _anyons1_ - particles in 2+1 dimensions with fractional statistics that are neither Bosons nor Fermions. Shortly later, Tsui, Stormer and Gossard [3] discovered the fractional quantum Hall effect, and very rapidly [4, 5] it was understood that such fractional quantum Hall systems harbour anyons. Since then, the investigation of topological order and its signature - anyonic excitation - has become a major topic in modern condensed matter physics.
Footnote 1: The term was later invented by Frank Wilczek [2].
Beyond the study of unconventional phases of matter, topological order has been explored and
praised for its potential applications in quantum computation [6] and simultaneously the study of its underlying, rather sophisticated, mathematical structure [7] has received a lot of interest from the mathematical community.
Today, almost half a century later, our theoretical understanding of anyons in 2+1 dimensions is slowly approaching completion. However, unambiguous experimental evidence of anyons in 'natural' physical systems is still scarce. Recent experiments have beautifully demonstrated the existence of quasiparticles outside the Boson-Fermion dichotomy in quantum Hall systems [8, 9]. However, the more complex types of anyons, so called _non-abelian anyons_ for which braiding two particles changes the (vector-valued) wave-function by a unitary rotation instead of just a phase factor, have not been unambiguously observed so far.
On that grounds, one may argue that most topological phases of matter are just too complicated to exist in nature and are thus more of a mathematical curiosity than an actual physical phenomenon. However, a series of striking experiments [10, 11, 12] performed recently on noisy intermediate scale quantum computers (NISQ) strongly refutes such criticism. Two experiments performed on superconducting qubits [11, 12] demonstrated the non-abelian braiding of mobile lattice defects which behave like Ising anyons embedded into an abelian phase [13, 14]. Another experiment performed on trapped ions [10] prepared the topologically ordered ground state of a non-abelian phase and detected an intrinsically non-abelian braiding processes (Borromean rings) via anyon interferometry.
All three experiments indicate that today's quantum computers are capable of simulating states of matter whose complexity exceeds that of abelian topological order which can be seen as a significant step towards topologically protected quantum computation. While open questions of the scalability and the improvement of noise levels are left for the future to decide, it is clear that by now non-abelian anyons have descended from the somewhat esoteric mathematical realm to the concrete and tangible.
Motivated by these findings, we propose an alternative scheme to realise non-abelian anyons on a NISQ device. This scheme has certain advantages and disadvantages compared to Refs. [10, 11, 12] which we will elaborate on in the next section. Our proposal focuses on Kitaev's quantum double models - a discrete realisation of lattice gauge theory - and in particular the topological phase \(D(D_{4})\). This phase is (Morita) equivalent to the phase realised in Ref. [10]. However, its microscopic Hamiltonian and the protocols we propose to demonstrate non-abelian braiding are quite different from the ones in Ref. [10].
We note, that the simulation of quantum double models and the more general string-net models [15], which include quantum double models as a subset, has been investigated previously in several studies [16, 17, 18, 19]. The focus of our work is on the concrete implementation and the simplifications devised to achieve feasibility on a state of the art NISQ device.
In the next section we will present a non-technical summary of our main methods and results. All following sections are devoted to a more technical and in-depth discussion, starting with a review of Kitaev's quantum double models in Section 3, where we also discuss the implementation of ribbon operators, charge measurements, and the concrete example of \(D(D_{4})\). In Section 4 we present a detailed description of the protocols to probe non-abelian anyons and their concrete implementation as quantum circuits of low depth. In Section 5 we show the results of numerical simulations. Section 6 discusses the feasibility of our protocols for other gauge groups, in particular \(S_{3}\), which would be universal for quantum computation in contrast to \(D_{4}\). In Section 7 we summarise our results and comment on future perspectives.
## 2 Summary of results
The main challenge that needs to be overcome in any experiment that realises non-abelian topological order on a NISQ device is an intrinsic and a profound one. By definition, a NISQ device is noisy meaning that beyond a certain circuit depth quantum information is scrambled beyond recognition. On the other hand, preparing a topologically ordered state without measurements and feed-forward protocols (which are prohibitive on certain modern architectures) requires a unitary circuit whose depth scales linearly with the system size. In addition to that, moving anyons on a topological background (again without measurements) requires operators whose circuit depth again scales with the length of the paths. Thus,
realising non-abelian anyons on a NISQ device becomes a challenging game of finding ways to circumvent these rather daunting limitations.
### A suitable topological phase
There are three levels on which to tackle this problem. The first and most important is to identify a suitable type of topological order. It is reasonable to further refine our classification of anyons beyond the basic distinction of abelian versus non-abelian. Non-abelian anyons in particular, can be classified by their computational power which correlates with the difficulty of realising them to some extent.
For some anyon theories, such as the Fibonacci anyons, braiding alone allows one to perform universal quantum computation [20]. In contrast, all anyons obtained from quantum double models of finite groups are not universal for braiding alone. However, their computational power can be further divided and is determined by the complexity of the underlying group. In particular, for non-nilpotent groups like \(S_{3}\), universal quantum computation can be performed with additional measurements [21] while for nilpotent groups such a scheme does not exist. When it comes to the implementation of a quantum double model for a group \(G\) on some quantum hardware, we note that the degrees of freedom take values in \(G\). Thus, the order of the group needs to be small. For any hardware that relies on qubits, which is the case for most set-ups, \(|G|=2^{n}\) immensely simplifies the design of circuits. Lastly, we note that the property of being solvable is beneficial for the reduction of the circuit depth for certain operations. This will be explored in more detail below in Section4. With this in mind, we identify \(D_{4}\), the dihedral group and \(Q_{8}\), the quaternion group, both of order eight, which are both solvable and nilpotent groups as the most suitable candidates. While \(D(S_{3})\) would be more desirable due to the fact that one can use it for universal topological quantum computation, we find that its order not being a power of two makes it significantly more difficult to implement on a qubit architecture. For an architecture with native qutrits, its implementation would require circuits of similar, if not lower, depths than the ones used for \(D(D_{4})\) or \(D(Q_{8})\). An example of an architecture supporting qutrit operations is the photonic simulator featured in Ref. [19] on which a proof-of-principle simulation of the fusion rules for \(D(S_{3})\) on a single lattice site has recently been performed.
For concreteness, we will focus on \(D(D_{4})\) in the following. In Ref. [10] the topological phase chosen is \(D_{\alpha}(\mathbb{Z}_{2}^{3})\), i.e., a _twisted_ quantum double model of the abelian group \(\mathbb{Z}_{2}^{3}\). This phase is (Morita) equivalent to \(D(D_{4})\)[22, 23], which further indicates that \(D(D_{4})\) is just of the right complexity - simple enough to be realised on a NISQ device, and complex enough to host non-abelian anyons.
### Ground state preparation
Having identified a reasonable phase, i.e., \(D(D_{4})\) in our case, the next task is to find a suitable microscopic realisation of the model and to prepare its ground state. Here, our protocol differs drastically from that presented in Ref. [10]. First of all, the Hamiltonian chosen in Ref. [10] is that of \(D_{\alpha}(\mathbb{Z}_{2}^{3})\). This means, its degrees of freedom (dof) are valued in \(G=\mathbb{Z}_{2}^{3}\), while for us the dof are \(G=D_{4}\)-valued. In fact, the Hamiltonian in Ref. [10] is best understood as a gauged version of a symmetry protected topological phase and the ground state preparation reflects that.
To be more precise, Ref. [10] starts with the preparation of a \(\mathbb{Z}_{2}^{3}\) symmetry protected topological (SPT) phase that can be prepared by a constant depth quantum circuit. The internal symmetry of the SPT is then gauged such that the system acquires intrinsic topological order. This gauging protocol is performed using a feed-forward protocol in which the system is entangled to an extensive number of ancillas which are then measured. The measurement outcomes correspond to successful ground state preparation or the preparation of a state with residual, but abelian anyons. The latter can be deterministically removed using error correction such that no post-selection is necessary. However, we emphasise that a feed-forward protocol in which the circuits to be executed depend on intermediate measurement outcomes, is not suitable for all machines, since with certain architectures measurement is expensive - requiring as much depth as tens to hundreds of gates [24].
Therefore, in our protocol we refrain from using any feed-forward protocols and prepare the ground state directly via a unitary circuit. This has the disadvantage of limiting the achievable
lattice size. However, we note, that in order to demonstrate signatures of non-abelian braiding, it is not necessary to use a lattice which is fully two-dimensional. In fact, it is sufficient to consider a quasi-one dimensional geometry, which we refer to as a _braiding ladder_ as shown in Fig. 6. For this geometry, we can prepare the ground state with a depth-two circuit. While this ground state does not feature long-range entanglement, this is not needed for the demonstration of non-abelian braiding as we will show explicitly in Section 5. We also consider a small truly two-dimensional lattice, just for the sake of proving that a direct unitary circuit preparation of the ground state is feasible and feed-forward protocols are not mandatory for the preparation of non-abelian topological order.
The braiding protocols we propose in the following are independent of the specific method of ground state preparation and can be applied on any lattice.
### Manipulating anyons
With the ground state preparation in place we lastly turn to the operators which allow us to create and move anyons and to the measurement. All of these operations need comparably short circuits. It is in this area where we think that our work contributes the most and provides results that can be generalised to other quantum double models. To elucidate our achievements we need to briefly review the basics of quantum double models. A full recap is deferred to the main text. The dof in a quantum double model are \(G\)-valued and group multiplication is an operation as elemental as a spin-flip in a spin-\(1/2\) system. Unfortunately, a single group multiplication requires several Toffoli gates which are non-Clifford and quite costly on most architectures. To be concrete, a single Toffoli gate translates to a depth-\(6\) circuit of elemental gates on Google's Sycamore chip, which we took as the benchmark for current state NISQ devices [24]. However, a careful investigation reveals that full group multiplications can be entirely avoided for the creation, manipulation and measurements of anyons. This realisation is one of our main contributions.
To see this, we remind the reader that in a quantum double model for each anyon there is a so-called _ribbon operator_ which creates an anyon pair at its end-points. As the name suggest, a ribbon operator is a quasi-one dimensional operator that can be defined for any path and has a finite \(\mathcal{O}(1)\)-width. The ribbon operator corresponding to a non-abelian anyon is non-unitary and is most conveniently implemented with ancilla qubits which are measured at the end of the protocol [16]. In applying the ribbon operators, a sequence of entangling operations between the ancillas and the dof on the lattice are performed giving rise to states that 'know' about the presence of anyons. These entangling operations depend on the anyon type. While they formally involve group multiplication and are, as such, costly, closer inspection reveals that for all anyon types drastic simplifications of the circuits can be performed once we tailor the circuits to the anyon type in question rather than applying a 'one size fits all' protocol. The key here is to make use of the structure of the excitations in the quantum double model. In particular each anyon corresponds to a pair \((\mathcal{C},\chi)\), where \(\mathcal{C}\) is a conjugacy class of \(G\) and \(\chi\) is an irreducible representation of the centraliser \(Z_{r}\), \(r\in\mathcal{C}\). The group multiplication involved in the ribbon operators needs to be performed only for elements of the respective conjugacy class. We show, that exploiting this property drastically reduces the circuit depth and removes all Toffoli gates. Such a complexity reduction for the ribbon operators generalises to other groups, in particular to solvable groups including \(S_{3}\).
### Charge measurements
Finally, we aim to design a protocol which can detect and uniquely determine topological charges. This is a non-trivial task and has so far not been demonstrated. In the experiments performed recently, the existence of multiple fusion channels and the action of non-abelian braiding on the latter has been demonstrated indirectly by measuring abelian charges before and after the braid. However, no experiment directly probed a state where a superposition of several charges had been created from the fusion of non-abelian anyons and identified the anyon content.
This might be due to the inherent difficulty of uniquely determining non-abelian charge content. In the quantum double model the operators needed to measure general charges are explicitly known, however, they require full group multiplications on several lattice-of and are therefore
prohibitively costly. To circumvent this we propose a _partial charge measurement_. To determine the total charge, one needs to evaluate how a given state transforms under the full group. However, one can instead measure how it transforms under a subgroup. Due to partial orthogonality of the characters of a subgroup with those of the group, a measurement outcome reveals partial information about the charge. In the case of \(D(D_{4})\) one finds that a certain outcome is only compatible with at most two different charges. Repeating the measurement for three different subgroups, we can unambiguously infer the charge. This procedure avoids costly multiplications with the full group and removes unfavourable Toffoli gates from the circuit at the cost of repeating the protocol three times.
### Probing non-abelian signatures
With these simplification in place we argue that it is possible to demonstrate the properties of non-abelian anyons of \(D(D_{4})\) on a NISQ device. In particular we propose two elemental protocols, anyon fusion and anyon braiding, demonstrating the existence of multiple fusion outcomes and non-commutativity of exchange operations, respectively. We furthermore propose protocols for anyon interferometry that allow us to measure the entries of the S- and T-matrices, which fully characterises the anyon content of \(D(D_{4})\). For the protocols proposed we provide numerical simulations using Google's realistic noisy quantum circuit simulator. All protocols proposed are ready to be run on the actual Sycamore chip and the results obtained from the simulations are representative of the actual experiments, if they were performed on a chip with similar lay-out and noise levels.
Our numerical findings indicate that current NISQ technology is ready to demonstrate the full signatures of non-abelian anyons in the \(D(D_{4})\) model. Similar results hold for \(D(Q_{8})\). We also investigate how our protocols need to be adapted for \(D(S_{3})\) which hosts non-abelian anyons that can be used for measurement assisted universal topological quantum computation. We find that on a device with native qutrits that support \(\mathbb{Z}_{3}\) multiplication, the simplifications discussed for \(D_{4}\) carry over to \(S_{3}\). However, for a device with qubits, the circuits for all individual aspects of the protocols are considerably more complicated and involve several Toffoli gates where the equivalent operation for \(D_{4}\) require just CNOT operations. When this is translated into device ready circuits of two-qubit gates, this yields to an increase of the depth by a factor ten or larger, rendering it unsuitable for execution on current NISQ devices.
## 3 Quantum double models
In this section, we will review Kitaev's quantum double models [25] and discuss their ground states and anyonic excitations. We will also present our protocols for creating and manipulating anyons, and for measuring topological charge.
Kitaev's quantum double models are (2+1d) Hamiltonian formulations of lattice gauge theory for finite gauge groups. Gauss' law is enforced energetically at each vertex by a Hamiltonian term and the model is at the deconfinement fixed point, where there are no electric field terms. The Hamiltonian, therefore, has two sets of terms - the gauge-invariant (magnetic) plaquette terms and the Gauss' law vertex terms [25, 26].
Quantum double models can also be understood as a subclass of the more general string-net models [15], which describe all non-chiral (2+1d) topological phases of matter, or as a generalisation of Kitaev's toric code [25] for which the gauge group \(\mathbb{Z}_{2}\) is generalised to an arbitrary discrete group \(G\). While all quantum double models have anyonic excitations, the anyons for models with abelian gauge group are themselves abelian. In order to obtain non-abelian anyons, it is necessary to consider non-abelian gauge groups \(G\). While the models for the latter are conceptually still very similar to the toric code, their definitions require slightly more care and notation, which we will introduce in the following.
**Hamiltonian.** For a given group \(G\) we can define its quantum double model on any arbitrary _directed_ graph. The local degrees of freedom are \(|G|\)-dimensional and assigned to the edges. The basis of their local Hilbert space is labeled by the group elements, i.e., we think of edges as being labeled by elements \(g\in G\). The Hamiltonian is given by a sum of mutually commuting terms that act on vertices \(V\) and plaquettes \(P\), respectively
\[H=-\sum_{v\in V}\mathbf{B}_{v}-\sum_{p\in P}\mathbf{A}_{p}\:. \tag{1}\]
Note, that we will here use a formulation of the theory on the _dual_ lattice compared to the lattice of the original work in Ref. [25]. We will now discuss these terms in more detail. As mentioned above, the vertex term enforces Gauss' law. To achieve this we first introduce a general vertex operator \(B_{v}(h)\) for every vertex \(v\). This operator projects onto all states for which the group elements assigned to the edges adjacent to the vertex multiply to \(h\). To make the product unambiguous we need to order the edges. This ordering has to fulfil additional constraints to be specified momentarily. In addition, a group element \(g\) assigned to an incoming (outgoing) edge enters as \(g\) (\(g^{-1}\)). E.g., for the trivalent vertex depicted in Fig. 0(a) we have
\[B_{v}(h)|g_{1},g_{2},g_{3}\rangle=\delta_{g_{1}g_{2}g_{3},h}|g_{1},g_{2},g_{3} \rangle\;. \tag{2}\]
Gauss' law is then enforced by choosing \(\mathbf{B}_{v}=B_{v}(e)\), where \(e\) denotes the identity element of the group. To ensure that the vertex projector commutes with the plaquette projector introduced below, the ordering of the edges needs to be consistent with the orientation of the latter. This can be done by endowing both with a counter-clockwise orientation. The ordering is then obtained by additionally specifying a starting edge for each vertex.
The plaquette term \(\mathbf{A}_{p}=\frac{1}{|G|}\sum_{g\in G}A_{p}(g)\) is defined in terms of operators \(A_{p}(g)\) which shift the labels of the edges forming the plaquette by \(g\). As alluded to previously, the plaquettes have an _orientation_. If the edge direction is aligned (anti-aligned) with this orientation, the shift acts as \(g_{i}\to gg_{i}\) (\(g_{i}\to g_{i}g^{-1}\)). E.g., for the plaquette shown in Fig. 0(b), we have
\[A_{p}(g)\,|g_{1},g_{2},\ldots\rangle=|gg_{1},g_{2}g^{-1},\ldots\rangle\,. \tag{3}\]
**Ground state.** It is not hard to verify that all terms in the Hamiltonian commute. Hence we can diagonalise it term by term. One can show that the leftover degeneracy depends only on the genus of the surface the graph is embedded in [25, 26]. We will work on a sphere-topology, for which the ground state is unique. For most quantum computing architectures, interactions need to be local, which limits the accessible topologies. However, a disk or sphere topology (a disk closed off by one large plaquette) is accessible.
All terms in the Hamiltonian are also projectors. Hence, one way to construct the ground state is to apply all projectors onto a state that has non-zero overlap with the ground state. In particular, we can start with the state \(|\{e\}\rangle\), where every edge is labelled by the identity element. This state trivially obeys all vertex projectors, so we just need to apply all plaquette projectors
\[|\psi\rangle=\prod_{p\in P}\mathbf{A}_{p}\,|\{e\}\rangle\,. \tag{4}\]
This state is the unique ground state and corresponds to the equal weight superposition of all states respecting Gauss' law.
### Anyon content
In the following, we will discuss the anyonic excitations in quantum double models and the algebra describing them.
**The algebra \(D(G)\).** The ground state is stabilised2 by the projectors in Eq. (1). Together with the property \(A_{p}(g_{1})A_{p}(g_{2})=A_{p}(g_{1}g_{2})\) this implies the stronger condition
Footnote 2: Meaning, it is the \(+1\) eigenstate.
\[A_{p}(g)\,|\psi\rangle =|\psi\rangle\,, \tag{5}\] \[B_{v}(e)\,|\psi\rangle =|\psi\rangle\,,\]
for all \(v\in V\), \(p\in P\) and \(g\in G\). Therefore, the elementary excitation above the ground state will violate one or more of the equations above and are characterised by how the operators \(B_{v}(h)\) and \(A_{p}(g)\) act on them. More precisely, these operators form an algebra [25, 26] (the quantum double algebra \(D(G)\)), which contains the full information about the anyonic excitations. In particular its irreducible representations (irreps) label the anyons.
For the toric code model, the \(A\) and \(B\) operators always commute and their algebras can be investigated independently. One finds the well-known \(e\) (for electric) and \(m\) (for magnetic) particles associated to vertex and plaquette violations, respectively, and their combination, the fermionic \((e,m)\)-particle. However, for non-abelian gauge-groups the operators \(A_{p}(g)\) and \(B_{v}(g)\) no longer commute, if the vertex \(v\) intersects the plaquette \(p\). To discuss their joint algebra, we consider _sites_\(s_{i}=(v_{i},p_{i})\) of adjacent vertices and plaquettes.
On the same site we have the following alge
braic relations
\[\begin{gathered} A_{s}(g)A_{s}(h)=A_{s}(gh),\\ B_{s}(g)B_{s}(h)=\delta_{g,h}B_{s}(h),\\ A_{s}(g)B_{s}(h)=B_{s}(ghg^{-1})A_{s}(g).\end{gathered} \tag{6}\]
This is the on-site representation of the quantum double algebra \(D(G)\)[25; 26].
We will now discuss its irreducible representations. However, we will refrain from providing any derivations (see e.g. Ref. [27]) and just state the results.
The irreducible representations are labelled by two objects, a conjugacy class \(C\) of the group \(G\) and an irreducible representation \(\chi\) of the centraliser \(Z(r)\) of the class representative \(r\in C\). The vector space on which \((C,\chi)\) acts is spanned by a basis \(\ket{\mu}=\ket{c,i}\), where \(c\in C\) and \(i\in\{1,2,\ldots,\dim\chi\}\), i.e., the first index goes over the conjugacy class elements while the second goes over the vector indices of the irreducible representation \(\chi\).
Note, that in the case of abelian groups, in particular the toric code, the conjugacy classes are trivial and identical to the group elements themselves. Their center is \(G\), which has \(\abs{G}\) one-dimensional representations isomorphic to \(G\) itself. Hence, the irreducible representations are given by \(\abs{G}^{2}\) tuples \((g,\rho_{i})\), where any group element \(g\) is paired with any irreducible representation \(\rho_{i}\), \(i=1,\ldots,\abs{G}\).
In the general, non-abelian case the irreducible representations do not factorise as can be seen from the action of the algebra generators on the vector space spanned by \(\ket{\mu}=\ket{c,i}\)
\[\begin{gathered} B_{\mu\nu}(h)=\bra{c,i}B(h)\ket{c^{\prime},i^{ \prime}}=\delta_{c,h}\delta_{c,c^{\prime}}\delta_{i,i^{\prime}},\\ A_{\mu\nu}(g)=\bra{c,i}A(g)\ket{c^{\prime},i^{\prime}}=\delta_{c, gc^{\prime}g^{-1}}\Gamma^{\chi}_{i,i^{\prime}}(g).\end{gathered} \tag{7}\]
Here, \(\Gamma^{\chi}_{c}\) is a map from the entire group, \(G\), onto the \(\chi\)-representation matrices defined by composing the representation matrices \(\Gamma^{\chi}\) themselves and a projection map \(g\to q_{c}^{-1}gq_{c^{\prime}}\), where \(q_{c}\) is a group element that satisfies \(q_{c}cq_{c}^{-1}=r\) and \(c^{\prime}=g^{-1}cg\).
To get a better understanding of the meaning behind these expressions, we consider three simple examples. Let us start with the vacuum (or trivial) representation, labelled by \((\{e\},\mathbb{1})\). This representation is one-dimensional and spanned by \(\ket{e,0}\)
\[\begin{gathered} B(h)\ket{e,0}=\delta_{h,e}\ket{e,0},\\ A(g)\ket{e,0}=\ket{e,0}.\end{gathered} \tag{8}\]
Hence, Eq. (5) implies that for the ground state every site houses the trivial representation.
Other important examples are pure charges and pure fluxes. A pure flux is labelled by a conjugacy class and the trivial representation of its centre, \((C,\mathbb{1})\). Its basis vectors are \(\ket{c,0}\) for \(c\in C\), with
\[\begin{gathered} B_{\mu\nu}(h)=\bra{c,0}B(h)\ket{c^{\prime},0}= \delta_{c,h}\delta_{c,c^{\prime}},\\ A_{\mu\nu}(g)=\bra{c,0}A(g)\ket{c^{\prime},0}=\delta_{c,gc^{\prime}g ^{-1}}.\end{gathered} \tag{9}\]
Pure flux excitations only violate the vertex term, the \(B\)-term.
Pure charge excitations are labelled by the group identity and a representation of the group \(G\) itself, \((\{e\},\chi)\). Its basis vectors are \(\ket{e,i}\) for \(i\in\{1,2,\ldots,\dim\chi\}\), with
\[\begin{gathered} B_{\mu\nu}(h)=\bra{e,i}B(h)\ket{e,i^{\prime}}= \delta_{e,h},\\ A_{\mu\nu}(g)=\bra{e,i}A(g)\ket{e,i^{\prime}}=\Gamma^{\chi}_{ii^{ \prime}}(g).\end{gathered} \tag{10}\]
Pure charge excitations only violate the plaquette term, the \(A\)-term.
Figure 1: The vertex and plaquette operators.
In particular, if we have a gauge field state, \(|\chi,p;i\rangle\), where each site houses a trivial representation except for one, \((v,p)\), which is occupied by a pure charge \((\{e\},\chi)\)3, this state satisfies all the constraints in Eq. (5) except for
Footnote 3: Note, that such a configuration is impossible on a sphere, but may occur on manifolds of genus \(g>0\).
\[A_{p}(g)\,|\chi,p;i\rangle=\sum_{i^{\prime}}\Gamma_{ii^{\prime}}^{\chi}(g)\,| \chi,p;i^{\prime}\rangle\,, \tag{11}\]
where \(i\) and \(i^{\prime}\) are the internal degrees of freedom of the charge4. This is the way a charged state transforms under gauge transformations in gauge field theory. Hence, we say that the plaquette terms generate gauge transformations.
Footnote 4: Note that the charge can be vector valued for non-abelian symmetry groups.
All other excitations which are neither pure charge nor pure flux are called dyons. They violate vertex and plaquette terms simultaneously, meaning they have a flux component associated with a vertex of the site \((v,p)\) and a charge component associated with its plaquette, but unlike the toric code fermion they cannot generally be broken down to a combination of pure charge and pure flux sitting next to one another.
**Non-abelian anyons.** To understand the distinction between abelian and non-abelian anyons we will focus on the physical meaning of the dimension \(d=\dim(C,\chi)=|C|\dim(\chi)\) of the irreducible representations.
If we have a gauge field state with an anyon of type \((C,\chi)\) at a site \(s\), the plaquette and vertex terms of that site will transform this state in accordance with that \(d\)-dimensional algebra representation. This implies that specifying the type and location of this anyon does not uniquely fix the gauge field state. Instead, there is a \(d\)-dimensional subspace \(\mathcal{H}_{s}(C,\chi)\) of the total Hilbert space associated with this any occupying this site. This \(d\)-fold degeneracy can be interpreted as a spin-like internal degree of freedom of the anyon.
Generalising this, we find that for a state with specified charge content \(\{(C_{s},\chi_{s})\}_{s}\) on all (non-overlapping) sites \(s\) the subspace associated to this configuration is
\[\mathcal{H}_{\{s\}}=\bigotimes_{s}\mathcal{H}_{s}(C_{s},\chi_{s}). \tag{12}\]
A more powerful alternative to this local description can be derived, if we notice that there is an algebra associated with the tensor product of representations, analogous to the Clebsch-Gordan (CG) decomposition of tensor products of linear representations of a group into the direct sum of irreducible representations.
In particular, if we have two charges \(a\) and \(b\) the associated Hilbert space can be written as a direct sum of the Hilbert space associated to charges \(c\). We write this as
\[a\otimes b=\bigoplus_{c}N_{ab}^{c}c, \tag{13}\]
with \(a\), \(b\) and \(c\) going over a set of anyon labels \((C,\chi)\) and \(N_{ab}^{c}\) being integer coefficients.
How this manifests physically is that if we have two anyons \(a\) and \(b\) in some region and measure the topological charge associated to that region we may get any label \(c\) for which \(N_{ab}^{c}\neq 0\). This process is referred to as anyon fusion.
The general expression for \(N_{ab}^{c}\) is cumbersome. For pure charge anyons \((\{e\},\chi_{i})\) it readily reduces to the well-known decomposition of a tensor product of group irreps into the direct sums of irreps \(\chi_{i}\otimes\chi_{j}=\bigoplus_{k}n_{ij}^{k}\chi_{k}\).
If the gauge group is abelian, all algebra representations are one-dimensional and there is no degeneracy once the charge content of a gauge field is specified. The fusion is unambiguous. We can see that by looking at the dimensions of the LHS and RHS of Eq. (13). For every \(a\) and \(b\) there is only one \(c\) for which \(N_{ab}^{c}=1\) and it is zero for all other \(c\).
The Hilbert space associated with the presence of multiple non-abelian anyons is the stage on which all striking phenomena of non-abelianness are played out. Besides the possibility of multiple fusion outcomes discussed above, also moving anyons around one another (braiding) acts non-trivially on this space and corresponds to a unitary operation. In such a braiding process the order of exchanges matter as in general the unitary matrices associated to the individual exchanges do not commute. The full theory that describes the braiding and fusion of anyons is a so-called unitary modular tensor category (UMTC) [25].
### Ribbon operators
In the following, we will explain how to create and move anyons. Anyons can always be created in pairs from the vacuum and for any anyon type, \((C,\chi)\), and any path on the graph between two
sites, see Figure 2, there is a _ribbon operator_ that creates a pair of said anyons at the end sites.
If the ribbons are closed and contractable, the associated ribbon operators leave the ground state unchanged. Moreover, they span the loop operator algebra that leave the ground state invariant. It is in that way that the quantum double ground state knows about the anyon spectrum.
We will not explain the derivation of the ribbon operators themselves, see Ref. [25; 26], just how to apply them to a state. The ribbon operators are not unitary in general. If the anyons have a dimension larger than one, the operators that create and manipulate them are non-local projectors. To simulate them on a digital quantum computer requires ancillas and measurements [16].
In Ref. [12] the group overcame this issue by means of unitary lattice deformations, unitarily transforming from a state with a set of non-abelian defects on one graph to another state with the same defect content but defined on a different graph, hence they were able to move the non-abelian Majorana defects unitarily. However, these are extrinsic and static lattice defects on top of a theory that is an abelian \(\mathbb{Z}_{2}\) gauge theory, hence, the nature of their non-abelian anyons is different from intrinsic gauge field excitations.
As we mentioned, a pair of \(d\)-dimensional non-abelian anyons defines a degenerate subspace of the gauge field Hilbert space that encodes the outcomes of their fusion. This encoding is increasingly non-local as we separate the anyons. However, we require accessing this information locally to move the anyons. To this end, we keep one \(d\)-dimensional ancilla qudit per end of the ribbon.
Every ribbon (cf. Fig. 2) is made up from two types of elementary triangles
1. A triangle consisting of two vertices and one plaquette center of the underlying graph. One side of the triangle coincides with an edge of the graph.
2. a triangle consisting of two plaquette centers and one vertex of the underlying graph. One of the triangle's sides crosses an edge of the graph.
Moving anyons means appending elementary triangles onto a ribbon. Each triangle type corresponds to a specific operator, the details of which also depend on the orientations of the edges and whether the triangle is attached to the back or front end of the ribbon. A detailed list is provided in Fig. 24 in Appendix A.
**The algorithm** that creates two anyons of type \((C,\chi)\) along a ribbon path in the internal state \(\ket{\alpha;\beta}=\ket{c^{\prime},i^{\prime};c,i}\) is the following
1. Initialise two (\(d=\ket{C}\dim\chi\)-dimensional ancilla qudits in the states \(\ket{\alpha;\beta}=\ket{c^{\prime},i^{\prime};c,i}\). We will think of the first qudit as the ribbon's back end and the second as its front end.
2. For each triangle in the ribbon path, depending on the triangle _type_ (and the edge orientations), sequentially apply one of the following unitaries acting on ancilla qudits \(\ket{c,i}\) and qudits encoding the group element \(g\) associated to the edge of the lattice \(\ket{g}_{\text{phys}}\) 1. Multiply on the coinciding edge \(\ket{c,i}\ket{g}_{\text{phys}}\rightarrow\ket{c,i}\ket{cg}_{\text{phys}}\). 2. Generalised-conjugate by the crossed edge \(\ket{c,i}\ket{h}_{\text{phys}}\rightarrow\ket{hch^{-1}}\Gamma^{\chi}_{c}(h) \ket{i}\ket{h}_{\text{phys}}\).
3. To complete the application of the ribbon operator project the ancilla qudits to the Bell state \(\bra{\Phi^{+}}=\frac{1}{\sqrt{d}}\sum_{\nu}\bra{\nu;\nu}\). The projection is done by measurement and post-selection.
Figure 2: An example of a ribbon, \(R=\{t_{1},t_{2},\dots,t_{7}\}\), between site \(s_{1}\) and \(s_{2}\). The black lines are the edges of the graph. The ribbon is made up from four type I elementary triangles and three type II elementary triangles.
For the other variants of Step 2 (for different edge orientations etc.) consult Figure 24 in Appendix A.
In the way presented above, we have started building up the ribbon from start-to-finish using only forward-type elementary triangles. Similarly, we could have started in the middle and extended in parallel both forward and backwards with appropriate types of elementary triangles, see Appendix A. Backwards-type elementary triangles, analogously to Step **2.** stated above, act on the backwards ancilla qudit.
As we can see, the operation is sequential so at best the depth of a circuit implementing this is \(\mathcal{O}(|R|)\), with the best depth achieved by starting at the middle and growing it both ways.
Of course, we can make this constant depth by separating the main ribbon into \(|R|\) smaller ribbons which however requires \(|R|\) pairs of qudits. To merge these ribbons, we also need \(|R|\) Bell-pair projections. As these are done via measurement and post-selection this requires an exponential in \(|R|\) number of repetitions for a \(\mathcal{O}(1)\) number of successful projections.
In the variant of the protocol we proposed, the post-selection will necessarily raise concerns about the scaling of the number of total runs needed to build up an adequate number of successful runs. In fact, the protocol for applying an open ribbon operator onto the ground state succeeds with a probability of \(1/d^{2}\) (neglecting the effect of noise). However, if the ribbon is closed, due to the no-flux condition of the ground state, the protocol succeeds with certainty. The number of runs needed scales exponentially with the number of open ribbon operators in the braiding protocol, which, in the cases we consider, is either one or two.
To interpret what we have here, let us look at the quantum resources. We have qudits representing the matter \(\mathcal{H}_{\text{matter}}\) and ancilla qubits representing the internal state of the anyons, or the fusion space, \(\mathcal{H}_{\text{ancillas}}\equiv\mathcal{H}_{\text{fusion}}\) which is already embedded non-locally in \(\mathcal{H}_{\text{matter}}\). The two types of triangles couple the two spaces in two different ways. Dor type I the \(\mathcal{H}_{\text{ancillas}}\) controls an action on \(\mathcal{H}_{\text{matter}}\) and for type II it is the other way around. After the measurement that disentangles the redundant copy of \(\mathcal{H}_{\text{fusion}}\), \(\mathcal{H}_{\text{matter}}\) is left in a state with the ribbon operator imprinted on it as signalled by the topological charge, braiding amplitude and phase.
### Charge measurements
In this section, we will explain how we measure the topological charge, i.e., the anyon label. This is the last element of our toolkit for probing topological order. An ideal charge measurement would differentiate any type of excitation on any site, such measurement is associated with the following set of projectors [26]
\[P_{s}^{(C,\chi)}=\frac{\chi(e)}{|Z(r)|}\sum_{c\in C}\sum_{z\in Z(r)}\chi^{*}(z )B_{s}^{(c)}A_{s}^{(q_{c}z\bar{q}_{c})}, \tag{14}\]
for each irreducible representation of \(D(G)\).
Since we work in a basis where projectors onto a given \(G\)-valued flux through a vertex \(v\), \(B_{v}(h)\), are diagonal, we can just do a full projective measurement of the gauge field degrees of freedom in that basis. The result of such a measurement is a labelling of all edges by group elements from which we can compute the flux through any vertex.
However, in order to measure the charge component, one needs to implement a controlled \(A_{s}(g)\) operator which needs a controlled group multiplication applied to all edges of \(s\)'s plaquette, i.e. \(\left|g\right\rangle\left|g_{i}\right\rangle\rightarrow\left|g\right\rangle \left|gg_{i}\right\rangle\) for all \(\left|g_{i}\right\rangle\) around the plaquette. This requires circuits which, in the case of non-abelian groups such as \(D_{4}\) which we will study in more detail, are prohibitively expensive.
The main idea to circumvent this problem is to instead use partial charge measurements that have a substantially reduced circuit depth. Such partial measurements do not determine the charge completely. However, when we combine a set of different partial measurements we are able to deduce the full charge content from the measurement outcomes.5
Footnote 5: Note, that these measurements are destructive, meaning we need to prepare the state again for the next measurement. However, if we only need to use a small number of different subgroups (in the case of \(D(D_{4})\) considered here at most three different subgroups are sufficient), this trade-off is acceptable.
The key idea behind the partial measurement protocol is that controlled \(A_{s}^{(g)}\) becomes significantly simpler to perform once we restrict ourselves to a proper subgroup of the full group, i.e. \(g\in H\subset G\).
With Eq. (11) in mind, we propose the following algorithm for the \(H\)-partial charge measurement on a plaquette \(p\)
1. Prepare an ancilla qudit, \(a\), encoding the elements of \(H\subset G\), in an equal superposition over all elements, so that the joint total state of the system is \[\sum_{h\in H}\left|h\right\rangle_{a}\left|\psi\right\rangle_{\text{phys}},\] where \(\left|\Psi\right\rangle_{\text{phys}}\) is the physical system.
2. Apply an \(a\)-controlled \(A\)-multiplication onto the edges of the plaquette \(p\) \[\sum_{h\in H}\left|h\right\rangle_{a}\left|\psi\right\rangle_{\text{phys}} \rightarrow\sum_{h\in H}\left|h\right\rangle_{a}A_{p}(h)\left|\psi\right\rangle _{\text{phys}}.\]
3. Apply a unitary \[U_{a}=\sum_{\chi_{H}}\sum_{i^{\prime},j^{\prime}}\sum_{h^{\prime}\in H}\Gamma _{i^{\prime}j^{\prime}}^{\chi_{H}}(h^{\prime})\left|\chi_{H};i^{\prime},j^{ \prime}\right\rangle_{a}\left\langle h^{\prime}\right|_{a}\] onto the ancilla qudit \(a\). Here \(\chi_{H}\) labels the irreducible representations of \(H\subset G\) and \(\Gamma^{\chi_{H}}(h^{\prime})\) are the representation matrices, with \(i^{\prime}\) and \(j^{\prime}\) being the vector indices for a given representation \(\chi_{H}\).
4. Measure the ancilla qudit \(a\).
To see how this protocol works, let us examine the case of a gauge field with well-defined pure charge on a plaquette \(\left|\psi\right\rangle_{\text{phys}}=\left|\chi;i\right\rangle\). Using Eq. (11) we can write the joint state after Step 2 as
\[\sum_{h\in H}\left|h\right\rangle_{a}A_{p}(h)\left|\chi;i\right\rangle=\sum_{h \in H}\left|h\right\rangle_{a}\Gamma_{ij}^{\chi}(h)\left|\chi;j\right\rangle. \tag{15}\]
If \(H=G\), the state after Step 3 becomes
\[\sum_{h\in G}U_{a}\left|h\right\rangle_{a}\Gamma_{ij}^{\chi}(h) \left|\chi,p;j\right\rangle=\] \[\sum_{h\in G}\sum_{\chi^{\prime}}\sum_{i^{\prime},j^{\prime}} \Gamma_{i^{\prime}j^{\prime}}^{\chi^{\prime}}(h)\Gamma_{ij}^{\chi}(h)\left| \chi^{\prime};i^{\prime},j^{\prime}\right\rangle_{a}\left|\chi,p;j\right\rangle=\] \[\sum_{j}\left|\chi;i,j\right\rangle_{a}\left|\chi,p;j\right\rangle \tag{16}\]
The decoupling we see in the last line after summing over \(h\) is guaranteed by Schur's orthogonality lemma.
In the case above, the result of the measurement in Step 4 is a label \((\chi,i,j)\), representing the charge, the internal state before the measurement and the internal state after the measurement.
If we, however, take \(H\subset G\), then the charge information is partial. By partial charge information, we mean that the result of the measurement in the last step, the label \((\chi_{H},i,j)\) is no longer compatible with only one charge but a set of charges, i.e., the charge is not fully determined.
We may repeat the procedure using different subgroups \(H\in G\) to gather further partial information on the charge in the hope that we will be able to deduce the charge fully. In the considered examples, choosing different subgroups proves to be sufficient. This relies on the partial orthogonality of character tables of a group and its subgroup, and is demonstrated for the example of the group \(D_{4}\) below.
**Beyond pure charges.** The partial charge measurement scheme is exact, i.e., unambiguous, when the chosen subgroup coincides with the centre of the conjugacy class of an anyon \((C,\chi)\), i.e., \(H=Z(r)\) for \(r\in C\). The label \(\chi_{H}\) in fact is the label of the irreducible representation of the \(Z(r)\) labelling the dyon.
The full measurement protocol, in general, then consist of performing a \(H\)-partial charge measurement and then reading-off the flux \(f\) on the same site. We then compute the center of the measured flux \(Z(f)\) and consider the following three cases.
If \(H=Z(f)\), the protocol is complete. The measurement outcome corresponds to \((C_{f},\chi_{H})\).
If \(H\subset Z(f)\), we need to perform partial charge measurements for other subgroups of \(G\) that are also subgroups of \(Z(f)\). We then combine these results to determine the charge label uniquely. This requires partial orthogonality of the character tables, see Section 3.4 for an example.
If \(H\not\subset Z(f)\) the measurement is discarded in post-selection and we switch to a different subgroup \(H\).
### Quantum double of \(D_{4}\)
Throughout the rest of the paper (except for Section 6) we will focus on the group \(D_{4}\) and its lattice gauge theory. Hence in this section, we will describe the group structure of \(D_{4}\), its quantum double algebra, as well as the representation theory of both.
The dihedral group of order \(8\) is the symmetry group of a square. It is generated by a \(\pi/2\)-rotation \(r\) and a reflection \(m\) along a diagonal. The group law is defined by the following identities
\[r^{4}=e,\;m^{2}=e,\;mr=r^{3}m. \tag{17}\]
This group is solvable and all of its proper subgroups are abelian. It can be decomposed as \(D_{4}=\mathbb{Z}_{2}^{m}\ltimes\mathbb{Z}_{4}^{r}\). This decomposition is not unique, \(D_{4}=\mathbb{Z}_{2}^{m}\ltimes(\mathbb{Z}_{2}^{r^{2}}\times\mathbb{Z}_{2}^{mr})\) is also a valid decomposition.6
Footnote 6: The superscripts in \(\mathbb{Z}_{n}^{x}\) label the group generator.
The conjugacy classes of \(D_{4}\) alongside their centres are listed in Table 1.
There are five conjugacy classes, which implies that there are also five irreducible representations. Their dimensions are \((1,1,1,1,2)\). The characters, i.e., the traces of the representation matrices, are given in Table 2.
When labelling the representations of the algebra \(D(D_{4})\), we will also need the irreducible representations of the centres of the conjugacy classes. The centres of the three non-trivial conjugacy classes are abelian and have four elements. Hence, they have four one-dimensional irreducible representations. Their character tables are shown in Table 3.
**Anyon content of \(D(D_{4})\).** The task of listing all of the irreducible representations of \(D(D_{4})\) and their dimensions from this data is straightforward. There are \(2\times 5+3\times 4=22\) irreducible representations, i.e., types of anyons in the \(D_{4}\) quantum double model.
Eight of them are one-dimensional, i.e. abelian, while the rest are two-dimensional. Other than the vacuum, four of them are pure charges and four are pure fluxes. For dyons with the \(\mathcal{C}_{r^{2}}\) flux component we say that they have a trivial flux even though it is not the vacuum flux. The flux can be factored out, just like in the case of the toric code fermion. This comes from the fact that \(r^{2}\) commutes with all group elements, just like the identity.
We label the vacuum and the trivial flux as \(0\equiv(\mathcal{C}_{e},1)\) and \(\tilde{0}\equiv(\mathcal{C}_{r^{2}},1)\), respectively. Likewise, the pure charges and dyons with vacuum and trivial flux are labelled as \(\Sigma_{\chi}\equiv(\mathcal{C}_{e},\chi)\) and \(\tilde{\Sigma}_{\chi}\equiv(\mathcal{C}_{r^{2}},\chi)\), with \(\chi\) being one of the nontrivial irreducible representations of \(D_{4}\). Nontrivial pure fluxes are labelled \(\Psi_{x}=(\mathcal{C}_{x},1)\), with \(C_{x}\) being one of the nontrivial conjugacy classes of \(D_{4}\). The rest of the dyons have less informative labels such as \(\{\tilde{\Psi}_{x},\Phi_{x},\tilde{\Phi}_{x}\}\) and can be found in Appendix B, where we also provide the quantum dimensions and the topological twists together with the fusion rules.
**Charge measurements reprise.** Let us review the partial charge measurements for \(D(D_{4})\). We choose the subgroups \(\{H_{r},H_{m},H_{mr}\}\) for the measurement protocol. All the irreducible representations of these subgroups, \(\chi_{H_{x}}\), are one dimensional. This means that the measurement outcome of the partial charge measurement is just the irrep label \((\chi_{H_{x}})\). Looking at their character tables, we find a partial orthogonality with the characters of the irreps of \(G\). Concretely, we
\begin{table}
\begin{tabular}{|c|c|} \hline \(C\) & \(Z(r)\), \(r\in C\) \\ \hline \(\mathcal{C}_{e}=\{e\}\) & \(D_{4}\) \\ \(\mathcal{C}_{r^{2}}=\{r^{2}\}\) & \(D_{4}\) \\ \(\mathcal{C}_{r}=\{r,r^{3}\}\) & \(\mathbb{Z}_{4}^{r}\equiv H_{r}\) \\ \(\mathcal{C}_{m}=\{m,mr^{2}\}\) & \(\mathbb{Z}_{2}^{m}\times\mathbb{Z}_{2}^{r^{2}}\equiv H_{m}\) \\ \(\mathcal{C}_{mr}=\{mr,mr^{3}\}\) & \(\mathbb{Z}_{2}^{mr}\times\mathbb{Z}_{2}^{r^{2}}\equiv H_{mr}\) \\ \hline \end{tabular}
\end{table}
Table 1: The conjugacy classes of \(D_{4}\) alongside their centres.
\begin{table}
\begin{tabular}{|c|c c c c|} \hline \(D_{4}\) & \(\mathcal{C}_{e}\) & \(\mathcal{C}_{r^{2}}\) & \(\mathcal{C}_{r}\) & \(\mathcal{C}_{m}\) & \(\mathcal{C}_{mr}\) \\ \hline \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) \\ \(\alpha_{r}\) & \(1\) & \(1\) & \(1\) & \(-1\) & \(-1\) \\ \(\alpha_{m}\) & \(1\) & \(1\) & \(-1\) & \(-1\) & \(1\) \\ \(\epsilon\) & \(2\) & \(-2\) & \(0\) & \(0\) & \(0\) \\ \hline \end{tabular}
\end{table}
Table 2: Character table of \(D_{4}\).
\begin{table}
\begin{tabular}{|r|r r r r r|} \hline \(H_{r}\) & \(e\) & \(r\) & \(r^{2}\) & \(r^{3}\) \\ \hline \(1\) & \(1\) & \(1\) & \(1\) & \(1\) \\ \(i\) & \(1\) & \(i\) & \(-1\) & \(-i\) \\ \(-1\) & \(1\) & \(-1\) & \(1\) & \(-1\) \\ \(-i\) & \(1\) & \(i\) & \(-1\) & \(i\) \\ \hline \end{tabular}
\begin{tabular}{|r|r r r r|} \hline \((1,1)\) & \(1\) & \(1\) & \(1\) & \(1\) \\ \((1,-1)\) & \(1\) & \(-1\) & \(1\) & \(-1\) \\ \((-1,1)\) & \(1\) & \(-1\) & \(-1\) & \(-1\) \\ \((-1,-1)\) & \(1\) & \(-1\) & \(-1\) & \(1\) \\ \hline \end{tabular}
\end{table}
Table 3: Character tables of relevant subgroups of \(D_{4}\). The groups \(H_{m}\) and \(H_{mr}\) are isomorphic.
compute \(\langle\chi_{H_{x}},\chi\rangle=\sum_{h\in H_{x}}\chi_{H_{x}}(h)\chi(h)\) for the five different charges and list the results in Table 4.
For example, imagine we performed a \(H_{m}\)-partial charge measurement on a plaquette \(p\) and obtained the measurement outcome \((1,1)\). This label has a non-vanishing overlap with the trivial charge \(1\) and charge \(\alpha_{m}\). Hence, both \(0\) and \(\Sigma_{\alpha_{m}}\) can be anyons present on the plaquette.
Now imagine we have performed all three measurement and obtained the set of labels \(\{-1,(1,1),(1,-1)\}\). This set is only compatible with the charge \(\alpha_{m}\) and hence \(\Sigma_{m}\) must be on a plaquette \(p\).
**Beyond pure charges.** The four main subgroups we consider in the partial charge measurement are also the centralisers of the respective conjugacy classes, \(H_{x}=Z(x)\). Hence, as mentioned in the last section, if we read-off the flux whose centre is the subgroup we used in the partial charge measurement, the result uniquely determines the dyon label.
It is only for dyons of trivial flux, \(\{e,r^{2}\}\), that we need to use the repeated partial charge measurements with different subgroups alongside partial orthogonality of the character tables of \(D_{4}\) and its four-element subgroups to uniquely determine the topological charge.
## 4 Probing non-abelian anyons
In this section, we will present a set of protocols that allow us to demonstrate the non-abelian character of the anyons of \(D(D_{4})\) on realistic to-date quantum hardware. The main obstacle to overcome here is the noise which limits the depth of the circuits. We will show how low circuit depths can be achieved for all protocols. A benchmark of the proposed experiments is presented in the next section showing numerical simulation results for a realistic noise model.
The main experiments showcasing the non-abelian nature of the excitations we propose are the following.
1. Anyon fusion. Here, the non-abelian nature is signalled by non-unique fusion outcomes.
2. Non-abelian braiding. The order of the braids does not commute.
3. S- and T-matrix measurements. This data describes the amplitudes of links and twists and almost uniquely7 characterises the full anyon theory. Footnote 7: There are some exceptions, where the anyon theory, i.e., the UMTC is not determined uniquely by the \(S\)-and \(T\)-matrices alone [28]. The smallest known example where this is the case has 49 particle types.
### Achieving low circuit depth
In this section, we will discuss how to achieve low circuit depths for creating and moving the anyons of \(D(D_{4})\). We will present the encoding of group elements into qubits, and show how short circuits for the elementary triangles of the ribbon operators discussed in Section 3.2 can be obtained.
**Encoding.** The order of \(D_{4}\) is \(8\), hence we need three qubits to encode a group element. We chose the following map
\[\left|g\right\rangle\equiv\left|a\right\rangle_{m}\left|b\right\rangle_{r} \left|c\right\rangle_{r^{2}}\iff g=m^{a}r^{b}(r^{2})^{c}, \tag{18}\]
where \(a,b,c\in\{0,1\}\), \(r\) is the \(90^{\text{o}}\) rotation and \(m\) is the reflection.
When encoding the internal space of the anyon, which we need for the ribbon operator protocol, we note that the basis \(\left|\nu\right\rangle=\left|c,i\right\rangle\), contains a group element \(c\in C\) restricted to a conjugacy class. All the non-trivial conjugacy classes contain just two elements and can be encoded with
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline \(\langle\chi_{H_{x}},\chi\rangle\) & \(1\) & \(\alpha_{r}\) & \(\alpha_{m}\) & \(\alpha_{mr}\) & \(\alpha_{\epsilon}\) \\ \hline \(1\) & \(4\) & \(4\) & \(0\) & \(0\) & \(0\) \\ \(-1\) & \(0\) & \(0\) & \(4\) & \(4\) & \(0\) \\ \(i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(4\) \\ \(-i\) & \(0\) & \(0\) & \(0\) & \(0\) & \(4\) \\ \hline \end{tabular} \begin{tabular}{|c|c c c c|} \hline \(\langle\chi_{H_{m}},\chi\rangle\) & \(1\) & \(\alpha_{r}\) & \(\alpha_{m}\) & \(\alpha_{mr}\) & \(\alpha_{\epsilon}\) \\ \hline \((1,1)\) & \(4\) & \(0\) & \(4\) & \(0\) & \(0\) \\ \((1,-1)\) & \(0\) & \(4\) & \(0\) & \(4\) & \(0\) \\ \((-1,1)\) & \(0\) & \(0\) & \(0\) & \(0\) & \(4\) \\ \((-1,-1)\) & \(0\) & \(0\) & \(0\) & \(0\) & \(4\) \\ \hline \end{tabular}
\begin{tabular}{|c|c c c c|} \hline \(\langle\chi_{H_{mr}},\chi\rangle\) & \(1\) & \(\alpha_{r}\) & \(\alpha_{m}\) & \(\alpha_{mr}\) & \(\alpha_{\epsilon}\) \\ \hline \((1,1)\) & \(4\) & \(0\) & \(0\) & \(4\) & \(0\) \\ \((1,-1)\) & \(0\) & \(4\) & \(4\) & \(0\) & \(0\) \\ \((-1,1)\) & \(0\) & \(0\) & \(0\) & \(0\) & \(4\) \\ \((-1,-1)\) & \(0\) & \(0\) & \(0\) & \(0\) & \(4\) \\ \hline \end{tabular}
\end{table}
Table 4: Partial orthogonality of \(D_{4}\) with respect to its three four-element subgroups.
only one qubit. Since the centralisers of these conjugacy classes are abelian and their representations one-dimensional, we can encode the full internal space of the anyon with just one qubit. For anyons with \(m\)-flux, \((C_{m},\chi)\), we have
\[\ket{c,0}\equiv\ket{a}_{a}\iff c=m^{1}r^{0}(r^{2})^{a}\in\{m,mr^{2}\}, \tag{19}\]
for anyons with \(mr\)-flux, \((C_{mr},\chi)\), we have
\[\ket{c,0}\equiv\ket{a}_{a}\iff c=m^{1}r^{1}(r^{2})^{a}\in\{mr,mr^{3}\}, \tag{20}\]
and for anyons with \(r\)-flux, \((C_{r},\chi)\), we have
\[\ket{c,0}\equiv\ket{a}_{a}\iff c=m^{0}r^{1}(r^{2})^{a}\in\{r,r^{3}\}. \tag{21}\]
Similarly the subgroups are encoded with two qubits. For example for \(h\in H_{r}\) we use
\[\ket{h}\equiv\ket{a}_{r}\ket{b}_{r^{2}}\iff h=m^{0}r^{a}(r^{2})^{b}. \tag{22}\]
To encode the representations of the subgroups \(\chi_{H}\) for \(H_{m}\) and \(H_{mr}\) we use
\[\ket{(-1)^{a},(-1)^{b}}\equiv\ket{a}_{a}\ket{b}_{b}, \tag{23}\]
and for \(H_{r}\) we use
\[\ket{i^{2a+b}}\equiv\ket{a}_{a}\ket{b}_{b}. \tag{24}\]
**Circuits.** To implement the anyon protocols we need circuits for the following operations.
1. Controlled group multiplication \(\ket{g,h}\to\ket{g,gh}\) 1. full domain variant (\(g\in G\)), used for ground state preparation, 2. restricted domain variant (\(g\in C\)), used in ribbon operators, 3. restricted domain variant (\(g\in H\)), used for partial charge measurements.
2. Controlled generalised conjugation \(\ket{c}\ket{i}\ket{g}\to\ket{gcg^{-1}}\Gamma_{c}^{\chi}\ket{i}\ket{g}\).
3. Decoupling unitary of the partial charge measurement \[U_{a}=\sum_{\chi h}\sum_{i^{\prime},j^{\prime}}\sum_{h^{\prime}\in H}\Gamma_{ i^{\prime}j^{\prime}}^{\chi_{H}}(h^{\prime})\ket{\chi_{H}};i^{\prime},j^{ \prime}\rangle_{a}\bra{h^{\prime}}_{a}.\]
We will focus on a set of examples illustrating the expected circuit depths. The full domain group multiplication (see Fig. 3) consists of three Toffoli and 2 CNOT gates. Note, that on most hardwares a 3-qubit Toffoli gate must be decomposed into 2-qubit gates. Hence, each Toffoli gate should be seen as a depth-6 circuit in itself6. With this in mind, the circuit depth for a single group multiplication is 22. Current noise levels limit the circuit depth to about 100 gates. Therefore, a naive protocol based on full domain group multiplication is doomed to fail.
Footnote 6: If the coupling gates are restricted to act on neighbouring qubits, additional swaps may be needed which increase the depth to up to 12.
We now contrast this with the reduced domain multiplication, where the control qubits are restricted to a certain conjugacy class. A representative example is shown in Fig. 3. The circuit is reduced to a depth-1 circuit with only one CNOT gate. We can appreciate a dramatic reduction of circuit depth achieved by restricting the domain of the controlled multiplication circuit.
This is one of the main facts that allows us to greatly simplify many circuits related to ribbon operators and partial charge measurements. However, the direct ground state preparation does not benefit from these simplifications. We will discuss the ground state preparation separately in the next section.
Let us move on to the generalised conjugation. The circuit depth for this heavily depends on the type of anyon. For a pure flux the representation matrix \(\Gamma\) is trivial and the generalised conjugation simplifies to a conventional conjugation which can be implemented by a circuit of depth one or two (see Fig. 4 for an example). In contrast, for a dyon with a one-dimensional but non-trivial irrep of the center, the circuit is more involved. Since the irrep is one dimensional, it corresponds to a phase factor and does not need to be encoded in an additional vector space. I.e., we can still use \(\ket{c}\) instead of \(\ket{c,i}\). However, the conjugation of \(c\) has to be accompanied by appropriate phase factors implementing the action of \(\Gamma_{c}^{\chi}(g)\). A circuit showing such a generalised conjugation for the \(m\)-dyon \(\tilde{\Phi}_{m}\equiv(C_{m},(-1,-1))\) is shown in Fig. 4. We note, that there is still no need for Toffoli gates, however, the circuit is more complex than for pure fluxes. The full list of circuits for generalised conjugations is given in Appendix B.
We note that there are adaptive constant-depth circuits (measurement-based schemes) for applying ribbon operators that have better scaling for
solvable gauge groups [29] (such as \(D_{4}\)), but the measurement overhead makes them less preferable for small systems.
Lastly, we look at the decoupling unitaries for partial charge measurements. The unitaries for subgroups \(H\subset G\) are shown in Figure 5.
**Geometry and ground state preparation.** Our proposal for ground state preparation does not use any feed-forward protocols. Therefore, it is suitable for an experimental set-up, where measurements terminate the circuit. We also note, that so far known feed-forward protocols for ground state preparation only cover quantum double models for solvable groups excluding more complex quantum double models and the more general string-net models based on fusion categories beyond groups.
Not using a feed-forward protocol, however, comes at the cost of limiting the lattice geometry to either small two-dimensional graphs or quasi one-dimensional graphs in order to keep the circuit depth short.
For the majority of our result, we will focus on a quasi one-dimensional graph, that we call _braiding ladder_. This geometry is shown in Figure 6.
The ground state on this geometry can be prepared with a constant-depth circuit, i.e., it does not scale with the system size. The ground state on an \(n\)-segment ladder is given by
\[\ket{GS}=\sum_{g_{1},g_{2},\ldots,g_{n}}\ket{g_{1},g_{1},g_{2},g_{2},\ldots,g_ {n},g_{n}}\;. \tag{25}\]
This state has no long-range entanglement and factorises into a sequence of qudit Bell-pairs. However, entanglement is built up once one introduces anyons via ribbon operators. In fact, this geometry is sufficient to correctly show the braiding statistics of the anyons. The only requirement for correctly reproducing the braiding statistics is that the ribbon paths only _touch_, but do not intersect and is fulfilled here.
The shallow circuit preparing this state is shown in Figure 6.
We also examine fusion on a small two-dimensional graph, as shown in Figure 7.
### Elemental protocols
**Anyon fusion** is the first and most basic protocol that can demonstrate non-abelianness.
After preparing the ground state, we apply two ribbon operators. One between sites \(s_{1}\) and \(s_{2}\)
Figure 4: Circuits for generalised conjugation. The first three qubits encode the physical, group valued, gauge field \(\ket{g}\). The last qubit encodes the ancilla qubit representing the internal state of the (two-dimensional) anyon \(\ket{c}\). Left: The conjugation unitary for a pure flux \(\Psi_{m}\)\(\ket{c}\ket{g}\rightarrow\ket{gcg^{-1}}\ket{g}\). Right: The generalised conjugation unitary for the dyon \(\hat{\Phi}_{m}\): \(\ket{c}\ket{g}\rightarrow\Gamma(g)\ket{gcg^{-1}}\ket{g}\), where the representation ‘matrix’ \(\Gamma(g)\in U(1)\) and \(S=\text{diag}(1,i)\).
Figure 5: The decoupling unitary map used in partial charge measurements for subgroups of \(D_{4}\). Left: \(H_{m},H_{mr}\simeq\mathbb{Z}_{2}\times\mathbb{Z}_{2}\). Right: \(H_{r}\simeq\mathbb{Z}_{4}\). In the circuits above \(H\) denotes the Hadamard gate.
Figure 3: Circuits for controlled group multiplication \(\ket{g,h}\rightarrow\ket{g,gh}\). Left: Both \(g\) (first three qubits) and \(h\) (last three qubits) are unrestricted, i.e., \(g,h\in G\). Center: \(g\in H_{m}\) is encoded by just the first two qubits. Right: \(g\in C_{m}\) is encoded by just the first qubit \(a\) (cf. Eq. (19)).
and the other between sites \(s_{2}\) and \(s_{3}\). This creates two pairs of anyons and fuses them on site \(s_{2}\). We then perform a partial charge measurement and flux readout for this site. This experiment can be performed on both the braiding ladder and the small two-dimensional graph.
We can do this experiment for any anyon types. Here, we will give an example for the case of fusing the pure flux \(\Psi_{m}\) with itself. In contrast to the abelian case, this fusion is not restricted to yield vacuum. Instead we have
\[\Psi_{m}\times\Psi_{m}=0+\tilde{0}+\Sigma_{m}+\tilde{\Sigma}_{m},\]
where \(0\) is the vacuum, \(\tilde{0}\) is the abelian flux corresponding to the other element of the centre of \(D_{4}\), \(r^{2}\), and the other two anyons are a pure charge associated with the \(\alpha_{m}\) representation of \(D_{4}\) and the dyon of this pure charge and the abelian flux.
All four of these outcomes can be differentiated by reading out the flux and performing the partial charge measurement using \(H_{r}\) or \(H_{mr}\). For concreteness, we choose \(H_{mr}\) for which we expect to only observe the measurement outcomes \((1,1)\) and \((1,-1)\), corresponding to no charge and \(\alpha_{m}\), respectively. A discussion of this protocol implemented on a realistic hardware device and a corresponding numerical simulation are shown in Section 5.1.
**Anyon braiding.** The second phenomenon that gives the non-abelian anyons their name is the fact that the image of the braid group, as represented by physically braiding these anyons, is non-abelian. This means that the order of interchanges matter, i.e. \(\sigma_{12}\sigma_{23}\neq\sigma_{23}\sigma_{12}\), where \(\sigma_{i(i+1)}\) is the clockwise interchange of particle \(i\) and \(i+1\), the generator of the braid group.
The braiding procedures we want to implement to show this fact are shown in Figure 8. We create two pairs of anyons from the vacuum, perform two interchanges \(\sigma_{12}\sigma_{23}\), and then fuse pairwise. Then we repeat the same protocol with the inverted order of the interchanges, i.e., \(\sigma_{23}\sigma_{12}\). For concreteness, we again consider pairs of pure fluxes \(\Psi_{m}\).
In the second protocol, we annihilate the pairs that have a fixed fusion channel. Given that they
Figure 8: The two braiding protocols, differing only in the order of exchanging the two anyons. C stands for creation, M for measurements. The protocol on the left will have 4 fusion outcomes, while the protocol on the right can only produce vacuum.
Figure 6: Quasi one-dimensional lattice allowing for shallow ground state preparation of the quantum double model \(D(D_{4})\). Left: Yellow bars denote individual spins associated to edges, which are composed of three qubits each. Edge orientations are needed to define the vertex- and plaquette operators of the corresponding Hamiltonian. Right: Circuit for groundstate preparation per loop.
Figure 7: A small two-dimensional graph. Left: Yellow bars denote individual spins associated to edges, which are composed of three qubits each. Edge orientations are needed to define the vertex- and plaquette operators of the corresponding Hamiltonian. Right: The ground state on this small two-dimensional graph. The circuit needed for its preparation is discussed in Section 5.1.
are created from the vacuum they will fuse to the vacuum. In the first protocol, we annihilate the pairs whose fusion channel is not fixed, hence all four fusion outcomes are expected. We see that the two braidings indeed produce two different states.
For further discussion of a concrete implementation and numerical results, see Section 5.1.
### Anyon interferometry
In order to measure the relative phase between different braiding processes, we need to devise an interference protocol. This is done by setting up a control qubit \(c\), whose state is entangled with different braiding protocols
\[\ket{\psi}_{c}\ket{\text{GS}}\rightarrow\ket{0}_{c}\ket{\Psi_{0}}+\ket{1}_{c }\ket{\Psi_{1}}, \tag{26}\]
where \(\Psi_{i}\) are the two wave functions of the matter degrees of freedom after two different braiding operations.
If the charge content of the two states is the same \(\ket{\Psi_{0}}\) and \(\ket{\Psi_{1}}\) may only differ by a constant. Hence we can write
\[\ket{0}_{c}\ket{\Psi_{0}}+\ket{1}_{c}\ket{\Psi_{1}}=\left(\ket{0}_{c}+C_{01} \ket{1}_{c}\right)\ket{\Psi_{0}}, \tag{27}\]
and by the means of tomography on the control qubit \(c\) we can extract the relative constant \(C_{01}\).
For a suitable choice of the two braiding protocols, this constant reveals elements of the S- and T-matrix as we will see next.
**S-matrix elements.** To measure the S-matrix elements, we create a superposition of two states by conditioning an equal time (closed) ribbon operator shown in blue in Fig. 10 on the control qubit.
Note that the S-matrix appearing in Figure 10 is normalised
\[\tilde{S}(a,b)=\frac{\mathcal{D}}{d_{a}d_{b}}S(a,b), \tag{28}\]
where \(\mathcal{D}=\sqrt{\sum_{i}d_{i}^{2}}=|G|\) is the total quantum dimension, which makes \(|\tilde{S}(a,b)|\leq 1\). To see that note that [7]\(S_{ab}=\frac{1}{\mathcal{D}}\sum_{c}N_{ab}^{c}\frac{\theta_{a}}{\theta_{a} \theta_{b}}d_{c}\) and \(|\theta_{x}|=1\). We also know that \(\sum_{c}N_{ab}^{c}d_{c}=d_{a}d_{b}\), so it follows that \(|S_{ab}|\leq\frac{1}{\mathcal{D}}d_{a}d_{b}\).
The conceptually simplest interferometry scheme is shown in Figure 9(b). Here we apply a ribbon of flavour \(a\), then, depending on the state of the controlling qubit, either thread a second ribbon of flavour \(b\) around the first ribbon or do nothing. This is very similar to the ideal scheme shown in Figure 9(a).9
Footnote 9: Compared to Fig.9(a), Fig. 9(b) is only missing a measurement checking that applying a closed ribbon \(b\) on it’s own doesn’t introduce any phase, which it doesn’t.
However, in this protocol every single qubit gate of the \(b\)-ribbon operator becomes a two qubit gate (since the ribbon is conditioned on the control \(c\)) and every two qubit gate becomes a three qubit gate (unitarily similar to a Toffoli gate). Hence, the number of entangling gates grows quickly with the ribbon length.
A smarter alternative is to condition the ribbon type instead (see Figure 9(c)). In this case we identify where the ribbon operators for the two anyon types differ, and condition only those operations. The circuit for this can be much shorter, see Fig. 9 for an example. In particular, for the case of \(D(D_{4})\), it turns out, that, if \(b\) has a non-trivial flux content, it is easier to compare the linking of anyons \(a\) and \(b\) to the linking of anyons \(a\) and a reducible charge \(0\oplus\tilde{0}\). (Note, that this is not a valid anyon label since the representation is reducible.) Thus, we condition whether we apply the ribbon \(b\) or the ribbon corresponding to \(0\oplus\tilde{0}\). The ribbon operator protocol defined for irreducible representations carries over for reducible ones. Note that the representations \(0\oplus 0\) and \(0\oplus\tilde{0}\) are two-dimensional and hence distinct from \(0\) and \(\tilde{0}\) respectively.
However, this protocol requires additional knowledge of the theory. Concretely, if we are interested in the S-matrix element \(S_{ab}\), we need the additional knowledge of \(S(a,0)\) and \(S(a,\tilde{0})\). In fact, both can be measured easily by the protocol in Figure 9(b) since anyons \(0\) and \(\tilde{0}\) are abelian and their ribbon operator only have single qubit gates. A similar protocol was used to measure the \(S\)-matrix in the case of the toric code [30].
If \(\tilde{S}(a,0)=\tilde{S}(a,\tilde{0})=1\), we can just read off \(\tilde{S}(a,b)\) after tomographing the control qubit. In the case of \(\tilde{S}(a,\tilde{0})=-1\), there is a two qubit gate we need to apply between the controlling qubit and one of the ribbon ancillas before we can simply read off \(\tilde{S}(a,b)\) via tomography.10 The exact method of tomography will be presented alongside the numerical results in Section 5.2.
Footnote 10: See Appendix C.
**T-matrix elements.** In this section, we will describe the interference protocol for measuring the
matrix elements of the diagonal T-matrix, or the spin of the anyons. Here we note that ribbon operators are in-fact ribbons in space-time, hence they can acquire a twist. Each twist of anyon \(a\) contributes a phase factor to the wave function, \(T_{aa}=e^{i\theta_{a}}\).
The protocol is illustrated in Figure 0(a). We identify where the twisted and untwisted paths of a ribbon operator associated with some anyon \(a\) differ and apply a controlled circuit of the form illustrated in Figure 0(b) accordingly. If for the untwisted (twisted) version, we need to apply the unitary \(U\) (\(V\)), the circuit implementing is straight-forward and shown in Figure 0(b).
The endpoint of the two ribbons are on the same site, hence the charge content is the same and we can factor out the gauge field wave function. The control qubit is in a pure state (assuming there is no noise) so we can tomograph and read off the twisting phase.
## 5 Numerical experiments
In this section, we provide numerical evidence for the feasibility of our proposal on state of the art NISQ devices. We performed simulations using Google's 'cirq_google' python package on Google's cloud computing platform 'Google Colab'. This package executes the quantum trajectory simulation of the circuit using the Kraus operators obtained from the direct Pauli transfer matrix tomography on various single and two qubit gates on the Sycamore chip [24]. This chip comes with two principle constraints. First, we can only perform two-qubit gates between adjacent qubits. Second, there is a limited set of elementary gates that can be implemented.
Knowing the characteristics of the single and two-qubit gates, and single- and multi-qubit readout performances of the chip [24] we have chosen a suitable part of the chip for our simulations. A similar setup would be suitable for an actual experiment on the Sycamore chip. However, the optimal allocation of recourses will be chip dependent. The layout we used is shown in Figure 12.
The number of qubits we could simulate classically using this software is limited to 30, which is less than the number of qubits currently available on an actual machine. To see how this comes about, let us note that even though in our protocols there are only a few non-Clifford gates, we can not exploit the advantage of Clifford simulators because we simulate _noisy circuits_.
### Elemental protocols
In this section, we present the simulation results for the fusion and braiding experiments.
**Circuit characteristics.** Before we present the results, we report on the qubit layout used for the fusion and braiding experiments, as well as circuit depths achieved, in order to put the noise observed in a useful context.
_Ground state._ As mentioned, we prepare the ground state directly. On the braiding ladder that procedure is depth 2 (see Fig. 6). We prepare the qubits on all the bottom edges in an equal superposition via Hadamard gates and then CNOT the qubits above controlled by the one below.
On the planar graph in Figure 7 this process is more complicated. Repeating the process as for the braiding ladder gives us the following state \(\ket{\Psi}=\sum_{g_{12},g_{34}}\ket{g_{12},g_{12},g_{34},g_{34}}\). We then apply the full controlled multiply circuit from qubits of the second edge onto the qubits of the third edge. This procedure gives us the state defined in Figure 7.
_Fusion._ The exact ribbon operators and the layout of the qubits on the Sycamore chip used in the fusion experiment are shown in Fig. 12.
Using the circuits listed in Sec. 4.1 to implement the ribbon operators defined in Sec. 3.2 generates a circuit that is not directly implementable on the Sycamore chip due to the constraints mentioned above. We first need to implement swap gates such that all the multi-qubit gates appearing in the original circuit are acting on adjacent
Figure 9: Controlled multiply circuits of an elementary triangle of a ribbon operator conditioned on a control qubit \(c\) (first qubit) acting on a physical edge (middle three qubits) and a ribbon ancilla qubit (last qubit) Left. Implementing a \(\Psi_{m}\)-elementary triangle vs vacuum, represented as \(0\oplus 0\). Right. Implementing an \(\Psi_{m}\)-elementary triangle vs \(0\oplus 0\).
Figure 11: The interferometry scheme to measure the phase difference between two paths alongside with a circuit diagram implementing the difference of the two paths.
Figure 10: The interference protocols used for phase sensitive measurement of the (normalised) S-matrix elements \(\tilde{S}(a,b)=\frac{\left|Da\right|}{d_{a}d_{b}}S(a,b)\).
qubits. In addition, we need to compile multi-qubit gates into native single- and two-qubit coupler gates. In our case we chose the CZ gate as the coupler (two-qubit) gate. The single qubit gates are unrestricted.
The additional swaps make up a considerable portion of all coupling gates used in the numerical experiments and hence the positioning of qubits is one of the key factors in minimising the circuit depth.
It is also worth noting that not all anyons are equal in complexity. The \(r\)-dyon ribbon operator require considerably deeper circuits since the group multiplication controlled by the elements of the \(\mathcal{C}_{r}\) conjugacy class always involves at least one Toffoli gate. Let us recall that the Toffoli gate needs to be compiled into a circuit of at least depth 6 using the CZ as the two qubit coupler, however this neglects any swaps needed to place the qubits acted on by the CZ gates adjacent to one another. Hence, reducing the number of Toffoli gates is the main goal when designing the experiments.
For the fusion of \(\Psi_{m}\)-fluxes on the small planar graph, we obtained a device-ready circuit of depth 70. This circuit prepares the ground state, implements the ribbon operators and performs a partial charge measurement. On the braiding ladder the same protocol leads to a much shorter circuit of depth 37. This is due to a significantly simpler ground state preparation.
_Braiding._ The ribbon operators for the two braiding protocols are shown in Fig. 13. The qubit layout on the Sycamore chip is the same as for the fusion protocol on the ladder geometry (see Fig. 12).
The circuit depths achieved for braiding the \(\Psi_{m}\) fluxes are 60 and 68 for the cases of \(\sigma_{23}\sigma_{12}\) and \(\sigma_{12}\sigma_{23}\), respectively. We performed these experiments on the double braiding ladder. This is due to constraints of the simulation. Adding 6 extra qubits needed for a triple-ladder, dramatically slows down the classical simulation run times to the point of impracticality. On a real quantum device this problem would not occur.
**Results.** The results of the fusion experiments on the ladder and the planar graph, as well as the braiding on the ladder are shown in Figure 14.
_Fusion._ On both lattices we measure the charge after fusion via a partial \(H_{mr}\)-measurement and see the signatures of four fusion
Figure 12: Ribbon operators and qubit layout for the fusion of two \(\Psi_{m}\) anyons on the braiding ladder (top) and a small planar graph (bottom). Note, that the lattice is embedded into a sphere, so the outside plaquette that we labelled twice in the braiding ladder diagram should be identified. Red and blue shadings denote the two ribbon operators, respectively. Purple circles mark the plaquette on which we perform the \(H_{mr}\)-partial charge measurement. Red circles mark the vertex on which we measure the flux.
outcomes
\[\Psi_{m}\otimes\Psi_{m}=0\oplus\tilde{0}\oplus\Sigma_{m}\oplus\tilde{\Sigma}_{m},\]
corresponding to measuring no flux or \(r^{2}\) flux combined with no charge (\((1,1)\)-outcome) or a non-trivial charge (\((1,-1)\)-outcome) identified as \(\alpha_{m}\). In the case of the small planar graph we see a significantly increased background noise due the deeper circuit used in the ground state preparation.
The number of runs for the fusion on the braiding ladder was 16000. The expected post-selection probability for the projection of the two ribbon ancilla pairs is \(1/4^{2}=0.0625\), while the actual rate of success was \(0.113\), due to the circuit noise and measurement readout bias. The four main bins corresponding to the expected topological charges count \(270\pm 5\) while the largest noise peak is 77 leaving us with a signal to noise ratio of approximately 3.5.
The number of runs for the fusion on the small planar graph was 64000. The expected post-selection probability for projection of the two ribbon ancilla pairs is again \(0.0625\), while the actual number of successes was \(0.0826\). The four main bins count \(450\pm 7\) with the biggest noise peak being 242 leaving us with a signal to noise ratio of approximately 1.9.
_Braiding._ Looking at the results of the charge measurement for the two braiding protocols we clearly see what we expected. The first braiding protocol results in multiple fusion outcomes while for the second braiding protocol the resulting fusion outcome is only vacuum.
The number of runs for both protocols was 16000. The expected post-selection probabilities for the two protocols are \(0.0625\) for \(\sigma_{12}\sigma_{23}\) and \(0.25\) for \(\sigma_{23}\sigma_{12}\), while the actual rates of successes were \(0.123\) for \(\sigma_{12}\sigma_{23}\) and \(0.315\) for \(\sigma_{23}\sigma_{12}\), respectively.
In the case of the first braiding we see the four main peaks corresponding to the expected topological charges counting \(255\pm 5\) with the biggest peak coming from the noise counting \(109\) and resulting in a signal to noise ratio of about \(2.3\). In the second case the peak corresponding to the vacuum counts 1890. The largest peak coming from the noise counts 450 giving a signal to noise ratio of about 4.2.
**Supplementary measurements.** In the analysis above we relied on the knowledge of the fu
Figure 13: Ribbon operators for the two braiding experiments. On the last step we also draw a circle around the plaquette where we perform the \(H_{mr}\)-partial charge measurement. Note, that the lattice is embedded into a sphere, so the outside plaquette that we labelled twice for clarity should be identified.
Figure 14: The partial topological charge measurements for the fusion and braiding protocols.
sion outcomes to match the observed measurement outcomes of the \(H_{mr}\)-partial charge measurement with the corresponding charges.
However, we can still determine what charges the outcomes correspond to even if we do not rely on the knowledge of the fusion rules. This is done by repeating the partial charge measurements for another subgroup, as explained in Section 3.3.
Repeating the protocol with a \(H_{m}\)-partial charge measurement we see only two peaks corresponding to the charge measurement outcome \((1,1)\) combined with no or \(r^{2}\) flux. This is due to the fact that all anyons that emerge from the fusion carry either no charge or \(\alpha_{m}\) charge, which both correspond to the measurement outcome \((1,1)\) for the \(H_{m}\) subgroup. Given that we observed the outcomes \((1,1)\) and \((1,-1)\) for \(H_{mr}\), we can conclude that the charges present are the trivial and the \(\alpha_{m}\) charge (data shown in Appendix D).
### Linking and twist matrices
In this section, we present the results of simulating the interference protocols for measuring the magnitude and the phase of the \(S\)- and \(T\)-matrix elements.
**Circuit characteristics.** Figure 15 depicts the qubit layout and the exact ribbon operators used for the \(S\)-matrix protocol. The application or the flavour of the blue (equal-time) ribbon is conditioned on the state of the control qubit. The layout on the chip is chosen such as to reduce the number of two-qubit gates acting on the control qubit, avoiding an accumulation of errors. This turned out to be more relevant for obtaining reliable estimates than the overall circuit depth.
For the \(T\)-matrix experiment we chose the same layout as for the \(S\)-matrix experiment and the concrete form of the ribbon is shown in Fig. 16.
The depths of the circuits for the \(S\)-matrix protocol heavily depend on the anyons involved. In the following, we present in detail the circuits and results for the exemplary \(S\)-matrix elements \(\tilde{S}(\Psi_{m},\Psi_{m})=1\), \(\tilde{S}(\Psi_{m},\tilde{\Psi}_{m})=-1\) and \(\tilde{S}(\Psi_{m},\Psi_{r})\) which are of intermediate depth. For conditioning the ribbon type we have 58 for \(S(\Psi_{m},\Psi_{m})\) and \(S(\Psi_{m},\tilde{\Psi}_{m})\) and 64 for \(S(\Psi_{m},\Psi_{r})\), while for conditioning the existence of the ribbon we find 84 and 90, respectively. We will return to a more detailed discussion of the circuit depths and results for all other \(S\)-matrix
Figure 16: The two ribbon paths between same sites that differ by one twist used in our path interference protocol.
Figure 15: (Top) The ribbon operators applied in the simulation of the \(S\)-matrix protocol. The existence or the type of the blue (equal-time) ribbon is conditioned on the state of the control qubit. (Bottom) The qubit layout for the interference protocols.
elements at the end of this section.
For the \(T\)-matrix elements we chose to measure \(T(\Phi_{r},\Phi_{r})=i\), which together with \(T(\tilde{\Phi}_{r},\tilde{\Phi}_{r})=i\) is the most non-trivial entry corresponding to the two semions (all other anyons have topological spin \(\pm 1\)). The circuit depth of 89 for this protocol is larger than for all other \(T\)-matrix elements and leads to reasonable results. Hence, we did not investigate the other \(T\)-matrix entries as they seem to be less challenging to measure.
**Interference and tomography.** In Section 4.3, we proposed an interference scheme that entangles an ancilla qubit with the space-time history of the gauge field excitations in such a way that by the end of the protocol the qubit and the field are disentangled and the qubit is left in a state that depends only on the topological properties of the spacetime history of the anyons - the \(S\)- and \(T\)-matrix elements.
The fact that the qubit is meant to be disentangled from the gauge field and any additional ancillas implies that the qubit is ideally left in a pure state. Hence, it is easy to tomograph and to extract the aforementioned topological properties. The noise in the system alters the situation. For conceptual clarity, we will thus first discuss the ideal case and subsequently comment on the effect of the noise.
The final pure state of the ancilla qubit after the ideal protocol is
\[\begin{split}\left|\psi\right\rangle_{c}&=\frac{1}{ \sqrt{1+|\vec{S}_{ab}|^{2}}}(\left|0\right\rangle_{c}+\tilde{S}_{ab}\left|1 \right\rangle_{c}),\\ &\rho_{c}=\frac{1}{1+|\vec{S}_{ab}|^{2}}\begin{pmatrix}1&\tilde{S }_{ab}^{*}\\ \tilde{S}_{ab}&|\tilde{S}_{ab}|^{2}\end{pmatrix}.\end{split} \tag{29}\]
We write this density matrix in its Pauli basis
\[\rho_{c}=\frac{\mathbb{1}+\vec{r}\cdot\vec{\sigma}}{2}, \tag{30}\]
where \(r_{i}=\text{Tr}(\sigma_{i}\rho_{c})\) is the Bloch vector and \(\vec{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\) is the vector of Pauli matrices and find
\[\vec{r}=\left(\frac{2\text{Re}\tilde{S}_{ab}}{1+|\vec{S}_{ab}|^{2}},\frac{2 \text{Im}\tilde{S}_{ab}}{1+|\vec{S}_{ab}|^{2}},\frac{1-|\vec{S}_{ab}|^{2}}{1+| \vec{S}_{ab}|^{2}}\right). \tag{31}\]
Note that \(|\vec{r}|=1\) as expected for a pure state. The phase and the magnitude of \(\tilde{S}_{ab}\) can be read off from the orientation of the Bloch vector only and the magnitude of \(\vec{r}\) is not needed to determine \(\tilde{S}_{ab}\). The magnitude is strongly affected by the noise in the system as we explain in the following.
The noise in the gates will cause entanglement between the control ancilla qubit \(c\) and the gauge field. Hence, when we tomograph the control qubit, we probe a mixed state whose Bloch vector is \(|\vec{r}|<1\). We will assume, that the noise only shortens this Bloch vector, i.e. it acts uniformly across all channels. Under this assumption, we can still read off the S-matrix elements as they only depend on the direction of the Bloch vector. Note, that this assumption is rather strong given that the generic dephasing process usually pulls the Bloch vector towards the \(z\)-axis. Nevertheless, the assumption is used in order to make the problem tractable without additional machine specific information. In an actual experiment, more elaborate error mitigation techniques and data about the noise bias could be used to replace this simple model.
_Tomography._ In order to determine the orientation of the Bloch vector we measure in a set of different bases. Each basis is parametrised by a vector \(\vec{s}_{i}\) with \(\sigma_{s_{i}}=\vec{s}_{i}\cdot\vec{\sigma}\) being the associated Hermitian operator.
Given a basis \(\vec{s}_{i}\) and a Bloch vector of a mixed state \(\vec{r}\), the quantum mechanical probabilities for the two measurement outcomes are
\[\begin{split} p_{QM}(1|\vec{s}_{i},\vec{r})&=\frac{1}{ 2}(1+\vec{r}\cdot\vec{s}_{i})\,,\\ p_{QM}(0|\vec{s}_{i},\vec{r})&=\frac{1}{2}(1-\vec{r} \cdot\vec{s}_{i})\,.\end{split} \tag{32}\]
This probability is, however, modulate by the readout bias
\[p(b|\vec{s}_{i},\vec{r})=(1-\epsilon_{b})p_{QM}(b|\vec{s}_{i},\vec{r})+ \epsilon_{b}p_{QM}(\bar{b}|\vec{s}_{i},\vec{r}), \tag{33}\]
where \(\epsilon_{b}\) is the probability to measure the qubit in state \(\bar{b}\) (\(\equiv 1-b\)) even though it is in the state \(b\).
We perform the measurement \(N=n_{0}+n_{1}\) times and record the measurement outcomes \((n_{0},n_{1})\) and define the estimator
\[P(\vec{s}_{i})=\frac{n_{1}-n_{0}}{N}, \tag{34}\]
which we call _polarisation_. Introducing \(\bar{\epsilon}=(\epsilon_{1}+\epsilon_{0})/2,\Delta\epsilon=\epsilon_{1}- \epsilon_{0}\) and using that
\[p(b|\vec{s}_{i},\vec{r})=\lim_{N\rightarrow\infty}\frac{n_{b}}{N},\]
we find
\[\lim_{N\rightarrow\infty}P(\vec{s}_{i})=(1-2\bar{\epsilon})\vec{s}_{i}\cdot \vec{r}+\Delta\epsilon. \tag{35}\]
Performing a sequence of measurements for different bases \(\vec{s}_{i}\) allows us to extract \(\vec{r}\), up to a multiplicative constant, i.e., we determine its direction.
For the sake of concreteness we chose the following sets of measurements bases
1. _Equatorial Scan._ Fixing the value of the polar angle to \(\theta=\pi/2\), we vary the azimuthal angle \(\phi\in[0,2\pi)\). From this scan we extract \(\phi_{\text{max}}\) which has the largest polarisation.
2. _Meridian Scan._ Fixing the value of the azimuthal angle to \(\phi=\phi_{\text{max}}\), we scan the polar angle \(\theta\in[0,2\pi)\)11 From this scan we extract \(\theta_{\text{max}}\) which has the largest polarisation. Footnote 11: The domain is extended on purpose.
The extraction of the relevant angles is done by fitting Eq. (35). The two angles then fix the value of \(\tilde{S}_{ab}\) via
\[\tilde{S}_{ab}=\sqrt{\frac{1-\cos\theta_{\text{max}}}{1+\cos\theta_{\text{ max}}}}e^{i\phi_{\text{max}}}.\]
The other two parameters, the amplitude and the offset of the polarisation, do not convey any physical information. The offset determines the difference of the effective readout biases, \(\Delta\epsilon\), and the amplitude determines the product of the mean readout bias and the length of the Bloch vector, \((1-2\bar{\epsilon})|\vec{r}|\).
In the discussion above, we have neglected the fact that in addition to the measurement of the control qubit, we measure the ribbon ancillas and post-select on their biased measurement outcomes. This causes an increased observed effective readout bias \(\Delta\epsilon\), obtained from the fitting procedure described in the last section, which exceeds the \(\Delta\epsilon\) estimate from calibration data. A more detailed discussion of this issue is given in Appendix E.2.
For the \(T\)-matrix protocol the Bloch vector of the control qubit after performing the ideal circuit reads
\[\vec{r}=\left(\text{Re}T_{aa},\text{Im}T_{aa},0\right). \tag{36}\]
We perform the same type of tomography to estimate \(\vec{r}\) and calculate the magnitude and phase of \(T_{aa}\) via the equation above.
_Results._ The numerical results for the interference protocols determining \(S_{ab}\) with \(a=\Psi_{m}\) and \(b=\Psi_{m},\bar{\Psi}_{m},\Psi_{r}\) are shown in Fig. 17. The first three panels show the results of the protocol in which the type of the second ribbon is conditioned to be \(\Psi_{b}\) or \(0\oplus\tilde{0}\) (cf. Fig. 10c), while the last one shows the results for conditioning the existence of the ribbon (cf. Fig. 10b). In both cases we note a systematic drift of the Bloch vector towards the \(z\)-axis. The magnitude of the S-matrix element is dictated by the angle of the Bloch vector with the z-axis and hence this drift leads to an underestimation of \(|\tilde{S}_{ab}|\). The systematic drift is significantly less dramatic in the case of the first protocol, i.e., conditioning on \(\Psi_{b}\) or \(0\oplus\tilde{0}\). Hence, the estimate of the magnitudes of the S-matrix elements is much closer to the theoretical value. This is due to the fact that the circuit implementing this protocol is much shallower due to the simpler forms of the conditional ribbon operators, one Toffoli gate less in the conditional multiplication circuit, cf. Figure 9. The phases of all measured \(\tilde{S}_{ab}\) are estimated well and agree with the theoretically predicted values.
The result for the T-matrix element \(T(\tilde{\phi}_{r},\tilde{\phi}_{r})\) determined by the path conditioning protocol in Figure 11a is shown in Figure 18 and agrees with the theoretical value.
_Uncertainty._ In the experiment we used 1000 shots per measurement basis for the S-matrix measurements and 5000 for the T-matrix measurements. See Appendix E.1 to see how we estimated the final uncertainties in the measured braiding amplitudes, i.e., \(S\)- and \(T\)-matrix elements, as well as for each datapoint in our plots.
We expected a post selection probability of \(1/4\) for all T-matrix protocols and S-matrix protocols with measured \(|S|\neq 0\). For the case of measured \(|S|=0\) the probability drops to \(1/8\). These probabilities were observed in the numerical experiment.
**Other S-Matrix elements.** As stated above some anyon ribbons are harder to compile than others. The bottleneck of this anyon interference protocol is the difficulty of the conditioned ribbon.
We can divide the \(S\)-matrix protocols into six different difficulty classes measured by the depth of the required circuits. The biggest factor determining those is whether for the anyon in question, \((C,\chi)\) obeys \(\chi(r^{2})=\chi(e)\). If this is not the case, the representation \(A(g)\) is faithful. This means that all group elements must be included in the
Figure 17: Numerical results of the control qubit tomography for the S-matrix interference protocol (a)-(c) conditioning the type of the equal time ribbon (cf. Figure 9(c)) and (d) conditioning its existence (cf. Figure 9(b)). The measurement basis was scanned across two planes, see the Bloch sphere diagram (blue dotted circles). The polarisation \(P\) was estimated from these measurements by fitting Eq. (35). It yields the Bloch vector of \(\rho_{c}\) (green arrow) and \(\tilde{S}_{ab}\).
circuit which requires many SWAP gates and increases the circuit depth significantly.
The second factor to consider is the number of Toffoli gates in the controlled multiplication and generalised conjugation circuits. The semions \(\Phi_{r},\tilde{\Phi}_{r}\) are difficult in both regards.
In Fig. 19 we show the \(S\)-matrix with all its entries color coded according to the difficulty class. In the list below we present the circuit depths and numerical simulation results for the diagonal S-matrix elements for representative cases of all six difficulty classes. For all protocols, except for 1. and 4., we chose to condition the ribbon \(b\) vs \(0\oplus 0\) rather than ribbon \(b\) vs \(0\oplus 0\).
1. Conditioning an abelian anyon: \(\tilde{S}(\Sigma_{m},\Sigma_{m})=0.969(6)e^{i\pi 0.004(2)}\), theoretical prediction \(\tilde{S}(\Sigma_{m},\Sigma_{m})=1\). The total compiled circuit depth is 23 with 0 Toffoli gates corresponding to the difficulty class shaded in green in Fig. 19.
2. Conditioning a \(m\)- or \(mr\)-dyon with \(\chi(r^{2})=1\): \(\tilde{S}(\Psi_{m},\Psi_{m})=0.89(1)e^{i\pi 0.004(4)}\), theoretical prediction \(\tilde{S}(\Psi_{m},\Psi_{m})=1\). The total compiled circuit depth is 58 with 2 Toffoli gates corresponding to the difficulty class shaded in light green in Fig. 19.
3. Conditioning a \(r\)-dyon with \(\chi(r^{2})=1\): \(\tilde{S}(\Psi_{r},\Psi_{r})=0.82(1)e^{i\pi 0.001(4)}\), theoretical prediction \(\tilde{S}(\Psi_{r},\Psi_{r})=1\). The total compiled circuit depth is 77 with 4 Toffoli gates corresponding to the difficulty class shaded in yellow in Fig. 19.
4. Conditioning a non-abelian charge: \(\tilde{S}(\Sigma_{\epsilon},\Sigma_{\epsilon})=0.123(8)e^{i\pi 0.01(2)}\), theoretical prediction \(\tilde{S}(\Sigma_{\epsilon},\Sigma_{\epsilon})=1\). The total compiled circuit depth is 125 with 4 Toffoli gates corresponding to the difficulty class shaded in light orange in Fig. 19.
5. Conditioning a \(m\)- or \(mr\)-dyon with \(\chi(r^{2})=-1\): \(\tilde{S}(\Phi_{m},\Phi_{m})=0.19(2)e^{i\pi 0.04(4)}\), theoretical prediction \(\tilde{S}(\Phi_{m},\Phi_{m})=1\). The total compiled circuit depth is 133 with 4 Toffoli gates corresponding to the difficulty class shaded in orange in Fig. 19.
6. Conditioning a \(r\)-dyon with \(\chi(r^{2})=-1\) (semions): \(\tilde{S}(\Phi_{r},\Phi_{r})=0.06(1)e^{i\pi 1.0(3)}\), theoretical prediction \(\tilde{S}(\Phi_{r},\Phi_{r})=-1\). The total compiled circuit depth is 171 with 6 Toffoli gates corresponding to the difficulty class shaded in red in Fig. 19.
We assume that the first element is conditioned and largely determines the circuit depth while the specific choice of the other, unconditioned ribbon operator only mildly influences the depth. This means that the difficulty is set by the row of the S-matrix and one can use the symmetry of the S-matrix to pick the better of the two interference procedures. For example for measuring \(S(\Phi_{r},\Sigma_{m})\) one would condition the \(\Sigma_{m}\) ribbon and not the semion \(\Phi_{r}\).
We observe that the magnitude of the \(S\)-matrix in the simulation results decays strongly with the circuit depth. This is due to accumulating errors on the control qubit. We speculate that the main
Figure 18: Numerical results of the control qubit tomography for the T-matrix interference protocol where the paths of the ribbon was conditioned (cf. Figure 11a). Tomography performed as for the S-matrix (cf. Fig.17). \(T(\tilde{\Phi}_{r},\tilde{\Phi}_{r})=1.04(1)e^{i\pi 1.496(4)}\) measured, \(T(\Phi_{r},\tilde{\Phi}_{r})=-i\) predicted.
error channel responsible for the drift of the estimated angle \(\theta_{\text{max}}\), and hence the braiding amplitude, is dephasing. Dephasing reduces the \(r_{x}\) and \(r_{y}\) components of the Bloch vector, while keeping \(r_{z}\) fixed, so any error in estimating this angle due to any small offset \(\delta r_{z}\neq 0\) will be amplified. On the upside, this error channel does not change the ratio between \(r_{x}\) and \(r_{y}\) and hence the braiding phase estimates are well within the theoretical values.
## 6 Extension to \(S_{3}\)
The anyons of \(D_{4}\) lattice gauge theory are not suitable for universal topological quantum computation. This is due to the simple structure of \(D_{4}\). All of its subgroups are abelian and the group is nilpotent and solvable.
However, the slightly more complex gauge group \(S_{3}\), can be used for universal topological quantum computation, if we allow measurements. Hence, it is interesting to investigate how our schemes carry over to this case. We find that the main tricks we use to achieve a manageable circuit depth for \(D_{4}\), in particular the simplified group multiplication circuits for elements restricted to subgroups or conjugacy classes, would carry over straight forwardly to \(S_{3}\) if the quantum hardware had native _qutrits_. However, for a qubit based hardware implementing \(S_{3}\) is considerably more difficult. In the following, we will discuss the difficulties and increase in circuit depth for all steps in detail.
**Encoding and basic circuits.** The first obstacle we face is the fact that the qubit encoding of elements of \(S_{3}\) is not trivial, i.e. some states of the encoding qubits will not be allowed.
We assume the same encoding as for the case of \(D_{4}\), i.e., \(|i_{1},i_{2},i_{3}\rangle\rightarrow|m^{i_{1}}r^{i_{2}}r^{2i_{3}}\rangle\). However, since \(r^{3}=e\) in \(S_{3}\) we need to project out the states \(|0,1,1\rangle\) and \(|1,1,1\rangle\).
This alone complicates the \(\mathbb{Z}_{3}\) multiplication of \(S_{3}=\mathbb{Z}_{2}\ltimes\mathbb{Z}_{3}\) significantly. In Figure 20, we show the general controlled left and right \(S_{3}\)-multiplication circuit. For the left multiplication the controlled SWAP gates implement the defining commutation rule between the two generators, \(mr=r^{2}m\), while the middle portion implements the \(\mathbb{Z}_{3}\)-multiplication. The circuit contains 8 Toffoli gates, once the SWAPs are decomposed. The circuit for the right multiplication is simpler, containing 6 Toffoli gates.
There are two reasons for this complication. One is the aforementioned unnatural encoding, while the second is the fact that \(S_{3}\) has a trivial center. In the case of \(D_{4}\) the commutation of the two generators amounts to multiplying the expression by \(r^{2}\in Z(G)\) which is a simple operation.
The situation is simplified significantly if we encode the group elements by a qubit and a qutrit, assuming that we have a full one-qutrit control as one has for qubits in current devices. A possible controlled group multiplication circuit utilising this recourse is shown in Figure 21.
**Anyon content.** The group \(S_{3}\) has three conjugacy classes, \(\mathcal{C}_{e}=\{e\}\), \(\mathcal{C}_{r}=\{r,r^{2}\}\) and \(\mathcal{C}_{m}=\{m,mr,mr^{2}\}\). The two nontrivial conjugacy classes have abelian centres. The anyons of this theory alongside their braiding properties have been completely studied in Ref. [27]. In this section, we just note that there are two abelian, four two-dimensional and two three-dimensional anyons.
In our ribbon application protocol we rely on two operations, \(\mathcal{C}\)-controlled multiplication and generalised conjugation. When restricted to a conjugacy class the controlled multiplication circuit simplifies significantly for the two-dimensional anyons with \(\mathcal{C}_{r}\) flux. \(\mathcal{C}_{r}\) is naturally encoded by a qubit and the \(\mathcal{C}_{r}\)-controlled multiplication circuit has only one Toffoli gate for both left and right multiplication.
For the three dimensional anyons with \(\mathcal{C}_{m}\)-flux, we can reduce the number of Toffoli gates to 5 in the case of right and to 2 in the case of left controlled multiplication. The conjugacy class is encoded with two qubits with one disallowed state. Figure 22 shows these circuits.
The generalised conjugations are derived from the representation matrices \(A(g)\) and can be compiled into circuits of similar complexity as the controlled multiplication circuits of restricted domain.
As in the case of the unrestricted controlled multiplication circuit we would see significant reduction in circuit depth, if we had access to well controlled qutrits.
**Ground state.** The ground state preparation protocol (without feed-forward) requires setting up the superposition \(\sum_{g\in S_{3}}|g\rangle=|+\rangle\otimes|+_{3}\rangle\), where \(|+_{3}\rangle=\frac{|00\rangle+|10\rangle+|01\rangle}{\sqrt{3}}\). We therefore need
Figure 21: The general controlled multiplication circuit for the group \(S_{3}\). The left circuit is the left multiplication, \(|g,h\rangle\rightarrow|g,gh\rangle\), and the circuit on the right is the right multiplication, \(|g,h\rangle\rightarrow|g,hg\rangle\). The first qubit-qutrit encode the first group element and the second pair the second group element. The gate \(C(23)\) is the controlled permutation implementing the commutation relation of the \(S_{3}\) group and the gate \(C(\mathbb{Z}_{3})\) is the qutritrit generalisation of the CC gate, i.e. \(\mathbb{Z}_{3}\)-addition.
Figure 19: The S-matrix of the \(D(D_{4})\) theory. The color shading represents the difficulty to observe the values experimentally, where green to red denotes increasing difficulty. For the entries highlighted in bold face and blue we numerically obtained the values by simulating the phase-sentitive measurement protocols.
Figure 20: The general controlled multiplication circuit for the group \(S_{3}\). The top circuit is the left multiplication, \(|g,h\rangle\rightarrow|g,gh\rangle\), and the bottom circuit is the right multiplication, \(|g,h\rangle\rightarrow|g,hg\rangle\). The first three qubits encode the first group element and the second three the second group element.
a circuit that implements the unitary \(U_{+_{3}}\left|00\right\rangle=\left|+_{3}\right\rangle\). This unitary can be chosen to be purely real, i.e., \(U_{+_{3}}\in SO(4)\) such that we can use the minimal gate decomposition from Ref. [31] to write it as \(U_{+_{3}}=M^{\dagger}(A\otimes B)M\), where \(M\) is the magic basis rotation that can be implemented with a single CNOT and two single qubit gates (see Fig. 2 of Ref. [31]). A suitable choice for \(A\) and \(B\) is \(B=\mathbb{1}\) and \(A=R_{z}(5\pi/4)R_{y}(\theta_{m})R_{z}(\pi/4)\), where \(R_{z}\) and \(R_{y}\) are rotations around \(z\) and \(y\) axis, respectively and \(\theta_{m}=2\arctan(1/\sqrt{2})\).
**Charge measurement.** Arguably, where we loose out the most is with the partial charge measurement. The group \(S_{3}\) has only few very simple proper subgroups, \(H_{r}=\mathbb{Z}_{3}^{r}\), \(H_{m}=\mathbb{Z}_{2}^{m}\), \(H_{mr^{2}}=\mathbb{Z}_{2}^{mr^{2}}\) and \(H_{mr}=\mathbb{Z}_{2}^{mr}\).
The circuits involved in the partial charge measurement with respect to the subgroups \(H_{m}\) and \(H_{r}\) are shown in Figure 23. We chose to implement left and right multiplication for the two subgroups respectively because the circuits are significantly shallower. If the orientation of the arrows differs from what we need we can always reverse using the inverse circuit shown in the same figure.
Table 5 shows the partial orthogonality of character tables of \(S_{3}\) and two of its proper subgroups. The fact that given the results of both \(H_{m}\)- and \(H_{r}\)-partial charge measurements we can uniquely determine the charge still holds for this group as well.
**Summary.** The computational universality of \(D(S_{3})\) complemented with measurements comes at the expense of significantly longer elementary circuits for our protocols. The elementary circuits are about ten times deeper once compiled into native gates that the Sycamore chip can perform, neglecting the SWAPs needed to place the relevant qubits into appropriate proximity to one another.
Figure 23: (Left) The controlled left multiplication circuit, \(\left|g,h\right\rangle\rightarrow\left|g,gh\right\rangle\), for \(g\in H_{m}\). (Middle) The controlled right multiplication circuit, \(\left|g,h\right\rangle\rightarrow\left|g,hg^{-1}\right\rangle\), for \(g\in H_{r}\). (Right) The inverse circuit, \(\left|g\right\rangle\rightarrow\left|g^{-1}\right\rangle\), needed for reversing arrows.
\begin{table}
\begin{tabular}{|l|c c c|} \hline \(\left\langle\chi_{H_{r}},\chi\right\rangle\) & \(1\) & \(-1\) & \(\epsilon\) \\ \hline \(1\) & \(3\) & \(3\) & \(0\) \\ \(\omega\) & \(0\) & \(0\) & \(3\) \\ \(\bar{\omega}\) & \(0\) & \(0\) & \(3\) \\ \hline \hline \(\left\langle\chi_{H_{m}},\chi\right\rangle\) & \(1\) & \(-1\) & \(\epsilon\) \\ \hline \(1\) & \(2\) & \(0\) & \(2\) \\ \(-1\) & \(0\) & \(2\) & \(2\) \\ \hline \end{tabular}
\end{table}
Table 5: Partial orthogonality of the character table for \(S_{3}\) with respect to its two proper subgroups.
Figure 22: The \(S_{3}\) controlled left multiplication with the domain restricted to a single conjugacy class, \(\mathcal{C}_{r}\) on the top and \(\mathcal{C}_{m}\) on the bottom. Note, that the encoding of \(\mathcal{C}_{r}\) elements is slightly different from the encoding of the general group elements.
This is mainly a consequence of the unnatural way we encode the group elements. Unnatural in the sense that it does not respect the group structure \((\mathbb{Z}_{2}\ltimes\mathbb{Z}_{3})\). A more suitable encoding is achieved by introducing qutrits alongside the full one-qutrit control and ability to condition arbitrary one-qutrit gates. In this encoding all of the elementary circuits become even shallower than in the case of \(D_{4}\).
However, for any qubit based architecture we reckon that realising \(D(S_{3})\) is beyond the limits of current NISQ technology.
## 7 Conclusions and outlook
In this work we explored the feasibility of preparing the ground state of the quantum double model \(D(D_{4})\), as well as creating, manipulating, and measuring its (non-abelian) anyons on current NISQ technology, in particular Google's Sycamore chip. We demonstrated that by exploiting the structure of the group \(D_{4}\) one can achieve moderate circuit depths for the creation and manipulation of anyons with ribbon operators. We also proposed a partial charge measurement which uniquely determines the anyon content without relying on prohibitively costly full group multiplications. Our numerical results suggest that current NISQ technology is capable of probing the full modular data of \(D(D_{4})\) on a quasi one-dimensional lattice architecture and that ground state preparation without feed-forward protocols are possible on small two-dimensional lattices.
The qubit layout we used respects the geometry of the physical lattice and can straightforwardly be extended to larger system sizes once more qubits are available. However, with current noise levels for larger two-dimensional lattices, feed-forward protocols would need to be invoked for the ground state preparation.
We note, that in this proposal we did not include error mitigation or noise reducing strategies
## Appendix A Ribbon types
In this appendix, we present all variants of the elementary triangles that constitute the ribbon operators introduced in Section 3.2. Apart from the main distinction into type I and type II triangles which correspond to controlled group multiplication and controlled generalised conjugation, respectively, there are different variants of the concrete operation which depend on how exactly we couple the ancilla qudit with the gauge field degrees of freedom.
In the main text we have chose one particular case to keep the description of the algorithm readable. In this section, we provide an exhaustive list of all sixteen variants (8 per type). All triangles can be freely rotated and we chose to rotate them such that the lattice edge of type I triangles is the bottom edge, and such that type II triangles stand on their tip. The triangles are then further distinguished by whether the triangle is appended to the front end or the back end of the ribbon, i.e., involving the ancilla qubit \(a_{f}\) or \(a_{b}\), by the orientation of the lattice edge and by the direction of extension (being aligned or anti-aligned with the lattice edge orientation). All operators are shown explicitly in Figure 24.
The controlled group multiplication \(U_{\textit{CM}}\) of type I triangles is given by - depending on these specifications - right or left multiplication with the group element \(c\) encoded by the forward or backward ancilla \(a_{f}\) or \(a_{b}\) or its inverse \(c^{-1}\).
For type II triangles we need to apply different variants of generalised conjugation \(U_{GC}\). In the main text in Section 3.2 we defined the generalised conjugation as \(U_{GC}^{(C,\chi)}:\left|c,i\right\rangle\left|h\right\rangle_{\text{phys}} \rightarrow\left|hch^{-1}\right\rangle\Gamma_{\mathcal{C}}^{\chi}(h)\left|i \right\rangle\left|h\right\rangle_{\text{phys}}\), where \(\Gamma_{\mathcal{C}}^{\chi}(g)\) are matrices defined by the representation of the algebra \((C,\chi)\) spanned by basis vectors \(\left|c,i\right\rangle\). In this form it is obvious why we call this operation generalised conjugation, however, this operator is more conveniently defined by the \((C,\chi)\)-representation matrices \(A(h)\) defined in Section 3.1. To make the connection between the two, we unify the two indices of the \((C,\chi)\)-representation into one multi index \(\left|c,i\right\rangle\equiv\left|\mu\right\rangle\) and identify \(\left|c,i\right\rangle\rightarrow\left|hch^{-1}\right\rangle\Gamma_{\mathcal{ C}}^{\chi}(h)\left|i\right\rangle\) with \(\left|\mu\right\rangle\rightarrow A^{(C,\chi)}(h)\left|\mu\right\rangle\). The superscript \((C,\chi)\) is dropped, if no confusion arises. The different type II triangle variants then implement \(A(h)\) or \(A(h^{-1})\) or their transpose to the ancilla qubits \(a_{f}\) or \(a_{b}\).
## Appendix B Representation theory of \(D(d_{4})\)
In this appendix, we cover the representation theory, viz. the anyon content, of the quantum double algebra of \(D_{4}\). Tab. 6 below shows the naming convention of the anyons and Tab. 7 lists the corresponding representation matrices used for the generalised conjugation.
The (abelian) fusion algebra of the excitations of this theory is highly symmetrical and can be broken down into a set of rules listed below. For writing down these rules we introduce the function \(\left|\,\right|:G\rightarrow\left\{e,m,r,mr\right\}\equiv G/Z(G)\) that divides out \(r^{2}\) and use the notation \(\Sigma_{e}\equiv 0\).
1. Fusion with the vacuum: \(0\otimes a=a\).
2. Fusion with the trivial flux \(\tilde{0}\): 1. \(\tilde{0}\otimes\tilde{0}=0\), 2. \(\tilde{0}\otimes\Sigma_{x}=\tilde{\Sigma}_{x}\), \(x\in\left\{m,r,mr,\epsilon\right\}\), 3. \(\tilde{0}\otimes\Psi_{x}=\Psi_{x}\) and \(\tilde{0}\otimes\Phi_{x}=\tilde{\Phi}_{x}\), \(x\in\left\{m,r,mr\right\}\).
3. Fusion with abelian charges \(\Sigma_{x}\), \(x\in\left\{m,r,mr\right\}\): 1. \(\Sigma_{x}\otimes\Sigma_{y}=\Sigma_{\left|xy\right|}\), \(y\in\left\{m,r,mr\right\}\), 2. \(\Sigma_{x}\otimes\Sigma_{\epsilon}=\Sigma_{\epsilon}\), 3. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 4. \(\Sigma_{x}\otimes\Phi_{y}=\delta_{xy}\Phi_{y}\oplus(1-\delta_{xy})\tilde{\Phi}_ {y}\),
4. Fusion with non-abelian charge \(\Sigma_{\epsilon}\): 1. \(\Sigma_{\epsilon}\otimes\Sigma_{\epsilon}=0\oplus\Sigma_{m}\oplus\Sigma_{r} \oplus\Sigma_{mr}\), 2. \(\Sigma_{\epsilon}\otimes\Psi_{x}=\Phi_{x}\oplus\tilde{\Phi}_{x}\) and \(\Sigma_{\epsilon}\otimes\Phi_{x}=\Psi_{x}\oplus\tilde{\Psi}_{x}\), for \(x\in\left\{m,r,mr\right\}\).
5. Fusion with dyons of nontrivial flux \(x\in\left\{m,r,mr\right\}\): 1. \(\Psi_{x}\otimes\Psi_{x}=\tilde{\Psi}_{x}\otimes\tilde{\Psi}_{x}=0\oplus\tilde {0}\oplus\Sigma_{x}\oplus\tilde{\Sigma}_{x}\) and \(\Psi_{x}\otimes\tilde{\Psi}_{x}=\bigoplus_{y\in\left\{m,r,mr\right\}}(1- \delta_{xy})\Sigma_{y}\oplus\tilde{\Sigma}_{y}\), 2. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 3. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 4. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 5. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 6. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 7. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 8. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 9. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 10. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 11. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 12. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 13. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 14. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 15. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 16. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 17. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 18. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 19. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 19. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 19. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 20. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 21. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 22. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 23. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 24. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 25. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 26. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 27. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 28. \(\Sigma_{x}\otimes\Psi_{y}=\delta_{xy}\Psi_{y}\oplus(1-\delta_{xy})\tilde{\Psi}_ {y}\), 29.
\[U_{CM}\left[c^{\prime},i^{\prime}\right)_{a_{b}}\left[c,i\right)_{a_{f}}\left[g_{ i}\right]_{\text{phys}}=\]
Figure 24: The 16 variants of the elementary triangles of type I (top) and type II (bottom).
* \(\Psi_{x}\otimes\Phi_{x}=\Sigma_{\epsilon}\oplus\tilde{\Sigma}_{\epsilon}\),
* \(\Phi_{x}\otimes\Phi_{x}=0\oplus\Sigma_{x}\oplus(\bigoplus_{y\in\{m,r,mr\}}(1- \delta_{xy})\tilde{\Sigma}_{y})\),
* \(\Psi_{x}\otimes\Psi_{y}=\Psi_{[xy]}\oplus\tilde{\Psi}_{[xy]}\) and \(\Psi_{x}\otimes\Phi_{y}=\Phi_{[xy]}\oplus\tilde{\Phi}_{[xy]}\), for \(x\neq y\) and \(y\in\{m,r,mr\}\),
* \(\Phi_{x}\otimes\Phi_{y}=\Psi_{[xy]}\oplus\tilde{\Psi}_{[xy]}\), for \(x\neq y\) and \(y\in\{m,r,mr\}\).
The fusion rules not covered above can be derived by the following prescription. Pick one anyon \(X\) on the LHS (\(\otimes\)-side) and let \(X\to\tilde{X}\). Take all elements on the RHS (\(\oplus\)-side) and take all \(Y\to\tilde{Y}\). Here we use the definition \(\tilde{\tilde{X}}=X\). Note that this prescription is only suitable to derive the rules not explicitly stated above. In particular, it is _not_ a symmetry (or a \(\mathbb{Z}_{2}\)-grading) of the algebra and is not true for pairs involving \(\Psi_{x}\) and \(\tilde{\Psi}_{x}\), for \(x\in\{m,r,mr\}\), as can be seen from 5(a) above.
## Appendix C Elementary circuits for the case of \(D(d_{4})\)
In Appendix A, we have presented all the elementary operations associated with the application of the ribbon operators and in Section 3, the other elementary operations used for charge measurements and ground state preparation. In Section 3.4 we have spelled out some of the above mentioned operations for the actual group element-to-qubit encoding used in our simulations of the protocols. In this Appendix, we present the concrete circuit elements for all relevant operations.
**Controlled multiplication.** There are four kinds of controlled multiplications that appear in all of our elementary protocols
\[U_{CM}^{(1)}:\left|g,h\right\rangle\to\left|g,gh\right\rangle,\quad U_{CM}^{(2 )}:\left|g,h\right\rangle\to\left|g,hg\right\rangle,\quad U_{CM}^{(3)}:\left|g,h\right\rangle\to\left|g,g^{-1}h\right\rangle,\quad U_{CM}^{(4)}:\left|g,h \right\rangle\to\left|g,hg^{-1}\right\rangle. \tag{37}\]
Depending on the context in which it appears, i.e., weather the controlled multiplication is a part of the ground state preparation, the partial charge measurement or the ribbon operator application, the controlling group element \(g\) is unrestricted \(g\in G\) or is restricted to one of the subgroups \(\{H_{m},H_{r},H_{mr}\}\) or one of the conjugacy classes \(\{C_{m},C_{r},C_{mr}\}\). As mentioned in the main text, the circuits for the latter two cases are drastically simplified compared to the unrestricted case. The circuits for all cases above are shown in Figure 25.
\begin{table}
\begin{tabular}{l||c c c c|c c c c c} \(D(D_{4})\) & \(O\) & \(\Sigma_{r}\) & \(\Sigma_{mr}\) & \(\Sigma_{m}\) & \(\Sigma_{\epsilon}\) & \(\tilde{O}\) & \(\tilde{\Sigma}_{r}\) & \(\tilde{\Sigma}_{mr}\) & \(\tilde{\Sigma}_{m}\) & \(\tilde{\Sigma}_{\epsilon}\) \\ \hline \(\mathcal{C}\) & \(\mathcal{C}_{e}\) & & & & & \(\mathcal{C}_{r^{2}}\) & & & \\ \(\chi\) & 1 & \(\alpha_{r}\) & \(\alpha_{mr}\) & \(\alpha_{m}\) & \(\epsilon\) & 1 & \(\alpha_{r}\) & \(\alpha_{mr}\) & \(\alpha_{m}\) & \(\epsilon\) \\ \hline T & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & -1 \\ \(\dim\,d\) & 1 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & 2 \\ \end{tabular}
\begin{tabular}{l||c c c c|c c c c} \(D(D_{4})\) & \(\Psi_{m}\) & \(\tilde{\Psi}_{m}\) & \(\Phi_{m}\) & \(\tilde{\Phi}_{m}\) & \(\Psi_{mr}\) & \(\tilde{\Psi}_{mr}\) & \(\Phi_{mr}\) & \(\tilde{\Phi}_{mr}\) \\ \hline \(\mathcal{C}\) & \(\mathcal{C}_{m}\) & & & & & \(\mathcal{C}_{mr}\) & & & \\ \(\chi\) & \((1,1)\) & \((1,-1)\) & \((-1,1)\) & \((-1,-1)\) & \((1,1)\) & \((1,-1)\) & \((-1,1)\) & \((-1,-1)\) \\ \hline T & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\ \(\dim\,d\) & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\ \end{tabular}
\end{table}
Table 6: Anyon content of \(D(D_{4})\) defined by flux-charge pairs \((\mathcal{C},\chi)\), listing the topological spin as given by the (diagonal) \(T\)-matrix entry and the quantum dimension \(d\).
Figure 25: The circuits implementing the controlled group multiplications \(U_{CM}\) defined in Eq. (37) for \(g\in G\) (unrestricted), \(g\) restricted to subgroups \(H\) and \(g\) restricted to conjugacy classes \(C\).
**Generalised conjugation.** Another building block of our circuits is the generalised conjugation, \(U_{GC}\left|g\right\rangle\left|\alpha\right\rangle_{a_{i}}=\left|g\right\rangle A ^{T}(g)\left|\alpha\right\rangle_{a_{i}}\). The \(A\)-matrices for \(D(D_{4})\), are shown in Appendix B. In Appendix A we see that we need four variants of the map: \(A(g)\), \(A(g^{-1})\), \(A^{T}(g)\), \(A^{T}(g^{-1})\). Figure 26 shows the corresponding circuits which are to be supplemented by the appropriate unitaries from Table 7.
## Appendix D Partial charge measurement - additional data
In this appendix, we show the results of the partial charge measurements with respect to all four-element subgroups of \(D_{4}\) in the anyon fusion experiment. Fig. 27 shows the data for the experiment performed on the braiding ladder and Fig. 28 shows the data for the experiment performed on the small planar graph. If one does not want to rely on the knowledge of the fusion algebra to label the peaks in the histograms shown in Figure 14 in the main text, these results are necessary and sufficient as argued in Section 3.3 to uniquely determine the charge labels.
## Appendix E Uncertainty estimation and measurement bias
In this appendix we provide more details on the measurement uncertainty and the effect of the measurement bias relevant in the \(S\)-and \(T\)-matrix protocols in Section 5.2.
### Polarisation uncertainty
In the following, we discuss the uncertainty estimations for the polarisation \(P\) - the central quantity of interest in the interference protocols in Section 5.2, and the resulting uncertainty for the amplitude and phase of the \(S\)-and \(T\)-matrix elements. The polarisation was determined for different measurement
\begin{table}
\begin{tabular}{|l||l|l|l|l|l|l|l|l|} \hline \(A(g)\) & \(e\) & \(r\) & \(r^{2}\) & \(r^{3}\) & \(m\) & \(mr\) & \(mr^{2}\) & \(mr^{3}\) \\ \hline \hline \(O\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \(O\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \(\Sigma_{r}\) & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\ \hline \(\Sigma_{mr}\) & 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\ \hline \(\Sigma_{m}\) & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\ \hline \(\Sigma_{r}\) & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\ \hline \(\Sigma_{mr}\) & 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \\ \hline \(\Sigma_{m}\) & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\ \hline \(\Sigma_{\epsilon}\) & \(\mathbbm{1}\) & \(-i\sigma_{y}\) & \(-\mathbbm{1}\) & \(i\sigma_{y}\) & \(\sigma_{z}\) & \(-\sigma_{x}\) & \(-\sigma_{z}\) & \(\sigma_{x}\) \\ \hline \(\Sigma_{\epsilon}\) & \(\mathbbm{1}\) & \(-i\sigma_{y}\) & \(-\mathbbm{1}\) & \(i\sigma_{y}\) & \(\sigma_{z}\) & \(-\sigma_{x}\) & \(-\sigma_{z}\) & \(\sigma_{x}\) \\ \hline \(\Psi_{r}\) & \(\mathbbm{1}\) & \(\mathbbm{1}\) & \(\mathbbm{1}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) & \(\sigma_{x}\) & \(\sigma_{x}\) & \(\sigma_{x}\) \\ \hline \(\Phi_{r}\) & \(\mathbbm{1}\) & \(i\sigma_{z}\) & \(-\mathbbm{1}\) & \(-i\sigma_{z}\) & \(\sigma_{x}\) & \(\sigma_{y}\) & \(-\sigma_{x}\) & \(-\sigma_{y}\) \\ \hline \(\Psi_{r}\) & \(\mathbbm{1}\) & \(-\mathbbm{1}\) & \(\mathbbm{1}\) & \(-\mathbbm{1}\) & \(\sigma_{x}\) & \(-\sigma_{x}\) & \(\sigma_{x}\) & \(-\sigma_{x}\) \\ \hline \(\Phi_{r}\) & \(\mathbbm{1}\) & \(-i\sigma_{z}\) & \(-\mathbbm{1}\) & \(i\sigma_{z}\) & \(\sigma_{x}\) & \(-\sigma_{y}\) & \(-\sigma_{x}\) & \(\sigma_{y}\) \\ \hline \(\Psi_{m}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) \\ \hline \(\Psi_{m}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) & \(-\mathbbm{1}\) & \(-\sigma_{x}\) & \(-\mathbbm{1}\) & \(-\sigma_{x}\) \\ \hline \(\Phi_{m}\) & \(\mathbbm{1}\) & \(i\sigma_{y}\) & \(-\mathbbm{1}\) & \(-i\sigma_{y}\) & \(\sigma_{z}\) & \(\sigma_{x}\) & \(-\sigma_{z}\) & \(-\sigma_{x}\) \\ \hline \(\Phi_{m}\) & \(\mathbbm{1}\) & \(i\sigma_{y}\) & \(-\mathbbm{1}\) & \(-i\sigma_{y}\) & \(-\sigma_{z}\) & \(-\sigma_{x}\) & \(\sigma_{z}\) & \(\sigma_{x}\) \\ \hline \(\Psi_{mr}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) & \(\sigma_{x}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) & \(\mathbbm{1}\) \\ \hline \(\Psi_{mr}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) & \(\mathbbm{1}\) & \(\sigma_{x}\) & \(-\sigma_{x}\) & \(-\mathbbm{1}\) & \(-\sigma_{x}\) & \(-\mathbbm{1}\) \\ \hline \(\Phi_{mr}\) & \(\mathbbm{1}\) & \(i\sigma_{y}\) & \(-\mathbbm{1}\) & \(-i\sigma_{y}\) & \(-\sigma_{x}\) & \(\sigma_{z}\) & \(\sigma_{x}\) & \(-\sigma_{z}\) \\ \hline \end{tabular}
\end{table}
Table 7: \(A_{g}\) matrices for every representation of \(D(D_{4})\)
bases \(\vec{b}\), but here we focus on a single fixed measurement basis first. To estimate \(P\), we performed \(N\) measurements and recorded the number of measurement outcomes \(n_{s}\), \(s\in\{0,1\}\). From these results we estimated the polarisation \(P=\frac{p_{0}-p_{1}}{p_{0}+p_{1}}\) via \(p_{s}=n_{s}/N\). The probability to find the outcome \(s\) in \(n_{s}\) of the \(N\) shots is given by the binomial distribution, \(p(n_{s})=\binom{N}{n_{s}}p_{s}^{n_{s}}p_{\bar{s}}^{N-n_{s}}\), where \(\bar{s}=1-s\). Thus, the unbiased estimator for the polarisation is \(\hat{P}=\frac{n_{0}-n_{1}}{n_{0}+n_{1}}\), with
\[\text{Mean}(\hat{P})=P\text{ and }\text{Var}(\hat{P})=N\sigma_{P}^{2}=P(1-P),\]
where \(\sigma_{P}=\sqrt{P(1-P)/N}\) is the error in determining this mean. This procedure of uncertainty estimation was used for all measurement bases and yields the error bars in Figures 17 and 18.
To obtain the amplitude and phase of the \(S\)- and \(T\)-matrix entries from the interference protocols we used that the measurement basis dependent polarisation \(P(\vec{b})\) is related to the Bloch vector \(\vec{r}\) of the single qubit density matrix and the read-out biases of the machine via
\[P(\vec{b})=(1-2\bar{\epsilon})\vec{r}\cdot\vec{b}+\Delta\epsilon.\]
Given the estimates for the function \(P(\vec{b})\) we extracted the relevant quantities \((\vec{r}/|\vec{r}|,\Delta\epsilon,|(1-2\bar{\epsilon})\vec{r}|)\) by fitting. To this end, we chose two particular sets of bases referred to as \(\phi\)-scan and \(\theta\)-scan. For the \(\phi\)-scan we fitted the function \(f(\phi)=A\cos(\phi-\phi_{\text{max}})+B\), to extract the angle which maximises the polarisation \(\phi_{\text{max}}\). Then, with fixed \(\phi=\phi_{\text{max}}\), we performed a \(\theta\)-scan and fitted to it the more informative function \(g(\theta)=|(1-2\bar{\epsilon})\vec{r}|\cos(\theta-\theta_{\text{max}})+\Delta\epsilon\). The two angles \(\phi_{\text{max}}\) and \(\theta_{\text{max}}\) then determine the direction of the Bloch vector and via that, the \(S\)- or \(T\)-matrix entry denoted \(\mathcal{A}\) via
\[\mathcal{A}=\left(\sqrt{\frac{1-\cos\theta_{\text{max}}}{1+\cos\theta_{\text {max}}}}\pm\frac{\sigma_{\theta_{\text{max}}}}{|1+\cos\theta_{\text{max}}|} \right)e^{i(\phi_{\text{max}}\pm\sigma_{\phi_{\text{max}}})}.\]
Here, the angles and their uncertainty are obtained by the usual \(\chi^{2}\)-fitting method.
Figure 26: The four variants (\(A^{T}(g)\), \(A^{T}(g^{-1})\), \(A(g^{-1})\) and \(A(g)\), left to right top to bottom, respectively) of the generalised conjugation circuits. Depending on the label \((C,\chi)\) of the ribbon operator the appropriate single qubit unitaries from Table 7 are inserted.
Figure 27: The results of the partial charge measurements for different four-element subgroups of \(D_{4}\) at the end of the fusion protocol performed on the braiding ladder.
Figure 28: The results of the partial charge measurements for different four-element subgroups of \(D_{4}\) at the end of the fusion protocol performed on the small planar graph.
### Measurement bias and post selection
We investigate the effect of post selecting on biased measurement outcomes for the paradigmatic example of two qubits. We consider a general two-qubit density matrix
\[\rho=\frac{1}{4}r_{\alpha\beta}\sigma_{\alpha}\tau_{\beta},\quad r_{00}=1,\quad \alpha,\beta=0,\dots,4\;, \tag{38}\]
that represents the state after applying a noisy circuit. We tomograph the first qubit while post-selecting on the measurement outcome \(0\) in the \(z\)-basis of the second qubit. The bias to measure \(a\) while the outcome should be \(\bar{a}\) is \(\epsilon_{a}\) and assumed to be the same for both qubits.
After measuring \(0\) we obtain with probability \(\epsilon_{0}\) the false positive state
\[\tilde{\rho}_{01}=\frac{r_{\alpha 0}-r_{\alpha 3}}{4}\sigma_{\alpha}\ket{1} \bra{1} \tag{39}\]
and with probability \(1-\epsilon_{1}\) the true state
\[\tilde{\rho}_{00}=\frac{r_{\alpha 0}+r_{\alpha 3}}{4}\sigma_{\alpha}\ket{0} \bra{0}\;. \tag{40}\]
We now perform a measurement of the first qubit in the \(\vec{s}\)-basis, which amounts to a full tomography of the first qubit, if we choose several different bases \(\vec{s}\). The probability to measure \(0\) is given again by a combination of true and false positive outcomes and reads
\[p_{0}=\epsilon_{0}\operatorname{tr}\tilde{\rho}S_{1}+(1-\epsilon_{1}) \operatorname{tr}\tilde{\rho}S_{0},\quad\tilde{\rho}=\epsilon_{0}\tilde{\rho} _{01}+(1-\epsilon_{1})\tilde{\rho}_{00},\quad S_{0}=\frac{1+\vec{S}\cdot \vec{\sigma}}{2},\quad S_{1}=\frac{1-\vec{S}\cdot\vec{\sigma}}{2}\;. \tag{41}\]
Similarly,
\[p_{1}=\epsilon_{1}\operatorname{tr}\tilde{\rho}S_{0}+(1-\epsilon_{0}) \operatorname{tr}\tilde{\rho}S_{1} \tag{42}\]
More explicitly, we introduce \(s_{\alpha}^{\pm}\) with \(s_{0}=\pm 1,s_{i}^{\pm}=\vec{s}_{i}\) and \(r_{\alpha}^{\pm}=r_{\alpha 0}\pm r_{\alpha 3})\) and obtain
\[p_{0}= \frac{1}{4}[\epsilon_{0}s_{\alpha}^{+}(\epsilon_{0}r_{\alpha}^{-} +(1-\epsilon_{1})r_{\alpha}^{+})-(1-\epsilon_{1})s_{\alpha}^{-}(\epsilon_{0} r_{\alpha}^{-}+(1-\epsilon_{1})r_{\alpha}^{+})]\;. \tag{43}\]
Likewise the probability to measure \(1\) is
\[p_{1}= \frac{1}{4}[-\epsilon_{1}s_{\alpha}^{-}(\epsilon_{0}r_{\alpha}^{-} +(1-\epsilon_{1})r_{\alpha}^{+})+(1-\epsilon_{0})s_{\alpha}^{+}(\epsilon_{0} r_{\alpha}^{-}+(1-\epsilon_{1})r_{\alpha}^{+})]\;. \tag{44}\]
This yields the polarisation
\[P=p_{0}-p_{1}=a+\vec{s}\cdot\vec{b}\;, \tag{45}\]
where \(a,\vec{b}\) are polynomials in \(\epsilon_{0/1}\) and \(r_{\alpha}^{\pm}\), in particular
\[\begin{split} a&=(\epsilon_{0}-\epsilon_{1})\frac{r_ {0}^{+}}{2}+\epsilon_{0}^{2}\frac{r_{0}^{-}}{2}-\epsilon_{0}\epsilon_{1}\frac{ r_{0}^{+}+r_{0}^{-}}{2}+\epsilon_{1}^{2}\frac{r_{0}^{+}}{2}\;,\\ b_{j}&=\frac{r_{j}^{+}}{2}+\epsilon_{0}\frac{r_{j} ^{-}-r_{j}^{+}}{2}-\epsilon_{1}r_{j}^{+}-\epsilon_{0}^{2}\frac{r_{j}^{-}}{2}+ \epsilon_{0}\epsilon_{1}\frac{r_{j}^{+}-r_{j}^{-}}{2}+\epsilon_{1}^{2}\frac{r_ {j}^{+}}{2}\;.\end{split} \tag{46}\]
We see that to leading order in \(\epsilon\), the offset is proportional to \(\Delta\epsilon=\epsilon_{1}-\epsilon_{0}\), while the vector \(\vec{b}\) is dominated by terms \(\mathcal{O}(1)\).
For a general setting, where we have \(n\) qubits, post-select on the biased measurement outcomes of \(k\) qubits and tomograph one qubit, the polarisation will have the same structure, just that \(a\) and \(b_{j}\) are higher polynomials in \(\epsilon\), with the coefficients depending on the full density matrix. The polarisation can be written as
\[P=\vec{s}\cdot(\vec{b}^{\text{no bias}}+\tilde{\epsilon}_{-}\vec{b}^{\text{ bias}})+\tilde{\epsilon}_{+}\;, \tag{47}\]
where \(\tilde{\epsilon}_{\pm}\) are effective errors that depend on \(\rho\). In our experiment we are only interested in the direction of the vector \(\vec{b}\) which contains the information of the \(S\)- or \(T\)-matrix entry. The offset is purely due to read-out errors and is discarded. In the tomography, we determine \(\vec{b}^{\text{observed}}=\vec{b}^{\text{no bias}}+\tilde{\epsilon}_{-}\vec{b}^{ \text{bias}}\) and do not try to distinguish between the two contributions, since we are dealing with an unknown density matrix. |
2302.08940 | Primordial Black Holes Dark Matter and Secondary Gravitational Waves
from Warm Higgs-G Inflation | We explore the role of dissipative effects during warm inflation leading to
the small-scale enhancement of the power spectrum of curvature perturbations.
In this paper, we specifically focus on non-canonical warm inflationary
scenarios and study a model of warm Higgs-G inflation, in which the Standard
Model Higgs boson drives inflation, with a Galileon-like non-linear kinetic
term. We show that in the Galileon-dominated regime, the primordial power
spectrum is strongly enhanced, leading to the formation of primordial black
holes (PBH) with a wide range of the mass spectrum. Interestingly, PBHs in the
asteroid mass window $\sim (10^{17}$ -- $10^{23}$) g are generated in this
model, which can explain the total abundance of the dark matter in the
Universe. In our analysis, we also calculate the secondary gravitational waves
(GW) sourced by these small-scale overdense fluctuations and find that the
induced GW spectrum can be detected in the future GW detectors, such as LISA,
BBO, DECIGO, etc. Our scenario thus provides a novel way of generating PBHs as
dark matter and a detectable stochastic GW background from warm inflation. We
also show that our scenario is consistent with the swampland and the
trans-Planckian censorship conjectures and, thus, remains in the viable
landscape of UV complete theories. | Richa Arya, Rajeev Kumar Jain, Arvind Kumar Mishra | 2023-02-17T15:23:45Z | http://arxiv.org/abs/2302.08940v2 | # Primordial Black Holes Dark Matter and Secondary Gravitational Waves from Warm Higgs-G Inflation
###### Abstract
We explore the role of dissipative effects during warm inflation leading to the small-scale enhancement of the power spectrum of curvature perturbations. In this paper, we specifically focus on non-canonical warm inflationary scenarios and study a model of warm Higgs-G inflation, in which the Standard Model Higgs boson drives inflation, with a Galileon-like non-linear kinetic term. We show that in the Galileon-dominated regime, the primordial power spectrum is strongly enhanced, leading to the formation of primordial black holes (PBH) with a wide range of the mass spectrum. Interestingly, PBHs in the asteroid mass window \(\sim(10^{17}\) - \(10^{23})\) g are generated in this model, which can explain the total abundance of the dark matter in the Universe. In our analysis, we also calculate the secondary gravitational waves (GW) sourced by these small-scale overdense fluctuations and find that the induced GW spectrum can be detected in the future GW detectors, such as LISA, BBO, DECIGO, etc. Our scenario thus provides a novel way of generating PBHs as dark matter and a detectable stochastic GW background from warm inflation. We also show that our scenario is consistent with the swampland and the trans-Planckian censorship conjectures and, thus, remains in the viable landscape of UV complete theories.
Introduction
The inflationary paradigm [1; 2; 3; 4] of the early Universe successfully explains the observations of anisotropies in the cosmic microwave background (CMB) radiation and generates seed inhomogeneities for the large scale structure (LSS) formation [5] (for reviews, see [6; 7; 8; 9]). Despite its phenomenal success, the underlying particle physics model of inflation remains elusive to date [10]. In the Standard Model (SM) of particle physics, the only scalar field is the Higgs boson, whose self-interactions are so strong that it can not act as an inflaton (a scalar field that drives inflation) in a minimal setup and thus, can not explain the cosmological observations. However, by extending it to a non-minimal configuration, one can construct a viable inflationary model within the SM (for a review, see Ref. [11]). This can be achieved, for example, by introducing a non-minimal coupling of the Higgs field to gravity [12; 13; 14], or with a non-canonical kinetic term of the Higgs field, such as in the \(k\)-inflation [15], ghost condensate [16], and Dirac-Born-Infeld inflation [17] models.
One such construction with a general kinetic term and additional non-linear terms are the Galileon-like models whose Lagrangian can be written as
\[\mathcal{L}_{\phi}=K(\phi,X)-G(\phi,X)\Box\phi, \tag{1}\]
where \(X=-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\) is the standard kinetic term of the Galileon field \(\phi\), \(K(\phi,X)=X-V(\phi)\) where \(V(\phi)\) is the field potential, and \(G(\phi,X)\) is an arbitrary function of \(\phi,X\). This specific form of the non-linear kinetic term is special, as it does not lead to extra degrees of freedom or ghost instabilities and maintains the gravitational and scalar field equations at the second order [18; 19; 20]. The Galileon field is named so because it possesses a Galileon shift symmetry in the Minkowski spacetime. Phenomenologically, Galileon-type scalar fields have been extensively studied in the context of dark energy, modified gravity [21; 22; 23; 24; 25; 26; 27] as well as inflation [28; 29; 30; 31; 32]. In Ref. [33], the authors explored Higgs inflation in the presence of Galileon-like non-linear derivative interactions and demonstrated that this model is compatible with cosmological observations. Also, the tensor-to-scalar ratio for this model was found to be within the sensitivity of future experiments, which can be used to discriminate it from the standard inflationary model with a canonical kinetic term. However, when the Galileon term dominates over the standard kinetic term in this model, the dynamics of reheating is modified. For large self-coupling \(\lambda\), there is no oscillatory phase and moreover, the square of sound speed \(c_{s}^{2}\) is negative during reheating, leading to instabilities of small-scale perturbations [32].
In order to alleviate this problem, the authors in Ref. [34] extended the Higgs Lagrangian with higher order kinetic terms to obtain \(c_{s}^{2}>0\), thereby avoiding instabilities. Alternatively, this problem would not arise in the first place, if there is no separate reheating phase after inflation, as it could be in the case of warm inflation. This was explored by the authors in Refs. [35; 36; 37], and forms the basis of this paper.
Warm inflation [38; 39; 40] is a general description of inflation in which the dissipative effects during inflation play an important role in the inflationary dynamics. The basic idea of warm inflation is that the inflaton is sufficiently coupled with other fields, such that it dissipates its energy into them during its evolution, which leads to a thermal bath (with temperature \(T\)) in the Universe during the inflationary phase. Therefore, a separate reheating phase is not necessarily needed for particle production. For reviews on warm inflation, see Refs. [41; 42; 43].
The background dynamics of the inflaton as well as its fluctuations are modified in warm inflation, therefore, the primordial power spectrum has distinct signatures on cosmological observables, as compared to cold inflation. For instance, the tensor-to-scalar ratio in warm inflation is lowered, thus certain models of cold inflation, although ruled out from Planck observations, could be viable models in the warm inflation description [44; 45; 46; 47; 48]. Warm inflationary models also predict unique non-Gaussian signatures that can be used to test these models [49; 50; 51; 52]. As the inflationary dynamics is modified in warm inflation, the slow roll conditions which demand an extremely flat potential, are also relaxed in these models [53]. Further, some warm inflation models can provide a unified description for inflation, dark matter, and/or dark energy [54; 55; 56; 57]. Another interesting aspect of warm inflation models is that they can also explain the baryon asymmetry of the Universe [58; 59]. Warm inflation studies also show that for some models with a large value of the dissipation parameter, the swampland conjectures can be satisfied, thus making them in agreement with a high energy theory [60; 61]. We will discuss this aspect of our model in detail in Section IV. Since all these features of warm inflation arise from the fundamental principles of a dissipating system, it is very crucial to study warm inflation to understand the physics of the early Universe.
Although the large scale imprints of warm inflation are well studied through the observations of the CMB anisotropies, the small scale features have recently acquired much attention. One of the novel probes of small scale physics of inflation is the formation and abundance of primordial black holes (PBH) [62; 63; 64]. In contrast to the astrophysical black holes, which are the end stages of a star, PBHs are primordial in origin and may form by different mechanisms, such as the collapse of overdense fluctuations [65; 64], bubble collisions [66], collapse of strings [67], domain walls [68], etc. In order to produce PBHs by the gravitational collapse, the amplitude of primordial curvature power spectrum at small scales has to be \(\sim\mathcal{O}(10^{-2})\). Such an enhancement by several orders can be achieved in different models, such as the hybrid inflation [69; 70; 71], running-mass inflation [72; 73; 74; 75], hilltop inflation [76], inflating curvaton [77], axion curvaton inflation [78; 79], double inflation [80], thermal inflation [81], single field inflation with a broken scale invariance [82], or by introducing an inflection point (plateau) in the potential [83; 84; 85; 86], or bump/dip in the potential [87], running of the spectral index [88; 89; 90], suppression of the sound speed [91; 92] or resonant instability [93; 94], etc. PBHs are crucial probes of the small scale features of the primordial power spectrum and hence different inflationary models [95; 96; 97; 98].
The abundance of PBHs is constrained through various observations, e.g. PBHs with mass \(M_{PBH}<10^{15}\) g, would have evaporated by today, and thus their consequences on the Big-Bang nucleosynthesis (BBN) can provide constraints on their initial mass fraction. The PBHs with mass \(M_{PBH}\lesssim 10^{9}\) g would completely evaporate by BBN, and therefore the bounds on their abundance are not very stringent. Recent studies show that such ultralight PBHs might induce interesting observational imprints such as an extra contribution to the dark radiation and dark relics [99]. It is also possible that these PBHs may dominate the universe for a short duration before the radiation dominated epoch, and lead to a secondary resonant enhancement of the induced GWs [100, 101]. For PBHs with \(M_{PBH}>10^{15}\) g, the constraints arise from their gravitational effects, like lensing, dynamical effects on interaction with astrophysical systems, LIGO/Virgo gravitational wave (GW) merger events, etc. (For details, see Refs. [102, 103]). Also, PBHs can constitute a significant or total fraction of the dark matter (DM) density in the mass window (\(10^{17}-10^{23}\)) g (see recent reviews [104, 105]). Thus, PBHs are very important from the aspects of DM phenomenology. Further, as for PBH generation, the amplitude of scalar fluctuations at smaller scales is hugely enhanced, there is an inevitable GW spectrum sourced by these large density fluctuations [106, 107, 108, 109], which can have interesting observational consequences in the future GW detectors.
Motivated by this, we investigated the small scale features of warm inflation in our earlier works [110, 111] and considered minimally coupled single-field models with a canonical kinetic term. We explored the formation of PBHs and the scalar induced GW spectrum from a model of warm inflation in Refs. [110, 111]. In our analysis, we found that for our model, the primordial power spectrum is red-tilted (\(n_{s}<1\)) for the CMB scales, but turns blue-tilted (\(n_{s}>1\)) at the small scales with a large amplitude. This generates PBHs of mass nearly \(10^{3}\) g and an associated GW spectrum over the frequency range \((1-10^{6})\) Hz. Further, these tiny mass PBHs would have evaporated by today, but the calculated initial abundance of these PBHs favors the possibility that the Planck-mass remnants of these PBHs could constitute the total dark matter.
Similar analysis was performed in another theoretically interesting scalar warm little inflaton model [112]. In this model, the dissipation coefficient \(\Upsilon\), which is a measure of the inflaton interactions with other fields, switches its behaviour from \(\Upsilon\propto 1/T\) to \(\Upsilon\propto T^{\kappa}\) (\(\kappa>0\)) as inflation proceeds. It is found that the very small scale modes grow sufficiently to generate PBHs of mass \(\sim 10^{6}\) g and a GW spectrum peaked at frequency \((10^{5}-10^{6})\) Hz. More recently, the authors in Ref. [113] explain the full abundance of dark matter in the form of PBHs in mass range \((10^{17}-10^{22})\) g from a model of warm natural inflation. Another interesting study considering inflaton dissipation to be effective only for a few efolds, rather than for the full duration (as in warm inflation), was carried out recently in Ref. [114]. By choosing the peak of the primordial power spectrum at some specific scales, PBH in the above mass range
are constructed from this model that can explain the full dark matter abundance. Thus, a study of the dissipative effects during inflation leading to small scale features is crucial for understanding the physics of the early Universe and the dark matter.
All the above models of warm inflation considered a minimally coupled inflaton with a canonical kinetic term. In this study, we explore the effects of a non-canonical kinetic term in the evolution of warm inflationary models and consider the warm Higgs-G model for the formation of PBHs and the associated GW spectra. While the motivation of Refs. [35; 37] was to explore the parameter space consistent with the large scale CMB observations, we focus on simultaneously explaining the CMB, as well as identifying the parameter space for the required abundance of PBHs as dark matter. The formation of PBH and secondary GW from G-inflation have also been motivated in the standard cold description, such as in Refs. [115; 116; 117; 118; 119].
We consider two cases of warm Higgs-G inflation models with a quartic Higgs potential and a dissipation coefficient with a linear dependence on temperature \(\Upsilon(\phi,T)\propto T\). For the Galileon term we consider a general form \(G(\phi,X)\propto\phi^{2p+1}X^{q}\) and work with two models characterised by (\(p=0\), \(q=1\)) and (\(p=1\), \(q=1\)). The presence of the dissipation term as well as the non-canonical kinetic term damp the inflaton evolution in these models. While PBHs could not be produced in this canonical warm inflationary setup, we find that in the non-canonical models in the presence of G-term as considered in our paper, the power spectrum is hugely enhanced at the small scales. As a result, PBHs over a wide mass range can be generated in our scenario, which includes the asteroid mass range (\(10^{17}-10^{23}\)) g, for which PBHs can constitute the full dark matter abundance. We further calculate the scalar induced GW spectrum associated with these PBHs and find that GW with peak frequency in the sensitivity limits of the forthcoming detectors, such as LISA, BBO, DECIGO, etc. are generated in our models. Thus, future detections or non-detections of GW would be a useful test to these inflationary models. Moreover, we find that these models are also consistent with the swampland and the TCC conjectures and thus, they belong to the viable landscape of UV complete theories.
This paper is organised as follows: We begin with an introduction to the basics of warm inflation in Section II. Then we describe the inflationary dynamics in warm-G inflation in Section III. We then introduce our warm Higgs-G inflation model in Section IV. After parameterizing it in terms of the model parameters and discussing the predictions with CMB observations and swampland conjectures, we discuss the theory of PBH formation and the associated scalar induced GW in Section V. Then we discuss the results of our model in Section VI and summarize the paper in Section VII.
Basics of warm inflation
In warm inflation, one considers the dissipative processes during inflation based on the principles of non-equilibrium field theory for interacting quantum fields. The inflaton is assumed to be near-equilibrium and evolving slowly as compared to the microphysics timescales in the adiabatic approximation, see Refs. [120; 121; 122]. The dissipation of the inflaton field to radiation remains active throughout the duration of inflation which naturally ends when either the energy density of the radiation bath becomes larger than that of the inflaton or the slow roll conditions no longer hold.
The equation of motion of the inflaton \(\phi\) slowly rolling down a potential \(V(\phi)\) during warm inflation is modified due to an additional dissipation term \(\Upsilon\dot{\phi}\) arising from the inflaton coupling to other fields and is given as
\[\ddot{\phi}+3H\dot{\phi}+\Upsilon\dot{\phi}=-V_{,\phi}. \tag{2}\]
Here, an overdot denotes the derivatives with respect to cosmic time \(t\), \(H\) is the Hubble parameter and the subscript \({}_{,\phi}\) represents derivative with respect to \(\phi\). The term \(\Upsilon(\phi,T)\) is the dissipation coefficient which usually depends on \(\phi\) and temperature of the Universe \(T\). One can also rewrite Eq. (2) in terms of a dissipation parameter \(Q\equiv\frac{\Upsilon}{3H}\), as
\[\ddot{\phi}+3H(1+Q)\dot{\phi}+V_{,\phi}=0. \tag{3}\]
Since \(Q\) is dimensionless, \(Q>1\) is usually called the strong dissipative regime while \(Q<1\) is referred as the weak dissipative regime of warm inflation. Due to the inflaton dissipation, there is an energy transfer from the inflaton to the radiation component, given as
\[\dot{\rho}_{r}+4H\rho_{r}=\Upsilon\dot{\phi}^{2}. \tag{4}\]
It is assumed that the radiation thermalizes quickly after being produced, thus, \(\rho_{r}=\frac{\pi^{2}}{30}g_{*}T^{4}\), where \(T\) is the temperature of the thermal bath and \(g_{*}\) is the effective number of relativistic degrees of freedom present during warm inflation. Note that, the energy-momentum tensor associated with the inflaton and radiation are not individually conserved while the total energy-momentum tensor of the system remains conserved.
### Slow roll approximation
To achieve an adequate duration of inflation, the potential of inflaton needs to be sufficiently flat, so that the slow roll conditions are satisfied. This is described in terms of the slow roll parameters,
\[\epsilon_{\phi}=\frac{M_{Pl}^{2}}{2}\,\left(\frac{V_{,\phi}}{V} \right)^{2},\hskip 28.452756pt\eta_{\phi}=M_{Pl}^{2}\,\left(\frac{V_{,\phi \phi}}{V}\right) \tag{5}\]
where \(M_{Pl}=\sqrt{\frac{1}{8\pi G_{N}}}\simeq 2.44\times 10^{18}\) GeV is the reduced Planck mass and \(G_{N}\) is the gravitational constant. In addition, in warm inflation, there are other slow roll parameters [123; 124]
\[\beta_{\Upsilon}=M_{Pl}^{2}\,\left(\frac{\Upsilon_{,\phi}\,V_{,\phi}}{\Upsilon \,V}\right),\hskip 14.226378ptb=\frac{TV_{,\phi T}}{V_{,\phi}}\,\hskip 14.226378ptc=\frac{T \Upsilon_{,T}}{\Upsilon}. \tag{6}\]
Here the subscript \({}_{,T}\) represents derivative with respect to \(T\). These additional slow roll parameters are a measure of the field and temperature dependence of the inflaton potential and the dissipation coefficient. The stability analysis of warm inflationary solution leads to the following conditions [124]
\[\epsilon_{\phi}\ll 1+Q,\hskip 21.339567pt|\eta_{\phi}|\ll 1+Q,\hskip 21.339567pt| \beta_{\Upsilon}|\ll 1+Q,\hskip 21.339567pt0<b\ll\frac{Q}{1+Q},\hskip 21.339567pt|c| \leq 4. \tag{7}\]
Note that, the conditions on the slow roll parameters \(\epsilon_{\phi}\) and \(\eta_{\phi}\) are weaker than the corresponding conditions for cold inflation. For large dissipation parameter \(Q\), the upper limit on \(\epsilon_{\phi}\) and \(\eta_{\phi}\) is increased, and therefore, the \(\eta\) problem is relaxed in warm inflation [53]. The condition on \(b\) implies that warm inflation is only feasible when the thermal corrections to the inflaton potential remain small.
In the slow roll approximation, we can neglect \(\ddot{\phi}\) in Eq. (3), which gives
\[\dot{\phi}\approx\frac{-V_{,\phi}}{3H(1+Q)}, \tag{8}\]
and since \(\dot{\rho}_{r}\) is small in Eq. (4), we can approximate \(\dot{\rho}_{r}\approx 0\) and obtain
\[\rho_{r}\approx\frac{\Upsilon}{4H}\dot{\phi}^{2}=\frac{3}{4}Q\dot{\phi}^{2}, \tag{9}\]
which indicates that the radiation energy density is determined by the dissipation parameter, as expected.
### Dissipation coefficient
The dissipation coefficient \(\Upsilon(\phi,T)\) is determined by the microphysics of the coupled inflaton-radiation system, which includes the channel of the inflaton decay to radiation, its coupling strength with other fields, mass and multiplicities of the radiation fields with which inflaton couples, the temperature of the thermal bath, etc. [125; 126; 127]. Depending on the form of the interaction Lagrangian, there arise different forms of the dissipation coefficient, as discussed in the literature (see, for instance, Refs. [121; 128; 129] for a review on the various studies about the finite temperature quantum field theory calculations of the dissipation coefficient). For our purpose, we will use a dissipation coefficient, which is linearly proportional to the temperature of thermal bath, i.e. \(\Upsilon(\phi,T)\propto T.\) This particular form
arises in a supersymmetric warm inflation model wherein the inflaton undergoes a two-stage decay mechanism via an intermediate field into radiation fields [125]. In the high temperature limit, when the mass of the intermediate field is smaller than the temperature of thermal bath, one gets the above expression of dissipation coefficient. Also, this form of dissipation coefficient arises in the "Little Higgs" models of electroweak symmetry breaking with Higgs as psuedo-Nambu Goldstone boson of a broken \(U(1)\) gauge symmetry [130]. For this work, we do not specify any particular microphysical description for the interaction Lagrangian and choose to work with the form \(\Upsilon\propto T\). This form of dissipation coefficient has been previously considered in different warm inflation models in the context of CMB observations [45; 47; 48]. In this paper, we are interested in the small scale imprints arising from this form of the dissipation coefficient.
## III Warm G-inflation
In this model, the inflaton also has a non-minimal kinetic term in the Lagrangian, as shown in Eq. (1). By the definition of warm inflation, the inflaton couples to radiation fields and dissipates its energy into them. The full action including the interaction and the radiation Lagrangian is given as,
\[\mathcal{S}=\int d^{4}x\sqrt{-g}\left[\frac{M_{Pl}^{2}}{2}R+X-V(\phi,T)-G(\phi,X)\Box\phi+\mathcal{L}_{r}+\mathcal{L}_{int}\right]. \tag{10}\]
As mentioned before, \(X\) is the standard kinetic term, \(G(\phi,X)\Box\phi\) is the Galileon-like non-linear kinetic term with \(G(\phi,X)\) an arbitrary function of \(\phi\), \(X\). Here \(V(\phi,T)\) is the field and temperature dependent inflaton potential, \(\mathcal{L}_{r}\) is the Lagrangian for radiation fields and \(\mathcal{L}_{int}\) is the interaction Lagrangian for the inflaton and the fields into which it dissipates its energy. The radiation energy density remains subdominant than that of the inflaton during the inflationary phase.
By varying the action in Eq. (10) with respect to the metric, one obtains the components of stress-energy momentum tensor for the inflaton as [35; 28]
\[\rho_{\phi} =X+V(\phi,T)+6HG_{,X}X\dot{\phi}-2G_{,\phi}X, \tag{11}\] \[p_{\phi} =X-V(\phi,T)-2(G_{,\phi}+G_{,X}\ddot{\phi})X, \tag{12}\]
where \(G_{,\phi}=\partial G/\partial\phi\), and \(G_{,X}=\partial G/\partial X\). Also, due to the presence of Galileon-like non-minimal kinetic term, the Klein-Gordon equation for the inflaton gets modified as [27],
\[\mathcal{B}\ddot{\phi}(t)+3H\mathcal{A}\dot{\phi}(t)+V_{,\phi}(\phi,T)=0, \tag{13}\]
with
\[\mathcal{B}=1+6HG_{,X}\dot{\phi}+6HG_{,XX}X\dot{\phi}-2G_{,\phi}-2G_{,X\phi}X, \tag{14}\]
\[\mathcal{A}=1+Q+3HG_{,X}\dot{\phi}+\frac{\dot{H}\dot{\phi}G_{,X}}{H}-2G_{,\phi}+2G_ {,X\phi}X-\frac{G_{,\phi\phi}\dot{\phi}}{3H}. \tag{15}\]
In the limit, \(G\to 0\) in Eq. (13), one recovers the warm inflation scenario in a minimal setup, as given in Eq. (2). From Eq. (13), (3), we see that both the dissipation and the Galileon interaction terms contribute as a damping term that further slows down the inflaton evolution. However, the radiation energy density evolves in the same way as in Eq. (4).
### Slow roll conditions
In warm G-inflation, apart from the slow roll parameters given in Eqs. (5) and (6), one also has the following dimensionless parameters [35]
\[\delta_{X}=\frac{X}{M_{Pl}^{2}H^{2}},\hskip 28.452756pt\delta_{GX}=\frac{ \dot{\phi}XG_{,X}}{M_{Pl}^{2}H},\hskip 28.452756pt\delta_{G\phi}=\frac{XG_{, \phi}}{M_{Pl}^{2}H^{2}}. \tag{16}\]
The requirement of slow roll during inflation demands the validity of the following conditions, as shown in Ref. [35]
\[|\epsilon_{\phi}|,|\eta_{\phi}|,|\beta_{\Upsilon}|\ll\mathcal{A},\hskip 14.226378pt |\delta_{X}|,|\delta_{GX}|,|\delta_{G\phi}|\ll 1,\hskip 14.226378pt0<b\ll \frac{Q}{\mathcal{A}},\hskip 14.226378pt|c|\leq 4,\hskip 14.226378pt|G_{,\phi}|= \frac{\delta_{G\phi}}{\delta_{X}}\ll 1. \tag{17}\]
In the slow roll approximation, \(\dot{H},G_{,\phi},G_{,X\phi},G_{\phi\phi}\) terms are small, and thus Eqs. (14) and (15) can be approximated as
\[\mathcal{B}\approx 1+6HG_{,X}\dot{\phi}+6HG_{,XX}X\dot{\phi}. \tag{18}\]
\[\mathcal{A}\approx 1+Q+3HG_{,X}\dot{\phi}. \tag{19}\]
Moreover, the contribution of \(\ddot{\phi}\) is negligible in the slow roll approximation, thus Eq. (13) can be written as
\[3H\mathcal{A}\dot{\phi}(t)+V_{,\phi}\simeq 0. \tag{20}\]
Also, the radiation energy density does not evolve much during the slow roll, thus Eq. (4) is approximated as
\[\rho_{r}\approx\frac{3}{4}Q\dot{\phi}^{2}. \tag{21}\]
Next, one can define an effective Galileon dissipation parameter as
\[Q_{G}\equiv Q/\mathcal{B}\]
and from Eqs. (20) and (21), obtain
\[3H\mathcal{B}\dot{\phi}\left(Q_{G}+\frac{\delta_{X}+3\delta_{GX}}{\delta_{X}+ 6(\kappa_{X}+1)\delta_{GX}}\right)+V_{,\phi}\simeq 0 \tag{22}\]
where \(\kappa_{X}=\frac{XG_{,XX}}{G_{,X}}\), and
\[\rho_{R}=C_{R}T^{4}\approx\frac{3}{4}\mathcal{B}Q_{G}\dot{\phi}^{2} \tag{23}\]
where \(C_{R}=\frac{\pi^{2}}{30}g_{*}.\) When the G-term dominates the evolution, i.e., \(|\delta_{X}|\ll|\delta_{GX}|\), then Eq. (22) becomes
\[3H\mathcal{B}\dot{\phi}\left(Q_{G}+\frac{1}{2(\kappa_{X}+1)}\right)+V_{,\phi} \simeq 0. \tag{24}\]
This equation governs the dynamics of the inflaton field during warm G-inflation.
### Primordial Power Spectrum
Due to the presence of non-zero temperature during warm inflation, the primordial curvature power spectrum is dominantly sourced by the thermal fluctuations [131; 132; 123]. The inflaton fluctuations are obtained by solving the stochastic Langevin equation sourced by thermal noise in the radiation bath. The intensity of noise depends on the dissipation coefficient through the fluctuation-dissipation theorem [120; 121; 122]. When the dissipation coefficient has a temperature dependence (\(\Upsilon\propto T^{c}\)), the radiation fluctuations also couple to the inflaton fluctuations and leads to a growth in the power spectrum for \(c>0\)[131]. Following the calculation for warm inflation of Ref. [131], the power spectrum for warm G-inflation model has been calculated in Ref. [37], and is given as1
Footnote 1: In this expression, thermal fluctuations are the dominant contributions to the primordial power spectrum.
\[\Delta_{\mathcal{R}}^{2}|_{c_{s}k=aH}=\left(\frac{\sqrt{3}H^{3}T\sqrt{1+Q_{G}} }{4\pi\sqrt{\pi}c_{s}\dot{\phi}^{2}}\right)\left(1+\frac{Q_{G}}{Q_{c}}\right) ^{3c}. \tag{25}\]
Here the first bracketed term corresponds to the case when there is no coupling of radiation fluctuations to the inflaton fluctuations, i.e. there is no temperature dependence in \(\Upsilon\) (\(c=0\)). The term \(\left(1+\frac{Q_{G}}{Q_{c}}\right)^{3c}\) represents the growth factor, and for a fixed \(c\) and \(Q_{G}\), it varies inversely to \(Q_{c}\). While if we fix \(c\) and \(Q_{c}\), the behaviour of growth factor is directly proportional to value of \(Q_{G}\). In the above expression, the effective sound speed \(c_{s}\) is obtained from
\[c_{s}^{2}=\frac{\delta_{X}+4\delta_{GX}}{\delta_{X}+6(\kappa_{X}+1)\delta_{GX}} \tag{26}\]
and the factor \(Q_{c}\) is given as
\[Q_{c}=\left(\left[\frac{G_{1,3}^{3,1}\left(\frac{1}{12c_{s}^{2}}|_{2-c/2,\ 0,\ 5/2} \right)}{2^{3c}\Gamma_{R}(\frac{3c}{2})\Gamma_{R}(\frac{3c}{2}+\frac{5}{2}) \Gamma_{R}(2+c)}\right]\frac{\Gamma_{R}(3c+\frac{3}{2})}{\Gamma_{R}(\frac{3} {2})}\right)^{-\frac{1}{3c}} \tag{27}\]
where \(G_{1,3}^{3,1}\) is the Meijer-G function, and \(\Gamma_{R}\) refers to the Gamma function. From this expression, we can see that \(Q_{c}\) is inversely proportional to \(c_{s}\). As \(c_{s}\) decreases, \(Q_{c}\) increases, which in turn makes the growth weaker, and vice-versa. We also stress the importance of parameter \(c\), which is a measure of temperature dependence in the dissipation coefficient, i.e. \(\Upsilon\propto T^{c}\). For positive values, as we increase \(c\), the growth factor increases. In the G-dominated regime (\(|\delta_{X}|\ll|\delta_{GX}|\)), Eq. (26) can be approximated as
\[c_{s}^{2}\approx\frac{2}{3(\kappa_{X}+1)}=\frac{2G_{,X}}{3(G_{,X}+XG_{,XX})}. \tag{28}\]
The effects of Galileon-like terms appears in modifying the sound speed and in the limit \(G\to 0\), \(c_{s}^{2}\to 1\), one recovers the canonical warm inflation. However, in this study, we are interested in the G-dominated regime and using the expression (25) for parameterizing the power spectrum in terms of model parameters.
The primordial power spectrum could also be damped because of the shear viscous pressure in the radiation [133]. This would lead to a modified expression of primordial power spectrum with a comparatively slower growth rate or even a total suppression of the growth caused by inflaton and radiation coupled fluctuations. However, in this work, we do not consider any such effects and it would be noteworthy to consider them in future studies.
## IV Warm Higgs-G model
We will now describe the model we have considered in this study, warm Higgs-G inflation. As the name suggests, we have a Galileon-like non-minimal kinetic term to the Standard Model Higgs boson in a warm inflation setup. The action for Higgs-G in warm inflation is given from the Refs. [33; 34; 37] as
\[\mathcal{S}_{WHGI}=\int d^{4}x\sqrt{-g}\left[\frac{M_{Pl}^{2}}{2}R-|D_{\mu} \mathcal{H}|^{2}-\lambda(|\mathcal{H}|^{2}-v^{2})^{2}-\frac{2\mathcal{H}^{ \dagger}}{M^{4}}D_{\mu}D^{\mu}\mathcal{H}|D_{\mu}\mathcal{H}|^{2}+\mathcal{L} _{r}+\mathcal{L}_{int}\right] \tag{29}\]
where \(D_{\mu}\) is the covariant derivative with respect to the Standard Model gauge symmetry, \(\mathcal{H}\) is the SM Higgs doublet, \(M\) represents some mass parameter. Higgs inflation is driven by the neutral component of Higgs field \(\phi\), which has a self-coupling constant \(\lambda\) and the vacuum expectation value of Higgs after electroweak symmetry breaking, \(v=246\) GeV. We will consider the scenario when \(\phi\gg v\), in which case the simplified action is given by
\[\mathcal{S}_{WHGI}=\int d^{4}x\sqrt{-g}\left[\frac{M_{Pl}^{2}}{2}R+X-\frac{ \lambda\phi^{4}}{4}-\frac{\phi X}{M^{4}}\Box\phi+\mathcal{L}_{r}+\mathcal{L} _{int}\right], \tag{30}\]
where we have approximated the Higgs potential as quartic potential,
\[V(\phi)\simeq\frac{\lambda\phi^{4}}{4}. \tag{31}\]
In this study, we would choose a general form of non-linear Galileon interaction term (\(G(\phi,X)\Box\phi\) term in the Lagrangian) as [34; 35; 134]
\[G(\phi,X)=-\frac{\phi^{2p+1}X^{q}}{M^{4q+2p}} \tag{32}\]
where \(p\) and \(q\) are some positive integers. We consider two models with G-term as \(-\frac{\phi X}{M^{4}}\) (for \(p=0,q=1\)) and \(-\frac{\phi^{3}X}{M^{6}}\) (for \(p=1,q=1\)). For the warm inflation setup, we choose the interaction Lagrangian in inflaton-radiation system such that the dissipation coefficient is given by [125; 126; 130]
\[\Upsilon(\phi,T)=C_{T}T. \tag{33}\]
As discussed before, this form of the dissipation coefficient is obtained in certain microphysical descriptions of warm inflation. The temperature dependence in the dissipation coefficient couples the inflaton fluctuations to the fluctuations in the radiation fields and leads to a growth function in the primordial power spectrum. Other well-motivated forms of temperature and field dependence in the dissipation coefficient are equally interesting to explore further.
### Evolution equations in warm Higgs-G inflation
Now, we will study the inflationary dynamics for our warm Higgs-G inflation model, focussing on the regime wherein the G-term dominates the inflaton evolution. We begin with parameterizing the primordial power spectrum in Eq. (25) in terms of model parameters. First, we write the Friedmann equation for our model when the inflaton potential dominates the energy density of the Universe, as
\[H^{2}\simeq\frac{\lambda}{12}{M_{Pl}}^{2}\left(\frac{\phi}{M_{Pl}}\right)^{4}. \tag{34}\]
Next, we want to evaluate \(\dot{\phi}\). For this, we simplify Eq. (18) for our model and obtain
\[\mathcal{B}=-6Hq^{2}\dot{\phi}\ \frac{\phi^{2p+1}X^{q-1}}{M^{4q+2p}} \tag{35}\]
where \(X=-\frac{\dot{\phi}^{2}}{2}.\) Then by substituting the expression for \(\mathcal{B}\) in Eq. (24), we get
\[\dot{\phi}=-M_{Pl}^{2}\zeta_{1}^{\frac{1}{2q}}\left(Q_{G}+\frac{1}{2q}\right)^ {-\frac{1}{2q}}\left(\frac{\phi}{M_{Pl}}\right)^{-\frac{(p+1)}{q}}, \tag{36}\]
where \(\zeta_{1}=\frac{2^{q}}{3q^{2}}\left(\frac{M}{M_{Pl}}\right)^{2p+4q}\). We would further express the field evolution in terms of the parameter \(Q_{G}\). From the form of dissipation coefficient in Eq. (33), and the definitions of
\(\Upsilon,Q,Q_{G}\), we can write \(T=\frac{3H\mathcal{B}Q_{G}}{C_{T}}\), and then substitute it in Eq. (23). Then using Eq. (36), we get a relation between \(\phi\) and \(Q_{G}\) as
\[\frac{\phi}{M_{Pl}}=\left[\frac{Q_{G}^{3}\left(\frac{1}{2q}+Q_{G}\right)^{\frac {5-6q}{2q}}}{\zeta_{2}}\right]^{-\frac{q}{5p+11q+5}}. \tag{37}\]
Later, we also plot and discuss the total field excursion for different values of \(Q_{G},p\) and \(q\) in our warm inflation models in the context of swampland conjectures.
We next evaluate the expression for temperature of the thermal bath of radiation. Using Eqs. (23), (35) and (36), we get
\[\frac{T}{M_{Pl}}=Q_{G}^{\frac{1-3a}{4}}\left(\frac{1}{2q}+Q_{G}\right)^{\frac {2q(3a-1)-5a-1}{8q}}\left(\frac{\sqrt{3\lambda}}{2C_{R}}\zeta_{1}^{\frac{1}{2q }}\zeta_{2}^{a}\right)^{\frac{1}{4}} \tag{38}\]
where \(\zeta_{2}=\frac{\sqrt{3}C_{T}^{4}}{2C_{R}\lambda^{7/2}}\zeta_{1}^{\frac{5}{2 q}}\) and \(a=\frac{-p+q-1}{5p+11q+5}\). Finally, with the Eqs. (34), (36), (37) and (38), we have parameterized the primordial power spectrum in terms of parameters \(Q_{G}\), \(\lambda\), \(C_{T}\), \(M\), \(p,q\), and \(c_{s}\) (which is a function of \(p,q\)). For a chosen model i.e. fixed \(p,q\) (and \(c_{s}\) accordingly), and a fixed \(Q_{G}\) value at the pivot scale, we have three unknown parameters, \(\lambda,C_{T}\), and \(M\). The variable \(Q_{G}\) is dynamical during inflation. We know that the end of warm inflation is determined from the equation \(\epsilon_{H}=-\frac{\dot{H}}{H^{2}}=1\), which gives
\[\frac{1}{2\zeta_{3}}(Q_{G}^{end})^{-3b}\left(\frac{1}{2q}+Q_{G}^{end}\right)^{ \frac{(6q-5)b+1}{2q}}=1. \tag{39}\]
where \(\zeta_{3}=2\sqrt{\frac{3}{3}}\zeta_{1}^{\frac{1}{2q}}{\zeta_{2}}^{-b}\) and \(Q_{G}^{end}\) is the value of \(Q_{G}\) at the end of inflation. Thus, for a warm inflation model with a fixed \(\lambda\), the above equation gives a relation between \(Q_{G}^{end}\), \(C_{T}\) and \(M\). But, since we also have \(Q_{G}^{end}\) as unknown, we need to evaluate the evolution of \(Q_{G}\) as a function of number of efolds \(N_{e}\) for our warm inflation model. In our notation, we count the number of efolds from the end of inflation, i.e. \(N_{end}=0\). From Eq. (37), we have a relation between \(\phi\) and \(Q_{G}\). On differentiating both sides with \(N_{e}\) and using the fact that \(dN_{e}=-Hdt\), we have \(d\phi/dN_{e}=-\dot{\phi}/H\). On substituting for \(\dot{\phi}\) from Eq. (36), we get
\[\frac{dQ_{G}}{dN_{e}}=-\zeta_{3}(22q+10p+10)\frac{Q_{G}^{3b+1}}{5Q_{G}+3}\left( \frac{1}{2q}+Q_{G}\right)^{-\frac{2q(3b-1)-5b+1}{2q}} \tag{40}\]
where \(b=\frac{p+3q+1}{5p+11q+5}.\) We next integrate this equation from pivot scale till the end of inflation, to obtain \(Q_{G}^{end}\), as
\[\zeta_{3}N_{e}=F(Q_{G}^{end})-F(Q_{G}^{P}) \tag{41}\]
where
\[F(Q_{G})=-\frac{qQ_{G}^{-3b}\left(\frac{1}{2q}+Q_{G}\right)^{\frac{6bq-5b+1}{2q}} \left(5-\frac{{}_{2}F_{1}(1,\frac{1-5b}{2q};1-3b;-2qQ_{G})}{b}\right)}{(5b-1)(5p +11q+5)}.\]
With this relation, for a fixed number of efolds of inflation \(N_{e}=50\) (or \(60\)) and a chosen value of \(\lambda\) and \(Q_{G}\) at the pivot scale (\(Q_{G}^{P}\)), we again obtain the \(Q_{G}\) value at the end of inflation (\(Q_{G}^{end}\)) as a function of \(C_{T}\) and \(M\). Since we have three variables (\(Q_{G}^{end},C_{T},M\)) and two equations (39), (41), we need to provide an additional constraint to know these exactly. We use the normalisation of primordial power spectrum at the pivot scale (\(k_{P}=0.05\) Mpc\({}^{-1}\)) as \(\Delta_{\mathcal{R}}^{2}(k_{P})=2.1\times 10^{-9}\) and therefore obtain the values of all the variables. In this way, we parameterize the power spectrum for our warm Higgs-G model in terms of model parameters and estimate their values compatible with the CMB observations.
### Constraints on warm Higgs-G inflation model from CMB
Here we show the results for different cases of our warm Higgs-G inflation model. We have chosen \(\lambda=0.13\) (the largest value of Higgs self-coupling from quantum field theory) for our study. In all the plots, for each value of \(Q_{G}\) (chosen at the pivot scale), we have a different set of \(C_{T}\) and \(M\) values that satisfy the normalisation condition for the primordial power. We consider the following set of values for \(p\) and \(q\).
**Model I:**\(p=0,q=1\), which effectively represents the Galileon interaction term, \(G=-\frac{\phi X}{M^{4}}\). For these values of \(p,q\), the sound speed corresponds to \(c_{s}=\sqrt{\frac{2}{3}}=0.816\). In this model, we first identify the parameter space of \(Q_{G}\) values, which are consistent with the \(n_{s}\) bounds from the Planck observations. For this, we plot the spectral index \(n_{s}\) versus \(Q_{G}\) value at the pivot scale in Fig. 1(a) for \(50\) and \(60\) efolds of inflation. The colored band in the Figure represents the Planck \(2\sigma\) allowed range for the \(n_{s}\). We obtain that for \(N_{e}=50\), \(10^{-0.98}\leq Q_{G}\leq 10^{-0.135}\) is consistent within the \(n_{s}-2\sigma\) bounds, while for \(N_{e}=60\), the allowed range is \(10^{-0.68}\leq Q_{G}\leq 10^{-0.25}\). We can also see from the Figure that the spectrum is red-tilted (\(n_{s}<1\)) for small \(Q_{G}\) values, while it tends to become blue-tilted (\(n_{s}>1\)) as \(Q_{G}\) increases.
**Model II:**\(p=1,q=1\), which effectively represents the Galileon interaction term, \(G=-\frac{\dot{\phi}^{3}X}{M^{6}}\). For these values of \(p,q\) also, the sound speed \(c_{s}=\sqrt{\frac{2}{3}}\). To estimate the range of \(Q_{G}\) values consistent with the CMB, we plot the \(n_{s}\) versus \(Q_{G}\) value at the pivot scale for \(50\) and \(60\) efolds of inflation in Fig. 1(b). We obtain that for \(N_{e}=50\), \(10^{-1.14}\leq Q_{G}\leq 10^{-0.17}\) is consistent within the \(n_{s}-2\sigma\) bounds, and for \(N_{e}=60\), the allowed range is \(10^{-0.88}\leq Q_{G}\leq 10^{-0.24}\). Also, the power spectrum is red-tilted at the large scales but turns blue tilted for the small scales. This is favorable for PBH production, which we will see in the next section.
### Swampland conjectures and our model
As inflation is described by a low energy effective field theory, it has to obey some criteria, such as the swampland and trans-Planckian censorship conjectures (TCC), in order to embed it in a UV complete theory, as follows [135; 136; 137; 138]:
* **Distance conjecture:** This criteria puts a limit on the scalar field range traversed during inflation, and is given as \[\frac{|\Delta\phi|}{M_{Pl}}\leq a\] (42) where the constant \(a\sim\mathcal{O}(1)\). This implies that small field models of cold inflation are more favorable than the large field models.
* **de Sitter conjecture:** This criteria gives a lower limit on the slope of inflationary potential as \[M_{Pl}\frac{|V_{,\phi}|}{V}\geq b\] (43) where the constant \(b\sim\mathcal{O}(1)\). This condition implies that the steep potentials are favorable for inflationary dynamics, which is not actually supported in the standard cold inflation in a minimal setup.
* **Trans-Planckian Censorship conjecture (TCC):** This criteria demands that the super-Planckian quantum fluctuations never become superhorizon during inflation,
Figure 1: Plots of the spectral index \(n_{s}\) vs. \(Q_{P}\) (equal to the value of \(Q_{G}\) at the pivot scale) for 50 (red) and 60 (blue) efolds of inflation for 1(a) Model I: \(p=0,q=1\), and 1(b) Model II: \(p=1,q=1\). The grey shaded bands represent the \(1\sigma\) and \(2\sigma\) allowed ranges from the Planck observations.
which sets a limit on the scale of inflation as [139] \[V^{1/4}<3\times 10^{-10}M_{Pl}.\] (44) This implies that the energy scale of inflation has to be lower than \(\sim 10^{9}\) GeV to satisfy TCC bound. Such a low energy scale inflation yields a negligible amplitude of primordial gravitational waves and tensor-to-scalar ratio, \(r<2.7\times 10^{-31}\).
It is extremely challenging to satisfy these conjectures in a single-field cold inflation model with a canonical kinetic term and a Bunch Davies vacuum, to embed them in a UV complete theory. However, if one extends to multifield inflation [140] or curvaton models or by choosing different initial vaccuum state [141] or a non-canonical kinetic term, one can construct inflationary models compatible with the swampland distance and de Sitter conjectures [136; 142].
Contrary to the cold inflation, warm inflation is an interesting framework in which these conjectures could be satisfied due to the modified dynamics of inflaton field. There exists a few warm inflationary studies [143; 144; 60; 145; 146; 147] in this context, which show that with a strong dissipation, the energy scale of inflation is sufficiently lowered, satisfying the above conjectures. Here we discuss the status of these conjectures for our warm Higgs-G model. In Fig. 2, we plot the field excursion \(|\Delta\phi|/M_{Pl}\) and the slope of inflationary potential \(M_{Pl}|V_{,\phi}|/V\) for our models with different \(Q_{G}\) value at the pivot scale (denoted by \(Q_{P}\)). The solid (dashed) red and blue lines represent Model I (Model II) for 50 and 60 efolds of inflation, respectively. It is evident from the figure that the swampland distance and de-Sitter conjectures are in agreement with our models, implying that these models lie in the UV complete landscape theories of inflation. Further we plot the scale of inflation \(V^{1/4}/M_{Pl}\) and the tensor-to-scalar
Figure 2: Plots of the field excursion \(|\Delta\phi|\) (in units of \(M_{Pl}\)) and the slope of inflationary potentials \(M_{Pl}\frac{|V_{,\phi}|}{V}\) vs. \(Q_{P}\) for our warm inflation models. The solid (dashed) red and blue lines represent 50 and 60 efolds of inflation with Model I: \(p=0,q=1\) (Model II: \(p=1,q=1\)).
ratio \(r\) for our warm inflation models in Fig. 3. We find that both the TCC and the constraint on the tensor-to-scalar ratio are satisfied for these models with 60 efolds of inflation, however for 50 efolds, there is some parameter range of \(Q_{P}\) which does not satisfactorily obey the bound.
To conclude, we have seen that for some parameter space of our warm inflation models, the CMB constraints on \(n_{s}-r\) are well obeyed. Further, the swampland and TCC conjectures are also in agreement with these models, implying that they lie in the viable landscape theories of inflation. We next study the small scale features of these models in the context of formation of PBH.
## V PBH formation and scalar induced gravitational waves
In the previous Section, we found that in our models, the primordial power spectrum is turning blue-tilted on the small scales and therefore can generate a significant abundance of PBHs, if the amplitude of the fluctuations is large. Further, these overdense perturbations also source the tensor perturbations and lead to secondary gravitational waves. In this Section, we will briefly discuss the PBH formation and the associated GW generation. For more detailed derivation, see Refs. [107; 108; 109; 110; 111].
Figure 3: Plots of the scale of inflation \(V^{1/4}\) (in units of \(M_{Pl}\)) and the tensor-to-scalar ratio \(r\) vs. \(Q_{P}\) for our warm inflation models. The solid (dashed) red and blue lines represent 50 and 60 efolds of inflation with Model I: \(p=0,q=1\) (Model II: \(p=1,q=1\)). The black dot-dashed lines represent the upper bound on both the quantities.
### PBH formation
As mentioned before, PBHs can be produced via the collapse of large inhomogeneities generated during inflation. We consider that the PBHs are generated in the radiation-dominated era when fluctuations reenter the horizon. The mass of the generated PBHs is a fraction, \(\gamma\) of the horizon mass at that epoch and is given as [95]
\[M_{PBH}(k)=\gamma\frac{4\pi}{3}\rho\left.H^{-3}\right|_{k=aH}. \tag{45}\]
where, \(\rho\), \(a\), and \(H\) represents the energy density, scale factor and Hubble expansion rate at the time of PBH formation, respectively. Also, as different comoving fluctuation modes \(k\) reenter the horizon at different epochs, the mass of the generated PBHs are given as,
\[M_{PBH}(k)\simeq 5\times 10^{15}\mathrm{g}\left(\frac{g_{*0}}{g_{*i}} \right)^{1/6}\left(\frac{10^{15}\mathrm{Mpc}^{-1}}{k}\right)^{2} \tag{46}\]
where \(\Omega_{r0}\) is the present day radiation energy density fraction, \(g_{*0}\) and \(g_{*i}\) represent the relativistic degrees of freedom present in the Universe today and at the time of PBH formation, respectively. The fraction of the horizon mass collapsing into the PBHs is taken as \(\gamma=0.2\)[65]. From Eq. (46), we see that \(M_{PBH}\propto k^{-2}\), which suggests that large (small) \(k\) leads to small (large) mass PBHs. Further, an important quantity, known as the initial mass fraction of PBHs, \(\beta(M_{PBH})\) is defined as
\[\beta(M_{PBH})=\frac{\rho_{PBH,i}}{\rho_{total,i}} \tag{47}\]
where, \(\rho_{PBH,i}\) and \(\rho_{total,i}\) are the energy density of PBHs and total energy density of the Universe at the time of PBH formation. Assuming that PBHs are formed in the radiation-dominated era, the expression of \(\beta(M_{PBH})\) can be expressed in the form of present day observables as
\[\beta(M_{PBH})= \frac{\Omega_{PBH,0}(M_{PBH})}{\Omega_{r0}^{3/4}}\left(\frac{g_{ *i}}{g_{*0}}\right)^{1/4}\left(\frac{M_{PBH}}{M_{0}}\right)^{1/2}\gamma^{-1/2 }\ . \tag{48}\]
Here, \(\Omega_{PBH,0}(M_{PBH})=\rho_{PBH,0}/\rho_{crit,0}\) is the present day PBH energy density fraction, with \(\rho_{PBH,0}\) and \(\rho_{crit,0}\) representing the present energy density of PBHs and the critical energy density of the Universe, respectively.
The abundance of PBHs is theoretically obtained by the Press-Schechter theory [148]. For an overdense fluctuation reentering the horizon during a radiation-dominated era with an amplitude above a critical value \(\delta_{c}\), the initial mass fraction of PBHs with mass \(M_{\mathrm{PBH}}\) is given as [148],
\[\beta(M_{PBH})= \frac{2}{\sqrt{2\pi}\sigma(R)}\int_{\delta_{c}}^{1}\exp\left( \frac{-\delta^{2}(R)}{2\sigma^{2}(R)}\right)d\delta(R)=\mathrm{Erfc}\left( \frac{\delta_{c}}{\sqrt{2}\sigma(R)}\right) \tag{49}\]
where, Erfc is the complementary error function, and \(\sigma(R)\) is the mass variance evaluated at the horizon crossing defined as,
\[\sigma^{2}(R)=\int_{0}^{\infty}\tilde{W}^{2}(kR)P_{\delta}(k)\frac{dk}{k} \tag{50}\]
where \(P_{\delta}(k)\) is the matter power spectrum, and \(\tilde{W}(kR)\) is the Fourier transform of the window function. From the Eq. (48), we see that the initial mass fraction of PBH is highly sensitive to the value of \(\delta_{c}\). Therefore, any uncertainity in \(\delta_{c}\) would lead to large discrepancy in \(\beta\). For discussion on this, see review [103]. In our analysis, we assume \(\delta_{c}=0.5\), and window function as a Gaussian, i.e., \(\tilde{W}(kR)=\exp(-k^{2}R^{2}/2)\). The primordial curvature power spectrum \(\Delta_{\mathcal{R}}^{2}(k)\) for the fluctuations generated during the inflation can be related to the density power spectrum \(P_{\delta}(k)\) as
\[P_{\delta}(k)=\frac{4(1+w)^{2}}{(5+3w)^{2}}\left(\frac{k}{aH}\right)^{4} \Delta_{\mathcal{R}}^{2}(k), \tag{51}\]
where \(w\) is the equation of the state of the fluid, equal to \(1/3\) for a radiation-dominated era. Substituting Eq. (50) and (51) in (49), we obtain the theoretical estimate of initial mass fraction for PBHs from our model.
The abundance of PBHs is bounded from above through different observations, e.g. for the evaporating PBHs, the bounds arise from consequences of evaporation and for non-evaporating PBHs, the upper bound on \(\beta\) comes from gravitational lensing, dynamical effects on interaction with astrophysical objects, gravitational wave merger rates, etc. [95; 96; 97; 98]. This in turn then provides upper limit on the primordial power spectrum. It is seen that for a significant PBH abundance, to have any measurable consequences, we require \(\Delta_{\mathcal{R}}^{2}(k)\sim\mathcal{O}(10^{-2})\). Thus, we require a blue-tilted power spectrum with an amplitude of this order for significant PBH generation.
### PBH as dark matter candidate
Furthermore, primordial black holes which have not evaporated completely by now, can constitute the present dark matter abundance. We assume a monochromatic mass function for PBHs. The fraction of dark matter in the PBHs is defined as,
\[f_{PBH}(M_{PBH})=\frac{\Omega_{PBH,0}(M_{PBH})}{\Omega_{CDM,0}} \tag{52}\]
where \(\Omega_{CDM,0}=\rho_{CDM,0}/\rho_{crit,0}\) is the fractional cold dark matter (CDM) density at the present, with \(\rho_{CDM,0}\) representing the present energy density of the CDM. From Eq. (48), we can substitute for \(\Omega_{PBH,0}(M_{PBH})\) and get
\[f_{PBH}(M_{PBH})=\beta(M_{PBH})\frac{\Omega_{r0}^{3/4}}{\Omega_{CDM,0}}\left( \frac{g_{*i}}{g_{*0}}\right)^{-1/4}\left(\frac{M_{PBH}}{M_{0}}\right)^{-1/2} \gamma^{1/2} \tag{53}\]
where \(M_{0}=\frac{4\pi}{3}\rho_{crit,0}\)\(H_{0}^{-3}\approx 4.62\times 10^{22}M_{\odot}\) is the present horizon mass. This expression gives the fractional abundance of primordial black holes in the form of dark matter. As can be seen from Eq. (53), for any mass PBH, the fraction in DM is directly proportional to its initial mass fraction. Thus, a bound on the initial abundance limits the fraction of PBHs in the dark matter. However, for a mass window (\(10^{17}-10^{23}\)) g, the bounds on the initial mass fraction or the fraction of dark matter are not very restrictive. For related discussion, see Ref. [149]. Thus, there is a possibility that the PBHs in the asteroid mass range (\(10^{17}-10^{23}\)) g can constitute the full abundance of dark matter. Hence, PBHs are interesting from the viewpoint of dark matter phenomenology.
### Scalar induced gravitational waves
Now we discuss the gravitational wave spectrum associated with the primordial black holes. As we have discussed above, for PBH production, the amplitude of primordial fluctuations are enhanced at the small scales. Thus, these overdense scalar modes can source the tensor modes at the second order of cosmological perturbation theory and inevitably lead to the generation of scalar induced gravitational waves (SIGW) spectrum [106; 107; 108; 109].
The gravitational wave energy density parameter per logarithmic \(k\) interval is given by [108; 109]:
\[\Omega_{\rm GW}(\eta,k)=\frac{\rho_{\rm GW}(\eta,{\rm k})}{\rho_{\rm tot}( \eta)}=\frac{1}{24}\left(\frac{{\rm k}}{a(\eta)H(\eta)}\right)^{2}\overline{P_ {h}(\eta,{\rm k})} \tag{54}\]
where \(\rho_{\rm tot}(\eta)\) is the total energy density and \(\overline{P_{h}(\eta,{\rm k})}\) is the dimensionless tensor power spectrum averaged over time given by [108; 109]
\[\overline{\mathcal{P}_{h}(\eta,k)}=4\int\limits_{0}^{\infty}dv\int\limits_{|1- v|}^{|1+v|}du\left[\frac{4v^{2}-(1+v^{2}-u^{2})^{2}}{4vu}\right]^{2}\overline{ \mathrm{I}^{2}(v,u,x)}P_{\zeta}(kv)P_{\zeta}(ku) \tag{55}\]
where \(x\equiv k\eta\). In the late time limit \(x\rightarrow\infty\), the function \(\overline{\mathrm{I}^{2}(v,u,x)}\) is obtained as [109]
\[\overline{\mathrm{I}^{2}(v,u,x\rightarrow\infty)}=\frac{9}{2x^{2 }}\left(\frac{u^{2}+v^{2}-3}{4u^{3}v^{3}}\right)^{2}\times\] \[\bigg{[}\left(-3uv+(u^{2}+v^{2}-3)\log\left|\frac{3-(u+v)^{2}}{3- (u-v)^{2}}\right|\right)^{2}+\pi^{2}(u^{2}+v^{2}-3)^{2}\theta(u+v-\sqrt{3}) \bigg{]}. \tag{56}\]
Having obtained the expression for gravitational wave energy density in Eq. (54), the observational relevant quantity, the present energy spectrum of secondary gravitational waves, \(\Omega_{\rm GW,0}(k)\) is estimated as [112]
\[\Omega_{\rm GW,0}(k)=0.39\left(\frac{g_{*}}{106.75}\right)^{-\frac{1}{3}} \Omega_{r,0}\ \Omega_{\rm GW}(\eta_{c},k) \tag{57}\]
where, \(g_{\star}\) is the effective number of relativistic degree of freedom in the radiation-dominated era and \(\Omega_{r,0}\) is the present radiation energy density. Here, \(\eta_{c}\) represents the conformal time at an epoch when a perturbation is inside the horizon after re-entry during radiation dominated era. To estimate \(\Omega_{\text{GW},0}(k)\) as of frequency, we replace \(k\) with frequency, \(f\) from the relation
\[f=\frac{k}{2\pi}=1.5\times 10^{-15}\left(\frac{k}{1\text{ Mpc}^{-1}}\right)\text{Hz}. \tag{58}\]
## VI Results
After describing the basic formalism of PBH formation and the generation of secondary gravitational waves, we now discuss the results obtained for our warm Higgs-G inflation model. As discussed before, we will work with the parameter space for \(Q_{G}\) value at the pivot scale (denoted by \(Q_{P}\)) (and the corresponding \(C_{T}\), \(M\) values) consistent with the CMB observations.
### Primordial curvature power spectrum
Using the parameterization of our warm Higgs-G model, as carried out in Section IV, we evolve \(Q_{G}\) as a function of the comoving scale \(k\) or number of efolds \(N_{e}\), and obtain the evolution of the primordial power spectrum. Here we show the results of our calculations.
We plot the primordial curvature power spectrum as a function of \(k\) in Fig. 4 and Fig. 5 for Model I (\(p=0,q=1\)) and Model II (\(p=1,q=1\)), respectively. In these figures, we show the results for both 50 and 60 efolds of inflation. Also, we show the observational constraints on the power spectrum obtained from \(\mu-\)distortion, PTA and SKA, taken from Ref. [150]. The cyan shaded region represents the excluded region from PBH overproduction. The black solid line in these plots represents the standard power-law form of primordial power spectrum, \(\Delta_{\mathcal{R}}^{2}(k)=A_{s}\left(\frac{k}{k_{p}}\right)^{n_{s}-1}\), which is red-tilted (\(n_{s}<1\)) at all the scales, thus cannot form PBHs. Whereas, in both Fig. 4 and Fig. 5, we can see that the spectrum for warm inflation is red-tilted at the CMB scales, and it turns blue-tilted (\(n_{s}>1\)) at the small scales with a huge enhancement in the amplitude. As we increase the value of \(Q_{P}\), we see that the power spectrum rises steeper and attains the value \(\sim 10^{-2}\) at comparatively smaller \(k\) value. As discussed before, PBHs of an observationally significant abundance are formed when the amplitude of the primordial power spectrum is of the order of \(\Delta_{\mathcal{R}}^{2}(k)\sim\mathcal{O}(10^{-2})\).
We see from Fig. 4, for the case of 50 efolds of Model I (\(p=0,q=1\)), this is achieved for \(1.37\times 10^{12}<k<1.5\times 10^{15}\) Mpc\({}^{-1}\), while for 60 efolds, as shown in Fig. 4, this condition is achieved for \(2.2\times 10^{15}<k<1.78\times 10^{17}\) Mpc\({}^{-1}\). Similarly, from Fig. 5 for Model II (\(p=1,q=1\)), we find that for 50 efolds, the amplitude of primordial power spectrum is of
order \(10^{-2}\) for scales2\(1.14\times 10^{10}<k<1.46\times 10^{15}\) Mpc\({}^{-1}\). For 60 efolds, we have a sufficient amplitude of power on the scales \(4.2\times 10^{12}<k<4.8\times 10^{15}\) Mpc\({}^{-1}\), as shown in Fig. 5(b).
Footnote 2: For our further calculations, we take these cutoff values of comoving wavenumbers at which primordial power spectrum \(\sim\mathcal{O}(10^{-2})\). Similar approach was also taken in Ref. [89].
We would like to point out that in a canonical setup, the \(\lambda\phi^{4}\) warm inflation model with linear dissipation coefficient \(\Upsilon\propto T\) does not lead to a sufficient enhancement of the primordial power, while a cubic dissipation \(\Upsilon\propto T^{3}\) could lead to PBH generation [110]. However, in the non-canonical model, even the linear dissipation coefficient is sufficient to cause enough enhancement in the power spectrum and leads to PBH formation. Also as discussed before, the analytical expression for primordial power spectrum given in Eq. (25) could be studied more carefully, considering the effects of shear viscosity in the radiation fluid. Further, its range of validity has to be examined with the numerical solutions of the coupled stochastic inflaton-radiation system, as done in Refs. ([112; 114; 131]). It would be further interesting to construct model similar to Ref. [114] in this setup of warm G-inflation, so as to restrict the inflaton dissipation and the growth of primordial power spectrum. We will address these issues in future works.
Figure 4: The plots of primordial curvature power spectrum as a function of comoving scale \(k\) for the different values of \(Q_{G}\) allowed by CMB for Model I (\(p=0,q=1\)). Fig. 4(a) and Fig. 4(b) correspond to 50 and 60 efolds of inflation, respectively. The black solid line represents the usually considered power-law primordial power spectrum. The observational constraints from \(\mu-\)distortion, PTA and SKA are taken from Ref. [150]. The cyan shaded region represents the excluded region from PBH overproduction.
### Initial mass fraction of PBHs
Till now, we have identified the range of comoving scales for our warm inflation models which will be of interest to us. When these modes reenter the horizon during the radiation-dominated era, they collapse into PBHs. We now show the results for the mass and abundance of the generated PBHs from our models, using the formalism discussed in Section V. We plot the initial mass fraction, \(\beta^{\prime}(M_{PBH})\) (defined equal to \(\gamma^{1/2}\left(\frac{g_{*}}{106.75}\right)^{-1/4}\left(\frac{h}{0.67}\right) ^{-2}\beta(M_{PBH})\)) versus \(M_{PBH}\) for our WI models in Fig. 6 and Fig. 7 corresponding to Model I (\(p=0,q=1\)) and Model II (\(p=0,q=1\)), respectively. We infer from these plots that the larger is the \(Q_{G}\) value, the more massive are the PBHs. This is because for larger \(Q_{G}\), the growth of primordial power is steeper and the large amplitude is attained at a smaller \(k\) value. As \(M_{PBH}\) is inversely proportional to \(k\), this implies a more massive PBH generation. For a review on the observational constraints on \(\beta^{\prime}(M_{PBH})\), see Ref. [103].
We can see from Fig. 6 that for 50 efolds of inflation of Model I, PBHs over a mass range \(9.3\times 10^{14}\) g \(<M_{PBH}<5.6\times 10^{20}\) g can be generated, corresponding to the scales quoted in the previous subsection. Some part of this range lies in the interesting asteroid mass window, thus leading to a possibility of explaining the full dark matter abundance. For 60 efolds of inflation of Model I, the mass of generated PBHs varies from \(9.5\times 10^{10}\) g \(<M_{PBH}<4\times 10^{14}\)
Figure 5: The plots of primordial curvature power spectrum as a function of \(k\) for the different values of \(Q_{G}\) allowed by CMB for Model II (\(p=1,q=1\)). Fig. 5 and Fig. 5 correspond to 50 and 60 efolds of inflation, respectively. The black solid line represents the usually considered power-law primordial power spectrum. The observational constraints from \(\mu-\)distortion, PTA and SKA are taken from Ref. [150]. The cyan shaded region represents the excluded region from PBH overproduction.
g, as shown in Fig. 6. These PBHs would have evaporated into Hawking radiation by today and are thus constrained through the PBH evaporation bounds.
Similarly, in Fig. 7, we plot \(\beta^{\prime}(M_{PBH})\) versus \(M_{PBH}\) using Model II (\(p=1,q=1\)) for both 50 and 60 efolds of inflation. We find that for 50 efolds, shown in Fig. 7, PBHs over a mass range \(1.2\times 10^{15}\) g \(<M_{PBH}<2\times 10^{25}\) g can be generated, while for 60 efolds of inflation, mass of the produced PBHs lies between \(1\times 10^{14}\) g \(<M_{PBH}<9.8\times 10^{19}\) g, as can be seen in Fig. 7. The PBHs are also constrained through the PBH evaporation bounds.
Figure 6: Plots of \(\beta^{\prime}(M_{PBH})\) versus \(M_{PBH}\) using allowed parameter space of \(Q_{G}\) for 6(a): 50 efolds of inflation, and 6(b): 60 efolds of inflation in Model I (\(p=0,q=1\)). The different observational constraints are taken from Ref. [103]. Here the acronyms stand for: BBN, CMB, extragalactic \(\gamma\)-ray background (EGB), galactic \(\gamma\)-ray background (GGB), cosmic ray (CR), DM, gravitational lensing (GL). The constraints here are for monochromatic mass function of PBHs.
Figure 7: Plots of \(\beta^{\prime}(M_{PBH})\) versus \(M_{PBH}\) using allowed parameter space of \(Q_{G}\) for 7(a): 50 efolds of inflation, and 7(b): 60 efolds of inflation in Model II (\(p=1,q=1\)). The different observational constraints are taken from Ref. [103]. The acronyms are the same as in the previous figure.
be seen in Fig. 7(b). Therefore, we emphasize that in both the scenarios, our model predicts a possibility that PBHs might constitute the total dark matter abundance.
### PBHs as dark matter
PBHs can contribute significantly or total fraction of the present day DM energy density in certain mass ranges. Here we explore this possibility for our warm inflation models. For this, we calculate and plot the fraction of DM in PBHs, \(f_{PBH}\) as a function of PBH mass in Fig. 8 and Fig. 9 using the formalism in Section V. In these plots, we focus on the PBHs mass range greater than \(\sim 10^{15}\) g, because the smaller mass PBHs would have evaporated through Hawking radiation and therefore will not contribute much to the DM. The PBHs can explain the full DM when \(f_{PBH}=1\). From Fig. (8) we see that in Model I (\(p=0,q=1\)) with 50 efolds of inflation, there is a production of the asteroid mass range PBHs which can explain the full DM abundance. We further point out that this model also predicts smaller mass PBHs, with low DM fractional abundance.
Similarly, Fig. 9(a) indicates that Model II (\(p=1,q=1\)) with 50 efolds also generates asteroid mass range PBHs that can constitute the full DM abundance. Along with that, this model also leads to comparably higher and lower mass PBHs, and are constrained through microlensing or evaporation observations, respectively. Further, from Fig. 9(b), we infer that for 60 efolds, this model also leads to a possibility of production of PBH DM in the asteroid mass range. This model also predicts smaller mass PBHs which do not contribute significantly to the DM, but are consistent with the evaporation bounds.
Figure 8: Fraction of DM in PBHs, \(f_{PBH}\) vs. \(M_{PBH}\) using the allowed parameter space of \(Q_{G}\) for 50 efolds of inflation in Model I (\(p=0,q=1\)). The evaporation and gravitational lensing constraints are taken from Ref. [104]. The maximum fraction in PBHs is bounded from above as \(f_{PBH}\leq 1\).
### Spectrum of scalar induced gravitational waves
The large density fluctuations needed for PBH formation inevitably leads to secondary gravitational wave generation, as discussed before. From Eqs. (46) and (58), we see that the peak GW frequency inversely depends on the PBHs mass. The larger is the mass of PBH, the smaller is the associated peak GW frequency. To explore the spectrum of scalar-induced gravitational waves, we plot \(\Omega_{GW}h^{2}\) as a function of frequency in Fig. 10 and Fig. 11 corresponding to the Model I (\(p=0,q=1\)) and Model II (\(p=1,q=1\)), respectively. The sensitivity plots for future GW detectors are taken from Ref. [109].
From Fig. 10(a), we see that Model I (\(p=0,q=1\)) with 50 efolds of inflation produces \(9.3\times 10^{14}\) g \(<M_{PBH}<5.6\times 10^{20}\) g mass PBHs, and generates a GW of peak frequency lying between \(\sim(10^{-2}\ -10)\) Hz. These can be detected in the future GW detectors such as LISA, BBO, DECIGO, ET, and CE. Further, the same model with 60 efolds of inflation produces comparably lower PBH mass range, \(9.5\times 10^{10}\) g \(<M_{PBH}<4\times 10^{14}\) g, and generate comparably higher peak frequency of GW lying between \(\sim(10-500)\) Hz, as can be seen in Fig. 10(b). These can be detected in the future GW detector sensitive to high frequency such as ET and CE.
Similarly, Fig. 11(a) suggests that Model II (\(p=1,q=1\)) with 50 efolds of inflation produces \(1.2\times 10^{15}\) g \(<M_{PBH}<2\times 10^{25}\) g mass PBHs and generates GW of peak frequencies lying between \(\sim(10^{-4}-10)\) Hz. These can be detected in GW observations such as LISA, BBO, DECIGO, and ET. Further, the same model with 60 efolds of inflation produces PBH mass range, \(1\times 10^{14}\) g \(<M_{PBH}<9.8\times 10^{19}\) g, which generate peak frequency of GW lying between \(\sim(0.1-30)\) Hz, as shown in Fig. 11(b). These can also be detected in the future
Figure 9: Fraction of dark matter in PBHs, \(f_{PBH}\) versus \(M_{PBH}\) using the allowed parameter space of \(Q_{G}\) for 9(a): 50 efolds of inflation, and 9(b): 60 efolds of inflation in Model II (\(p=1,q=1\)). The evaporation and gravitational lensing constraints are taken from Ref. [104].
GW detectors such as LISA, BBO, DECIGO, ET, and CE, and thus used to test these models of inflation.
Figure 11: Plot of the gravitational wave energy density, \(\Omega_{GW}h^{2}\) induced by the scalar perturbations as a function of frequency for the allowed parameter space of \(Q_{G}\) for 11(a): 50 efolds of inflation, and 11(b): 60 efolds of inflation, for Model II (\(p=1,q=1\)). The sensitivity plots for future GW detectors are taken from Ref. [109].
Figure 10: Plot of the gravitational wave energy density, \(\Omega_{GW}h^{2}\) induced by the scalar perturbations as a function of frequency for the allowed parameter space of \(Q_{G}\) for 10(a): 50 efolds of inflation, and 10(b): 60 efolds of inflation, for Model I (\(p=0,q=1\)). The sensitivity plots for future GW detectors are taken from Ref. [109]. The constraints here are for monochromatic mass function of PBHs.
## VII Summary and discussion
The inflationary paradigm of the early Universe has been extremely successful in explaining various cosmological observations, however, the underlying particle physics model to describe this accelerating phase is not clearly known. The present demand of model building is not only to construct viable models with a physical motivation and interesting phenomenology but also successfully embed them in a UV complete theory. The framework of warm inflation is a general and well-motivated description of inflation wherein the dissipative processes in a coupled inflaton-radiation system drive the evolution of the early Universe. The inflaton dissipates its energy into radiation fields, as a result of which there is a non-zero temperature in the Universe throughout the inflationary phase. The inflaton background evolution as well as its perturbations are modified due to the presence of the thermal bath. Thus, warm inflation leads to unique signatures on the large and small scale observations, and hence important to study.
In this paper, we have studied the scenario of warm Higgs-G inflation, wherein the Standard Model Higgs boson plays the role of the inflaton, and has a Galileon-like non-linear kinetic term, which contributes as an additional frictional term in its evolution. For the quartic Higgs potential and a dissipation coefficient linear in temperature, we have studied two different cases of warm Higgs-G inflation models, with \(G(\phi,X)\propto\phi^{2p+1}X^{q}\) and parameter sets (\(p=0,q=1\)) and (\(p=1,q=1\)). For a wide range of other parameters, we found that these models are compatible with the CMB Planck observations at large scales. Moreover, contrary to the usual cold inflation, we have shown that these scenarios are consistent with the swampland and the TCC conjectures, implying that they lie in the viable landscape of UV complete theories. In these scenarios, the non-linear kinetic term plays an important role, and we found that when it dominates, the background dynamics of the inflaton and the evolution of small scale perturbations are modified. This leads to a blue-tilted spectrum with an enhanced amplitude at small scales, inducing the generation of PBHs and their associated observational imprints, such as the induced GWs.
In our analysis, we found that PBHs over a wide mass range are generated in these models, in particular, in the asteroid mass range (\(10^{17}-10^{23}\)) g, which can explain the total dark matter abundance at present. This particular mass range is interesting as it remains unconstrained from observations, and therefore, PBHs produced in any scenario can be the entire dark matter only in this mass range, while in other mass ranges, they can, at most, be a fraction. We have further calculated the secondary scalar induced GWs sourced by these small scale overdense fluctuations in our set-up and found that the induced GW spectrum can be detected in future GW detectors, such as LISA, BBO, DECIGO, ET and CE. In particular, for some GWs spectra, there also exists an interesting possibility of their simultaneous detection with many GW observatories. We conclude that warm inflationary models have interesting prospective
signatures, and the forthcoming cosmological probes would be very useful for testing these models.
Furthermore, it is well known that primordial non-Gaussianities strongly affect the abundance of PBHs as they form from the tail of the density distribution. Studies show that in warm inflation, the amplitude and shapes of non-Gaussianities are very different in the strong and weak dissipative regimes. It will therefore be interesting to include and understand the effects of such non-Gaussianities in the warm inflationary models for the calculation of PBH abundance as well as for the induced GW background. We will investigate these interesting aspects in future works.
## Acknowledgements
RA acknowledges the support from the National Post-Doctoral Fellowship by the Science and Engineering Research Board (SERB), Department of Science and Technology (DST), Government of India (GOI)(PDF/2021/004792). The work of AKM is supported through Ramanujan Fellowship (PI: Dr. Diptimoy Ghosh) offered by the DST, GOI (SB/S2/RJN-088/2018). RKJ acknowledges financial support from the new faculty seed start-up grant of the Indian Institute of Science, Bengaluru, India, SERB, DST, GOI, through the Core Research Grant CRG/2018/002200 and the Infosys Foundation, Bengaluru, India through the Infosys Young Investigator Award.
|
2308.03696 | Universal shot-noise limit for quantum metrology with local Hamiltonians | Quantum many-body interactions can induce quantum entanglement among
particles, rendering them valuable resources for quantum-enhanced sensing. In
this work, we derive a universal and fundamental bound for the growth of the
quantum Fisher information. We apply our bound to the metrological protocol
requiring only separable initial states, which can be readily prepared in
experiments. By establishing a link between our bound and the Lieb-Robinson
bound, which characterizes the operator growth in locally interacting quantum
many-body systems, we prove that the precision cannot surpass the shot noise
limit at all times in locally interacting quantum systems. This conclusion also
holds for an initial state that is the non-degenerate ground state of a local
and gapped Hamiltonian. These findings strongly hint that when one can only
prepare separable initial states, nonlocal and long-range interactions are
essential resources for surpassing the shot noise limit. This observation is
confirmed through numerical analysis on the long-range Ising model. Our results
bridge the field of many-body quantum sensing and operator growth in many-body
quantum systems and open the possibility to investigate the interplay between
quantum sensing and control, many-body physics and information scrambling | Hai-Long Shi, Xi-Wen Guan, Jing Yang | 2023-08-07T16:13:01Z | http://arxiv.org/abs/2308.03696v2 | # Universal shot-noise limit for quantum metrology with local Hamiltonians
###### Abstract
Quantum many-body interactions can induce quantum entanglement among particles, rendering them valuable resources for quantum-enhanced sensing. In this work, we derive a universal and fundamental bound for the growth of the quantum Fisher information. We apply our bound to the metrological protocol requiring only separable initial states, which can be readily prepared in experiments. By establishing a link between our bound and the Lieb-Robinson bound, which characterizes the operator growth in locally interacting quantum many-body systems, we prove that the precision cannot surpass the shot noise limit at all times in locally interacting quantum systems. This conclusion also holds for an initial state that is the non-degenerate ground state of a local and gapped Hamiltonian. These findings strongly hint that when one can only prepare separable initial states, nonlocal and long-range interactions are essential resources for surpassing the shot noise limit. This observation is confirmed through numerical analysis on the long-range Ising model. Our results bridge the field of many-body quantum sensing and operator growth in many-body quantum systems and open the possibility to investigate the interplay between quantum sensing and control, many-body physics and information scrambling.
_Introduction.--_Quantum entanglement is a valuable resource in quantum information processing. In quantum metrology, quantum Fisher information (QFI) [1; 2; 3; 4; 5; 6], quantifying the precision of the sensing parameter, scales linearly with the number of probes if the probes are uncorrelated, known as the shot noise limit (SNL), also known as the standard quantum limit, which also appears in sensing with classical resources. Quantum entanglement can lead to a quadratic scaling known as the Heisenberg limit (HL) or even beyond the quadratic scaling, i.e., the so-called super-HL. Entanglement can be utilized in two ways, either in the stage of state preparation [7; 8; 9; 10; 11; 12] or in the stage of signal sensing via the many-body interactions between individual sensors [13; 14; 15; 16; 17], which is the main essence of many-body quantum metrology. Recently, the subject matter has gained renewed interest. However, the existing protocols require to prepare the initial state in the highly entangled Greenberger-Horne-Zeilinger (GHZ)-like states, whose preparation is very challenging and time-consuming. One natural way to address this issue is to combine the protocols of quantum state preparation and quantum metrology, see e.g., Ref. [18; 19; 20], where an entangled initial state is prepared before the sensing process. Nevertheless, to evaluate the time cost of preparing a highly entangled state from separable states but taking into account the restrictions of the accessible Hamiltonians can be very challenging [21; 22; 23].
As such, as an alternative way to get around the overhead of quantum state preparation, in this work we propose to prepare the probes or sensors initially in a separable state, which can be prepared with the current experimentally feasible technology [24; 25; 26]. In our protocol shown in Fig. 1(a), entanglement is established in the signal sensing process due to the interactions in the many-body sensing Hamiltonian. It is in sharp contrast with the protocol in Ref. [18], also shown in Fig. 1(b), where the entangled initial state is explicitly prepared through the time evolution generated by a locally interacting preparation Hamiltonian while the sensing Hamiltonian is non-interacting. As we have alluded to earlier, the time cost to prepare the entangled initial state in the protocol is difficult to estimate.
It is well known in the literature that when the initial states are restricted to separable states, for a non-interacting sensing Hamiltonian that is multiplicative with respective to the estimation parameter, the precision is limited by SNL [7; 8; 9].
Figure 1: Comparison between our protocol (a) with the protocol in Ref. [18] (b). In our protocol (a), the information of the estimation parameter is encoded into the many-body quantum states through the many-body dynamics \(U_{\lambda}(t)=e^{-i(\lambda\cdot\sum_{\lambda}t_{\lambda})t_{\lambda}t_{\lambda}}\) while in Ref. [18], the encoding dynamics given by \(U_{\lambda}=e^{-i\lambda\sum_{\lambda}t_{\lambda}}\) with \(X_{j}=\{j\}\). In our protocol, the initial state is chosen to be either a separable state or the non-degenerate ground states of a gapped and local Hamiltonian while in Ref. [18] the initial state is prepared through the many-body dynamics \(U_{0}(t)\).
In our protocol, due to the many-body interactions, the state can become entangled after the sensing process. The central question we ask is whether many-body interactions can break the SNL? It is also intimately related to recent studies on operator growth and quantum chaos in quantum many-body systems [27; 28; 29; 30; 31].
To answer this question, we derive a universal bound for the growth of the quantum Fisher information in time. Our bound can characterize the role of quantum entanglement in information scrambling, operator growth, and quantum chaos. We apply our bound to the quantum sensing protocol with time-independent many-body Hamiltonians as shown in Fig. 1(a) and estimate our bound using the celebrated Lieb-Robinson bound [32; 33; 34; 35] for quantum many-body systems with local interactions. We find that it is impossible to surpass the SNL with local interactions. Such observation is not only valid for separable initial states, but also holds when the initial state is the non-degenerate ground state of a locally gapped Hamiltonian, which can be experimentally prepared by cooling. Therefore, if only separable states are accessible in experiments, nonlocal or long-range interactions are essential to beat the SNL and bring real quantum advantage in many-body quantum metrology. We exemplify our findings in magnetometry with the short-range transverse-field Ising (TFI) model, the chaotic Ising (CI) model, and the long-range Ising (LRI) model.
_Universal bound on the growth of the QFI._ --We consider the following sensing Hamiltonian
\[H_{\lambda}(t)=H_{0\lambda}(t)+H_{1}(t), \tag{1}\]
where \(\lambda\) represents the estimation parameter, and \(H_{1}(t)\) involves interactions among sensors induced by either intrinsic interactions or external coherent controls. In the formal case, \(H_{1}\) is usually time-independent, while in the later case, \(H_{1}(t)\) becomes time-dependent. The generator for the quantum sensing \(\lambda\)[36; 13] is given by
\[G(t)=\int_{0}^{t}[\partial_{\lambda}H_{\lambda}(\tau)]^{(\mathrm{H})}\,d\tau, \tag{2}\]
where an operator in the Heisenberg picture is defined as \(\mathcal{O}^{(\mathrm{H})}(t)=U^{\dagger}(t)\mathcal{O}^{(\mathrm{S})}(t)U(t)\). The QFI is determined by the variance of \(G(t)\) over the initial state denoted as \(|\psi_{0}\rangle\), i.e.,
\[I(t)=4\mathrm{Var}[G(t)]_{|\psi_{0}\rangle}. \tag{3}\]
Optimal control theory has been proposed to simultaneously optimize the initial state \(|\psi_{0}\rangle\) and \(H_{1}(t)\), resulting in a bound \(I(t)\leq 4\left(\int_{0}^{t}\|\partial_{\lambda}H_{\lambda}^{(\mathrm{S})}( \tau)\|d\tau\right)^{2}\)[13; 15; 37; 38]. Here, the semi-norm \(\|\cdot\|\) is defined by the spectrum width of an operator, i.e., the difference between its maximum eigenvalue and minimum eigenvalue.
Eq. (3) hints that \(\dot{G}(t)=[\partial_{\lambda}H_{\lambda}(t)]^{(\mathrm{H})}\) can characterize the growth of quantum Fisher information qualitatively. In this regard, we derive a universal bound [39]:
\[\frac{d\sqrt{I(t)}}{dt}\leq\Gamma(t)\equiv 2\sqrt{\mathrm{Var}\left([ \partial_{\lambda}H_{\lambda}(t)]^{(\mathrm{H})}\right)_{|\psi_{0}\rangle}}. \tag{4}\]
The saturation condition can be found in the Supplemental Materials [39]. Alternatively, one can rewrite
\[\Gamma(t)=2\sqrt{\mathrm{Var}\left(\partial_{\lambda}H_{\lambda}^{(\mathrm{S}) }(t)\right)_{|\psi(t)\rangle}}, \tag{5}\]
where \(|\psi(t)\rangle=U(t)\,|\psi_{0}\rangle\).
A few comments in order: First, Eq. (4) is our first main result, which holds universally for all initial states, both time-independent and driven quantum systems. Second, \(\Gamma(t)\) depends on the control Hamiltonian \(H_{1}(t)\) and the initial state \(|\psi_{0}\rangle\). Optimizing \(\Gamma(t)\) over all possible unitary dynamics and initial states lead to
\[\Gamma(t)\leq 2\|\partial_{\lambda}H_{\lambda}^{(\mathrm{S})}(t)\|. \tag{6}\]
By combining this bound with \(I(t)\leq\left(\int_{0}^{t}\Gamma(\tau)d\tau\right)^{2}\), which can be obtained by integrating both sides of Eq. (4), one immediately reobtains the bound given in previous works [37; 15; 38]. Compared to these studies, our bound (4) provides a feasible approach to study the scaling behavior of the QFI when the initial state \(|\psi_{0}\rangle\) is restricted to a specific set of initial states.
_SNL for short-range local interactions._--Here we will demonstrate that our bound, characterizing the growth of QFI, is closely linked to the Lieb-Robinson bound, which characterizes operator complexity in quantum many-body with short-range local interactions. We consider time-independent many-body Hamiltonian of the following form:
\[H_{\lambda}=\lambda\sum_{i=1}^{N}h_{X_{i}}+H_{1}, \tag{7}\]
where \(h_{X_{i}}\) is supported on the set \(X_{i}\) with cardinality \(|X_{i}|=R\) and diameter \(\mathrm{diam}(X_{i})=\max_{i,R,X_{i}}|k-l|\) and \(H_{1}\) representing the interactions between the spins. We require \(H_{\lambda}\) only contains local and short-range interactions, imposing that \(\mathrm{diam}(X_{j})\) is independent of \(N\)[33; 35; 40] and \(h_{X_{j}}\) is a local operator. Equation (7) is the model used in magnetometry, where \(\lambda\) represents the magnetic field [41]. We note that the bound (4) can be written in terms of dynamic correlation matrices of local operators, i.e.,
\[\Gamma(t)=2\sqrt{\sum_{jk}\mathrm{Cov}[h_{X_{j}}^{(\mathrm{H})}(t)h_{X_{k}}^{( \mathrm{H})}(t)]_{|\psi_{0}\rangle}}, \tag{8}\]
where \(\mathrm{Cov}[AB]_{|\psi_{0}\rangle}\equiv\frac{1}{2}\langle\{A,\ B\}|-\langle A \rangle\langle B\rangle\) and
\[h_{X_{i}}^{(\mathrm{H})}(t)\equiv e^{\mathrm{i}H_{i}t}h_{X_{i}}e^{-\mathrm{i}H _{i}t}. \tag{9}\]
As indicated by Eq. (6), we observe that \(\Gamma(t)\leq 2N\), implying \(I(t)\leq 4N^{2}t^{2}\)[36; 37]. Such an HL can be saturated by utilizing GHZ-like initial states [13; 15; 16], while also ensuring that \(H_{1}\) commutes with \(\sum_{i=1}^{N}h_{X_{i}}\). As we have mentioned before, it is challenging to prepare GHZ state experimentally and estimate the corresponding time cost theoretically.
In general, the many-body interaction \(H_{1}\) can generate the entanglement between the probes except that \(H_{1}\) commutes
with \(\sum_{i=1}^{N}h_{X_{i}}\), where the states will remain separable throughout the process of signal sensing. In this special case, the precision remains at SNL at all times, i.e., \(I(t)=4N\bar{r}^{2}\). Generically, \(H_{1}\) does not commute with \(\sum_{i=1}^{N}h_{X_{i}}\), Thus one naturally ask the question: For separable initial states, what is the tight bound that limits the precision? Is it possible to surpass the SNL using many-body interactions?
To address this question, we analyze the scaling of \(\Gamma(t)\) with respect to \(N\) with the Lieb-Robinson bound for local Hamiltonians [32, 33, 34, 35], which imposes a restriction on the connected correlation functions. Specifically, if the sensing Hamiltonian (7) only contains local or short-range interactions, the static correlation \(\mathrm{Cov}[h_{X_{j}}h_{X_{k}}]_{\ket{\psi_{0}}}\) between two disjoint local operators \(h_{X_{j}}\) and \(h_{X_{k}}\) decays exponentially, provided the initial state \(\ket{\psi_{0}}\) is separable or the non-degenerate ground state of some local and gapped Hamiltonians, not necessarily the same as Eq. (7). In this case, the dynamic correlation function also decays exponentially,
\[\left|\mathrm{Cov}[h_{X_{j}}^{(\mathrm{H})}(t)h_{X_{k}}^{(\mathrm{H})}(t)]_{ \ket{\psi_{0}}}\right|\leq\mathcal{C}\exp(-[d(X_{j},\,X_{k})-v_{\mathrm{LR}}t ]/\xi), \tag{10}\]
where \(\mathcal{C}\) and \(\xi\) are constants that solely depend on the topology of the sites, \(d(X_{j},\,X_{k})\) is the distance between \(X_{j}\) and \(X_{k}\), and \(v_{\mathrm{LR}}\) is the celebrated Lieb-Robinson velocity. Using Eq. (10), we demonstrate that the scaling of \(\Gamma(t)\) is lower bounded by \(\sqrt{N}\)[39]:
\[\Gamma(t)\leq 2\gamma(t)\sqrt{N}, \tag{11}\]
where \(\gamma(t)\) is only a function of time and independent of \(N\). It remains finite as long as \(t\) is finite and behaves as \(e^{\imath\pi t/\xi}\) as \(t\to\infty\).
Eq. (11) is the main result of this work. Clearly, for finite but fixed times it implies that the QFI is limited by SNL. On the other hand, at sufficiently long times, for time-independent systems, one can show that \(I(t)/t^{2}\) is independent of time [39, 42] and is only a function of \(N\). We further show that in [39] the time scale to reach this regime corresponds to the case where \(t\) is much larger than the inverse of the minimum energy gap for the system. In this regime, when \(N\) is large, \(I(t)\sim t^{2}N^{\alpha}\). Since Eq. (11) is valid for all times and all \(N\), combined with Eq. (4) we conclude \(\alpha\leq 1\). Therefore, in local short-range models where operator growth is constrained by the Lieb-Robinson bound, SNL cannot be surpassed.
_The spread of the generator of the metrological bound. --_ We can characterize the structure \([\partial_{\lambda}H_{\lambda}(t)]^{(\mathrm{H})}\), the generator of the bound (4). We note that although \(h_{X_{i}}^{(\mathrm{H})}(t)\) spreads over the lattice, the metrological generator \([\partial_{\lambda}H_{\lambda}(t)]^{(\mathrm{H})}\), being a sum of these non-local operators, may still remain local as a whole, thus keeping the precision limited to the SNL. This observation provides an alternative perspective to understand the SNL for separable initial states in local or short-range models. A trivial example is when \(H_{1}\) commutes with \(\sum_{i}h_{X_{i}}\) while \(H_{\lambda}\) does not commute with each individual \(h_{X_{i}}\), in which case \(\Gamma(t)\) remains the SNL.
Now we present a non-trivial example: we assume \([\partial_{\lambda}H_{\lambda}(t)]^{(\mathrm{H})}\) can be expanded in terms of two-body basis operators
\[[\partial_{\lambda}H_{\lambda}(t)]^{(\mathrm{H})}=\sum_{i=1}^{N}\sum_{j\geq i }^{N}\sum_{\alpha}\eta_{ij}^{\alpha}\mathcal{O}_{ij}^{\alpha} \tag{12}\]
where we have suppressed the time dependence for simplicity and for spin systems \(\mathcal{O}_{ij}^{\alpha}\) is a basis spin operator, such as \(\sigma_{i}^{x}\sigma_{j}^{y}\) while for fermionic systems \(\mathcal{O}_{ij}^{\alpha}\) is a Hermitian basis fermionic operator, such as \(c_{i}^{\dagger}c_{j}\)+h.c.or \(\{c_{i}^{\dagger}c_{j}\)-h.c.\(\}\). It should be emphasized that the number of different types of operators indexed by \(\alpha\) are finite and does not scale with \(N\). If the initial state is separable and \([\partial_{\lambda}H_{\lambda}(t)]^{(\mathrm{H})}\) is described by fast-decaying long-range two-body interactions, i.e.,
\[\lim_{k\to\infty}\lim_{N\to\infty}\sum_{i\leq k}\sum_{j\geq k}|\eta_{ij}^{ \alpha}|=\lim_{k\to\infty}\int_{1}^{k}dx\int_{k}^{\infty}dy|\eta_{xy}^{\alpha} |<\infty, \tag{13}\]
then the SNL can not be surpassed. The proof can be found in the Supplement Materials [39]. We can express \([\partial_{\lambda}H_{\lambda}(t)]^{(\mathrm{H})}=\sum_{i=1}^{N}\tilde{\mathcal{ O}}_{i}\), where \(\tilde{\mathcal{O}}_{i}=\frac{1}{2}\sum_{\alpha}\left(\sum_{j\geq i}^{N}\eta_{ij}^{ \alpha}\mathcal{O}_{ij}^{\alpha}+\sum_{j\leq i}^{N}\eta_{ij}^{\alpha} \mathcal{O}_{ji}^{\alpha}\right)\) has support across the whole chain [39]. Essentially, the condition (13) ensures that \(\tilde{\mathcal{O}}_{i}\) does not scale with \(N\) so that it behaves effectively as a local operator, although it may have a support across the entire chain. It is crucial to note that \(\tilde{\mathcal{O}}_{i}\) can be different from \(h_{X_{i}}^{(\mathrm{H})}(t)\), which is generically non-local. We will further elaborate this observation with the example of magnetometry using the TFI model.
_SNL in the TFI model._--We consider the integrable TFI chain
\[H_{\lambda}^{\mathrm{TFI}}=-\left(J\sum_{i=1}^{N}\sigma_{i}^{x}\cdot\sigma_{i+1 }^{x}+\lambda\sum_{i=1}^{N}\sigma_{i}^{z}\right), \tag{14}\]
with the periodic boundary condition \(\sigma_{1}^{z}=\sigma_{N+1}^{z}\), where \(J\), \(\lambda>0\). In the thermodynamic limit \(N\to\infty\), when \(J\gg\lambda\) the ground state is ferromagnetic and degenerate, represented by \(\ket{++\cdots+}\) or \(\ket{--\cdots\cdots-}\), while for \(J\ll\lambda\) the ground state is paramagnetic \(\ket{\uparrow\uparrow\cdots\uparrow}\).
For any initial separable state, Eq. (4) predicts that the QFI cannot surpass the SNL. On the other hand, this model can be exactly solved by mapping it to a free fermion model [43, 44] and therefore one can compute \([\partial_{\lambda}H_{\lambda}(t)]^{(\mathrm{H})}\) explicitly. We show in the Supplemental Materials [39] that indeed in this case \([\partial_{\lambda}H_{\lambda}(t)]^{(\mathrm{H})}\) has the structure of Eq. (12) with four types of fermionic operators: \(\mathcal{O}_{ij}^{(1)}=(c_{i}^{\dagger}c_{j}+\mathrm{h.c.})\), \(\mathcal{O}_{ij}^{(2)}=(c_{i}c_{j}^{\dagger}+\mathrm{h.c.})\), \(\mathcal{O}_{ij}^{(3)}=(c_{i}^{\dagger}c_{j}^{\dagger}+\mathrm{h.c.})\), and \(\mathcal{O}_{ij}^{(4)}=(c_{i}^{\dagger}c_{j}^{\dagger}-\mathrm{h.c.})\). The expression for the \(\eta\)-functions, characterizing the weights of different operators spreading from the \(i\)-th site to the \(j\)-th site, can be found in [39]. In the thermodynamic limit, \(\eta_{ij}^{\alpha}\) behaves like \(p^{j-i}\) for \(j\geq i\), where \(p=J/\lambda\) for \(J<\lambda\), \(p=\lambda/J\) for \(\lambda<J\), and \(p=0\) for \(J=\lambda\)[39]. The power-law decay of \(\eta\)-functions indicates that the evolved operator remains extremely local, as shown in Fig. 2(b), which ensures the condition (13), i.e.,
\[\int_{1}^{k}dx\int_{k}^{\infty}dy|\eta_{xy}^{\alpha}|\sim\frac{1-p^{k-1}}{(\ln p )}<\infty, \tag{15}\]
as \(k\rightarrow\infty\). Therefore, the locality of evolved operator suggests that QFI beyond the SNL cannot be achieved by initial separable probe states in this integrable TFL model. Fig. 2(a) characterizes the diffusion of the correlators, suggesting that the numerical choices of \(t=0.5\) and \(t=5\times 10^{4}\) can be considered as the time scales for the part and full spread of local operators, respectively. Fig. 2(c-d) numerically verify that only SNL can be achieved for the different initial separable spin coherent states parameterized by \(|\psi_{0}\rangle=\bigotimes_{i=1}^{N}[\cos(\theta/2)|\uparrow\rangle_{i}+\sin (\theta/2)e^{i\phi}|\downarrow\rangle_{i}]\).
Furthermore, if we consider the initial state as the ground state of the TFI model with known values of parameters \(\lambda_{*}\) and \(J\), which can be prepared by cooling. The asymptotic behavior of the QFI with respect to the unknown parameter \(\lambda\) under the the Hamiltonian (14) is
\[\lim_{t,N\rightarrow\infty}\frac{I(t)}{Nt^{2}}=f(\lambda,J,\lambda_{*}) \sim O(1), \tag{16}\]
where the expression of \(N\)-independent function \(f\) is given in the Supplemental Materials [39], confirming the claim that only SNL can be achieved even with the ground state of local and gapped Hamiltonians. Taking \(\lambda_{*}\rightarrow+\infty\), where the ground state becomes the spin coherent state with \(\theta=\phi=0\), we find
\[\lim_{t,N\rightarrow\infty}\frac{I(t)}{Nt^{2}}=\frac{J^{2}(4\lambda^{2}-3J^{ 2})}{\lambda^{4}} \tag{17}\]
for \(J<\lambda\) and \(\lim_{t,N\rightarrow\infty}I(t)/(Nt^{2})=1\) for \(J\geq\lambda\), which is also verified in Fig. 2(d).
_SNL in the chaotic Ising model.--_Different from the integrable models, the operator complexity in chaotic models grows very rapidly [27, 28, 29, 30, 31]. Nevertheless, Eq. (4) predict that even if the model is chaotic, where local operators are expected to growth fast than integrable models as long as the model only contains local interactions, the SNL cannot be surpassed by using separable states. To see such an example, we consider the Ising model with both transverse and longitudinal fields and the Hamiltonian is given by
\[H_{\lambda}^{\text{CI}}=-\sum_{i=1}^{N}(J\sigma_{i}^{x}\sigma_{i*i}^{x}+h \sigma_{i}^{x}+\lambda\sigma_{i}^{z}), \tag{18}\]
where open boundary conditions are adopted. Energy-level spacing statistics indicate that this model is quantum chaotic for \(J=h=\lambda\)[29, 45]. Fig. 3(c-f) verify the prediction by Eq. (11) that separable states cannot surpass the SNL even in such chaotic short-range systems. To surpass the SNL, we are thus motivated to explore long-range models. The effect of quantum chaos in quantum metrology has been studied in Ref. [46] in the context of kick top, which is a single-spin model. Here, we show that quantum chaos plays no role in local chaotic many-spin models.
_Beyond SNL with LRI model.--_As we have shown above, it is only possible to break the SNL in long-range and non-local systems, which violates the Lieb-Robinson inspired bound (11). Thus, we consider the long-range Ising model with power-law decay,
\[H_{\lambda}^{\text{LRI}}=-\left(J\sum_{i<j}\frac{\sigma_{i}^{x}\sigma_{j}^{x} }{|i-j|^{\alpha}}+\lambda\sum_{i}\sigma_{i}^{z}\right), \tag{19}\]
which reduces to the TFI model as \(\alpha\rightarrow\infty\). For \(\alpha=0\), this model corresponds to the Lipkin-Meshkov-Glick model [47]. In this long-range model, the breakdown of exponential decay in connected correlation function Eq. (10) will result in the failure of bound presented in (11). Consequently, we expect that for small \(\alpha\) where the long-range interactions decay sufficiently slowly, it is possible to surpass the SNL with separable initial states. As depicted in Fig. 3(g-h), we have identified specific instances of this scenario.
_Conclusion and outlook.--_In conclusion, we have derived a universal bound on the growth of the QFI under arbitrary dynamics and initial states. We apply our bound to the case of separable initial states or the non-degenerate ground state of a gapped and local sensing Hamiltonian. We prove that with these particular set of initial states, the QFI cannot surpass the SNL, as we have explicitly demonstrated with TFI and CI models. This indicates that either initial entanglement or long-range interactions are essential resource to bring the advantage of quantumness, as we have demonstrated with the LRI.
These findings suggest an intriguing connection between operator growth and the growth of the QFI. As such, our results shed light on many aspects on the interplay between many-body physics, quantum control theory, and information
Figure 2: (a) Numerical calculation of the operator diffusion in the TFI chain with \(N=10\). (b) Coefficient \(|\eta_{ij}^{(1)}|\) characterizing the decay of the two-body interactions. (c-f) Scaling of the QFI with respect to the number of spins at different times for differential initial separable spin coherent states \(|\psi_{0}\rangle=\bigotimes_{i=1}^{N}[\cos(\theta/2)|\uparrow\rangle_{i}+\sin (\theta/2)e^{i\phi}|\downarrow\rangle_{i}]\). Here numerical data is obtained by directly diagonalizing the Hamiltonian of the TFI model, while theoretical data is derived using results by mapping the TFI model to the free fermion model. The analytical result refers to Eq. (17). Other parameters used for the calculations are \(J=2\), \(\lambda=5\), and \(\phi=0\).
scrambling and open the door to investigate many intricate questions, such as driven many-body sensing, the optimal control metrology over a restricted set of initial states, the connection between the growth of the QFI and the measures characterizing quantum chaos and operator complexity. We leave these studies for the future.
_Acknowledgement._--We thank Adolfo del Campo for useful comments on the manuscript. It is a pleasure to acknowledge discussions with Federcio Balducci and Xingze Qiu. XWG was supported by the NSFC key grant No. 12134015, the NSFC grant No. 12121004, and partially supported by the Innovation Program for Quantum Science and Technology 2021ZD0302000. JY was funded by the Wallenberg Initiative on Networks and Quantum Information (WINQ) and would like to thank Hui Zhai for the hospitality to host his long-term visit at the Institute of Advanced Study in Tsinghua University, during which this work was completed.
_Notes added._--When completing this work, we noted that a bound similar with Eq. (4) also appears in Ref. [48] with the focus on non-Hermitian sensing.
|
2308.12943 | Characterization of the gravitational wave spectrum from sound waves
within the sound shell model | We compute the gravitational wave (GW) spectrum sourced by sound waves
produced during a first-order phase transition in the radiation-dominated
epoch. The correlator of the velocity field is evaluated in accordance with the
sound shell model. In our derivation we include the effects of the expansion of
the Universe, which are relevant in particular for sourcing processes whose
time duration is comparable with the Hubble time. Our results show a causal
growth at small frequencies, $\Omega_{\rm GW} \sim k^3$, possibly followed by a
linear regime $\Omega_{\rm GW} \sim k$ at intermediate $k$, depending on the
phase transition parameters. Around the peak, we find a steep growth that
approaches the $k^9$ scaling found within the sound shell model. The resulting
bump around the peak of the GW spectrum may represent a distinctive feature of
GWs produced from acoustic motion. Nothing similar has been observed for
vortical (magneto)hydrodynamic turbulence. Nevertheless, we find that the $k^9$
scaling is less extended than expected in the literature, and it does not
necessarily appear. The dependence on the duration of the source, $\delta
\tau_{\rm fin}$, is quadratic at small frequencies $k$, and proportional to
$\ln^2 (1 + \delta \tau_{\rm fin} H_*)$ for an expanding Universe. At
frequencies around the peak, the growth is suppressed by a factor $\Upsilon = 1
- 1/(1 + \delta \tau_{\rm fin} {H}_*)$ that becomes linear when the GW source
is short. We discuss in which cases the dependence on the source duration is
linear or quadratic for stationary processes. This affects the amplitude of the
GW spectrum, both in the causality tail and at the peak, showing that the
assumption of stationarity is a very relevant one, as far as the GW spectral
shape is concerned. Finally, we present a general semi-analytical template of
the resulting GW spectrum, as a function of the parameters of the phase
transition. | Alberto Roper Pol, Simona Procacci, Chiara Caprini | 2023-08-24T17:32:37Z | http://arxiv.org/abs/2308.12943v3 | # Characterization of the gravitational wave spectrum from sound waves within the sound shell model
###### Abstract
We compute the gravitational wave (GW) spectrum sourced by the sound waves produced during a first-order phase transition in the radiation-dominated epoch. The correlator of the velocity field perturbations is evaluated in accordance with the sound shell model. In our derivation we include the effects of the expansion of the Universe, which are relevant in particular for sourcing processes whose time duration is comparable with the Hubble time. Our results show a causal growth of the GW spectrum at small frequencies, \(\Omega_{\rm GW}\sim k^{3}\), possibly followed by a linear regime \(\Omega_{\rm GW}\sim k\) at intermediate \(k\), depending on the phase transition parameters. Around the peak, we find a steep growth that approaches the \(\sim k^{9}\) scaling previously found within the sound shell model. The resulting bump around the peak of the GW spectrum may represent a distinctive feature of GWs produced from acoustic motion. Nothing similar has been observed for vortical (magneto)hydrodynamic turbulence. Nevertheless, we find that the \(\sim k^{9}\) scaling is less extended than expected in the literature, and it does not necessarily appear. The dependence on the duration of the source, \(\delta\tau_{\rm fin}\), is quadratic at small frequencies \(k\), and proportional to \(\ln^{2}(1+\delta\tau_{\rm fin}\mathcal{H}_{*})\) for an expanding Universe. At frequencies around the peak, the growth with \(\delta\tau_{\rm fin}\) may become linear, and is suppressed by a factor \(\Upsilon=1-1/(1+\delta\tau_{\rm fin}\mathcal{H}_{*})\) due to the expansion of the Universe. We discuss in which cases the dependence on the source duration is linear or quadratic for stationary processes. This affects the amplitude of the GW spectrum, both in the causality tail and at the peak, showing that the assumption of stationarity is a very relevant one, as far as the GW spectral shape is concerned. Finally, we present a general semi-analytical template of the resulting GW spectrum, as a function of the parameters of the phase transition.
###### Contents
* I Introduction
* II GW production during radiation domination
* II.1 Tensor-mode perturbations
* II.2 GWs sourced by sound waves
* III Sound waves from first-order phase transitions in the sound shell model
* III.1 Velocity field
* III.2 UETC of the velocity field
* III.3 UETC of the anisotropic stress
* IV Low wave number tail of the GW spectrum from sound waves
* IV.1 GW spectrum in the sound shell model
* IV.2 Low-frequency limit
* IV.3 \(k^{3}\) vs \(k^{9}\) tilt in the low-frequency limit
* V GW production from stationary processes
* V Sound-shell model UETC
* V.1 Kraichnan decorrelation
* VI GW spectrum from sound waves: results and template
* VI.1 GW spectral shape
* VI.2 Estimation of the source duration
* VI.3 Present-time spectral amplitude
* VII Conclusions
* A Full time evolution of the GW spectrum
* B GW spectrum in the infinite duration approximation
Introduction
A first-order thermal phase transition can be parameterized in terms of a scalar field, whose vacuum state is degenerate at a given critical temperature \(T_{c}\)[1; 2; 3]. According to the Standard Model (SM), both the electroweak [4] and the QCD [5] phase transitions have occurred as crossovers in the early Universe. However, extensions of the SM that provide the required conditions for baryogenesis at the electroweak scale can also lead to first-order phase transitions, see ref. [6] for a review, and references therein. Moreover, a large lepton asymmetry or a primordial magnetic field may affect the QCD phase diagram, potentially leading to a first-order QCD phase transition [7; 8; 9; 10; 11].
We assume that, for a specific model, \(T_{c}\) is reached while the early Universe is cooling down in the radiation-dominated era. Part of the potential energy in the unstable vacuum is then transferred to the surroundings as kinetic energy, through the nucleation and expansion of bubbles of the broken phase [12; 13; 14].
The resulting shear stress of the fluid can have anisotropies of the tensor type and, hence, source gravitational waves, that propagate in the homogeneous and isotropic background [15; 16]. To study the power spectrum of these gravitational waves, the shear stress from a first-order phase transition can be decomposed into different contributions: bubble collisions [17; 18; 19; 20; 21; 22; 23], sound waves [24; 25; 26; 27; 28; 29; 30], and turbulence [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]; for reviews see refs. [6; 45] and references therein.
The dynamics of the expanding bubbles of the broken phase is determined by the interaction of the plasma particles with the scalar field, which is commonly modeled as a friction term [21; 24; 46; 14]. If the friction is strong enough, we expect the expanding bubble walls to reach a terminal velocity \(\xi_{w}\), which depends on the specific value of the friction term. On the contrary, the bubbles may run away when the friction is not sufficiently strong [47]. However, first-order electroweak phase transitions are expected to rarely reach this regime [48]. If the bubbles do not run away, the long-lasting nature of the sound waves promotes them as the dominant source of GWs. Only if the phase transition is supercooled, it effectively occurs in a vacuum and hence the production of sound waves is negligible [6; 49; 50; 51].
The development of turbulence can occur due to the interaction of the scalar field and the plasma [15; 52], or in the presence of a primordial magnetic field [53; 54], due to the extremely high conductivity and Reynolds number in the early Universe [55; 56]. The production of GWs from vortical turbulence has been found to be subdominant with respect to the one from acoustic turbulence [37]. However, it is not clear how much energy is converted from sound waves into turbulence once this regime takes over, or if vortical motions can be directly sourced from bubble collisions [57]. Moreover, the time scales corresponding to each production mechanism are not well understood. This information determines the resulting GW amplitudes, see, e.g., refs. [6; 58].
In the current work, we focus on the production of GWs from sound waves. A semi-analytical description of the velocity spectrum originating from sound waves is provided by the sound shell model, put forward in the seminal work [26]. The corresponding gravitational wave spectrum has been studied in detail in ref. [28] for a non-expanding Universe, and extended in ref. [59] to an expanding Universe. These results feature a steep growth at small frequencies, \(\Omega_{\rm GW}\sim k^{9}\). The latter, however, has not been found in other numerical [24; 25; 30] or analytical [60] works, which are, instead, consistent with the \(\sim k^{3}\) low-frequency tail typically expected outside the zone of both spatial and temporal correlation of the GW source [33].
The goal of this work is to generalize the results of refs. [28; 59] to provide a semi-analytical template that is accurate and applicable to the full range of frequencies of the GW spectrum.
We confirm the presence of a steep growth of the GW spectrum (cf. ref. [28]) that, however, only appears around the peak and for certain values of the phase transition parameters. In particular, it depends simultaneously on the duration of the GW sourcing and the mean size of the bubbles. The steep growth extends for a short range of frequencies around the peak, leading to a bump in the GW spectral shape. At lower frequencies, the GW power spectrum can develop an intermediate linear growth, \(\Omega_{\rm GW}\sim k\). At even smaller frequencies, i.e., below the inverse duration of the GW sourcing, the causal tail, \(\Omega_{\rm GW}\sim k^{3}\), takes over. We also find that the bump around the GW peak is less pronounced when one takes into account the expansion of the Universe.
With the detection of a stochastic gravitational wave background from the early Universe becoming conceivable in the near future, it is important to crosscheck and validate accurate theoretical tem
plates for the signal of the different contributions. The predicted spectral shape of the GW signal, in fact, strongly affects forecast observational constraints on the phase transition parameters.
The current observations by pulsar timing arrays (PTA) have reported a stochastic GW background (SGWB) at nHz frequencies that could be compatible with sourcing anisotropic stresses produced around the QCD scale [61, 62, 63, 64, 65]. PTA observations have been extensively used in the literature to report constraints on the phase transition parameters from the GW production due to sound waves [67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77]. The space-based GW detector Laser Interferometer Space Antenna (LISA), planned to be launched in the early 2030s, will be sensitive to GWs with a peak sensitivity of around 1 mHz [78]. Signals produced at the electroweak phase transition are expected to peak around these frequencies. Several studies have used the expected sensitivity of LISA to forecast the potential detectability of the SGWB produced by sound waves [6, 58, 79, 80, 81, 82, 83]. First-order phase transitions at higher energy scales, e.g., at temperatures \(T>10^{8}\,\mathrm{GeV}\) have been constrained by the results of the third observing run of the LIGO-Virgo collaboration [84] and can be further probed by the next generation of ground-based GW detectors, like the Einstein Telescope or the Cosmic Explorer.
This paper is organized as follows. In Sec. (II), we provide general formulae for the production of GWs during the radiation-domination era, and we introduce the unequal-time correlator (UETC) of the anisotropic stresses originating from sound waves. Section (III) deals with the velocity field within the framework of the sound shell model. We provide new results regarding the causality bounds on the velocity field and its UETC spectrum. Being the focus of the current work on GW production, we briefly discuss a theoretical interpretation of the causality argument for the initial conditions used in the sound shell model [26, 28], and we extend the discussion in an accompanying paper [85].
In Sec. (IV), we study specific features of the GW spectrum in the sound shell model, both analytically and numerically. In particular, we discuss the occurrence of the \(k^{3}\) causal tail at small frequencies. We investigate its dependence on the duration of the source, identifying the cases in which the assumptions of refs. [28, 59] do not apply. The dependence of the GW amplitude on the duration of the source is the topic of Sec. (V). We study the GW production for stationary processes by comparing the results obtained within the sound shell model with those obtained for a velocity field with Gaussian (cf. Kraichnan) decorrelation.
Numerical results for the GW spectrum are presented in Sec. (VI). We show that a steep \(\Omega_{\rm GW}\sim k^{7}\) growth may appear below the peak under certain circumstances, leading to a bump in the spectral shape. A linear growth \(\Omega_{\rm GW}\sim k\) can also develop between the causal \(\Omega_{\rm GW}\sim k^{3}\) and the steep bump. Studying the dependence of the amplitude on the duration of the source \(\delta\tau_{\rm fin}\), we find that the causality tail is always quadratic in \(\delta\tau_{\rm fin}\), while the peak may present a quadratic or a linear dependence, with the latter being the one obtained in refs. [28, 59].
We provide a template for the current-day observable \(\Omega_{\rm GW}\), as a function of the parameters that describe the phase transition. In Sec. (VII) we discuss the implications and conclude.
In the following, the notation is such that the characteristic scales and time intervals, e.g., the source duration, are physical, and therefore time-dependent. They are understood to be redshifted when compared with the conformal Hubble factor at the phase transition time, \(\mathcal{H}_{*}\equiv\left(a_{*}/a_{0}\right)H_{*}\).
## II GW production during radiation domination
### Tensor-mode perturbations
We consider tensor-mode perturbations \(\ell_{ij}\) in an expanding Universe, described by conformal coordinates
\[\mathrm{d}s^{2}=a^{2}(\tau)\left[-\,\mathrm{d}\tau^{2}+\left(\delta_{ij}+\ell _{ij}\right)\mathrm{d}x^{i}\,\mathrm{d}x^{j}\right]\,, \tag{1}\]
where \(a\) is the scale factor. The perturbations are traceless and transverse (TT): \(\ell_{i}^{i}=0\) and \(\partial^{i}\ell_{ij}=0\). Assuming radiation domination, the scale factor \(a\) evolves linearly with conformal time. Following ref. [36], at the beginning of the phase transition we set \(a(\tau_{*})=1\), such that \(a(\tau)=\mathcal{H}_{*}\tau\), where \(\mathcal{H}_{*}\equiv a^{\prime}/a(\tau_{*})\) is the conformal Hubble parameter, and a prime denotes the derivative with respect to conformal time \(\partial_{\tau}\).
The dynamics of small perturbations is described by the linearized Einstein equations. In comoving momentum space, \(\mathbf{k}\), the tensor-mode perturbations are governed by the GW equation
\[\left(\partial_{\tau}^{2}+2\mathcal{H}\partial_{\tau}+k^{2}\right)\ell_{ij}( \tau,\mathbf{k})=16\pi Ga^{2}\bar{\rho}\,\Pi_{ij}(\tau,\mathbf{k})\,, \tag{2}\]
with \(G\) being the gravitational constant and \(k\equiv|\mathbf{k}|\). The perturbations of the stress-energy tensor \(T_{ij}\) are denoted by \(\bar{\rho}\,\Pi_{ij}(\tau,\mathbf{k})\equiv\Lambda_{ijlm}(\hat{\mathbf{k}})\,T_{lm}(\tau, \mathbf{k})\), where \(\bar{\rho}\equiv 3\mathcal{H}^{2}/(8\pi Ga^{2})\) is the critical energy density, and \(\Lambda_{ijlm}\) denotes the projection onto TT components,
\[\Lambda_{ijlm}(\hat{\mathbf{k}})=P_{il}(\hat{\mathbf{k}})P_{jm}(\hat{\mathbf{k}})-\frac{1} {2}P_{ij}(\hat{\mathbf{k}})P_{lm}(\hat{\mathbf{k}}), \tag{3}\]
with \(P_{ij}(\hat{\mathbf{k}})=\delta_{ij}-\hat{k}_{i}\hat{k}_{j}\) and \(\hat{k}_{i}=k_{i}/k\). We distinguish Fourier-transformed quantities by their argument \(\mathbf{k}\). Rewriting Eq. (2) for \(h_{ij}\equiv a\ell_{ij}\) during radiation domination yields
\[\left(\partial_{\tau}^{2}+k^{2}\right)\,h_{ij}(\tau,\mathbf{k})=\frac{6\,\mathcal{ H}_{*}\,\Pi_{ij}(\tau,\mathbf{k})}{\tau}\,. \tag{4}\]
Equation (4) shows that the scaled strains \(h_{ij}\) are sourced by the normalized and comoving TT projection of the anisotropic stresses, \(\Pi_{ij}\).
While the source is active,1\(\tau_{*}\leq\tau\leq\tau_{\text{fin}}\), the solution to Eq. (4) with initial conditions \(h_{ij}(\tau_{*},\mathbf{k})=h^{\prime}_{ij}(\tau_{*},\mathbf{k})=0\) is the convolution of the source with the Green's function,
Footnote 1: Since the initial time of GW production occurs within the radiation-dominated era, \(\tau_{*}\simeq 1/\mathcal{H}_{*}\).
\[h_{ij}( \tau_{*}\leq\,\tau\leq\tau_{\text{fin}},\mathbf{k})\] \[=\frac{6\mathcal{H}_{*}}{k}\int_{\tau_{*}}^{\tau}\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Note that the approximation in Eq. (15) is not valid if one is interested in computing the gravitational wave spectrum while the source is active. For this case, we provide a formula for the full time dependency of \(\Omega_{\text{GW}}\) in App. (A).
### GWs sourced by sound waves
The stress-energy tensor \(\Pi_{ij}\equiv\Lambda_{ijlm}T_{lm}\) that sources GWs (see Eq. (4)) can contain contributions from the fluid (depending on the enthalpy \(w\), the pressure \(p\), and on \(u^{i}\equiv\gamma v^{i}\), where \(\gamma\) is the Lorentz factor and \(v^{i}\) the velocity), and from gradients of the scalar field, \(\phi\), among other possible contributions (e.g., gauge fields),
\[T_{ij}\supset w\,u_{i}\,u_{j}+p\,\delta_{ij}+\partial_{i}\phi\,\partial_{j}\phi -\frac{1}{2}\big{(}\partial\phi\big{)}^{2}\delta_{ij}\,, \tag{16}\]
where \(w=p+\rho\), being \(\rho\) the energy density.
In the current work, we focus on the GWs sourced by sound waves in the aftermath of a first-order phase transition. Hence, we only consider the GW production from the linearized fluid motion (omitting the potential development of turbulence), and neglect the contributions from bubble collisions, as well as the possible presence of electromagnetic fields that would alternatively affect the fluid dynamics and also source GWs [15, 53].
Since diagonal terms in Eq. (16) are ruled out by the TT projection, the contributing part of the energy-momentum tensor is the convolution of the velocity field in Fourier space
\[T_{ij}(\tau,\mathbf{k})\supset\bar{w}\int\!\!\frac{\mathrm{d}^{3}\mathbf{p}}{(2\pi)^{ 3}}\,u_{i}(\tau,\mathbf{p})\,u_{j}(\tau,\tilde{\mathbf{p}})\,, \tag{17}\]
where we have denoted \(\tilde{\mathbf{p}}\equiv\mathbf{k}-\mathbf{p}\). The velocity field from sound waves corresponds to perturbations over a background at rest with mean enthalpy \(\bar{w}\). Hence, fluctuations in the enthalpy field correspond to higher-order terms in the perturbative expansion and can be neglected at first order. In the linear regime, we also have \(\gamma\sim 1\).
If we assume that the stochastic velocity field is Gaussian, Isserlis' (or Wick's) theorem [86] allows us to express the four-point correlations as linear superposition of the product of two-point functions,
\[\left\langle T_{ij}(\tau_{1},\mathbf{k})\,T_{lm}^{*}(\tau_{2},\mathbf{k} )\right\rangle\supset\bar{w}^{2}\int\!\!\frac{\mathrm{d}^{3}\mathbf{p}_{1}}{(2\pi )^{3}}\int\!\!\frac{\mathrm{d}^{3}\mathbf{p}_{2}}{(2\pi)^{3}} \tag{18}\] \[\times\Big{[}\left\langle u_{i}(\tau_{1},\mathbf{p}_{1})\,u_{l}^{*}( \tau_{2},\mathbf{p}_{2})\right\rangle\left\langle u_{j}(\tau_{1},\tilde{\mathbf{p}}_{1 })\,u_{m}^{*}(\tau_{2},\tilde{\mathbf{p}}_{2})\right\rangle\] \[+\left\langle u_{i}(\tau_{1},\mathbf{p}_{1})\,u_{m}^{*}(\tau_{2}, \tilde{\mathbf{p}}_{2})\right\rangle\left\langle u_{j}(\tau_{1},\tilde{\mathbf{p}}_{1 })\,u_{l}^{*}(\tau_{2},\mathbf{p}_{2})\right\rangle\Big{]}\,.\]
In general, the spectrum of any statistically homogeneous and isotropic field can be decomposed in a spectrum proportional to the projector \(P_{ij}\), given below Eq. (3), and a spectral function proportional to \(\hat{k}_{i}\hat{k}_{j}\)[87]. In the particular case of irrotational fields (as it is the case for sound waves), the contribution proportional to \(P_{ij}\) is zero, and the two-point correlation function of the velocity field is5
Footnote 5: Note that ref. [28] uses the spectral density \(G=4\pi^{2}E_{\text{kin}}/k^{2}\). We add an extra factor 2 in Eq. (19) such that the kinetic energy density is \(\frac{1}{2}\langle\mathbf{u}^{2}(\mathbf{x})\rangle=\int E_{\text{kin}}(k)\,\mathrm{d}k\).
\[\left\langle u_{i}(\tau_{1},\mathbf{k})\,u_{j}^{*}(\tau_{2},\mathbf{k}_ {2})\right\rangle\] \[\quad=(2\pi)^{6}\,\hat{k}_{i}\hat{k}_{j}\,\delta^{3}(\mathbf{k}-\bm {k}_{2})\frac{2E_{\text{kin}}(\tau_{1},\tau_{2},k)}{4\pi k^{2}}\,. \tag{19}\]
The assumption of the velocity field being irrotational is motivated by the results of numerical simulations [24, 25, 27].
In a semi-analytical approach, the sound shell model describes the velocity field as the linear superposition of the single-bubble contributions until the moment of collision [26, 28], based on the hydrodynamics of expanding bubbles [46]. At later times, the velocity field is assumed to be described by the superposition of sound waves. Hence, the resulting velocity field is irrotational and is described by the tensor structure of Eq. (19).
Using Eq. (19), the TT projection of the stress tensor in Eq. (18) acts as
\[\Lambda_{ijlm}(\hat{\mathbf{k}})\,\hat{p}^{i}\hat{\tilde{p}}^{j}\hat{p}^{l}\hat{ \tilde{p}}^{m}=\frac{p^{2}}{\tilde{p}^{2}}\frac{(1-z^{2})^{2}}{2}\,, \tag{20}\]
where \(z=\hat{\mathbf{k}}\cdot\tilde{\mathbf{p}}\). The UETC spectrum of the anisotropic stresses \(E_{\Pi}\), which sources the GW spectrum in Eq. (15), becomes
\[E_{\Pi}(\tau_{1},\tau_{2},k)= \,2\,k^{2}\bar{w}^{2}\int_{-1}^{1}\!\mathrm{d}z\int_{0}^{\infty} \!\mathrm{d}p\,\frac{p^{2}}{\tilde{p}^{4}}(1-z^{2})^{2}\] \[\times E_{\text{kin}}(\tau_{1},\tau_{2},p)\,E_{\text{kin}}(\tau_{1}, \tau_{2},\tilde{p})\,. \tag{21}\]
Hence, under the assumption of Gaussianity of the velocity field, the UETC of the anisotropic stresses
\(E_{\Pi}\) is reduced to a quadratic function of the UETC of the velocity field \(E_{\rm kin}\), integrated over \(p\) and \(z\).
A useful alternative form of Eq. (21) is found by changing the integration variable from \(z\) to \(\tilde{p}\) with
\[\tilde{p}^{2}\equiv|\mathbf{k}-\mathbf{p}|^{2}=p^{2}+k^{2}-2pkz\,, \tag{22}\]
yielding
\[E_{\Pi}(\tau_{1},\tau_{2},k)=2\,k\,\bar{w}^{2}\int_{0}^{\infty} \!\mathrm{d}p\,p\,E_{\rm kin}(\tau_{1},\tau_{2},p)\\ \times\int_{|k-p|}^{k+p}\!\mathrm{d}\tilde{p}\,\frac{E_{\rm kin}( \tau_{1},\tau_{2},\tilde{p})}{\tilde{p}^{3}}\left[1-z^{2}(\tilde{p})\right]^{2}. \tag{23}\]
This expression is used in ref. [28] and we use it in App. (B) for a comparison with their results.
In Sec. (III), we present the computation of the UETC of the velocity field for the sound waves produced upon collision of broken-phase bubbles, following the sound shell model. A detailed derivation, and theoretical aspects of the velocity UETC are presented in an accompanying paper [85].
## III Sound waves from first-order phase transitions in the sound shell model
### Velocity field
In a first-order phase transition, the hydrodynamic equations of the fluid around the expanding bubbles of the broken phase can be derived imposing the conservation of energy and momentum, \(\partial_{\mu}T^{\mu\nu}=0\), and assuming radial symmetry around the center of bubble nucleation [28; 46]. Once the broken-phase bubbles collide, it can be assumed that the Higgs field has reached its true vacuum state and the fluid perturbations follow a linear hydrodynamical description without any forcing term, leading to the development of compressional sound waves, according to the sound shell model [26; 28]. Defining the energy density fluctuations \(\lambda\equiv(\rho-\bar{\rho})/\bar{w}\), the linearization of the fluid equations leads to wave equations for \(\mathbf{u}\) and \(\lambda\),
\[\lambda^{\prime}(\tau,\mathbf{k})-ik_{i}\,u_{i}(\tau,\mathbf{k})=0\,, \tag{24}\] \[u^{\prime}_{i}(\tau,\mathbf{k})-ik_{i}\,c_{\rm s}^{2}\,\lambda(\tau, \mathbf{k})=0\,. \tag{25}\]
The equation of state \(c_{\rm s}^{2}\equiv\,\mathrm{d}\bar{p}/\,\mathrm{d}\bar{\rho}\) relates the background fluid pressure \(\bar{p}\) and energy density \(\bar{\rho}\). The solution is a longitudinal velocity field, \(u_{i}=\hat{k}_{i}u\),
\[u(\tau,\mathbf{k})=\sum_{s=\pm}A_{s}(\mathbf{k})\,e^{is\omega(\tau-\tau_{*})}, \tag{26}\]
where the dispersion relation is \(\omega=c_{\rm s}k\). The coefficients \(A_{\pm}\) depend on the velocity and energy density fields at the time of collisions [28; 85],
\[A_{\pm}(\mathbf{k})=\frac{1}{2}\Big{[}u(\tau_{*},\mathbf{k})\pm c_{\rm s}\lambda(\tau_ {*},\mathbf{k})\Big{]}. \tag{27}\]
Alternatively, as initial conditions we could use the velocity \(u\) and acceleration \(u^{\prime}\) fields, as done in ref. [24]. Reference [28] suggests the use of \(\lambda\) in Eq. (27) to respect the causality condition of irrotational fields when \(k\to 0\)[87; 88]. We show in an accompanying paper that the causal limit does not depend on this choice, however the latter is required to avoid discontinuities on \(u\) and \(\lambda\) at \(\tau_{*}\)[85].
According to the sound shell model, the velocity and energy density fields are the linear superposition of the fields produced by the expansion of each of the \(N_{b}\) single bubbles [24; 28],
\[A_{\pm}(\mathbf{k})=\sum_{n=1}^{N_{b}}\mathcal{A}_{\pm}(\chi)\,T_{n}^{3}\,e^{i\mathbf{ k}\cdot\mathbf{x}_{0}^{(n)}}, \tag{28}\]
where, for the \(n\)-th bubble, \(T_{n}=\tau_{*}-\tau_{0}^{(n)}\) is its lifetime, \(\tau_{0}^{(n)}\) is its time of nucleation, and \(\mathbf{x}_{0}^{(n)}\) is its nucleation location. The functions \(\mathcal{A}_{\pm}(\chi)\), where \(\chi\equiv k\,T_{n}\), are
\[\mathcal{A}_{\pm}(\chi)=-\frac{i}{2}\big{[}f^{\prime}(\chi)\pm ic_{\rm s}l( \chi)\big{]}, \tag{29}\]
being \(f(\chi)\) and \(l(\chi)\) integrals of the single-bubble radial profiles \(v_{{}_{\rm ip}}(\xi)\) and \(\lambda_{{}_{\rm ip}}(\xi)\) over a normalized radial coordinate \(\xi\),
\[f(\chi) = \frac{4\pi}{\chi}\int_{0}^{\infty}\,\mathrm{d}\xi\,v_{{}_{\rm ip} }(\xi)\,\sin(\chi\xi)\, \tag{30}\] \[l(\chi) = \frac{4\pi}{\chi}\int_{0}^{\infty}\,\mathrm{d}\xi\,\xi\,\lambda_{ {}_{\rm ip}}(\xi)\,\sin(\chi\xi). \tag{31}\]
We follow refs. [28; 46] to compute the single-bubble profiles, and present the detailed calculation in an accompanying paper [85].
### UETC of the velocity field
The UETC of the velocity field in Eq. (19) can be computed from the resulting velocity field given in Eq. (26),
\[E_{\rm kin}(\tau_{1},\tau_{2},k)= \,E_{\rm kin}^{(1)}(k)\cos\omega(\tau_{1}-\tau_{2})\] \[+ E_{\rm kin}^{(2)}(k)\cos\omega(\tau_{1}+\tau_{2}-2\tau_{*})\] \[+ E_{\rm kin}^{(3)}(k)\sin\omega(\tau_{1}+\tau_{2}-2\tau_{*}), \tag{32}\]
whose coefficients \(E^{(n)}_{\rm kin}(k)\) are given as [28; 85],
\[E^{(n)}_{\rm kin}\left(k\right)=\\ \frac{k^{2}}{2\pi^{2}\beta^{6}R_{*}^{3}}\int_{0}^{\infty}\!\!{\rm d }\tilde{T}\,\nu(\tilde{T})\,\tilde{T}^{6}\,\mathcal{E}^{(n)}(\tilde{T}k/\beta)\,, \tag{33}\]
where \(\beta\) denotes the inverse duration of the phase transition and \(\tilde{T}\equiv T\beta\) is the normalized bubble lifetime. The mean bubble separation, \(R_{*}\equiv(8\pi)^{1/3}\xi_{w}/\beta\)[13], corresponds to the characteristic length scale of the fluid motion. The distribution of the bubbles' lifetime, \(\nu(\tilde{T})\), is considered in ref. [28] for the scenarios of exponential and simultaneous nucleation,
\[\nu_{\rm exp}(\tilde{T})=e^{-\tilde{T}},\quad\nu_{\rm sim}(\tilde{T})=\tfrac{ 1}{2}\tilde{T}^{2}e^{-\frac{1}{6}\tilde{T}^{3}}\,. \tag{34}\]
The functions \(\mathcal{E}^{(n)}\) in Eq. (33) are
\[\mathcal{E}^{(1)}(\chi)= \,|\mathcal{A}_{+}|^{2}=\frac{1}{4}\Big{[}{f^{\prime}}^{2}(\chi) +c_{\rm s}^{2}l^{2}(\chi)\Big{]}, \tag{35}\] \[\mathcal{E}^{(2)}(\chi)= \,{\rm Re}\big{(}\mathcal{A}_{+}\mathcal{A}_{-}^{*}\big{)}=\frac{ 1}{4}\Big{[}{f^{\prime}}^{2}(\chi)-c_{\rm s}^{2}l^{2}(\chi)\Big{]},\] (36) \[\mathcal{E}^{(3)}(\chi)= \,{\rm Im}\big{(}\mathcal{A}_{+}\mathcal{A}_{-}^{*}\big{)}=\frac{ 1}{2}c_{s}f^{\prime}(\chi)\,l(\chi), \tag{37}\]
where \(\mathcal{A}_{\pm}(\chi)\) are defined in Eq. (29).
Following ref. [28], we expect the amplitude of the oscillatory contributions corresponding to \(E^{(1)}_{\rm kin}\) in Eq. (32) to be larger than those from \(E^{(2)}_{\rm kin}\) and \(E^{(3)}_{\rm kin}\). This is a consequence of the inequalities among their amplitudes,
\[\mathcal{E}^{(1)}(\chi)\geq\mathcal{E}^{(2)}(\chi)\geq\mathcal{E}^{(3)}(\chi). \tag{38}\]
However, when the term proportional to \(E^{(2)}_{\rm kin}\) is not highly oscillating, it cannot be neglected with respect to the one proportional to \(E^{(1)}_{\rm kin}\). This occurs in the limit \(\omega\equiv kc_{\rm s}\ll(2\delta\tau_{\rm fin})^{-1}\), since \(0\leq\tau_{1}+\tau_{2}-2\tau_{*}\leq 2\delta\tau_{\rm fin}\), where we denote the duration of the source as \(\delta\tau_{\rm fin}\equiv\tau_{\rm fin}-\tau_{*}\).
Let us first focus on the case \(k\gg 1/(2c_{\rm s}\delta\tau_{\rm fin})\). Then, we find a stationary UETC [28],
\[E_{\rm kin}(k,\tau_{1},\tau_{2})\approx E_{\rm kin}(k)\cos(kc_{\rm s}\tau_{-} )\,, \tag{39}\]
where \(E_{\rm kin}(k)=E^{(1)}_{\rm kin}(k)\) and \(\tau_{-}=\tau_{2}-\tau_{1}\). Figure (1) shows benchmark results for the normalized \(\zeta_{\rm kin}(k)=E_{\rm kin}(k)/E^{*}_{\rm kin}\), \(E^{*}_{\rm kin}\) denoting the maximum value of \(E_{\rm kin}(k)\), obtained for a benchmark phase transition strength \(\alpha=0.1\) and a range of broken-phase bubble wall speeds \(\xi_{w}\in(0.1,0.9)\). We present the details of these calculations in an accompanying paper [85].
Since the resulting velocity field due to the superposition of sound waves is irrotational, the causality condition requires \(E_{\rm kin}(\tau_{1},\tau_{2},k)\sim k^{4}\) in the limit \(k\to 0\)[87; 28; 88]. We note that, since \(E_{\rm kin}(k)\) is an integral over \(T\) of \(\mathcal{E}^{(1)}(\chi)\), the limit of \(E_{\rm kin}(k)\) when \(k\to 0\) is equivalent to the limit of \(\mathcal{E}^{(1)}(\chi)\) when \(\chi\to 0\). The integrand is then proportional to \(f^{\prime 2}(\chi)+c_{\rm s}^{2}l^{2}(\chi)\), see Eq. (35).
As mentioned above, ref. [28] justifies the choice of \(\lambda\) (which leads to the \(c_{\rm s}^{2}l^{2}\) contribution in \(E_{\rm kin}\)) in Eq. (29) for the initial conditions, instead of \(u^{\prime}\), to ensure the causality condition. However, the function \(l^{2}(\chi)\) in Eq. (31) leads to the asymptotic limits \(l^{2}(\chi)\sim\chi^{0}\) when \(\chi\to 0\), and \(E_{\rm kin}(k)\sim k^{2}\) when \(k\to 0\), as we show in an accompanying paper [85]. This naively seems to violate causality. The same is true when one chooses \(u^{\prime}\) to impose the initial conditions. The key point to recover the causality condition is to note that the assumption in Eq. (39) is not valid in the limit \(k\ll 1/(2c_{\rm s}\delta\tau_{\rm fin})\). In this limit, one finds from Eq. (32),
\[\lim_{k\to 0}E_{\rm kin}(\tau_{1},\tau_{2},k)=E^{(1)}_{\rm kin}(k)+E^{(2)}_{\rm kin }(k). \tag{40}\]
The UETC of the velocity field in the \(k\to 0\) limit is then proportional to \({f^{\prime}}^{2}(\chi)\) (see Eqs. (35) and (36)), and not to \(l^{2}(\chi)\), as previously found using Eq. (39). Then the \(\chi\to 0\) limit is indeed \({f^{\prime}}^{2}\sim\chi^{2}\), such that \(E_{\rm kin}\sim k^{4}\), as expected from causality.
In the following, we take Eq. (39) to describe the UETC spectrum and will refer to \(E_{\rm kin}\) as the kinetic spectrum. Even though \(E_{\rm kin}\) does not describe the UETC in the limit \(k\to 0\), it does for all the scales that are relevant for the study of GW production (see Fig. (1)).
Following the normalization of ref. [42], we define a characteristic amplitude \(E^{*}_{\rm kin}\) and wave number \(k_{*}\). For the kinetic spectrum corresponding to sound waves, we set \(k_{*}=1/R_{*}\) and \(E^{*}_{\rm kin}\) to be the maximum amplitude, which is located at \(K^{\rm peak}_{\rm kin}=k^{\rm peak}_{\rm kin}R_{*}\sim\mathcal{O}(1)\) (see Fig. (1) and values in Table 1). Then, the kinetic spectrum can be expressed as
\[E_{\rm kin}(k)=E^{*}_{\rm kin}\,\zeta_{\rm kin}(K), \tag{41}\]
where \(K=k/k_{*}=kR_{*}\) and \(\zeta_{\rm kin}\) determines the spectral shape of the kinetic spectrum. The spectral shape found within the sound shell model (see Fig. (1)) is proportional to \(k^{4}\) at low \(k\), as discussed in Sec. (III.2), and follows a \(k^{-2}\) decay at large \(k\). At intermediate scales, \(\zeta_{\rm kin}\) can present an additional intermediate power law, especially for values
of the wall velocity \(\xi_{w}\) close to the speed of sound \(c_{\rm s}=1/\sqrt{3}\), and develop a double peak structure, as can be seen in Fig. (1) and shown in refs. [28; 80].
The total kinetic energy density \(\Omega_{\rm K}\), expressed as a fraction of the critical energy density, is computed from Eq. (39) at equal times \(\tau_{1}=\tau_{2}=\tau\),
\[\Omega_{\rm K}=\int_{0}^{\infty}E_{\rm kin}(k,\tau,\tau)\,{\rm d}k=\frac{E_{ \rm kin}^{*}}{R_{*}}\,{\cal K}, \tag{42}\]
where we have used Eq. (41), and6
Footnote 6: As discussed above, \(\zeta_{\rm kin}\) is not a valid description of the UETC spectrum at small \(K\). However, the effect on \({\cal K}\) is negligible, since \(\zeta_{\rm kin}\) becomes very small in this range of \(K\), and it does not contribute appreciably to the integral.
\[{\cal K}\approx\int_{0}^{\infty}\zeta_{\rm kin}(K)\,{\rm d}K, \tag{43}\]
only depends on the spectral shape, characterizing how broad is the spectrum around \(K=1\). The values of \({\cal K}\) are listed in Table 1 for the benchmark phase transitions shown in Fig. (1). The kinetic energy density \(\Omega_{\rm K}\) is estimated by the single-bubble profiles in ref. [46] as \(\Omega_{\rm K}\equiv\kappa\alpha/(1+\alpha)\), where \(\kappa\) is an efficiency factor that depends on \(\alpha\) and \(\xi_{w}\). We omit the comparison of \(\Omega_{\rm K}\) found in the sound shell model with that of ref. [46] since we focus on the GW production in the current work. This relation will be explored in an accompanying paper [85], see also the discussion of refs. [26; 27; 24; 28; 29; 30].
### UETC of the anisotropic stress
We consider the UETC of the anisotropic stresses \(E_{\Pi}\), defined in Eq. (21), under the stationary assumption of Eq. (39). Introducing the normalization of Eqs. (41-43) one obtains
\[k\,E_{\Pi}(\tau_{1},\tau_{2},k)\] \[\qquad\simeq 2\,\bar{w}^{2}\,K^{3}\left(\frac{\Omega_{\rm K}}{{ \cal K}}\right)^{2}{\cal C}\,\zeta_{\Pi}(\tau_{-},K), \tag{44}\]
Figure 1: Time-independent component of the normalized velocity field UETC spectrum \(\zeta_{\rm kin}(k)\equiv E_{\rm kin}(k)/E_{\rm kin}^{*}\) (see Eq. (39)), \(E_{\rm kin}^{*}\) being the maximum value of the spectrum. Numerical results are obtained according to the sound shell model [28], as described in ref. [85], for the phase transition strength parameter \(\alpha=0.1\) and a range of wall velocities \(\xi_{w}\in[0.1,0.9]\). The results are computed in the cases of exponential (black) and simultaneous (red) bubble nucleations [28]. Vertical dashed lines indicate the wave numbers, \(k_{\rm kin}^{\rm peak}\), where the maxima, \(E_{\rm kin}^{*}\), are reached. Their numerical values are given in Table 1.
where, following ref. [42], we have defined
\[\mathcal{C}\,\zeta_{\Pi}(\tau_{-},K)=\int_{0}^{\infty}P^{2}\zeta_{ \mathrm{kin}}(P)\,\cos(Pc_{\mathrm{s}}\,k_{*}\,\tau_{-})\,\,\mathrm{d}P\\ \int_{-1}^{1}(1-z^{2})^{2}\,\frac{\zeta_{\mathrm{kin}}(\tilde{P})} {\tilde{P}^{4}}\cos(\tilde{P}c_{\mathrm{s}}\,k_{*}\,\tau_{-})\,\mathrm{d}z\,, \tag{45}\]
and used the notation \(P\equiv p/k_{*}=pR_{*}\) and \(\tilde{P}\equiv\tilde{p}/k_{*}=\tilde{p}R_{*}\). The constant \(\mathcal{C}\) is the value of \(\zeta_{\Pi}\) in the limit \(K\to 0\) (see Table 1 for benchmark values),
\[\mathcal{C}=\frac{16}{15}\int_{0}^{\infty}\frac{\zeta_{\mathrm{kin}}^{2}(K)}{ K^{2}}\,\mathrm{d}K\,. \tag{46}\]
so that \(\zeta_{\Pi}\to 1\) in this limit. The spectral shape is therefore encoded in \(\zeta_{\Pi}\). We note that, as discussed in the previous section, the UETC of the velocity field in this limit should be taken from Eq. (40), so it does not only depend on the time difference \(\tau_{-}\) when \(k\ll 1/(2c_{\mathrm{s}}\delta\tau_{\mathrm{fin}})\).
At equal times, Eq. (45) becomes
\[\mathcal{C}\,\zeta_{\Pi}(K)=\int_{0}^{\infty}P^{2}\zeta_{\mathrm{ kin}}(P)\,\mathrm{d}P\\ \times\int_{-1}^{1}(1-z^{2})^{2}\,\frac{\zeta_{\mathrm{kin}}(\tilde {P})}{\tilde{P}^{4}}\,\mathrm{d}z, \tag{47}\]
where \(\zeta_{\Pi}(K)\leq 1\) is a monotonically decreasing function, shown in Fig. (2) for the benchmark phase transitions of Fig. (1). This condition can be understood from the derivative of \(\mathcal{C}\zeta_{\Pi}\) with respect to \(K\),
\[\mathcal{C}\,\partial_{K}\zeta_{\Pi}(K)=\int_{0}^{\infty}P^{2} \zeta_{\mathrm{kin}}(P)\,\mathrm{d}P\int_{-1}^{1}(1-z^{2})^{2}\\ \times\bigg{[}\zeta_{\mathrm{kin}}^{\prime}(\tilde{P})-\frac{4 \zeta_{\mathrm{kin}}(\tilde{P})}{\tilde{P}}\bigg{]}\frac{K-Pz}{\tilde{P}^{4}} \,\mathrm{d}z. \tag{48}\]
We find that the term in square brackets is always negative if \(\zeta_{\mathrm{kin}}(\tilde{P})\propto\tilde{P}^{n}\) with \(n\leq 4\) at all \(\tilde{P}\), which is indeed the case. The second term \(K-Pz\) is positive for most of the integration range since it becomes negative only when \(z>K/P\). Since \(1-z^{2}\) is symmetric in \(z\), then the final integral is _almost_ always negative, unless the term in the square bracket,
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline type & \(\xi_{w}\) & \(10^{4}E_{\mathrm{kin}}^{*}/R_{*}\) & \(K_{\mathrm{kin}}^{\mathrm{peak}}\) & \(\mathcal{K}\) & \(10^{2}\Omega_{\mathrm{K}}\) & \(\mathcal{C}\) & \(K_{\mathrm{GW}}\) & \((K^{3}\zeta_{\Pi})_{\mathrm{peak}}\) & \(K_{1}\) & \(K_{2}\) & \(b\) & \(\alpha_{1}\) & \(\alpha_{2}\) \\ \hline exp & 0.1 & 26.8 & 1.15 & 7.49 & 2.0 & 1.21 & 2.03 & 0.90 & 1.18 & 2.39 & 0.34 & 0.76 & 1.22 \\ exp & 0.2 & 25.7 & 1.28 & 5.42 & 1.4 & 0.94 & 2.39 & 1.36 & 1.59 & 2.39 & 1.06 & 0.66 & 1.33 \\ exp & 0.3 & 21.6 & 1.53 & 5.52 & 1.2 & 0.80 & 3.01 & 2.41 & 1.98 & 3.00 & 0 & 0.67 & 1.10 \\ exp & 0.4 & 16.0 & 1.80 & 8.43 & 1.4 & 0.76 & 4.01 & 4.93 & 1.99 & 6.70 & 0 & 0.70 & 1.30 \\ exp & 0.5 & 10.3 & 2.02 & 21.95 & 2.3 & 0.75 & 8.44 & 13.97 & 2.26 & 12.81 & 0.36 & 0.73 & 1.31 \\ exp & 0.6 & 5.3 & 2.28 & 58.88 & 3.1 & 0.75 & 20.33 & 68.80 & 2.79 & 26.94 & 0.78 & 0.68 & 1.08 \\ exp & 0.7 & 3.2 & 2.21 & 88.82 & 2.9 & 0.73 & 56.63 & 102.37 & 3.42 & 91.89 & 0.42 & 0.49 & 1.27 \\ exp & 0.8 & 5.3 & 2.05 & 22.98 & 1.2 & 0.72 & 9.53 & 14.09 & 1.63 & 11.94 & 1.47 & 1.81 & 0.39 \\ exp & 0.9 & 5.1 & 2.04 & 10.90 & 0.6 & 0.68 & 6.36 & 9.09 & 2.33 & 10.66 & 0 & 0.69 & 1.38 \\ exp & 0.99 & 4.5 & 2.04 & 7.95 & 0.4 & 0.66 & 4.99 & 7.36 & 2.20 & 7.82 & 0 & 0.73 & 1.49 \\ \hline sim & 0.1 & 18.2 & 2.59 & 10.43 & 1.9 & 0.44 & 3.42 & 7.51 & 1.74 & 3.81 & 0.92 & 1.41 & 2.34 \\ sim & 0.2 & 19.0 & 2.82 & 7.08 & 1.3 & 0.31 & 4.03 & 10.85 & 2.09 & 4.04 & 0.93 & 1.34 & 2.13 \\ sim & 0.3 & 16.1 & 3.29 & 7.29 & 1.2 & 0.26 & 5.07 & 18.45 & 2.56 & 4.77 & 1.29 & 1.39 & 1.25 \\ sim & 0.4 & 11.6 & 3.64 & 11.11 & 1.3 & 0.25 & 6.76 & 35.29 & 3.53 & 10.50 & 0 & 0.92 & 2.39 \\ sim & 0.5 & 7.2 & 3.85 & 30.66 & 2.2 & 0.25 & 16.95 & 102.40 & 4.20 & 20.63 & 0.33 & 0.85 & 2.95 \\ sim & 0.6 & 3.7 & 4.16 & 84.35 & 3.1 & 0.25 & 40.79 & 528.01 & 5.36 & 44.62 & 0.75 & 0.71 & 2.15 \\ sim & 0.7 & 2.3 & 4.20 & 123.32 & 2.8 & 0.24 & 113.65 & 718.45 & 7.15 & 154.60 & 0.29 & 0.45 & 2.86 \\ sim & 0.8 & 3.8 & 4.06 & 31.14 & 1.2 & 0.23 & 16.07 & 97.56 & 3.14 & 23.38 & 1.02 & 1.70 & 0.71 \\ sim & 0.9 & 3.6 & 4.12 & 14.92 & 0.5 & 0.22 & 10.72 & 63.57 & 4.15 & 16.72 & 0 & 0.88 & 2.74 \\ sim & 0.99 & 3.2 & 4.13 & 10.84 & 0.4 & 0.22 & 8.41 & 52.26 & 3.96 & 12.37 & 0 & 0.96 & 2.66 \\ \hline \end{tabular}
\end{table}
Table 1: Numerical values of the amplitudes and peak frequencies that characterize the spectra of the velocity field (columns 3 to 6) and of GWs (columns 7 to 9), within the sound shell model for exponential and simultaneous types of nucleation [85; 28]. The bubble wall velocities \(\xi_{w}\) correspond to the benchmark phase transitions shown in Fig. (1), with \(\alpha=0.1\). The parameters in the last five columns determine the fit of \(K^{3}\zeta_{\Pi}(K)\) in Eq. (50).
once multiplied by \(P^{2}\zeta_{\rm kin}(P)\), has a larger contribution when \(K/P<1\) and \(K/P<z<1\) than in the rest of the range, but this is not the case for any of the evaluated spectra (see Fig. (2)).
At intermediate \(K\), \(\zeta_{\Pi}\) strongly depends on the specific spectral shape of the velocity power spectrum \(\zeta_{\rm kin}(K)\), and it requires numerical evaluation of the integral in Eq. (47). However, in the asymptotic limit \(K\to\infty\), indicated by a \(\infty\) superscript, Eq. (47) becomes
\[\zeta_{\Pi}^{\infty}=\frac{\zeta_{\rm kin}^{\infty}}{K^{4}}\frac{\int_{0}^{ \infty}P^{2}\zeta_{\rm kin}(P)\,{\rm d}P}{\int_{0}^{\infty}\frac{\zeta_{\rm kin }^{2}(P)}{P^{2}}\,{\rm d}P}. \tag{49}\]
Therefore, if the kinetic spectrum decays as \(\zeta_{\rm kin}^{\infty}\sim K^{-b}\), then \(\zeta_{\Pi}^{\infty}\) decays as \(K^{-b-4}\). However, since \(P\) is integrated from \(0\) to \(\infty\), it can become of the same order as \(K\) and the power law decay \(K^{-b-4}\) might not be reached exactly. In particular, we find \(\zeta_{\Pi}^{\infty}\sim K^{-5}\), which is close to the estimated \(K^{-6}\) slope, for the benchmark kinetic spectra (see Fig. (2), where dashed lines correspond to the fit in Eq. (50), with an exact \(K^{-5}\) decay).
We find in Sec. (VI) that the final GW spectrum is proportional to \(K^{3}\zeta_{\Pi}\) in the limit of short duration of the GW sourcing, \(\delta\tau_{\rm fin}/R_{*}\ll 1\). For longer duration, the GW spectrum can deviate with respect to \(K^{3}\zeta_{\Pi}\) by a factor \(\tilde{\Delta}\) (see Sec. (VI)). In any case, the GW spectrum approximately peaks at \(K_{\rm GW}\), defined as the wave number where \(K^{3}\zeta_{\Pi}\) takes its maximum value \((K^{3}\zeta_{\Pi})_{\rm peak}\). The value of \(K_{\rm GW}\) depends on how steep is the negative slope of \(\zeta_{\Pi}\) when it starts to decay around \(K\sim\mathcal{O}(1)\): it therefore requires numerical evaluation of \(\zeta_{\Pi}\) using Eq. (47). We give in Table 1 the numerical values of \(K_{\rm GW}\) and \((K^{3}\zeta_{\Pi})_{\rm peak}\).
Due to the double peak structure of \(\zeta_{\rm kin}(K)\), which appears when the wall velocity \(\xi_{w}\) approaches \(c_{s}\), an appropriate fit for \(K^{3}\zeta_{\Pi}\) is a smoothed doubly broken power law,
\[K^{3}\zeta_{\Pi}(K)=\frac{K^{3}\big{[}1+(K/K_{1})^{(3-b)\alpha_{1}}\big{]}^{- \frac{1}{\alpha_{1}}}}{\big{[}1+(K/K_{2})^{(2+b)\alpha_{2}}\big{]}^{\frac{1}{ \alpha_{2}}}}\, \tag{50}\]
where \(K_{1,2}\) are the wave number breaks, \(b\) is the intermediate slope, and \(\alpha_{1,2}\) are parameters that determine the smoothness of the transition between slopes. At low \(K\), we fix \(\zeta_{\Pi}=1\), as desired, and at large \(K\), we fix \(\zeta_{\Pi}^{\infty}\sim K^{-5}\). We note that, in general, \(K_{2}\) is not necessarily equal to \(K_{\rm GW}\). We show the corresponding values of \(K_{1,2}\), \(b\), and \(\alpha_{1,2}\), found for the benchmark phase transitions of Fig. (1) in Table 1. We note that some \(\zeta_{\Pi}\) are already well approximated by a single broken power law since they do not present a double peak structure, especially for \(\xi_{w}\lesssim 0.5\) and \(\xi_{w}\gtrsim 0.8\).
The exact values of the amplitude \((K^{3}\zeta_{\Pi})_{\rm peak}\), the frequency breaks \(K_{1,2}\), and the intermediate slopes highly depend on the specific spectral shape of the velocity power spectrum \(\zeta_{\rm kin}\). According to the sound shell model, refs. [26; 28] proposed that the two peaks are determined by the inverse mean size of the bubbles, \(1/R_{*}\), and the inverse sound shell thickness, \(1/(R_{*}\Delta_{w})\), where \(\Delta_{w}=|\xi_{w}-c_{\rm s}|/c_{\rm s}\). Similar dependencies are found in numerical simulations [24; 25; 27; 29; 30]. We explore the relations between the phase transition parameters and the shape of the anisotropic stresses, which will ultimately impact the GW spectrum, in an accompanying paper [85]. In the following, we study the spectral shape of GWs once we know the spectral shape of \(\zeta_{\Pi}\), shown in Fig. (2) for a set of benchmark phase transitions.
Figure 2: Normalized spectrum of the anisotropic stresses, \(\zeta_{\Pi}\), for the kinetic spectra of the benchmark phase transitions shown in Fig. (1), in the case of exponential nucleation, computed numerically (solid lines) compared to the fit from Eq. (50) (dashed lines). \(\zeta_{\Pi}\) is multiplied by \(K^{3}\) since this is the relevant contribution to the resulting GW spectrum (see Sec. (VI)). The stars correspond to \(K_{\rm GW}\), where \(K^{3}\zeta_{\Pi}\) is maximum (see values in Table 1).
Low wave number tail of the GW spectrum from sound waves
In this section, we study the amplitude of the GW spectrum analytically, by evaluating its low-frequency limit \(k\to 0\). We do not assume flat spacetime but consider an expanding Universe. Following the sound shell model [28], we adopt the stationary assumption of Eq. (39), assuming its validity down to \(k\to 0\), see discussion in Sec. (III.2). The source is assumed to be stationary but still characterized by a finite lifetime, \(\delta\tau_{\rm fin}\). Note that this might introduce a spurious effect in the final GW spectrum due to the sharp cutoff of the integrals in time [34]. However, we deem this not important, given the good agreement of the GW spectrum evaluated semi-analytically following the sound shell model with the one from numerical simulations [28; 26]. The study of the GW spectrum at all \(k\) is presented in Sec. (VI).
In Sec. (IV.1), we start by collecting the results of Sec. (III) to evaluate the GW spectrum, and comment on the consequences of the expansion of the Universe. We find in Sec. (IV.2) that the GW spectrum follows a \(k^{3}\) scaling at low \(k\): this is expected from previous analyses, both analytical [60] and numerical [89; 30], but it is in contradiction with the findings of the original sound-shell model of ref. [28], which obtains instead that, at scales larger than the peak, the GW spectrum goes as \(k^{9}\). In Sec. (IV.3), we reproduce the calculation of ref. [28] and show that the \(k^{9}\) behavior is recovered only when one makes an assumption for the UETC that is, however, only justified under certain conditions that do not hold in the \(k\to 0\) limit. We therefore claim that the \(k^{3}\) scaling is the correct one in the low-\(k\) limit.
Moreover, we find in Sec. (IV.2) that the GW amplitude in the \(k\to 0\) limit is proportional to \(\ln^{2}(1+\delta\tau_{\rm fin}\mathcal{H}_{*})\). This factor becomes quadratic in the source duration parameter \(\delta\tau_{\rm fin}\mathcal{H}_{*}\) when one ignores the expansion of the Universe, i.e., in the limit \(\delta\tau_{\rm fin}\mathcal{H}_{*}\ll 1\). A similar quadratic dependence has also been found in the numerical analysis of ref. [37] for acoustic turbulence, as well as for (magneto)hydrodynamical ((M)HD) vortical turbulence, both analytically [33; 34; 42] and numerically [37; 38; 39; 40; 41; 42; 43]. However, this result is in contradiction with the linear dependence in the source duration usually assumed for stationary UETCs [28; 31; 32; 33; 34; 24; 35; 90]. In particular, a linear growth is assumed for sound waves in analytical (see, e.g., refs. [28; 49; 58; 80; 83; 91; 92; 6]) and numerical (see, e.g., refs. [24; 25; 27; 29; 30; 57]) studies. We investigate this issue in Sec. (V). We show that the linear growth of ref. [28], and the suppression factor \(\Upsilon=1-1/(\tau_{\rm fin}\mathcal{H}_{*})\) of ref. [59] for an expanding Universe, are valid for stationary processes _only_ under specific assumptions [33], which are equivalent to those used in refs. [28; 59]. We show that these assumptions do not hold in the \(k\to 0\) limit. Therefore, the causality tail, proportional to \(k^{3}\), is also proportional to \(\ln^{2}(1+\delta\tau_{\rm fin}\mathcal{H}_{*})\).
In Sec. (V.2), we extend our analysis to a stationary Gaussian UETC (cf. Kraichnan decorrelation [31; 32; 34; 43; 93]) to show, within a general framework, when the aforementioned assumptions hold. We find that this occurs when \(k\tau_{c}\gg 1\), where \(\tau_{c}\) is a characteristic time of the process (e.g., \(\delta\tau_{\rm fin}\) in the sound shell model). Hence, if \(\delta\tau_{\rm fin}/R_{*}\gg 1\), the slope of the GW spectrum around its spectral peak, \(k_{\rm kin}^{\rm peak}\sim 1/R_{*}\), is well described under these assumptions. As discussed in Sec. (VI.2), this limit corresponds to low fluid velocities and correspondingly weak first-order phase transitions.
Indeed, in Sec. (VI), we extend our analysis to all \(k\) and we show that, even though the causality tail is proportional to \(k^{3}\) and follows a quadratic growth with \(\delta\tau_{\rm fin}\), the amplitude around the peak can present a steep slope approaching the \(k^{9}\) scaling, and can follow a linear growth with \(\delta\tau_{\rm fin}\), when \(\delta\tau_{\rm fin}/R_{*}\gg 1\). Hence, at frequencies \(k\gg 1/\delta\tau_{\rm fin}\), the GW spectrum can be approximately described by the calculation of refs. [28; 59], reproduced in App. (B). Including the expansion of the Universe, the quadratic \((\delta\tau_{\rm fin}\mathcal{H}_{*})^{2}\) and linear \(\delta\tau_{\rm fin}\mathcal{H}_{*}\) dependence become respectively \(\ln^{2}(1+\delta\tau_{\rm fin}\mathcal{H}_{*})\) and \(\Upsilon\).
### GW spectrum in the sound shell model
We adopt the stationary assumption of Eq. (39) and combine Eqs. (15) and (21) to find the GW spectrum today. After averaging over fast oscillations in time, it becomes
\[\Omega_{\rm GW}(\delta\tau_{\rm fin},k)=3\,k^{3}\bar{w}^{2}\int_ {-1}^{1}\!\left(1-z^{2}\right)^{2}\,\mathrm{d}z\\ \times\int_{0}^{\infty}\!\!\mathrm{d}p\,\frac{p^{2}}{\bar{p}^{4}} \,E_{\rm kin}(p)E_{\rm kin}(\tilde{p})\,\Delta(\delta\tau_{\rm fin},k,p, \tilde{p})\,. \tag{51}\]
Note that Eq. (51) gives the present-day GW spectrum, i.e., the observable we are generally interested in. While the source is still active, the GW spec
trum would depend not only on the source duration \(\delta\tau_{\rm fin}\equiv\tau_{\rm fin}-\tau_{*}\), but also on the absolute time \(\tau\). During the production phase in the early Universe, in fact, the dependence on \(\tau\) cannot be averaged out. We present this case in App. (A), which is particularly relevant when one compares it with the results of numerical simulations: depending on the wave number span and on the duration of the simulation, it is often required to take into account the residual dependence on \(\tau\) of the GW spectrum, instead of on \(\delta\tau_{\rm fin}\) only.
The function \(\Delta\) in Eq. (51) contains the integral over times \(\tau_{1}\) and \(\tau_{2}\) of the Green's functions and the time dependence of the stationary UETC,
\[\Delta(\delta\tau_{\rm fin},k,p,\tilde{p})\equiv\int_{\tau_{*}}^{ \tau_{\rm fin}}\!\!\frac{\mathrm{d}\tau_{1}}{\tau_{1}}\int_{\tau_{*}}^{\tau_{ \rm fin}}\!\!\frac{\mathrm{d}\tau_{2}}{\tau_{2}}\\ \times\cos(pc_{\rm s}\tau_{-})\cos(\tilde{p}c_{\rm s}\tau_{-}) \cos(k\tau_{-})\,. \tag{52}\]
The product of cosines can be expressed as
\[\cos(pc_{\rm s}\tau_{-})\cos(\tilde{p}c_{\rm s}\tau_{-})\cos(k \tau_{-})\\ =\frac{1}{4}\sum_{m,n=\pm 1}\cos(\hat{p}_{mn}\tau_{-})\,, \tag{53}\]
where we have defined \(\hat{p}_{mn}\equiv(p+m\tilde{p})\,c_{\rm s}+nk\). We separate the time dependencies using
\[\cos(\hat{p}_{mn}\tau_{-})=\cos(\hat{p}_{mn}\tau_{2})\cos(\hat{p }_{mn}\tau_{1})\\ +\sin(\hat{p}_{mn}\tau_{2})\sin(\hat{p}_{mn}\tau_{1}), \tag{54}\]
so that the integrals over \(\tau_{1}\) and \(\tau_{2}\) yield
\[\Delta(\delta\tau_{\rm fin},k,p,\tilde{p})=\sum_{m,n=\pm 1}\Delta_{mn}( \delta\tau_{\rm fin},\hat{p}_{mn}), \tag{55}\]
where we have defined the function
\[\Delta_{mn}(\delta\tau_{\rm fin},\hat{p}_{mn})\\ =\!\frac{1}{4}\Big{[}\Delta{\rm Ci}^{2}(\tau_{\rm fin},\hat{p}_ {mn})+\Delta{\rm Si}^{2}(\tau_{\rm fin},\hat{p}_{mn})\Big{]}, \tag{56}\]
and
\[\Delta{\rm Ci}(\tau,p)\equiv{\rm Ci}\big{(}p\tau\big{)}-{\rm Ci} \big{(}p\tau_{*}\big{)}\,, \tag{57}\] \[\Delta{\rm Si}(\tau,p)\equiv{\rm Si}\big{(}p\tau\big{)}-{\rm Si} \big{(}p\tau_{*}\big{)}\,. \tag{58}\]
Even though \(\Delta_{mn}\) is an intermediate function, which needs to be integrated over \(p\) and \(z\) to obtain the GW spectrum (see Eq. (51)), it is still very useful to study its behavior as a function of both \(\delta\tau_{\rm fin}\mathcal{H}_{*}\) and \(\hat{p}_{mn}/\mathcal{H}_{*}\). In Fig. (3) we show \(\Delta_{mn}\) as a function of \(\delta\tau_{\rm fin}\mathcal{H}_{*}\) for different fixed values of \(\hat{p}_{mn}/\mathcal{H}_{*}\).
In the limit \(\hat{p}_{mn}\ll\mathcal{H}_{*}\), the functions \(\Delta{\rm Ci}\to\ln(\tau_{\rm fin}\mathcal{H}_{*})\) and \(\Delta{\rm Si}\to 0\), such that \(\Delta\to\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})=\ln^{2}(1+\delta\tau_{\rm fin} \mathcal{H}_{*})\) (see Fig. (3)). This limit is very relevant: we show in Sec. (IV.2) that, indeed, this logarithmic scaling with the source duration holds also for the GW spectrum in the \(k\to 0\) limit.
If the duration of the production of GWs from sound waves is short, \(\delta\tau_{\rm fin}\mathcal{H}_{*}\ll 1\), the expansion of the Universe can be neglected. As a consequence, \(\tau\approx 1/\mathcal{H}_{*}\) in Eq. (4), and the factor \(1/(\tau_{1}\tau_{2})\) in the integrand of Eq. (52) becomes constant, \(\mathcal{H}_{*}^{2}\). In this case, we obtain the solution for a flat (non-expanding) Universe,
\[\Delta_{mn}^{\rm flat}(\delta\tau_{\rm fin},\hat{p}_{mn})=\\ \frac{1-\cos\!\big{[}(\hat{p}_{mn}/\mathcal{H}_{*})(\mathcal{H}_ {*}\delta\tau_{\rm fin})\big{]}}{2\left(\hat{p}_{mn}/\mathcal{H}_{*}\right)^{ 2}}. \tag{59}\]
Since \(\delta\tau_{\rm fin}\mathcal{H}_{*}\ll 1\), one has \(\Delta^{\rm flat}\to(\delta\tau_{\rm fin}\mathcal{H}_{*})^{2}\) from Eq. (59), suggesting that the GW spectrum grows quadratically in \(\delta\tau_{\rm fin}\). This quadratic scaling also holds for an expanding Universe, since the same limit can be found from Eq. (56): for \(\hat{p}_{mn}\ll\mathcal{H}_{*}\) and \(\delta\tau_{\rm fin}\mathcal{H}_{*}\ll 1\), one has \(\Delta\to\ln^{2}(1+\delta\tau_{\rm fin}\mathcal{H}_{*})\to(\delta\tau_{\rm fin} \mathcal{H}_{*})^{2}\). These behaviors for a flat and an expanding Universe are shown in Fig. (3) and are due to the asymptotic limits of the cosine and sine integral functions, as pointed out in refs. [33; 42].
### Low-frequency limit
In the previous section, we have shown that the function \(\Delta_{mn}\), given in Eqs. (56) and (59) respectively for an expanding and a flat Universe, depends logarithmically on the duration of the source, \(\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})\), for small values of \(\hat{p}_{mn}/\mathcal{H}_{*}\). In this section, we compute explicitly the GW spectrum in the limit \(k\to 0\), and confirm that the GW spectrum inherits the same logarithmic dependence at large scales. We also show how the \(k^{3}\) scaling, expected from causality [88], appears in this limit, instead of the \(k^{9}\) scaling found in ref. [28].
In the low frequency limit \(k\to 0\), \(\tilde{p}\to p\) and \(\hat{p}_{mn}\to(p+m\tilde{p})\,c_{\rm s}\). The latter becomes \(0\) for \(m=-1\) and \(2pc_{\rm s}\) for \(m=1\). Therefore, the \(z\)-dependence in Eq. (51) is reduced only to the function \((1-z^{2})^{2}\)
and the GW spectrum becomes
\[\lim_{k\to 0}\Omega_{\rm GW}(\delta\tau_{\rm fin},k)=\\ 3\,\bar{w}^{2}\,k^{3}\,\frac{16}{15}\int_{0}^{\infty}\frac{E_{ \rm kin}^{2}(p)}{p^{2}}\Delta_{0}(\delta\tau_{\rm fin},p)\,\mathrm{d}p. \tag{60}\]
This expression already shows an important result: the GW spectrum scales with \(k^{3}\) in the limit \(k\to 0\), since the integral in Eq. (60) does not depend on \(k\). We defer the comparison of this result to the \(k^{9}\) scaling found in ref. [28] to Sec. (IV.3). There, we demonstrate that a simplifying approximation of \(\Delta\) used in ref. [28] leads to an additional dependence of \(\Delta_{0}\) on \(k\). However, this approximation does not apply in the \(k\to 0\) limit, invalidating the \(k^{9}\) behaviour at large scales. In the following, we rather focus on the dependence of \(\Omega_{\rm GW}\) with the source duration \(\delta\tau_{\rm fin}\).
The \(\Delta_{0}\) function that appears in Eq. (60) corresponds to \(\Delta\), given in Eq. (55), in the \(k\to 0\) limit:
\[\Delta_{0}(\delta\tau_{\rm fin},p)=\lim_{k\to 0}\Delta(\delta\tau_{\rm fin },k,p,\tilde{p})\\ =\frac{1}{2}\Bigl{[}\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})+\Delta \mathrm{Ci}^{2}(\tau_{\rm fin},2pc_{\rm s})\\ +\Delta\mathrm{Si}^{2}(\tau_{\rm fin},2pc_{\rm s})\Bigr{]}, \tag{61}\]
which, for a flat (non-expanding) Universe, reduces to
\[\Delta_{0}^{\rm flat}(\delta\tau_{\rm fin},p)=\lim_{k\to 0} \Delta^{\rm flat}(\delta\tau_{\rm fin},k,p,\tilde{p})\\ =\frac{1}{2}\Bigg{[}\left(\delta\tau_{\rm fin}\mathcal{H}_{*} \right)^{2}+\frac{\sin^{2}\bigl{(}pc_{\rm s}\delta\tau_{\rm fin}\bigr{)}}{ \bigl{(}pc_{\rm s}/\mathcal{H}_{*}\bigr{)}^{2}}\Bigg{]}. \tag{62}\]
We find in Eq. (61) a first term, \(\frac{1}{2}\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})\), independent of \(p\), and a second term that depends on \(p\) and will enter the integral over \(p\) in Eq. (60). We can parameterize the dependence of the GW amplitude with \(\delta\tau_{\rm fin}\) by defining a weighted average of the
function \(\Delta_{0}\) with the spectral function \(\zeta_{\rm kin}\),
\[\tilde{\Delta}_{0}(\delta\tau_{\rm fin},R_{*})\] \[\qquad=\frac{\int_{0}^{\infty}\frac{\zeta_{\rm kin}^{2}(K)}{K^{2}} \,\Delta_{0}(\delta\tau_{\rm fin},K/R_{*})\ {\rm d}K}{\int_{0}^{\infty}\frac{\zeta_{\rm kin}^{2}(K)}{K^{2}}\,{ \rm d}K}\, \tag{63}\]
where we have used the normalized quantities \(\zeta_{\rm kin}(K)=E_{\rm kin}(K)/E_{\rm kin}^{*}\) and \(K\equiv k/k_{*}=kR_{*}\), defined in Sec. (III.3). Introducing Eq. (63) into Eq. (60), and using the normalization of Sec. (III.3) for the UETC of the anisotropic stresses, we find
\[\lim_{K\to 0}\Omega_{\rm GW}(\delta\tau_{\rm fin},K)\] \[\qquad\qquad= 3\,\bar{w}^{2}\,K^{3}\!\left(\frac{\Omega_{\rm K}}{\mathcal{K}} \right)^{2}\mathcal{C}\,\tilde{\Delta}_{0}(\delta\tau_{\rm fin},R_{*}), \tag{64}\]
where \(E_{\rm kin}^{*}=\Omega_{\rm K}R_{*}/\mathcal{K}\), see Eq. (42). Since the dimensionless kinetic power spectrum \(\zeta_{\rm kin}\) is peaked at \(K_{\rm kin}^{\rm peak}=k_{\rm kin}^{\rm peak}R_{*}\sim\mathcal{O}(1)\) (see Fig. (1)), we can approximate it as \(\zeta_{\rm kin}(K)\sim\delta(K-1)\) in the integrals of Eq. (63). Under this assumption, \(\tilde{\Delta}_{0}(\delta\tau_{\rm fin})\to\Delta_{0}(\delta\tau_{\rm fin},1)\), showing that \(\tilde{\Delta}_{0}/\Delta_{0}\) characterizes the deviations with respect to a delta-peaked kinetic power spectrum. We can now study the dependence of \(\tilde{\Delta}_{0}\) with \(\delta\tau_{\rm fin}\) under this approximation,
\[\tilde{\Delta}_{0}(\delta\tau_{\rm fin},\,R_{*})\sim\frac{1}{2} \Bigg{\{}\!\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})+ \tag{65}\] \[\left[{\rm Ci}\left(\frac{2c_{s}}{\mathcal{H}_{*}R_{*}}+2c_{s} \frac{\delta\tau_{\rm fin}}{R_{*}}\right)-{\rm Ci}\left(\frac{2c_{s}}{ \mathcal{H}_{*}R_{*}}\right)\right]^{2}+\] \[\left[{\rm Si}\left(\frac{2c_{s}}{\mathcal{H}_{*}R_{*}}+2c_{s} \frac{\delta\tau_{\rm fin}}{R_{*}}\right)-{\rm Si}\left(\frac{2c_{s}}{ \mathcal{H}_{*}R_{*}}\right)\right]^{2}\Bigg{\}}\,.\]
If \(c_{s}\delta\tau_{\rm fin}/R_{*}\ll 1\), from the expansion of the Ci and Si functions one gets:
\[\tilde{\Delta}_{0}(c_{s}\delta\tau_{\rm fin}/R_{*}\ll 1) \sim\frac{1}{2}\left\{\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})+( \delta\tau_{\rm fin}\mathcal{H}_{*})^{2}\right\}\] \[\sim(\delta\tau_{\rm fin}\mathcal{H}_{*})^{2}\,, \tag{66}\]
where the last estimate holds when \(\delta\tau_{\rm fin}\mathcal{H}_{*}\ll R_{*}\mathcal{H}_{*}\leq 1\). In the opposite limit \(c_{s}\delta\tau_{\rm fin}/R_{*}\gg 1\), the contribution from the Ci and Si functions is oscillating and decaying, and therefore subdominant. One expects therefore:
\[\tilde{\Delta}_{0}(c_{s}\delta\tau_{\rm fin}/R_{*}\gg 1)\sim\frac{1}{2}\ln^{2}( \tau_{\rm fin}\mathcal{H}_{*})\,. \tag{67}\]
The asymptotic behavior at the extremes of the quantity \(c_{s}\delta\tau_{\rm fin}R_{*}\) is confirmed by Fig. (4), showing the function \(\tilde{\Delta}_{0}\), compensated by the logarithmic dependence \(\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})\), for the benchmark phase transitions of Fig. (1). One can appreciate that almost all curves collapse into one, apart from small deviations, which are due to the specific spectral shape of the kinetic spectra \(\zeta_{\rm kin}(k)\) around their peak \(k_{\rm kin}^{\rm peak}\), see Fig. (1). In all the cases considered, the dependence of the GW amplitude with \(\delta\tau_{\rm fin}\) given in Eq. (IV.2) can be expressed as \(A\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})\), where \(A\) monotonically decreases around \(\delta\tau_{\rm fin}\sim(c_{s}k_{\rm kin}^{\rm peak})^{-1}\) between its asymptotic values, i.e., from 1 to 0.5, as it can be derived approximately from Eqs. (65-67). The exact variation of the function \(A\) at intermediate \(\delta\tau_{\rm fin}\) requires numerical computation of Eq. (63) for the specific spectral shape \(\zeta_{\rm kin}\). However, we show in Fig. (4) that the empirical fit
\[A\approx\frac{1}{2}\bigg{[}1+\exp\!\left(-0.35\left[\delta\tau_{\rm fin}\,c_{s }\,k_{\rm kin}^{\rm peak}\right]^{1.5}\right)\bigg{]}, \tag{68}\]
gives an accurate estimate for the evaluated phase transitions.
By taking the low-frequency limit \(k\to 0\) of the GW spectrum, we have found that its amplitude depends quadratically on the duration of the GW source when \(\delta\tau_{\rm fin}\) is short, compared to the Hubble time, and it is proportional to \(\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})\) in general (see Fig. (4)). As previously discussed, this result is
in contradiction with the linear dependence on the source duration usually assumed for the GW spectrum from sound waves, and from stationary processes in general. We come back to this aspect in Sec. (V) and extend the discussion to a generic class of stationary UETC. In the next Sec. (IV.3), we instead analyze the \(k\)-dependence of the GW spectrum at large scales, and provide insight on the reasons why a \(k^{9}\) behavior is found in refs. [28; 59], as opposed to the usual causal \(k^{3}\) scaling given in Eq. (64).
### \(k^{3}\) vs \(k^{9}\) tilt in the low-frequency limit
In Sec. (IV.2), we have found that the GW spectrum scales proportional to \(k^{3}\) when \(k\to 0\) (see Eq. (64)). The causal \(k^{3}\) branch is, in general,7 in agreement with numerical simulations of sound waves [24; 25; 29; 30; 89] and the recent analytical derivation of ref. [60]. However, as mentioned above, it is in contradiction with the \(k^{9}\) scaling reported in the sound shell model [26; 28; 59]. To understand the \(k^{3}\) vs \(k^{9}\) discrepancy of the GW spectrum, we reproduce in this section the calculation of refs. [28; 59]. Since ref. [28] considers that the duration of the phase transition is short and hence ignores the expansion of the Universe,8 we will consider the limit \(\delta\tau_{\rm fin}\mathcal{H}_{*}\ll 1\) when comparing our results to theirs.
Footnote 7: This agreement is not always completely clear, since the numerical studies of the IR regime of the GW spectrum are computationally challenging, see discussion in refs. [30; 42].
Footnote 8: We note, however, that even if the duration of the phase transition \(\beta^{-1}\) is short with respect to the Hubble time, the duration of the GW sourcing from sound waves can last longer, until the plasma develops non-linearities or until the sound waves are completely dissipated [6; 94].
In order to reproduce the calculations of ref. [28], we need to compute the growth rate of \(\Omega_{\rm GW}\) with the duration of GW production, \(\delta\tau_{\rm fin}\). Note that in ref. [28], the growth rate \(\dot{\Delta}\) [see their Eq.(3.38)] is defined instead as the derivative of \(\Delta\) with respect to cosmic time \(t\). We consider this interpretation to be misleading since \(\Delta\), see Eq. (52), has been defined after averaging over time and it is valid _only_ in the free propagation regime at late times \(\tau\gg\tau_{\rm fin}\), e.g., at present time \(\tau_{0}\) (see Eqs. (12) and (15)). We show in App. (A) the correct time-dependence of \(\Delta\) with conformal time during the phase of GW production, \(\tau<\tau_{\rm fin}\). We note that using Eq. (55) during the sourcing could lead to wrong results when comparing, for example, to the results from numerical simulations [24; 25; 27; 29; 30].
As a present-day observable, we are then interested in the dependence of the GW spectrum with the source duration \(\delta\tau_{\rm fin}\), so we define \(\Delta^{\prime}\equiv\partial_{\tau_{\rm fin}}\Delta\). Note that in the current work, we distinguish \(\Delta^{\prime}\) from \(\dot{\Delta}\equiv\partial_{t_{\rm fin}}\Delta\) since we take into account the expansion of the Universe.
We start by performing the change of variables \(\{\tau_{1,2}\}\to\{\tau_{\pm}\}\) in the integral of Eq. (52), with \(\tau_{+}\equiv(\tau_{1}+\tau_{2})/2\) and \(\tau_{-}\equiv\tau_{2}-\tau_{1}\). The limits of integration can be found in the following way. Since \(\tau_{1},\tau_{2}\in[\tau_{*},\tau_{\rm fin}]\), one has that \(\tau_{+}\in[\tau_{*},\tau_{\rm fin}]\), and
\[\tau_{-}=2(\tau_{+}-\tau_{1})=2(\tau_{2}-\tau_{+}), \tag{69}\]
which, since \(\tau_{1},\tau_{2}\in[\tau_{*},\tau_{\rm fin}]\), leads to the limits
\[\tau_{-}\in 2\ [-\delta\tau_{+}^{\rm fin},\delta\tau_{+}]\ \vee\ \tau_{-}\in 2\ [-\delta\tau_{+},\delta\tau_{+}^{\rm fin}], \tag{70}\]
where we have defined \(\delta\tau_{+}\equiv\tau_{+}-\tau_{*}\) and \(\delta\tau_{+}^{\rm fin}\equiv\tau_{\rm fin}-\tau_{+}\). Combining both limits we see that, when \(\tau_{+}\leq\tau_{m}\equiv\frac{1}{2}(\tau_{*}+\tau_{\rm fin})\), the limits of integration for \(\tau_{-}\) are \(\tau_{-}\in 2\,[-\delta\tau_{+},\delta\tau_{+}]\), and when \(\tau_{+}>\tau_{m}\), then \(\tau_{-}\in 2\,[-\delta\tau_{+}^{\rm fin},\delta\tau_{+}^{\rm fin}]\); see Fig. (5). Hence, the change of variables \(\{\tau_{1,2}\}\to\{\tau_{\pm}\}\) in Eq. (52) yields
\[\Delta_{mn}(\delta\tau_{\rm fin},\hat{p}_{mn})=\int_{\tau_{*}}^{ \tau_{\rm fin}}\frac{{\rm d}\tau_{1}}{2\tau_{1}}\int_{\tau_{*}}^{\tau_{\rm fin} }\frac{{\rm d}\tau_{2}}{2\tau_{2}}\cos(\hat{p}_{mn}\tau_{-})\] \[\ \
limits of the integral are taken to be \(\tau_{+}\in[0,\tau_{\rm fin}]\) and \(\tau_{-}\in[-2\tau_{+},2\tau_{+}]\). This corresponds to integrating over \(\tau_{-}\) according to the blue limits in Fig. (5) in all the range \(\tau_{+}\in[0,\tau_{\rm fin}]\), hence including the areas of integration that are not allowed, limited by the red lines. The inclusion of the upper and lower right triangles, out of the limits denoted by the red lines, leads to \(\tau_{2}>\tau_{\rm fin}\) and \(\tau_{1}>\tau_{\rm fin}\), respectively. Using these limits of integration, the explicit dependence of the limits of the integral over \(\tau_{-}\) on the source duration \(\tau_{\rm fin}\) is ignored, leading to the wrong value of \(\Delta^{\prime}\), as we show below.
We now compute the growth rate \(\Delta^{\prime}\) from Eq. (71),9
Footnote 9: The derivative can be taken from the the integral over \(\tau_{1,2}\) or from the integral over \(\tau_{\pm}\). The dependence of the integration limits on \(\tau_{\rm fin}\) is simpler in the former case after using the correct limits (see Fig. (5)) but both computations lead to the same result.
\[\Delta^{\prime}_{mn}(\delta\tau_{\rm fin},\hat{p}_{mn})\\ =\frac{1}{2\,\tau_{\rm fin}}\Big{[}\,\cos\bigl{(}\hat{p}_{mn} \tau_{\rm fin}\bigr{)}\Delta{\rm Ci}(\tau_{\rm fin},\hat{p}_{mn})\\ +\,\sin\bigl{(}\hat{p}_{mn}\tau_{\rm fin}\bigr{)}\Delta{\rm Si}( \tau_{\rm fin},\hat{p}_{mn})\Big{]}\, \tag{73}\]
which can also be directly found from Eq. (56). Ignoring the expansion of the Universe we get, from either Eq. (59) or Eq. (72),
\[\Delta^{\rm flat^{\prime}}_{mn}(\delta\tau_{\rm fin},\hat{p}_{mn})=\mathcal{H }_{*}\,\frac{\sin\bigl{(}\hat{p}_{mn}\delta\tau_{\rm fin}\bigr{)}}{2\,(\hat{p }_{mn}/\mathcal{H}_{*})}. \tag{74}\]
If one omits the dependence on \(\tau_{\rm fin}\) in the integration limits over \(\tau_{-}\) in Eq. (72), the solution to Eq. (3.38) of ref. [28] is found, which is equivalent to Eq. (74) with an extra factor 2 in the \(\sin\) function, \(\sin(2\hat{p}_{mn}\delta\tau_{\rm fin})\).
Figure (6) shows the dependence of the growth rate \(\Delta^{\prime}_{mn}\), given in Eqs. (73) and (74), on the combined momenta \(\hat{p}_{mn}\) for different values of the GW source duration \(\delta\tau_{\rm fin}\). We observe that, as \(\delta\tau_{\rm fin}\) increases, \(\Delta^{\prime}_{mn}\) becomes more confined around \(\hat{p}_{mn}\to 0\). Taking into account the relation between the sinc and the Dirac \(\delta\) function,
\[\delta(x)=\lim_{a\to 0}\frac{\sin(\pi x/a)}{\pi x}, \tag{75}\]
ref. [28] approximates Eq. (74) in the \(1/\delta\tau_{\rm fin}\to 0\) limit, i.e., for large GW duration,10
Footnote 10: Equation (76) is equivalent to Eq. (3.39) in ref. [28] after taking into account the extra factor of 2 (see text below Eq. (74)) and that their \(\Delta\) is defined with an extra \(\frac{1}{2}\) factor (see their Eq. (3.36) compared to Eq. (52)).
\[\lim_{\delta\tau_{\rm fin}\mathcal{H}_{*}\to\infty}\Delta^{\rm flat^{\prime}}_ {mn}(\hat{p}_{mn})=\mathcal{H}_{*}\frac{\pi}{2}\delta\bigl{(}\hat{p}_{mn}/ \mathcal{H}_{*}\bigr{)}. \tag{76}\]
This approximation is used in refs. [28; 59] to simplify the calculation of the integral in Eq. (51). However, it is not required to compute the GW amplitude, as we have done in Sec. (IV.2) in the \(k\to 0\) limit and we extend in Sec. (VI) to all \(k\). We show in the following that it is precisely this assumption the one that leads to the linear growth with the source
duration and the \(k^{9}\) scaling of the GW spectrum when \(k\to 0\). We also show in Sec. (V) that this assumption is equivalent to the one usually taken for stationary processes that decay very quickly with the time difference \(\tau_{-}\)[31, 32, 33, 34, 35, 43, 93]. However, the UETC found in the sound shell model is a periodic function in \(\tau_{-}\) (see Eq. (39)) so this assumption is, in general, not justified.
On the other hand, when \(k\) is large and oscillations over \(\tau_{-}\) become very rapid, this assumption might become justified. In such circumstances, as we show in Sec. (VI), the expression computed in App. (B), based on this approximation, can describe the GW spectrum in the regime \(k\gg 1/\delta\tau_{\rm fin}\) and, in particular, around the spectral peak if \(\delta\tau_{\rm fin}/R_{*}\gg 1\). One can understand this by noting that the limit leading to Eq. (76) is equivalent to considering \(\hat{p}_{mn}\delta\tau_{\rm fin}\to\infty\). At low and moderate \(k\), in general, this limit does not hold, since \(p\) and \(\tilde{p}\) are integrated from \(0\) to \(\infty\). However, when \(k\delta\tau_{\rm fin}\to\infty\), this assumption is valid, since \(\Delta_{mn}\) is symmetric in \(\hat{p}_{mn}\) and then \(\hat{p}_{mn}\delta\tau_{\rm fin}\to\infty\).
We note that this approximation is _only_ valid when \(k\delta\tau_{\rm fin}\) becomes sufficiently large, not when \(\tau\) is large, since \(\Delta^{\prime}\) is the growth with respect to \(\delta\tau_{\rm fin}\). The assumption of asymptotically large \(\delta\tau_{\rm fin}\) is not justified for GW production from sound waves and it is in contradiction with the assumption that the expansion of the Universe can be ignored, so expansion becomes relevant in this regime.
In general, we find that \(\Delta^{\prime}_{mn}\) is widely spread along a broad range of \(\hat{p}_{mn}\neq 0\) for short and moderate (around one Hubble time) duration (see Fig. (6)). Its maximum value at \(\hat{p}_{mn}=0\) is \(\frac{1}{2}\,\delta\tau_{\rm fin}\mathcal{H}_{*}\), as can be inferred from Eq. (74). For longer sourcing duration, one can no longer ignore the expansion of the Universe and we find that the growth rate at \(\hat{p}_{mn}=0\) decreases to \(\frac{1}{2}\,\ln(\tau_{\rm fin}\mathcal{H}_{*})/\tau_{\rm fin}\) (see blue and red dots in Fig. (6)). Therefore, the integral over \(p\) and \(z\) in Eq. (51) includes non-negligible contributions from \(\hat{p}_{mn}\neq 0\) that are being ignored if one uses Eq. (76).
We now explicitly show how this approximation affects the limit \(k\to 0\) of the GW spectrum, computed in Sec. (IV.2). Denoting \(\Omega^{\prime}_{\rm GW}\equiv\partial_{\tau_{\rm fin}}\Omega_{\rm GW}\) as the growth rate of the GW spectrum and using Eqs. (64) and (76), we find,
\[\lim_{K\to 0}\Omega^{\prime}_{\rm GW}(\delta\tau_{\rm fin},K)= \frac{8\pi}{5}\,R_{*}\,\bar{w}^{2}\,K^{3}\left(\frac{\Omega_{\rm K}}{\cal K} \right)^{2}\] \[\times\,\int_{0}^{\infty}\frac{\zeta_{\rm kin}^{2}(P)}{P^{2}} \delta(K-2Pc_{8})\,{\rm d}P, \tag{77}\]
where, following ref. [28], we have further assumed that \(\hat{p}_{mn}\) only cancels when \(m=1\) and \(n=-1\)
and \(\Delta^{\prime}_{0}\to\frac{1}{2}\mathcal{H}_{*}\pi\,\delta(2c_{\rm s}p-k)=\frac{1 }{2}(\mathcal{H}_{*}R_{*})\pi\,\delta(2c_{\rm s}P-K)\). We note that this additional assumption does not take into account the case \(m=-1\), such that \(p+m\tilde{p}=0\), which always holds when \(k\to 0\). From Eq. (62), one can see that the \(m=-1\) case would include in \(\Omega^{\prime}_{\rm GW}\) a linear term in \(\delta\tau_{\rm fin}\) that would lead to the quadratic scaling and a function proportional to \(k^{3}\) when \(k\to 0\) that would dominate over the \(k^{9}\) term. Therefore, the \(k^{9}\) scaling appears due to the inclusion of a \(k\) dependence in the integral over \(p\) of Eq. (77) and due to neglecting the leading-order term when \(k\to 0\). The extension of Eq. (77) to all values of \(k\) is shown in App. (B).
The integral in Eq. (77) is directly computed by substituting \(P=K/(2c_{\rm s})\),
\[\lim_{K\to 0}\Omega^{\prime}_{\rm GW}(\delta\tau_{\rm fin},K)=\] \[\qquad\qquad\qquad\frac{32\pi}{5}\,c_{\rm s}^{2}\,R_{*}\,\bar{w}^{ 2}\,K\left(\frac{\Omega_{\rm K}}{\mathcal{K}}\right)^{2}\zeta_{\rm kin}^{2}(K). \tag{78}\]
Therefore, we find that the GW spectrum in the \(k\to 0\) regime is proportional to \(K\zeta_{\rm kin}^{2}(K)\). For irrotational fields, \(\zeta_{\rm kin}\sim K^{a}\) with \(a\geq 4\) (see Sec. (III.2)) and, for the kinetic spectra of the benchmark phase transitions of Fig. (1), we find \(a=4\). Therefore, one finds that the GW spectrum is proportional to \(K^{2a+1}=K^{9}\) in this case, as argued in ref. [28]. As discussed above, this result is a consequence of the assumption that the growth rate \(\Delta^{\prime}\) can be approximated as a Dirac \(\delta\) function, see Eq. (76). The calculation using the stationary UETC found in the sound shell model (see Eq. (39)) in the \(k\to 0\) limit has been presented in Sec. (IV.2), where we recover the low frequency scaling with \(k^{3}\) as expected by causality; see Eq. (64). We note that this result also holds when one takes into account the expansion of the Universe.
## V GW production from stationary processes
In Secs. (IV.1) and (IV.2), we have shown that the dependence of the GW amplitude in the \(k\to 0\) limit with the source duration is \(\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})\), which becomes quadratic when the duration is short. In addition, we have shown in Sec. (IV.3) that the approximation of the growth rate \(\Delta^{\prime}\), given in Eqs. (73) and (74), as a Dirac \(\delta\) function (see Eq. (76)), taken in refs. [28; 59], leads to the conclusion that the GW spectrum is proportional to \(k^{9}\) in the \(k\to 0\) limit. We have found that this scaling is actually \(k^{3}\) as expected from causality and found in numerical studies. In addition, from Eq. (78) we directly find that since \(\Omega^{\prime}_{\rm GW}\) does not depend on \(\tau_{\rm fin}\), then \(\Omega_{\rm GW}=\delta\tau_{\rm fin}\,\Omega_{\rm GW}\), which corresponds to the assumed linear growth with the source duration. Hence, this result is also a consequence of the aforementioned assumption, which does not hold in the \(k\to 0\) limit. We note that this is not necessarily the case at all \(k\), however, as we show in Sec. (VI), it can give an accurate estimate of the GW amplitude at \(k\gg 1/\delta\tau_{\rm fin}\).
To understand the transition from the quadratic to the linear growth of \(\Omega_{\rm GW}\) with \(\delta\tau_{\rm fin}\) as \(k\) increases, let us now generalize our study to a velocity UETC described by an arbitrary stationary process, \(E_{\rm kin}(\tau_{1},\tau_{2},k)=E_{\rm kin}(k)\,f(\tau_{-},k)\), where \(f(\tau_{-},k)=\cos(kc_{\rm s}\tau_{-})\) in the sound shell model. In the general case, the function \(\Delta\) in Eq. (52) is
\[\Delta(\delta\tau_{\rm fin},k,p,\tilde{p})=\int_{\tau_{*}}^{\tau _{\rm fin}}\!\!\frac{{\rm d}\tau_{1}}{\tau_{1}}\int_{\tau_{*}}^{\tau_{\rm fin}} \!\!\frac{{\rm d}\tau_{2}}{\tau_{2}}\] \[\qquad\qquad\times f(\tau_{-},p)f(\tau_{-},\tilde{p})\,\cos(k\tau _{-})\,. \tag{79}\]
Following ref. [33], we take the change of variable \(\tau_{2}\to\tau_{-}\),
\[\Delta(\delta\tau_{\rm fin},k,p,\tilde{p})=\int_{\tau_{*}}^{\tau _{\rm fin}}\!\!\frac{{\rm d}\tau_{1}}{\tau_{1}}\int_{\tau_{*}-\tau_{1}}^{\tau_{ \rm fin}-\tau_{1}}\!\!\frac{{\rm d}\tau_{-}}{\tau_{-}+\tau_{1}}\] \[\qquad\qquad\times f(\tau_{-},p)f(\tau_{-},\tilde{p})\,\cos(k\tau _{-})\,. \tag{80}\]
The characteristic linear growth of stationary processes [24; 28; 31; 32; 59] is found when inverting the order of integration in Eq. (IV.3) is allowed [33]. This is justified if the function \(f(\tau,k)\) becomes negligibly small in the range \(\tau<\tau_{*}-\tau_{1}\) and \(\tau>\tau_{\rm fin}-\tau_{1}\) for all \(\tau_{1}\in(\tau_{*},\tau_{\rm fin})\), such that the integral over \(\tau_{-}\) can be extended to \(\tau_{-}\in(-\infty,\infty)\) and the limits of integration do not depend any longer on \(\tau_{1}\)[33]. This condition can be justified, for example, when the UETC decays as a Gaussian function (e.g., Kraichan decorrelation [93]) as we show in Sec. (V.2). On the other hand, when \(f(\tau,k)\) is a periodic function (e.g, the UETC found in the sound shell) this condition is, in general, unjustified, unless \(f\) becomes sufficiently oscillatory in \(\tau_{-}\). This is the case in the \(k\tau_{-}\to\infty\) limit, where the limits of integration already include several oscillations, so that extending the limits to \(\pm\infty\) does not affect drastically the result of the integral. This approximation holds in the regime assumed in ref. [28], \(k\delta\tau_{\rm fin}\to\infty\), see discussion in
Sec. (IV.3). Under this assumption, we find
\[\Delta(\delta\tau_{\rm fin},k,p,\tilde{p})=\int_{\tau_{*}}^{\tau_{ \rm fin}}\!\!\frac{{\rm d}\tau_{1}}{\tau_{1}}\int_{-\infty}^{\infty}\!\frac{{\rm d }\tau_{-}}{\tau_{-}+\tau_{1}}\] \[\times f(\tau_{-},p)f(\tau_{-},\tilde{p})\,\cos(k\tau_{-})\,. \tag{81}\]
In particular, if one ignores the expansion of the Universe, the integral over \(\tau_{1}\) directly yields the linear dependence with \(\delta\tau_{\rm fin}\),
\[\Delta^{\rm flat}(\delta\tau_{\rm fin},k,p,\tilde{p})={\cal H}_{*} ^{2}\delta\tau_{\rm fin}\] \[\times\,\int_{-\infty}^{\infty}\,{\rm d}\tau_{-}f(\tau_{-},p)f( \tau_{-},\tilde{p})\cos(k\tau_{-}). \tag{82}\]
### Sound-shell model UETC
When we use the UETC found in the sound shell model (see Eq. (39)), the solution to Eq. (IV.1) is
\[\Delta^{\rm flat}_{mn}(\delta\tau_{\rm fin},\hat{p}_{mn})= \frac{{\cal H}_{*}^{2}\delta\tau_{\rm fin}}{4}\int_{-\infty}^{ \infty}\cos(\hat{p}_{mn}\tau_{-})\,{\rm d}\tau_{-}\] \[=\frac{\pi}{2}\delta\tau_{\rm fin}{\cal H}_{*}\,\delta\big{(}\hat {p}_{mn}/{\cal H}_{*}\big{)}, \tag{83}\]
which is equivalent to Eq. (76). Therefore, we find that, as mentioned above, the assumption to find Eq. (IV.1) and the one used in ref. [28] to find Eq. (76) lead to the same result.
Including the expansion of the Universe, there is still a dependence on \(\tau_{1}\) in the integral over \(\tau_{-}\) in Eq. (IV.1). With the change of variables \(\{\tau_{1,2}\}\to\{\tau_{\pm}\}\), the term due to the Universe expansion is \(\tau_{1}\tau_{2}=\tau_{+}^{2}-\frac{1}{4}\tau_{-}^{2}\) (see Eq. (71)). In ref. [59], see their Eq. (IV.1), the term \(\tau_{1}\tau_{2}\) is approximated as \(\tau_{1}\tau_{2}\sim\tau_{+}^{2}\). This is equivalent11
Footnote 11: Reference [59] uses an integral equivalent to Eq. (71) with an inverted order of integration,
\[\Delta(\delta\tau_{\rm fin},k,p,\tilde{p})=\] \[=2\int_{0}^{\delta\tau_{\rm fin}}{\rm d}\tau_{-}\int_{\tau_{+} \frac{1}{2}\tau_{-}}^{\tau_{\rm fin}-\frac{1}{2}\tau_{-}}\frac{f(\tau_{-},p)f (\tau_{-},\tilde{p})\cos(k\tau_{-})}{\tau_{+}^{2}-\frac{1}{4}\tau_{-}^{2}}\,{ \rm d}\tau_{+}\,\]
where the limits of integration are shown in Fig. (5), and we have used the change of variable \(\tau_{-}\to-\tau_{-}\) in the range \(\tau_{-}\in(-\delta\tau_{\rm fin},0)\) to find the same integral as the one in the range \((0,\delta\tau_{\rm fin})\). We find Eq. (IV.1) by taking the limits over \(\tau_{+}\) to \((-\infty,\infty)\) and neglecting \(\tau_{-}^{2}\) compared to \(4\tau_{+}^{2}\).
the dimension of the dependence on \(\tau_{-}\) in the term \(1/(\tau_{-}+\tau_{1})\) of Eq. (IV.1), yielding
\[\Delta(\delta\tau_{\rm fin},k,p,\tilde{p})={\cal H}_{*}\Upsilon( \delta\tau_{\rm fin})\] \[\times\int_{-\infty}^{\infty}{\rm d}\tau_{-}f(\tau_{-},p)f(\tau_{ -},\tilde{p})\,\cos(k\tau_{-})\,, \tag{84}\]
where \(\Upsilon\) is the suppression factor defined in ref. [59] and used in recent literature to account for the expansion of the Universe in the GW production from sound waves [80; 83; 92],
\[\Upsilon(\delta\tau_{\rm fin})=\int_{\tau_{*}}^{\tau_{\rm fin}}\frac{{\rm d} \tau_{1}}{{\cal H}_{*}\tau_{1}^{2}}=1-\frac{1}{\tau_{\rm fin}{\cal H}_{*}}. \tag{85}\]
This function reduces to the linear growth \(\Upsilon\to\delta\tau_{\rm fin}{\cal H}_{*}\) in the limit \(\delta\tau_{\rm fin}{\cal H}_{*}\ll 1\), yielding Eq. (IV.1) in the case of a flat (non-expanding) Universe. Again, substituting the UETC of Eq. (39) in Eq. (IV.1), one finds
\[\Delta_{mn}(\delta\tau_{\rm fin},\hat{p}_{mn})=\frac{\pi}{2}\Upsilon(\delta \tau_{\rm fin})\,\delta\big{(}\hat{p}_{mn}/{\cal H}_{*}\big{)}. \tag{86}\]
The results presented above are justified only in the asymptotic limit \(k\delta\tau_{\rm fin}\to\infty\), since this is the limit of validity of the assumptions introduced to invert the order of integration over \(\tau_{1}\) and \(\tau_{-}\) (or over \(\tau_{+}\) and \(\tau_{-}\)). In particular, these assumptions imply that the dependence of \(\Delta\) on \(\delta\tau_{\rm fin}\) is encoded solely in the suppression factor \(\Upsilon\) (see Eq. (IV.1)), which, in the limit of a short GW source, is linear, \(\Upsilon\sim\delta\tau_{\rm fin}{\cal H}_{*}\).
The calculation of the integral over \(\tau_{1}\) and \(\tau_{2}\), performed in Sec. (IV.2) in the \(k\to 0\) limit without any simplifying assumptions, leads, instead to a dependence with \(\tau_{\rm fin}\) characterized by \(\tilde{\Delta}\). This function is given in Eq. (63) in the \(k\to 0\) limit and even though it depends on the spectral shape, it is found to always be
\[\tilde{\Delta}_{0}(\delta\tau_{\rm fin})\simeq A\ln^{2}(\tau_{\rm fin}{\cal H}_{* }), \tag{87}\]
where \(A\in[0.5,1]\) (see Fig. (4) and Eq. (68)). Moreover, Eq. (87) reduces to \((\delta\tau_{\rm fin}{\cal H}_{*})^{2}\) when \(\delta\tau_{\rm fin}\) is short. Its extension to all \(k\) is studied in Sec. (VI), where we find that, when \(k\gg 1/\delta\tau_{\rm fin}\), the suppression factor \(\Upsilon\) can be found and if, in addition, the peak is in this regime \((\delta\tau_{\rm fin}/R_{*}\gg 1)\), then it is relevant to describe the GW spectrum around its peak.
### Kraichnan decorrelation
Let us consider a stationary process, described by a function \(f(\tau_{-},k)\), that does not decay fast enough in \(\tau_{-}\) out of the integration limits in Eq. (80), and does not include many periodic oscillations within the integration limits. We have argued that, in this case, the GW amplitude grows quadratically with \(\delta\tau_{\rm fin}\). To understand this result, we study the Kraichnan decorrelation [93], usually applied to the
study of turbulence [31, 32, 34, 35, 43], where \(f\) is a Gaussian function of \(\tau_{-}\),
\[f(\tau_{-},k)=\exp\Bigl{(}-\tfrac{1}{2}k^{2}v_{\rm sw}^{2}\tau_{-}^{2}\Bigr{)}, \tag{88}\]
where \(v_{\rm sw}(\tau_{1},\tau_{2},k)\) is the sweeping velocity [93]. We note that this function is a positive definite kernel only if \(v_{\rm sw}\) is a function of \(\tau_{1,2}\), breaking the stationary assumption [43], and otherwise it is not an adequate description of the velocity field UETC [34]. However, since we want to address the importance of the aforementioned assumptions for a generic stationary process qualitatively in the current work, we use Eq. (88) with a time-independent \(v_{\rm sw}\) for simplicity.
Using this UETC for the velocity field and taking the \(k\to 0\) limit (such that \(\tilde{p}\to p\)), Eq. (79) becomes
\[\Delta_{0}(\delta\tau_{\rm fin},p)=\int_{\tau_{*}}^{\tau_{\rm fin}}\!\frac{ \mathrm{d}\tau_{1}}{\tau_{1}}\int_{\tau_{*}}^{\tau_{\rm fin}}\!\frac{\mathrm{d }\tau_{2}}{\tau_{2}}e^{-p^{2}v_{\rm sw}^{2}\tau_{-}^{2}}. \tag{89}\]
The integrand is shown in Fig. (7).
We observe that for large \(p^{2}v_{\rm sw}^{2}\sim\mathcal{O}(10^{2})\), it is a good approximation to extend the integration limits to \(\tau_{-}\in(-\infty,\infty)\), while the same is not true at smaller \(p^{2}v_{\rm sw}^{2}\sim\mathcal{O}(1)\). In this case we find two limiting cases:
* if \(\delta\tau_{\rm fin}\ll 1/(pv_{\rm sw})\), we expand \(e^{-p^{2}v_{\rm sw}^{2}\tau_{-}^{2}}\sim 1\), since \(\tau_{-}\in[0,\delta\tau_{\rm fin}]\) (see footnote 11). Then Eq. (89) yields the duration dependence found for the UETC of the sound shell model in the \(k\to 0\) limit: \(\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})\);
* if \(\delta\tau_{\rm fin}\gg 1/(pv_{\rm sw})\), the approximation leading to Eq. (84) is justified, and we find the suppression factor \(\Upsilon\) in the \(k\to 0\) limit. As discussed above, this regime can also appear in the sound shell model when \(k\delta\tau_{\rm fin}\gg 1\).
Therefore, the resulting dependence of the GW amplitude with \(\delta\tau_{\rm fin}\) will change for different \(v_{\rm sw}\) and it might be a combination of the different modes since one needs to integrate Eq. (63) over \(p\) for the general time-dependence. In addition, as mentioned above, \(v_{\rm sw}\) is also a function of \(\tau_{1}\) and \(\tau_{2}\), to ensure the positivity of the UETC kernel [43].
We recover the previous result analytically when neglecting the expansion of the Universe,12
Footnote 12: In this case, one can find an analytical expression for any wave number \(k\), here avoided for the sake of brevity.
\[\Delta_{0}^{\rm flat}(\delta\tau_{\rm fin},p)/\mathcal{H}_{*}^{2} =\frac{\sqrt{\pi}}{pv_{\rm sw}}\delta\tau_{\rm fin}\operatorname{ Erf}\bigl{(}pv_{\rm sw}\delta\tau_{\rm fin}\bigr{)}\] \[-\frac{1-e^{-p^{2}v_{\rm sw}^{2}\delta\tau_{\rm fin}^{2}}}{p^{2}v _{\rm sw}^{2}}\,, \tag{90}\]
Figure 7: Integrand leading to the value of \(\Delta_{0}\) assuming Kraichnan decorrelation for \(p^{2}v_{\rm sw}^{2}=1\) (upper panel), \(10\) (middle), and \(100\) (lower), in the \(k\to 0\) limit.
where \(\text{Erf}(x)\) is the error function. Taking the limits \(\delta\tau_{\text{fin}}\ll 1/(pv_{\text{sw}})\) and \(\delta\tau_{\text{fin}}\gg 1/(pv_{\text{sw}})\), we find the two asymptotic behaviors mentioned above,
\[\Delta_{0}^{\text{flat}}(\delta\tau_{\text{fin}}pv_{\text{sw}}\ll 1 ) =(\delta\tau_{\text{fin}}\mathcal{H}_{*})^{2},\] \[\Delta_{0}^{\text{flat}}(\delta\tau_{\text{fin}}pv_{\text{sw}} \gg 1) =\frac{\sqrt{\pi}\delta\tau_{\text{fin}}\mathcal{H}_{*}}{pv_{\text{ sw}}/\mathcal{H}_{*}}. \tag{91}\]
Including the effect of the expansion of the Universe leads to the same short-duration regime, and the limit at large \(\delta\tau_{\text{fin}}pv_{\text{sw}}\) becomes
\[\Delta(\delta\tau_{\text{fin}}pv_{\text{sw}}\gg 1)=\frac{\sqrt{\pi}}{pv_{ \text{sw}}/\mathcal{H}_{*}}\Upsilon(\delta\tau_{\text{fin}}). \tag{92}\]
The two asymptotic limits are shown in Fig. (8), compared to Eq. (89) evaluated numerically. These results show how we can, in general, find both the quadratic and linear growth rates, depending on \(v_{\text{sw}}\), the specific value of \(k\) (even in the \(k\to 0\) limit), and the integrals over \(p\) and \(\tilde{p}\) performed to find the GW spectrum sourced by a stationary process.
## VI GW spectrum from sound waves: results and template
In Secs. (IV) and (V) we have studied the GW spectrum in the low-frequency limit \(k\to 0\), aiming to understand two characteristic features: the \(k^{3}\) scaling, and the amplitude evolution with respect to the duration of the source. The present section is dedicated to the study of the shape of the GW spectrum at all frequencies.
For a direct comparison of our results for sound waves to those for other sources, e.g., decaying vortical turbulence, we adopt a similar normalization as in ref. [42] (see also Sec. (III.3)).
### GW spectral shape
With Eq. (64) the GW spectrum can be expressed in terms of a normalized spectrum, \(\zeta_{\text{GW}}\),
\[\Omega_{\text{GW}}(\delta\tau_{\text{fin}},R_{*},K)=3\,\bar{w}^{2 }\,K^{3}\left(\frac{\Omega_{\text{K}}}{\mathcal{K}}\right)^{2}\!\mathcal{C}\] \[\qquad\qquad\times\,\tilde{\Delta}_{0}(\delta\tau_{\text{fin}},R_ {*})\,\zeta_{\text{GW}}(\delta\tau_{\text{fin}},K,R_{*})\,. \tag{93}\]
In order to describe the spectral modifications of \(\zeta_{\text{GW}}\) with respect to \(\zeta_{\text{II}}\), we introduce the function \(\tilde{\Delta}\equiv\zeta_{\text{GW}}/\zeta_{\text{II}}\). Then Eq. (93) becomes
\[\Omega_{\text{GW}}(\delta\tau_{\text{fin}},R_{*},K)=3\,\bar{w}^{2 }\,K^{3}\left(\frac{\Omega_{\text{K}}}{\mathcal{K}}\right)^{2}\mathcal{C}\, \zeta_{\text{II}}(K)\\ \times\tilde{\Delta}_{0}(\delta\tau_{\text{fin}},R_{*})\,\tilde{ \Delta}(\delta\tau_{\text{fin}},R_{*},K)\,. \tag{94}\]
\(\tilde{\Delta}\) generalizes Eq. (63) to all values of \(k\) and \(\delta\tau_{\text{fin}}\),
\[\tilde{\Delta}(\delta\tau_{\text{fin}},R_{*},K)\\ =\frac{1}{\mathcal{C}\,\zeta_{\text{II}}(K)\,\tilde{\Delta}_{0}( \delta\tau_{\text{fin}},R_{*})}\int_{0}^{\infty}\,\mathrm{d}PP^{2}\zeta_{\text{ kin}}(P)\\ \times\,\int_{-1}^{1}(1-z^{2})^{2}\frac{\zeta_{\text{kin}}(\tilde {P})}{\bar{P}^{4}}\Delta(\delta\tau_{\text{fin}},k,p,\tilde{p})\,\mathrm{d}z\,. \tag{95}\]
By construction we find that \(\tilde{\Delta}\to 1\), when \(\Delta\) does not depend on \(p\) nor \(k\), i.e., in the short-duration regime, or in the \(k\to 0\) limit.
Hence, the parameters that determine the modifications of \(\zeta_{\text{GW}}\) with respect to \(\zeta_{\text{II}}\) are the source duration \(\delta\tau_{\text{fin}}\), and the characteristic scale \(R_{*}=1/k_{*}\).
Depending on how \(k\) compares with the inverse source duration \(1/\delta\tau_{\text{fin}}\), the GW spectrum presents different behaviors.
In the regime where \(k\lesssim 1/\delta\tau_{\text{fin}}\), studied in Sec. (IV), \(\tilde{\Delta}\to 1\). The dependence of the GW spectrum on the source duration \(\delta\tau_{\text{fin}}\) is then fully encoded in \(\tilde{\Delta}_{0}=A\ln^{2}(\tau_{\text{fin}}\mathcal{H}_{*})\), with \(A\in[0.5,1]\), see Eq. (68). The amplitude in this regime does not depend on \(R_{*}\), whose dependence only appears through the self-similar \(K\equiv kR_{*}\). At the same time, the dependence on \(K\) survives in \(K^{3}\zeta_{\text{II}}\), which, as
Figure 8: Dependence of the GW amplitude in the \(k\to 0\) limit with the duration of the GW sourcing \(\delta\tau_{\text{fin}}\) for a Kraichnan decorrelation with \(pv_{\text{sw}}=12\) for a flat (red) and an expanding (blue) Universe. The two asymptotic limits are separated at \(\delta\tau_{\text{fin}}=1/(pv_{\text{sw}})\), showing the \((\delta\tau_{\text{fin}}\mathcal{H}_{*})^{2}\) scaling below this limit, and the suppression factor \(\Upsilon\) above the limit.
shown in Fig. (2), follows a broken-power law that can be fit using Eq. (50).13 The amplitude of the GW spectrum depends on the specific spectral shape of the kinetic spectrum via the constants \(\mathcal{K}\) and \(\mathcal{C}\) (see Eqs. (43) and (46)). Table 1 presents values for the benchmark phase transitions considered here.
Footnote 13: The peak structure in the sound shell model is simple or double, depending on the specific value of the wall velocity (see Fig. (1) and Table 1).
At wave numbers \(k>1/\delta\tau_{\rm fin}\), the approximation leading to \(\tilde{\Delta}\sim 1\) is no longer valid, and the function \(\tilde{\Delta}(K)\) depends on both, \(\delta\tau_{\rm fin}\) and \(R_{*}\). As a consequence, in this range, the GW spectrum shows a complex dependence on \(K\) and \(\delta\tau_{\rm fin}\) that deviates with respect to the simple \(K^{3}\zeta_{\Pi}\) causal growth. We expect the GW spectrum to transition from the causal branch at \(k\delta\tau_{\rm fin}\ll 1\), toward the spectrum found in refs. [28; 59] (see App. (B)), which is valid for \(k\delta\tau_{\rm fin}\gg 1\), as discussed in Secs. (IV) and (V). This transition among the two asymptotic limits is, _a priori_, unknown and requires a numerical evaluation of Eq. (95).
Numerical examples of the resulting normalized GW spectra, \(K^{3}\zeta_{\rm GW}\), are shown in Fig. (9) for the benchmark phase transitions of Fig. (1), and at different values of \(\delta\tau_{\rm fin}\) and \(R_{*}\). We find the predicted \(K^{3}\zeta_{\Pi}\) scaling when \(k<1/\delta\tau_{\rm fin}\), with the amplitude exactly given by Eq. (94) when setting \(\tilde{\Delta}=1\).
A more complex structure appears at \(k>1/\delta\tau_{\rm fin}\), where \(\tilde{\Delta}\equiv\zeta_{\rm GW}/\zeta_{\Pi}\) plays a major role. To underline some generic features, we show \(\tilde{\Delta}\) in Figure (10) at different \(\delta\tau_{\rm fin}\) and \(R_{*}\).
In the range \(1/\delta\tau_{\rm fin}\lesssim k<1/R_{*}\), we find \(\tilde{\Delta}\sim K^{-2}\), leading to the development of a linear GW spectrum in \(k\). A similar transition from a \(K^{3}\) to \(K\) slope in the GW spectrum is also found for vortical
(M)HD turbulence [37; 38; 39; 40; 41; 42; 43], and is analytically described by the constant-in-time approximation [42].
At larger \(k\), a steep growth, \(\Omega_{\rm GW}\sim K^{7}\), appears just below the peak of the spectrum. This result is close to the \(K^{9}\) growth found in ref. [28]. In fact, in this range, \(1/\delta\tau_{\rm fin}\ll k\lesssim 1/R_{*}\), motivating the assumption \(k\delta\tau_{\rm fin}\to\infty\), required to obtain the \(K^{9}\) spectrum; see discussion in Sec. (V). Note however that, when the source duration becomes a non-negligible fraction of a Hubble time, \(\delta\tau_{\rm fin}\mathcal{H}_{*}\gtrsim\mathcal{O}(10^{-1})\), the expansion of the Universe starts playing a significant role. In particular, it modifies not only the dependence of the GW spectrum on \(\delta\tau_{\rm fin}\) but also its spectral shape through \(\Delta\) in Eq. (95).
The peak amplitude of the GW spectrum, which we have previously estimated to be located at \(K_{\rm GW}\), where \(K^{3}\zeta_{\Pi}\) is maximum, is modified by \(\tilde{\Delta}\) when the \(k\lesssim 1/\delta\tau_{\rm fin}\) limit does not hold. We find that \(\tilde{\Delta}\) modifies the position of the GW peak roughly to \(K\approx 0.8\,K_{\rm GW}\) (see Fig. (9) and values in Table 1). In addition, \(\tilde{\Delta}\) adds a dependence of the GW amplitude on \(\delta\tau_{\rm fin}/R_{*}\), shown in Fig. (11). This modification at the peak is well approximated by the function \((1+\delta\tau_{\rm fin}/R_{*})^{-1}\). For the benchmark phase transitions, and the values of \(\delta\tau_{\rm fin}\) and \(R_{*}\) shown in Fig. (10), we estimate the accuracy of the fit within a factor of 5.
Around the peak, \(\tilde{\Delta}_{0}\,\tilde{\Delta}\) depends linearly on the suppression factor, \(\Upsilon\), and \(R_{*}\mathcal{H}_{*}\). This result agrees with the one derived in App. (B), following the approximation of refs. [28; 59], when \(\delta\tau_{\rm fin}/R_{*}\gg 1\), such that the peak \(1/R_{*}\) is within the \(k\delta\tau_{\rm fin}\gg 1\) regime. For an accurate prediction of the amplitude at the peak, we thus take this value into account and multiply it by the value where the function \(K^{3}\zeta_{\Pi}\) is maximal (see Table 1).
Finally, at large \(K>1\), we find that the GW spectrum decreases as \(1/K\) when compared to \(K^{3}\zeta_{\Pi}\). Since the latter scales as \(K^{-2}\) (see Fig. (2)), the GW spectrum decays as \(K^{-3}\) at large values of \(k\), which agrees with refs. [28; 29; 30].
To compare the resulting spectral shape of GWs to that of ref. [28], where the function \(\Delta\) is approximated by a Dirac delta function, we show in Fig. (12) the resulting GW spectra, obtained for a specific benchmark phase transition with \(\alpha=0.1\) and \(\xi_{w}=0.3\), for a range of \(R_{*}\) and \(\delta\tau_{\rm fin}\). The calculation of the GW spectra under the assumption of refs. [28; 59] is given in App. (B).
We show that the GW spectrum found in ref. [28] is a correct description for the bump around and above the peak when \(\delta\tau_{\rm fin}/R_{*}\) is sufficiently large (as described above), after taking into account the correction due to the expansion of the Universe [59]. The transition toward the GW spectrum in the "infinite duration" limit (given in App. (B)) is related to the one from the quadratic to linear growth that we have found in Sec. (V.2), since the approximation used to extend the limits of integration over \(\tau_{-}\) to \(\pm\infty\) in Eq. (80) is based on the assumption that \(k\delta\tau_{\rm fin}\to\infty\). However, additional linear and cu
Figure 10: Ratio \(\tilde{\Delta}\equiv\zeta_{\rm GW}/\zeta_{\Pi}\) for the benchmark phase transitions and parameters of Fig. (9). Line colors and styles are the same as those in Fig. (9).
bic regimes appear in \(\Omega_{\rm GW}\) at frequencies below the peak that were not found in refs. [59; 28] since the \(k\delta\tau_{\rm fin}\gg 1\) assumption does not hold in this range of frequencies. Moreover, when \(\delta\tau_{\rm fin}/R_{*}<1\), the peak is in the regime \(k_{*}<1/\delta\tau_{\rm fin}\), so that significant modifications of the GW spectrum may appear around the peak.
### Estimation of the source duration
Let us now discuss why the variables \(\delta\tau_{\rm fin}\) and \(R_{*}\) are not completely independent. The characteristic scale \(R_{*}\) is determined by the mean bubble separation, which depends on the characteristics of the phase transition via \(\beta\) and \(\xi_{w}\) (see relation below Eq. (33)).
The evaluation of \(\delta\tau_{\rm fin}\) requires further numerical studies to simulate the decay of the sound waves, as well as the development of turbulence. A first estimation of \(\delta\tau_{\rm fin}\) is the eddy turnover time, i.e., the time that it takes the plasma to develop nonlinearities, \(\delta\tau_{\rm nl}\sim R_{*}/\sqrt{\Omega_{\rm K}}\)[6], and it directly depends on \(R_{*}\). Setting \(\delta\tau_{\rm fin}\sim\delta\tau_{\rm nl}\) and \(\Omega_{\rm K}\sim 10^{-2}\) for the benchmark phase transitions with \(\alpha=0.1\) (see Table 1), we find \(\delta\tau_{\rm fin}/R_{*}\sim 10\). For this estimate, the condition \(\delta\tau_{\rm fin}/R_{*}\gg 1\) is valid, and the prescription of refs. [59; 28] gives a correct estimate of the amplitude around the peak. However, it fails at frequencies below the peak, as expected.
We show in Fig. (13) the GW spectrum found in the current work and compare it to the one given by Eq. (34), based on the assumptions of refs. [59; 28], when we set \(\delta\tau_{\rm fin}\sim 10R_{*}\). We find that, in this case, the suppression factor \(\Upsilon\) is justified to describe the growth rate with \(\tau_{\rm fin}\) at the peak. At frequencies below the peak, we find, in this case, that the linear growth with \(k\) is almost completely absent and the causality tail, proportional to \(k^{3}\), appears close to the peak, similar to the results of numerical simulations [30] and other analytical estimates [60]. However, for the exact dependence with \(\delta\tau_{\rm fin}\) of the full spectral shape, we need to use the prescription developed in the current work. In particular, we find that the causality tail grows proportional to \(\ln^{2}(\tau_{\rm fin}\mathcal{H}_{*})\).
### Present-time spectral amplitude
The present-time GW energy density spectrum, \(\Omega_{\rm GW}^{0}(f)\) is a result of the computed spectrum redshifted from its time of generation to the present day,
\[h^{2}\Omega_{\rm GW}^{0}(f)=\left(\frac{a_{*}}{a_{0}}\right)^{4 }\!\left(h\frac{H_{*}}{H_{0}}\right)^{2}\!\Omega_{\rm GW}(f)\] \[\simeq 1.6\times 10^{-5}\left(\frac{100}{g_{*}}\right)^{\frac{1}{3}} \!\Omega_{\rm GW}(f)\,, \tag{96}\]
where \(a_{*}\) and \(g_{*}\) correspond to the scale factor and the relativistic degrees of freedom at the time of GW generation, e.g., at the electroweak phase transition. \(h=H_{0}/(100\,{\rm km/s/Mpc})\) takes into account the uncertainties on the present-time Hubble rate. Frequencies can be obtained from \(k\) using the dispersion relation of GWs, \(2\pi f=k\), and redshifting the
mean-size of the bubbles \(R_{*}\) to the present day,
\[R_{*}^{-1} =\frac{H_{*}}{R_{*}\mathcal{H}_{*}}\frac{a_{*}}{a_{0}}\] \[\simeq\frac{1.65\times 10^{-5}\ \mathrm{Hz}}{R_{*}\mathcal{H}_{*}} \,\frac{T_{*}}{100\,\mathrm{GeV}}\bigg{(}\frac{g_{*}}{100}\bigg{)}^{\frac{1}{ 6}}. \tag{97}\]
## VII Conclusions
We have studied the GW production from sound waves in a first-order phase transition during radiation domination. Sound waves are expected to be the dominant contribution to the SGWB, unless the bubbles run away, the phase transition is supercooled,14 or the efficiency in generating turbulence from bubble collisions is large.
Footnote 14: In this case bubble collisions may represent the dominant contribution to the GW signal [49].
We adopt the framework of the sound shell model to estimate the UETC of the velocity field [26]. For the single-bubble velocity and energy density profiles, we follow the description of ref. [46] and present the details of our calculation in an accompanying paper [85]. The sound-shell model predicted a \(k^{9}\) growth of the spectrum at small frequencies \(k\), and a linear dependence on the source duration \(\delta\tau_{\mathrm{fin}}\) in ref. [28] that can be generalized to the suppression factor \(\Upsilon=1-1/(1+\delta\tau_{\mathrm{fin}}\mathcal{H}_{*})\) when including the effect of the expansion of the Universe [59]. With this work, we have found that their prescription holds only in the regime \(k\gg 1/\delta\tau_{\mathrm{fin}}\). We have addressed this issue and generalized their results to all frequencies.
Our results show that at small frequencies \(k\to 0\), the GW spectrum presents a causal tail, proportional to \(k^{3}\). The amplitude of this tail has a universal dependence on the physical parameters that describe the source. In particular, it is independent of \(R_{*}\), and it grows with the duration of the source as \(\ln^{2}(1+\delta\tau_{\mathrm{fin}}\mathcal{H}_{*})\), which yields a quadratic dependence when the source duration is short.
Around \(k\gtrsim 1/\delta\tau_{\mathrm{fin}}\), an intermediate linear spectrum, \(\Omega_{\mathrm{GW}}\sim k\), may appear, extending until a steep slope just below the peak takes over, which leads to the formation of a bump around the peak. When we estimate the duration of the GW sourcing as the time scale for the production of non-linearities in the plasma, we find that, for the benchmark phase transitions considered in this work with \(\alpha=0.1\)
Figure 13: GW spectrum as a function of \(kR_{*}\) for the benchmark phase transitions shown in Fig. (1). The results are shown for different values of \(R_{*}\mathcal{H}_{*}=\{0.001,0.01,0.1,1\}\) and taking \(\delta\tau_{\mathrm{fin}}=10R_{*}\), corresponding to the time expected to develop non-linearities for \(\Omega_{\mathrm{K}}\sim 10^{-2}\). For comparison, the grey lines correspond to the GW spectrum using the approximation of refs. [28; 59] (see Eq. (34)).
\(\delta\tau_{\rm fin}/R_{*}\sim 10\). In this case, the linear regime in \(\Omega_{\rm GW}\) is almost absent, and the GW spectrum soon develops the causal \(k^{3}\) tail at frequencies below the peak. When \(\delta\tau_{\rm fin}/R_{*}\) becomes larger, the intermediate linear regime extends between the peak and the causal tail. This bump is a characteristic sign of a GW spectrum sourced by sound waves, since this distinctive feature does not appear in the GW spectrum sourced by vortical turbulence [37; 38; 39; 40; 41; 42; 43]. A similar bump was previously found numerically for acoustic turbulence in ref. [37] and confirmed in ref. [89]. As long as the source duration is sufficiently large, \(\delta\tau_{\rm fin}/R_{*}\gg 1\), we find that the amplitude around the peak is well described by the approach of refs. [59; 28].
Our results reconcile the predictions of the sound shell model with the numerical simulations of ref. [30], where a cubic dependence of the GW spectrum at low \(k\) is also found. Furthermore, they are in agreement with the findings of ref. [89], where numerical simulations are also performed, supporting the theoretical results of the sound shell model.
We have presented a theoretical description of the origin of the linear and quadratic growth with \(\delta\tau_{\rm fin}\) that can appear when GWs are sourced by a general stationary process as, in the sound shell model, by a stationary UETC of the velocity field given by Eq. (39).
The resulting GW spectrum has been presented in a semi-analytical framework by separating each of the different contributions that can affect its final spectral shape and amplitude. Understanding each of the different contributions separately is important to test the validity of each of the underlying assumptions in future work. This framework allows for direct extensions of our results to include different models or assumptions.
We present the detailed calculation of the anisotropic stresses of the velocity field, following the sound shell model, in an accompanying paper [85]. We have also addressed the issue of causality that motivated the choice of initial conditions for sound waves in ref. [28], but we defer a detailed discussion of this issue to ref. [85].
Our work has consequences on the interpretation of current observations of pulsar timing arrays under the assumption that the QCD phase transition is of first order. There are several analyses in the literature that have used the \(k^{9}\) spectrum and the inclusion of a \(k^{3}\) tail could lead to significantly different constraints on the phase transition parameters. This is especially important if one considers the smallest frequency bins reported by the PTA collaborations, which are below the characteristic frequency of the QCD phase transition where the signal is expected to be dominated by the \(k^{3}\) tail or by the intermediate linear growth, \(k\). Even at frequencies right below the peak, we expect the \(k^{9}\) behavior to be shallower. Especially with the improvement of the PTA data in this range of frequencies expected in the next years, the study of the GW spectrum from sound waves with the presented modifications will become completely relevant.
Similarly, our model has implications for current estimations of the phase transition parameters that can be probed by LISA when one considers a first-order electroweak phase transition, since several analyses are currently using the \(k^{9}\) model for the GW signal.
At larger frequencies, our model can be used to test the potential observability of higher-energy phase transitions with next-generation ground-based detectors, like Einstein Telescope or Cosmic Explorer, and to put constraints on the current and forthcoming observing runs by the LIGO-Virgo-KAGRA collaborations, especially in view of the advent of improvements in their sensitivities.
###### Acknowledgements.
We are grateful to Jorinde van de Vis and Mikko Laine for useful discussions, and to Ramkishor Sharma, Jani Dahl, Axel Brandenburg, and Mark Hindmarsh for sharing their draft [89]. ARP is supported by the Swiss National Science Foundation (SNSF Ambizione grant 182044). CC and SP are supported by the Swiss National Science Foundation (SNSF Project Funding grant 212125). SP is supported by the Swiss National Science Foundation under grant 188712. ARP and SP acknowledge the hospitality of CERN, where part of this work has taken place.
## Appendix A Full time evolution of the GW spectrum
In this section, we compute the time evolution of the GW spectrum while the source is active, according to the sound shell model. The GW spectrum is usually averaged over oscillations with time, as we
are interested in its present time observable, i.e., at very late times \(\tau_{0}\gg\tau_{\rm fin}\). However, if the GW spectrum is compared with the results from simulations to, for example, test the validity of the sound shell model, it is required to compute its exact time evolution while the source is active at \(\tau<\tau_{\rm fin}\). The average over oscillations is then not well motivated and it could lead to wrong results. We note that one has to pay particular attention to this aspect when using Weinberg's formula as, for example, in refs. [29; 30], since this approach already assumes that the GWs have reached their free propagation regime at all \(k\), which can potentially lead to wrong results in the IR tail of the GW spectrum, and it does not allow to study their evolution with time.
We start with the GW spectrum, given by Eq. (12), and use the UETC of the anisotropic stresses of Eq. (23) with the stationary assumption for the velocity field UETC, see Eq. (39). We then find an expression analogous to that of Eq. (51) but in this case, the function \(\Delta\) is a time-dependent expression given as
\[\Delta(\tau,k,p,\tilde{p})\equiv 2\int_{\tau_{*}}^{\tau}\frac{ \mathrm{d}\tau_{1}}{\tau_{1}}\int_{\tau_{*}}^{\tau}\frac{\mathrm{d}\tau_{2}}{ \tau_{2}}\cos(c_{\rm s}p\tau_{-})\\ \times\cos(c_{\rm s}\tilde{p}\tau_{-})\cos k(\tau-\tau_{1})\cos k (\tau-\tau_{2}). \tag{63}\]
We can express the product of \(\cos\) as
\[\cos(c_{\rm s}p\tau_{-})\cos(c_{\rm s}\tilde{p}\tau_{-})=\frac{1}{2}\sum_{m= \pm 1}\cos(\hat{p}_{m}\tau_{-}), \tag{64}\]
with \(\hat{p}_{m}=c_{\rm s}(p+m\tilde{p})\). Then, using \(\cos k(\tau-\tau_{i})=\cos k\tau\cos k\tau_{i}+\sin k\tau\sin k\tau_{i}\) for \(i=1,2\), one gets,
\[\Delta(\tau,k,p,\tilde{p}) =\sum_{m=\pm 1}\Biggl{[}\left(\int_{\tau_{*}}^{\tau}\frac{ \mathrm{d}\tau_{1}}{\tau_{1}}\bigl{[}\cos k\tau\cos k\tau_{1}+\,\sin k\tau\sin k \tau_{1}\bigr{]}\cos(\hat{p}_{m}\tau_{1})\right)^{2}\\ +\biggl{(}\int_{\tau_{*}}^{\tau}\frac{\mathrm{d}\tau_{1}}{\tau_{ 1}}\bigl{[}\,\cos k\tau\cos k\tau_{1}+\sin k\tau\sin k\tau_{1}\bigr{]}\sin( \hat{p}_{m}\tau_{1})\biggr{)}^{2}\Biggr{]}\] \[=\frac{1}{4}\sum_{m,n=\pm 1}\biggl{[}\Delta\mathrm{Ci}^{2}(\tau, \hat{p}_{mn})+\Delta\mathrm{Si}^{2}(\tau,\hat{p}_{mn})\] \[+\cos 2k\tau\Bigl{(}\Delta\mathrm{Ci}(\tau,\hat{p}_{mn})\Delta \mathrm{Ci}(\tau,\hat{p}_{m,-n})+\Delta\mathrm{Si}(\tau,\hat{p}_{mn})\Delta \mathrm{Si}(\tau,\hat{p}_{m,-n})\Bigr{)}\biggr{]}\,, \tag{65}\]
where the functions \(\Delta\mathrm{Ci}_{mn}\) and \(\Delta\mathrm{Si}_{mn}\) have been defined in Eqs. (57) and (58). We note that if one uses Eq. (56) substituting \(\tau_{\rm fin}\to\tau\), Eq. (65) is not recovered, since the latter presents an additional term that is relevant during the phase of GW production. Hence, when comparing to numerical simulations, one should use Eq. (65) to study the validity of the stationary assumption for the UETC found in the sound shell model.
## Appendix B GW spectrum in the infinite duration approximation
In this section, we take the approximation of \(\Delta\) as a Dirac delta function (see Eq. (86)) that has been used in refs. [59; 28] to find the GW spectrum from sound waves in the sound shell model approximation. We have shown in Secs. (IV) and (V) that this assumption is not valid in the \(k\to 0\) limit and have presented the resulting GW spectrum in Sec. (VI), so we compare here what are the differences in the resulting spectral shape.
The GW spectrum, which we denote as HH19 (for Hindmarsh & Hijazi, 2019), is found substituting Eq. (86) into Eq. (51),
\[\Omega_{\rm GW}^{\rm HH19}(K)=\frac{3\pi}{2}K^{2}\,\Upsilon(\tau_ {\rm fin})\,\frac{\mathcal{H}_{*}R_{*}}{c_{\rm s}}\,\bar{w}^{2}\left(\frac{ \Omega_{\rm K}}{\mathcal{K}}\right)^{2}\\ \times\int_{0}^{\infty}P\,\zeta_{\rm kin}(P)\,\mathrm{d}P\int_{|P -K|}^{P+K}(1-z^{2})^{2}\,\frac{\mathrm{d}\tilde{P}}{\tilde{P}^{3}}\\ \times\zeta_{\rm kin}(\tilde{P}\,)\,\delta(P+\tilde{P}-K/c_{\rm s }). \tag{66}\]
Under this assumption, one can perform the integral in Eq. (66) over \(\tilde{P}\) by substituting \(\tilde{P}=K/c_{\rm s}-P\) when \(|K-P|\leq K/c_{\rm s}-P\leq K+P\), which yields the
condition \(P\in[P_{-},P_{+}]\) being \(P_{\pm}=\frac{1}{2}K(1\pm c_{\rm s})/c_{\rm s}\), and
\[z=\frac{1}{c_{\rm s}}-\frac{K(1-c_{\rm s}^{2})}{2Pc_{\rm s}^{2}}. \tag{101}\]
Then the GW spectrum becomes
\[\Omega_{\rm GW}^{\rm HH19}(K) = \frac{3\pi}{2}K^{2}\,\Upsilon(\tau_{\rm fin})\,\frac{\mathcal{H}_ {*}R_{*}}{c_{\rm s}}\,\bar{w}^{2}\left(\frac{\Omega_{\rm K}}{\mathcal{K}} \right)^{2} \tag{102}\] \[\times \int_{P_{-}}^{P_{+}}P\zeta_{\rm kin}(P)(1-z^{2})^{2}\] \[\times\frac{\zeta_{\rm kin}(K/c_{\rm s}-P)}{(K/c_{\rm s}-P)^{3}}\, \mathrm{d}P.\]
The resulting GW spectrum is shown in Figs. (12) and (13), compared with the full calculation. We find that Eq. (102) provides a good approximation when \(k\gg 1/\delta\tau_{\rm fin}\).
|
2307.13200 | Photonic quantum signatures of chaos and boson sampling | Boson sampling is a paradigmatic example of a task that can be performed by a
quantum photonic computer yet is hard for digital classical computers. In a
typical boson sampling experiment, the scattering amplitude is determined by
the permanent of a submatrix of a unitary drawn from an ensemble of random
matrices. Random matrix theory plays a very important role in quite diverse
fields while at the same time being intimately related to quantum signatures of
chaos. Within this framework, a chaotic quantum system exhibits level
statistics characteristic of ensembles of random matrices. Such quantum
signatures are encoded in the unitary evolution and so in this work we combine
the dynamics of chaotic systems with boson sampling. One of the key results of
our work is that we demonstrate the intimate relation between out-of-time-order
correlators and boson sampling. We show that the unitary dynamics of a Floquet
system may be exploited to perform sampling tasks with identical particles
using single-mode phase shifters and multiport beamsplitters. At the end of our
paper propose a photonic implementation of the multiparticle kicked rotor,
which provides a concrete example of our general approach. | V. M. Bastidas, H. L. Nourse, A. Sakurai, A. Hayashi, S. Nishio, Kae Nemoto, W. J. Munro | 2023-07-25T01:38:57Z | http://arxiv.org/abs/2307.13200v2 | # Photonic quantum signatures of chaos and boson sampling
###### Abstract
Boson sampling is a paradigmatic example of a task that can be performed by a quantum photonic computer yet is hard for digital classical computers. In a typical boson sampling experiment, the scattering amplitude is determined by the permanent of a submatrix of a unitary drawn from an ensemble of random matrices. Random matrix theory plays a very important role in quite diverse fields while at the same time being intimately related to quantum signatures of chaos. Within this framework, a chaotic quantum system exhibits level statistics characteristic of ensembles of random matrices. Such quantum signatures are encoded in the unitary evolution and so in this work we combine the dynamics of chaotic systems with boson sampling. One of the key results of our work is that we demonstrate the intimate relation between out-of-time-order correlators and boson sampling. We show that the unitary dynamics of a Floquet system may be exploited to perform sampling tasks with identical particles using single-mode phase shifters and multiport beamsplitters. At the end of our paper propose a photonic implementation of the multiparticle kicked rotor, which provides a concrete example of our general approach.
## I Introduction
The interplay between chaos and complexity plays an important role in our daily life and especially in technological applications [1; 2]. Classically chaotic systems are well known to be extremely sensitive to small perturbations in the parameters that define them [3; 4]. The exploration of these systems is challenging and their importance in our lives is evident from the prediction of the weather forecast [5], the study of turbulence [6] and fluid dynamics to the behavior of financial markets [7]. Of course, such behavior is not restricted to classical systems.
In the quantum world there are complex systems that exhibit a well-defined semiclassical limit that is chaotic in nature [8; 9; 10]. The investigation of the properties of these quantum systems is not simple because far away from the semiclassical limit the notion of phase-space trajectories is not well defined and one needs to look for quantum manifestations of chaotic behavior [9; 11]. These manifestations are referred to as quantum signatures of chaos (QSOC) [9; 11; 12; 13; 14] and currently there is a plethora of them, ranging from level statistics [15; 10; 16], Lochschmidt echoes [17; 18], out-of-time order correlators [19; 20; 21; 13], and information scrambling [22; 23; 24; 25]. In the context of level statistics, it is conjectured [26; 27; 28; 29] that the spectral properties of a system with a chaotic semiclassical limit are related to random matrix theory (RMT) and the symmetries of the system [30; 31; 32]. Experimental demonstrations of QSOC are abundant in diverse communities such as nuclear physics[31], cold atoms [33; 34], trapped ions [35] and superconducting qubits [36; 37].
Complexity in manybody quantum systems can also appear due to the statistics of the particles, even if they are not interacting [38; 39; 40]. Perhaps the most intriguing example of this is the problem of boson sampling [41; 42; 43]. In this context, to sample the output of multiple bosonic particles in a given number of modes turns out to be hard for digital classical computers [41]. The underlying reason for this is the multiparticle interference of bosonic particles [44; 45], resulting in the output of a sampling experiment given in terms permanent [41], which is hard to compute. In the complexity proof for boson sampling there is an intriguing connection to RMT [41]. In approximate sampling, it is usually assumed that the single-particle unitary determining the behavior of each boson is chosen according to the Haar measure [46; 47; 43]. Hence, the transition probability between a given input and the desired output is determined by the permanent of a submatrix of a random unitary [41]. As a consequence, for the boson sampling task to be hard the submatrices satisfy the sufficient condition that its elements are i.i.d. complex random Gaussian variables [41].
This suggests that there may be an intriguing link between systems that exhibit QSOC and boson sampling. It is natural to ask whether one can exploit the complexity of a single-particle system that is chaotic to perform boson sampling in the case of multiple bosonic particles. This is a priori a nontrivial question as the complex behavior may originate from two different sources. One of them is the single-particle chaotic behavior. Recent works have proposed to use quantum control to effectively simulate the dynamics of a unitary chosen from the Haar measure [48] and discussed the intimate relation between dynamical phase transition in sampling complexity and the time evolution generated by spatially local quadratic bosonic Hamiltonians [49].
In this work, we establish the relation between QSOC and
boson sampling. We discuss QSOC for general photonic systems of non-interacting photons such as level statistics, spectral form factors (SFF), and localization properties of Floquet states. With these results at hand, we define a photonic out-of-time-order correlator (OTOC) and show that it is related to the output of a boson sampling experiment, which is a key result of our work. We explore how the dynamics are intimately related to the crossover from regular to chaotic behavior at the single-particle level. To substantiate our results, we propose a photonic implementation of the kicked rotor, a paradigmatic model in the community of quantum chaos [50; 51; 52; 53; 54; 55]. Our proposed photonic system is given as a product of phase shifters and a multiport beam splitter [57]. We also consider the effect of disorder in the phase shifters. Both the strengths of the disorder and the multiport beam splitters control the crossover from regular to chaotic behavior, allowing the exploration of a wide parameter space.
Now in Fig. 1 we show a schematic that illustrates the main idea of our work. The intimate relation between QSOC and random matrix theory (RMT) tell us that chaotic systems show universality [8; 9], and are described by ensembles of random matrices such as the Gaussian orthogonal ensemble (GOE) or the Circular orthogonal ensemble COE [31]. We show that the ability of the system to explore the available configurations over time might be related to the complexity of the sampling problem. When the system is in the chaotic regime, it can explore most of the available configurations. In contrast, when the system is in the regular regime, it can only access a few of them. We show numerical evidence that for the kicked rotor in the chaotic regime the corresponding unitary, with GOE spectral statistics, has submatrices that are a random Gaussian, which is a sufficient condition for boson sampling to be hard [41].
The structure of our paper is as follows. In Section II we provide a brief summary of the basic aspects of Floquet theory and boson sampling. Then in Section III we introduce QSOC, such as quasienergy level statistics, spectral form factors, and localization properties of Floquet states. Next in Section IV we show one of our key results relating photonic OTOCs with boson sampling. In Section V we discuss how QSOC influence the dynamics of local observables and, in particular, we discuss the relation to equilibration. Then in Section VI we provide an intuitive explanation of the relation between QSOC and the complexity of boson sampling, which is another key result of our work. The results presented in the aforementioned sections are general. For this reason in Section VII, we provide a specific example of a photonic Floquet system exhibiting QSOC, that is intimately related to the kicked rotor. For this particular example, in Section VIII we present numerical results for QSOC, dynamics of observables measurable in experiments and the statistics of submatrices. Lastly, we provide concluding remarks and an outlook in Section IX.
## II Periodic photonic circuits: Floquet theory and boson sampling
In our work we will establish a general framework that exploits photonic dynamics and QSOC to perform boson sampling. We will extensively use tools of periodically-driven systems theory [58]. Let us start by examining the photonic dynamics in the context of Floquet theory for time-periodic Hamiltonians \(\tilde{H}(t+T)=\tilde{H}(t)\) with a period \(T\)[58]. Due to the time-periodicity, it is convenient to define the Floquet operator \(\hat{\mathcal{F}}=\tilde{\mathcal{U}}(T)\) that generates the evolution of the system \(|\Psi(mT)\rangle=\hat{\mathcal{F}}^{m}|\Psi(0)\rangle\) at stroboscopic times \(t_{m}=mT\).
The advantage of using the language of Floquet theory is that we can use many results of periodically driven quantum systems to understand properties of the photonic system. In an actual photonic implementation, the coaxial propagation
Figure 1: Schematic illustration of the main idea of our work: boson sampling with regular vs chaotic Floquet dynamics. In boson sampling, the dynamics of the modes are generated by an \(M\times M\) unitary matrix \(\mathbf{U}_{S}(mT)\), where \(M\) is the number of modes and \(T\) is the period. a) When the dynamics are regular, the photons remain localized with restricted operator spreading (shaded areas) and not all of them are able to interfere. b) In the chaotic regime, the operators spread with a typical linear light cone (shaded areas). This allows all the photons to effectively interfere after a characteristic time (dashed line). Due to causality, identical photons can only interfere when their lightcones overlap. This intereference is the underlying mechanism that allows equilibration of local observables at long times.
coordinate \(z\) of light along an optical waveguide acts as a "time" [59]. From now on, we will keep this in mind when we talk about time in our work.
Next, let us explore to which extent Floquet theory relates to boson sampling. We will consider the initial state
\[|\Psi(0)\rangle=|n_{1}^{(I)},n_{2}^{(I)},\ldots,n_{M}^{(I)}\rangle=\prod_{j=1}^ {M}\frac{(\hat{a}_{j}^{\dagger})^{\gamma_{j}^{(I)}}}{\left[n_{j}^{(I)}!\right]^ {1/2}}|0\rangle\;, \tag{1}\]
where \(|0\rangle=|0_{1},0_{2},\ldots,0_{M}\rangle\) represents the vacuum. Eq. (1) describes an initial configuration \(I\) of \(N=\sum_{j}n_{j}^{(I)}\) particles distributed among \(M\) modes, where \(n_{j}^{(I)}\) is the number of photons in mode \(j\). For simplicity we restrict ourselves to an initial configuration where at most one particle can occupy a given mode (\(n_{j}^{(I)}\in\{0,1\}\)). We denote the single-particle basis as
\[|j\rangle\equiv\hat{a}_{j}^{\dagger}|0\rangle=|0_{1},0_{2},\ldots,1_{j},\ldots,0_{M}\rangle, \tag{2}\]
such that the \(M\times M\) matrix representation of \(\hat{\mathcal{F}}^{m}\), in the single-particle subspace, is defined as
\[[\mathbf{U}_{S}(mT)]_{ij}\equiv U_{ij}(mT)\equiv\langle i|\hat{P}^{m}|j\rangle. \tag{3}\]
The evolution of the bosonic operators can be expressed in the Heisenberg picture as [60]
\[\hat{a}_{i}^{\dagger}(mT)=\sum_{j=1}^{M}U_{i,j}(mT)\hat{a}_{j}^{\dagger}. \tag{4}\]
The typical evolution of the operators is illustrated in Fig. 1, where we can interpret the stroboscopic evolution as a photonic quantum circuit with depth \(mT\).
### Photonic dynamics and permanents
The crucial aspect is that the complexity of the boson sampling problem grows with the number of bosons \(N\) in a photonic circuit with \(M\) modes. Due to the statistics of the photons, the dimension of the Hilbert space is given by [61]
\[M+N-1C_{N}=\frac{(M+N-1)!}{N!(M-1)!}\;, \tag{5}\]
which corresponds to the number of \(N\)-combinations of a set of \(M+N-1\) elements [62]. The number of configurations quickly increases with the number of photons. For example, with \(N=3\) photons distributed among \(M=12\) modes there are \({}^{12}C_{3}=364\) configurations.
The evolution of the quantum state defined in Eq. (1) after \(m\) periods of the drive is given by
\[|\Psi(mT)\rangle = \prod_{i=1}^{M}\frac{[\hat{a}_{i}^{\dagger}(mT)]^{\gamma_{i}^{(I )}}}{\left[n_{i}^{(I)}!\right]^{1/2}}|0\rangle \tag{6}\] \[= \sum_{F}\gamma_{F}(mT)|n_{1}^{(F)},n_{2}^{(F)},\ldots,n_{M}^{(F)} \rangle\;,\]
where \(F\) denotes all the possible configurations of \(N\) bosonic particles in \(M\) modes, while \(n_{j}^{(F)}\) is the population of the \(j\)th mode for a given configuration \(F\) such that \(N=\sum_{j}n_{j}^{(F)}\). The probability amplitude \(\gamma_{F}(mT)\) in Eq. (6) also defines the matrix elements of the evolution operator in the \(N\)-particle subspace [60; 41]
\[\gamma_{F}(mT) = \langle n_{1}^{(I)},n_{2}^{(I)},\ldots,n_{M}^{(I)}|\hat{U}(mT)|n_ {1}^{(F)},n_{2}^{(F)},\ldots,n_{M}^{(F)}\rangle \tag{7}\] \[= \frac{\text{Per}\left[U^{(F,\;I)}(mT)\right]}{\sqrt{n_{1}^{(F)}!n _{2}^{(F)}!\cdots n_{M}^{(F)}!}}\;.\]
This is given in terms of the permanent of an \(N\times N\) submatrix \(U^{(F,I)}(mT)\) of \(\mathbf{U}_{S}(mT)\). The submatrix \(U^{(F,\;I)}(mT)\) depends on the initial configuration \(I\) and the specific configuration \(F\) measured at the end of the experiment. More specifically, \(U^{(F,\;I)}(mT)\) is obtained by keeping \(n_{j}^{(F)}\) copies of the \(j\)th column and \(n_{i}^{(I)}\) copies of the \(i\)th rows. Due to the simplification of the initial configuration \(I\), we only need to choose one copy of a given row of \(\mathbf{U}_{S}(mT)\). The corresponding probability of obtaining the configuration \(F\) is
\[P_{F}(mT)=|\gamma_{F}(mT)|^{2}=\frac{\left|\text{Per}\left[U^{(F,\;I)}(mT) \right]\right|^{2}}{n_{1}^{(F)}!n_{2}^{(F)}!\cdots n_{M}^{(F)}!}\;. \tag{8}\]
It is now illustrative to consider the dynamics of the local mean number of photons in mode \(l\), given by
\[\langle\hat{n}_{l}(mT)\rangle = \langle\Psi(mT)|\hat{a}_{l}^{\dagger}\hat{a}_{l}|\Psi(mT)\rangle \tag{9}\] \[= \sum_{F}n_{l}^{(F)}P_{F}(mT)\;.\]
We see that measurements with single-photon detectors samples the probability distribution \(P_{F}(mT)\), and thus samples the permanent of the quantum device [42]. The important point of boson sampling is that the problem of calculating the permanent is extremely hard, and in some cases impossible for a classical digital computer [41]. As we will discuss in the following sections, the dynamics in a system that exhibits QSOC is encoded in \(\mathbf{U}_{S}(mT)\) and is related to the complexity of the sampling problem.
### Relation to the Haar measure
In RMT the unitary group together with the Haar measure is referred to as the Circular Unitary ensemble (CUE) [30; 31; 32]. In the original paper by Aaronson and Arkhipov [41], a crucial technical aspect of the theory of boson sampling is that the unitary considered in the complexity proof is an \(M\times M\) unitary matrix generating the evolution of the modes in Eq. (4) that is chosen randomly according to the Haar measure. This ensures that the distribution of elements of an \(N\times N\) submatrix \(U^{(F,J)}\) is close in variation distance to i.i.d. complex Gaussian random variables [41].
In the theory of QSOC, it is well known that there is an intimate relation between the properties of chaotic systems and
ensembles of random matrices [31; 9]. Another way to formulate this relation is by analyzing the matrix \(U_{\mathbf{S}}(mT)\), which contains information about QSOC at the single-particle level. The fact that we are dealing with photons implies that the output of the sampling experiment is given by a permanent of a submatrix, \(U^{(\varepsilon_{i},\,\,I)}(mT)\) of \(U_{\mathbf{S}}(mT)\), and not a determinant as is the case of fermions.
Hence, it is reasonable to expect that when the system exhibits QSOC at the single particle level the complexity of the sampling problem should be similar to the case of a matrix chosen from the Haar measure. We are motivated by a recent work that has explored the relation between the problem of sampling bit-strings [63] and the evolution under a unitary of a chaotic system that exhibits circular orthogonal ensemble (COE) level statistics [64]. In the next sections, we will provide a series of intuitive arguments to illustrate how QSOC in our Floquet system may allow for the complexity required to perform boson sampling.
## III QSOC in photonic Floquet systems
In this section we discuss several QSOC that are of interest in the context of our photonic system and we analyze their dynamical consequences for boson sampling. We consider an ensemble of unitary operators, \(\mathcal{E}_{U}\equiv\{\hat{U}_{w}(T)\}\), indexed by \(w\). In order to investigat QSOC, we examine the properties of the quasienergies, \(\{\xi_{\alpha}^{w}\}\), and single-particle Floquet states, \(\{|\alpha^{w}\rangle\}\), of the Floquet operator, \(\hat{U}_{w}(T)\), defined from the eigenvalue problem \(\hat{U}_{w}(T)|\alpha^{w}\rangle=\exp{(-\mathrm{i}\xi_{\alpha}^{w}T/\hbar)}| \alpha^{w}\rangle\), where \(-\hbar\pi/T<\xi_{\alpha}^{w}<\hbar\pi/T\)[65]. We will compare the spectral statistics of the Floquet operator with ensembles of random matrices in RMT.
### Quasienergy level statistics
A standard quantity to distinguish ensembles of random matrices is the probability distribution, \(P(r)\), of consecutive level spacing ratios [66; 67]
\[r_{\alpha}^{w}=\frac{\min(s_{\alpha}^{w},s_{\alpha-1}^{w})}{\max(s_{\alpha}^{ w},s_{\alpha-1}^{w})}, \tag{10}\]
where \(s_{\alpha}^{w}=\xi_{\alpha+1}^{w}-\xi_{\alpha}^{w}\geq 0\) for adjacent quasienergies \(\xi_{\alpha}^{w}\), ordered by increasing energy. We denote \(r\) as the level spacing ratio averaged over the ensemble. Eq. (10) has been used successfully to investigate quantum signatures of single- and multi-particle chaos [31; 9], eigenstate thermalization hypothesis [68; 69; 70], and manybody localization[71; 72; 73]. To compare our system to ensembles of random matrices we compute the spectral average of Eq. (10), given by
\[\langle r\rangle=\frac{1}{|\mathcal{E}_{U}|}\sum_{w}\frac{1}{M-2}\sum_{\alpha }r_{\alpha}^{w}, \tag{11}\]
where we have averaged over the ensemble. When calculated from the Wigner surmise in RMT \(\langle r\rangle_{\mathrm{poisson}}\approx 0.38629\) (regular regime) and \(\langle r\rangle_{\mathrm{GOE}}\approx 0.53590\) (chaotic regime) [67]. Hence, Eq. (11) gives an indication when the unitary ensemble has chaotic dynamics.
Even though the probability distribution \(P(r)\) is a standard probe used in systems that exhibit QSOC, it only captures local spectral correlations. It misses important long-range spectral correlations [13; 14]. Therefore, it is also useful to look at other quantities associated to the ensemble \(\mathcal{E}_{U}\) in order to probe QSOC. From now on, to simplify the notation, we will neglect the index \(w\) denoting a given unitary in the ensemble and present ensemble averaged quantities.
### Spectral form factors
In the theory of QSOC [9], one is often interested in the correlations between quasienergy levels. This is obtained by the spectral form factors (SFF), which are intimately related to scrambling [22; 23; 24; 25] and to other QSOC [9]. The infinite temperature \(2N\)-point SFF is given by
\[\mathcal{R}_{2N}(mT)=\sum_{\mathbf{\zeta},\mathbf{\eta}}e^{-\mathrm{i}(\sum_{\mathbf{ \zeta},\mathbf{\zeta}}_{\mathbf{\zeta}}-\sum_{\mathbf{\zeta},\mathbf{\eta}}\xi_{\mathbf{\zeta}})mT/ \hbar}\, \tag{12}\]
where \(\mathbf{\zeta}=(\xi_{1},\xi_{2},\ldots,\xi_{N})\) and \(\mathbf{\eta}=(\eta_{1},\eta_{2},\ldots,\eta_{N})\). Specifically, we will be interested in the two- and four-point SFF
\[\mathcal{R}_{2}(mT) =\sum_{\alpha,\beta}e^{-\mathrm{i}(\xi_{\alpha}-\xi_{\beta})mT/ \hbar}, \tag{13}\] \[\mathcal{R}_{4}(mT) =\sum_{\alpha,\beta,\varphi}e^{-\mathrm{i}(\xi_{\alpha}+\xi_{ \beta}-\xi_{\varphi})mT/\hbar}. \tag{14}\]
The SFF exhibits universal features found in chaotic systems, such as a dip, ramp, and plateau, which will be determined by the symmetries of the Floquet operator. These features are governed by correlations in the quasienergy levels with gaps \(\Delta_{\alpha\beta}\equiv\xi_{\alpha}-\xi_{\beta}=\sum_{\lambda=\beta}^{ \alpha-1}s_{\lambda}\), that define characteristic time scales \(\tau_{\alpha,\beta}=\hbar/\Delta_{\alpha,\beta}\equiv\mathrm{i}\) terms of the nearest-neighbor level spacings \(s_{\lambda}\). Hence, as time increases, the SFF probes quasienergy correlations that are closer and closer together, until it is dominated by the smallest (largest) energy (time) scale. Therefore, to investigate the manifestations of universal features found in chaotic systems, it is also important to study the time evolution of the spectral correlations found in the SFF.
An important time scale is the Heisenberg time, \(\tau_{H}\), associated to the energy gap \(\Delta_{\alpha,\alpha+1}=s_{\alpha}\) between adjacent quasienergy levels. It is the smallest energy scale (largest time scale), and hence it is dominated by level repulsion in chaotic systems. It can be estimated as \(\tau_{H}=2\pi\hbar/\langle s\rangle\)[9], where \(\langle s\rangle\) is the ensemble averaged level spacing. \(\tau_{H}\) is associated with the timescale in finite systems that the discrete energy spectrum can be resolved, and is proportional to the dimension of the Hilbert space, \(\tau_{H}\propto M\). The Heisenberg time is captured in the SFF, where it determines the onset of the plateau where the SFF approaches its asymptotic value.
In RMT the two-point SFF for the GOE is characterized by
the Heisenberg time, and is given by [9; 74]
\[\mathcal{R}_{2}^{\rm GOE}(mT)=M^{2}r(mT)^{2}\] \[\qquad+M\begin{cases}\frac{mT}{\tau_{H}}-\frac{mT}{\tau_{H}}\log \left(1+2\frac{mT}{\tau_{H}}\right)&\text{for }0<mT\leq\tau_{H},\\ 2-\frac{mT}{\tau_{H}}\log\left(\frac{2mT/\tau_{H}+1}{2mT/\tau_{H}-1}\right)& \text{for }mT>\tau_{H}\,\end{cases} \tag{15}\]
where \(r(mT)=\tau_{H}J_{1}(4MmT/\tau_{H})/(2MmT)\), and \(\mathcal{J}_{1}(z)\) is the Bessel function of the first kind [75]. At the Heisenberg time, \(\tau_{H}\), the two-point SFF for the GOE approaches the asymptotic value \(\mathcal{R}_{2}^{\rm GOE}(t\geq\tau_{H})=M\).
### Level repulsion and localization properties of Floquet states
Whether the system is localized or in the quantum chaotic regime will have profound consequences for the boson sampling problem because it will determine the complexity of the multi-particle interference. Let us first investigate the effect of localization and delocalization on the matrix elements of the Floquet operator in the single-particle basis, under the assumption its spectral statistics is that of RMT.
The Floquet operator can be written in the basis of the single-particle Floquet states as \(\hat{\mathcal{F}}=\sum_{\alpha}|\alpha\rangle\alpha|e^{-i\xi_{\alpha}T/\hbar}\). Hence, the matrix elements in the single-particle position basis [see Eq. (2)] can be written as
\[U_{i,j}(mT)=\langle i|U(\hat{m}T)|j\rangle=\sum_{\alpha}e^{-i\xi_{\alpha}mT/ \hbar}c_{i,\alpha}c_{j,\alpha}^{*}\, \tag{16}\]
where \(c_{i,\alpha}=\langle i|\alpha\rangle\).
When a Floquet state, \(|\alpha\rangle\), is localized in real space it follows that \(|c_{i,\alpha}|^{2}\approx\exp{(-|i-l_{\alpha}|/\Lambda_{\alpha})}\), where \(\Lambda_{\alpha}\) is the localization length, and \(l_{\alpha}\) is the center of mass of the wavepacket. That is, most of the contribution to the dynamics is from the diagonal matrix elements of \(U_{i,j}(mT)\) within a band \(|i-j|<{\rm Max}(\Lambda_{\alpha})\), which is determined by the longest localization length scale. We refer the interested reader to Ref. [76] for more information. In terms of the spectrum, localized Floquet states are related to clustering of levels with very small quasienergy gaps [9], and the statistics of the gaps follows a Poissonian distribution because the levels are uncorrelated. Hence, when a photon remains localized in space, it cannot interfere with other photons in remote regions at a distance greater than \(\Lambda_{\alpha}\), and there is not enough operator spreading (see next section) to perform boson sampling.
Next, let us briefly discuss the onset of thermalization in our Floquet bosonic system and how it affects localization properties of Floquet states. As the Floquet operator is unitary, it can be written in terms of an effective Hamiltonian \(\hat{H}_{\rm eff}\) as \(\hat{\mathcal{F}}=e^{-i\hat{H}_{\rm eff}T/\hbar}\). To talk about thermalization, it is important to identify the conserved quantities of our Floquet system. Previous works define these conserved quantities for fermionic quadratic Hamiltonians [77]. Here we extend the theory for bosons by defining the operator \(b_{\alpha}^{\dagger}\equiv\sum_{\alpha}c_{\alpha\alpha}^{*}\hat{a}_{\alpha}^ {\dagger}\) that creates bosonic particles in the \(\alpha\) quasienergy state, i.e., \(|\alpha\rangle\equiv b_{\alpha}^{\dagger}|0\rangle\). As our system is quadratic in the bosonic operators, the effective Hamiltonian is also quadratic and can be written as a system of free bosons \(\hat{H}_{\rm eff}=\sum_{\alpha}\xi_{\alpha}b_{\alpha}^{\dagger}b_{\alpha}\). This naturally defines a set \(\{\hat{\mathcal{I}}_{\alpha}\}\) of conserved quantities \(\hat{\mathcal{I}}_{\alpha}=b_{\alpha}^{\dagger}b_{\alpha}\).
In the long-time limit, the system is known to reach a steady state known as the Floquet generalized Gibbs ensemble [77], described by a density matrix
\[\hat{\rho}_{\rm GOE}=\frac{1}{Z}e^{-\sum_{\alpha}\Gamma_{\alpha}b_{\alpha}^{ \dagger}b_{\alpha}}=\frac{1}{Z}\sum_{\alpha}|\alpha\rangle\langle\alpha|e^{- \Gamma_{\alpha}\xi_{\alpha}}, \tag{17}\]
where \(\Gamma_{\alpha}=1/k_{B}T_{\alpha}\) and \(T_{\alpha}\) are effective temperatures determined by the conserved quantities, while \(Z\) is a normalization constant. Further, we can deduce that \(|c_{i,\alpha}|^{2}\approx e^{-\Gamma_{\alpha}\xi_{\alpha}}/Z\) (see Appendix A) [77]. Thus, the steady state is Gaussian and determined by different effective temperatures. In certain parameter regimes, the system can heat up to infinite temperatures [78; 79; 80]. In this case, one obtains fully delocalized Floquet states with \(|c_{i,\alpha}|^{2}\approx 1/M\) and the quasienergies exhibit strong level repulsion following COE statistics, while \(\{\xi_{\alpha}\}\) behave like incommensurable phases [78]. Intuitively, we expect thermalized systems to be more useful for boson sampling as the photons can explore more modes and interfere, which increases the complexity of the problem.
## IV Out-of-time-order correlators (OTOC) and boson sampling
In this section we introduce one of our key results. One way to capture long-range spectral correlations is to investigate the dynamics of out-of-time-order correlators (OTOCs) [81; 21]. We will define a \(2N\)-point OTOC and show that it is equal to performing \(N\)-particle boson sampling and calculating the permanent (see Eq. (8)). Hence the properties of the OTOC gives information about the complexity of the Floquet operator and how hard the boson sampling task is.
The simple form of the evolution of the modes (see Eq. (4)) motivates us to investigate the following two-point OTOC,
\[\mathcal{C}_{i,j}^{(2)}(mT)=\langle 0|[\hat{a}_{i}^{\dagger}(mT),\hat{a}_{j}]^{ \dagger}[\hat{a}_{i}^{\dagger}(mT),\hat{a}_{j}]|0\rangle. \tag{18}\]
Using Eq. (4) this can be simply evaluated, giving
\[\mathcal{C}_{i,j}^{(2)}(mT) = U_{ij}^{*}(mT)U_{ij}(mT) \tag{19}\] \[= \sum_{\alpha,\beta}c_{i,\alpha}c_{j,\alpha}^{*}c_{i,\beta}^{*}c_{ j,\beta}e^{-i(\xi_{\alpha}-\xi_{\beta})mT/\hbar}\] \[= P_{F}(mT).\]
We see that the two-point OTOC is calculated from the permanent as in Eq. (7), and is obtained from a single-particle (\(N=1\)) boson sampling experiment.
As we previously discussed, when there is strong level repulsion with RMT spectral statistics, one has \(|c_{i,\alpha}|^{2}\approx e^{-\Gamma_{\alpha}\xi_{\alpha}}/Z\)[77]. If the systems heats up to infinite temperature, the Floquet states become delocalized. In this situation, using Eq. (16), the two-point OTOC is approximately
\[\mathcal{C}_{i,j}^{(2)}(mT)\approx\frac{\mathcal{R}_{2}(mT)}{M^{2}}. \tag{20}\]
This naturally establishes the relation between our photonic OTOC and the two-point SFF, \(\mathcal{R}_{2}(mT)\), given by Eq. (13).
Next, let us discuss the four-point OTOC, given by
\[\mathcal{C}^{(4)}_{i,j,r,s}(mT)=\langle 0|\hat{C}^{\dagger}_{i,j,r,s}(mT)\hat{ \mathcal{C}}_{i,j,r,s}(mT)|\mathbf{0}\rangle\;, \tag{21}\]
where \(\hat{\mathcal{C}}_{i,j,r,s}(mT)=[\hat{a}^{\dagger}_{i}(mT)\hat{a}^{\dagger}_{ j}(mT),\hat{a}_{r}\hat{a}_{s}]\). To evaluate this, it is enough to look at the expression
\[\hat{\mathcal{C}}_{i,j,r,s}(mT)|\mathbf{0}\rangle =\sum_{\alpha,p}U_{i,\alpha}(mT)U_{i,p}(mT)\hat{a}_{r}\hat{a}_{r} \hat{a}^{\dagger}_{o}\hat{a}^{\dagger}_{p}|\mathbf{0}\rangle\] \[=[U_{i,r}(mT)U_{j,s}(mT)+U_{i,s}(mT)U_{j,r}(mT)]|\mathbf{0} \rangle\;. \tag{22}\]
From this we can directly obtain the 4-point OTOC
\[\mathcal{C}^{(4)}_{i,j,r,s}(mT)=P_{F}(mT)=\left|\text{Per}\left[U^{(F,\;I)}( mT)\right]\right|^{2}\;, \tag{23}\]
where \(\text{Per}\left[U^{(F,\;I)}(mT)\right]\) denotes the permanent of a submatrix
\[U^{(F,\;I)}=\begin{bmatrix}U_{i,r}(mT)&U_{i,s}(mT)\\ U_{j,r}(mT)&U_{j,s}(mT)\end{bmatrix}\;, \tag{24}\]
of \(U_{\text{S}}(mT)\). In Eq. (8), we used \(P_{F}(mT)=|\gamma_{F}(mT)|^{2}\) that is determined by the probability amplitude
\[\gamma_{F}(mT) =\langle 0|\hat{a}_{r}\hat{a}_{r}|\Psi(mT)\rangle\] \[=U_{i,r}(mT)U_{j,s}(mT)+U_{i,s}(mT)U_{j,r}(mT)\;, \tag{25}\]
of having the configuration \(\hat{a}^{\dagger}_{r}\hat{a}^{\dagger}_{s}|\mathbf{0}\rangle\) provided that we initially prepare a two-photon state \(|\Psi(0)\rangle=\hat{a}^{\dagger}_{i}\hat{a}^{\dagger}_{j}|\mathbf{0}\rangle\) and let the system evolve \(m\) periods. This establishes a relation between \(\mathcal{C}^{(4)}_{i,j,r,s}(mT)\), operator spreading, and two-particle boson sampling (\(N=2\)). In a similar fashion to the single-particle case, when the system exhibits level repulsion, the correlator \(\mathcal{C}^{(4)}_{i,j,r,s}(mT)\) can be written in terms of the four-point SFF, \(\mathcal{R}_{4}(mT)\), given by Eq. (14).
The examples presented so far for few particles gives insight on how to generalize the photonic OTOC for multiple particles. In a system with \(N\) input photons, the general OTOC corresponds to measuring a \(2N\)-point correlator \(\mathcal{C}^{(2N)}_{I,F}(mT)=\langle 0|\hat{C}^{\dagger}_{I,F}(mT)\hat{ \mathcal{C}}_{I,F}(mT)|\mathbf{0}\rangle\), where we define the operator \(\hat{\mathcal{C}}_{I,F}(mT)=[\prod_{i\in I}[\hat{a}^{\dagger}_{i}(mT)]^{s^{( \prime)}},\prod_{j\in F}[\hat{a}_{j}]^{s^{\prime}_{j}}]\), and \(I\) and \(F\) are, respectively, the initial and final configurations of \(N\) photons in \(M\) modes. Consequently, the \(2N\)-point OTOC is given by
\[\mathcal{C}^{(2N)}_{I,F}(mT)=P_{F}(mT)=\frac{\left|\text{Per}\left[U^{(F,\;I)} (mT)\right]\right|^{2}}{n_{1}^{(F)}!n_{2}^{(F)}!\ldots n_{M}^{(F)}!}\;. \tag{26}\]
In the chaotic regime the \(2N\)-point photonic OTOC is proportional to the \(2N\)-point SFF, \(\mathcal{R}_{2N}(mT)\), given by Eq. (12).
The important message we want to convey is that measuring the probability amplitude, \(P_{F}(mT)\), is equivalent to measuring a photonic OTOC. We will show, with an example, that when the system exhibits QSOC that boson sampling should be hard [see Section VIII.4]. This is linked with the OTOC and scrambling [21] in the chaotic regime, where in a system with QSOC operators spread across all modes, but instead do not in the regular regime. Thus, whether the boson sampling task is hard depends on the periodic photonic chips capability to scramble information and become delocalized [see Fig. 1].
In the next section we will show how the two- and four-point SFF will naturally appear in the dynamics of expectation values and how they determine long-time properties such as equilibration due to level repulsion.
## V Floquet theory and quantum dynamics of local observables
In this section we discuss another important result of our work. Here we show the intimate relation between QSOC and the dynamics of the system. In particular, we will explore how single-particle signatures of chaos influence the dynamics of observables in a multi-particle scenario. We will show that when the single-particle unitary matrix \(U_{\text{S}}(mT)\) [with elements \(U_{i,j}(mT)\)] exhibits spectral properties related to RMT, the single-particle dynamics reaches the periodic steady state \(\hat{\rho}_{\text{GGE}}\), given by Eq. (17), that is diagonal in the basis of Floquet states. This is solely determined by the strong repulsion of quasienergy levels that is characteristic of chaotic systems. For simplicity, we focus on the single- and two-particle case, but the results and conclusions presented here are valid for any number of particles.
### Single-particle dynamics
As a first step, it is useful to discuss the effect of QSOC on the dynamics of local observables at the single-particle level. In particular, we will explore the consequences of quasienergy level repulsion with GOE statistics [9; 31]. In Section VII of our paper, we will show an example of a unitary that exhibits spectral statistics consistent with the Gaussian Orthogonal Ensemble (GOE) [31].
Let us investigate what happens with the dynamics of a single particle initialized in the state \(|\psi(0)\rangle=\hat{a}^{\dagger}_{i}|\mathbf{0}\rangle\) in the regular and chaotic regimes. After \(m\) periods of the circuit, the evolution of the particle is given by
\[|\psi(mT)\rangle=\sum_{r}U_{i,r}(mT)\hat{a}^{\dagger}_{r}|\mathbf{0}\rangle= \sum_{\alpha}e^{-\mathrm{i}\xi_{\alpha}mT/h}c_{i,\alpha}\hat{b}^{\dagger}_{ \alpha}|\mathbf{0}\rangle\;. \tag{27}\]
To investigate long-time properties of the system due to level repulsion, such as equilibration [82], it is useful to define the time average of observables. For example, the time-averaged number of photons at a given site \(l\) over a total number \(\mathcal{M}\) of periods of the drive is given by
\[\bar{n}_{l} =\frac{1}{\mathcal{M}}\sum_{m=0}^{\mathcal{M}-1}\langle\hat{n}_{l }(mT)\rangle=\frac{1}{\mathcal{M}}\sum_{m=0}^{\mathcal{M}-1}P_{F}(mT)\] \[=\frac{1}{\mathcal{M}}\sum_{m=0}^{\mathcal{M}-1}\sum_{\alpha, \beta}e^{-\mathrm{i}(\xi_{\alpha}-\hat{\rho}_{\beta})mT/h}c_{i,\alpha}c_{i,\beta} ^{*}c_{i,\alpha}c_{i,\beta}^{*}\;, \tag{28}\]
where we have used Eq. (9) that relates \(\hat{n}_{l}\) and \(P_{F}(mT)\). It is worth noticing that the expression for \(\bar{n}_{l}\) resembles the two-point SFF \(\mathcal{R}_{2}(mT)\) in Eq. (13). The dynamics of observables is determined by the quasienergy gaps and the spectral correlations that are encoded in the SFF.
For example, when the system is in the regular regime, the quasienergy gaps become uncorrelated [8; 9] and usually they are also small thus defining a set of long time scales \(\tau_{\alpha,\beta}=\hbar/\Delta_{\alpha,\beta}\) discussed above. Further, when the system is close to a degeneracy point or if it exhibit clustering of levels, then it is not able to equilibrate as the average in Eq. (28) contains off-diagonal elements \(\langle\beta|\hat{n}_{l}|\alpha\rangle\) with \(\alpha\neq\beta\). On the other hand, when the system exhibits QSOC, the system not only equilibrates but it also thermalizes in the sense of ETH at time scales \(\mathcal{M}T\gg\text{Max}(\tau_{\alpha,\beta})\)[82] with
\[\bar{n}_{l}=\sum_{\alpha}|c_{i,\alpha}|^{2}|c_{l,\alpha}|^{2}\;, \tag{29}\]
where we have used the fact that \(\langle\alpha|\hat{n}_{l}|\alpha\rangle=|c_{l,\alpha}|^{2}\). This expression can be obtained by assuming that the eigenphases are incommensurable and thus the only term that contribute after the time average is given by the diagonal matrix elements of the observable in the single-particle Floquet basis. In Appendix A we provide a formal derivation of this time average. Further, we can deduce that when the system thermalizes [77; 82], \(\bar{n}_{l}=\text{tr}(\hat{\rho}_{\text{GGE}}\hat{n}_{l})\) (see Appendix A), where \(\hat{\rho}_{\text{GGE}}\) is the Floquet generalized Gibbs states in Eq. (17). At infinite temperature, the Floquet states are fully delocalized in space and \(|c_{i,\alpha}|^{2}=1/M\). Thus, the average local number of photons scales as \(\bar{n}_{l}\sim 1/M\). As a consequence of Eqs. (28) and (29), we obtain the time averaged probability scaling as \(\bar{P}_{F}=1/\mathcal{M}\sum_{m=1}^{M}P_{F}(mT)\sim 1/M\).
### Two-particle dynamics
Now let us consider the two-particle case, which will give us insight on the interplay between single-particle chaos and the bosonic character of the particles. Similarly to the single particle case, we investigate the evolution of an initial two particle state \(|\Psi(0)\rangle=\hat{a}_{i}^{\dagger}\hat{a}_{j}^{\dagger}|0\rangle\). After \(m\) periods, the state evolves to
\[|\Psi(mT)\rangle =\hat{a}_{i}^{\dagger}(mT)\hat{a}_{j}^{\dagger}(mT)|0\rangle\] \[=\sum_{r,=1}^{M}U_{i,r}(mT)U_{j,s}(mT)\hat{a}_{r}^{\dagger}\hat{a }_{s}^{\dagger}|0\rangle\;. \tag{30}\]
Next, let us calculate how the single-particle QSOC influence the long-time behavior of
\[P_{F}(mT)=|U_{i,r}(mT)U_{j,s}(mT)+U_{i,s}(mT)U_{j,r}(mT)|^{2}\;, \tag{31}\]
which is obtained from the probability amplitude in Eq. (25). To see how the level repulsion affects the dynamics, it is convenient to use Eq. (16) to write the probability in terms of Floquet states
\[P_{F}(mT) =\sum_{\alpha,\lambda,\beta,\rho}e^{-i(\xi_{\alpha}+\xi_{\beta}- \xi_{\rho}-\xi_{\rho})mT/\hbar}\left(W_{i,j,r,s}^{\alpha,\lambda,\beta,\rho}+ W_{i,j,r,s}^{\alpha,\lambda,\beta,\rho}\right)\] \[+2\text{Re}\left[\sum_{\alpha,\lambda,\beta,\rho}e^{-i(\xi_{ \alpha}+\xi_{\beta}-\xi_{\rho}-\xi_{\rho})mT/\hbar}S_{i,j,r,s}^{\alpha,\lambda,\beta,\rho}\right]\;, \tag{32}\]
where we have defined
\[W_{i,j,r,s}^{\alpha,\lambda,\beta,\rho} =c_{i,\alpha}c_{r,\alpha}^{*}c_{j,\lambda}c_{s,\lambda}^{*}c_{i,\beta}^{*}c_{r,\beta}c_{r,\beta}^{*}c_{r,\beta}\] \[S_{i,j,r,s}^{\alpha,\lambda,\beta,\rho} =c_{i,\alpha}c_{r,\alpha}^{*}c_{j,\lambda}c_{s,\lambda}^{*}c_{i,\beta}^{*}c_{r,\beta}c_{r,\rho}^{*}c_{r,\rho}\;. \tag{33}\]
As we are exploring here the dynamics of two particles, it is expected that the dynamics strongly depends on correlations between four single-particle quasienergy levels, which is captured the spectral form factor \(\mathcal{R}_{4}(mT)\) [see Eq. (14)].
In Appendix A we describe in detail how to perform the time average of this quantity in the chaotic regime to obtain
\[\bar{P}_{F} =\sum_{\alpha,\lambda}(W_{i,j,r,s}^{\alpha,\lambda,\alpha,\lambda }+W_{i,j,r,s}^{\alpha,\lambda,\lambda,\alpha}+W_{i,j,r,s}^{\alpha,\lambda, \alpha,\lambda}+W_{i,j,s,r}^{\alpha,\lambda,\lambda,\alpha})\] \[+2\text{Re}\left[\sum_{\alpha,\lambda}(S_{i,j,r,s}^{\alpha, \lambda,\alpha,\lambda}+S_{i,j,r,s}^{\alpha,\lambda,\lambda,\alpha})\right]\;. \tag{34}\]
The most important information we want to extract from this time average is the scaling of the probability \(\bar{P}_{F}\sim 1/M^{2}\) for each configuration because \(W_{i,j,r,s}^{\alpha,\lambda,\lambda,\rho}\approx S_{i,j,r,s}^{\alpha,\lambda, \alpha,\lambda,\alpha}\approx 1/M^{4}\) (assuming that the single-particle system thermalizes at infinite temperature). This is the approximate scaling that the probability \(\bar{P}_{F}\) reaches when the system equilibrates.
We can also write the expression for the state in terms of Floquet states
\[|\Psi(mT)\rangle=\sum_{\alpha,\lambda}e^{-i(\xi_{\alpha}+\xi_{\lambda})mT/ \hbar}c_{i,\alpha}c_{j,\lambda}b_{\alpha}^{\dagger}b_{\lambda}^{\dagger}|0 \rangle\;. \tag{35}\]
As expected, the evolution of two photons is determined by a two-particle quasienergy \(E_{\alpha,\lambda}=\epsilon_{\alpha}+\epsilon_{\lambda}\) and is given by a quantum superposition of two particles occupying the different available Floquet states. This equation contains valuable information as the overlaps \(c_{i,\alpha}=\langle i|\alpha\rangle\) and \(c_{j,\lambda}=\langle j|\lambda\rangle\) contain information about localization properties of the Floquet states, the two particle quasienergies carry information about spectral properties of the system, and the operators \(b_{\alpha}^{\dagger}\) allows us to keep track of the bosonic character of the photons. We can show that
\[\bar{n}_{l}=\frac{1}{\mathcal{M}}\sum_{m=0}^{\mathcal{M}}\sum_{\alpha,\lambda, \beta,\rho}e^{-i(\xi_{\alpha}+\xi_{\beta}-\xi_{\rho}-\xi_{\rho})mT/\hbar}O_{i,j}^ {\alpha,\lambda,\beta,\rho}(\mathbf{0}|b_{\beta}b_{\rho}\hat{n}_{l}b_{\rho}^{ \dagger}b_{\lambda}^{\dagger}|0\rangle\;, \tag{36}\]
where \(O_{i,j}^{\alpha,\lambda,\beta,\rho}=c_{i,\alpha}c_{j,\lambda}c_{i,\beta}c_{j, \rho}^{*}c_{j,\rho}^{*}\).
In contrast to the single-particle case, here we need to be cautious and take into account the bosonic character of the
particles. For this reason, let us investigate in detail the vacuum expectation value
\[\langle\mathbf{0}|b_{\rho}b_{\rho}\hat{p}_{\mu}b_{\alpha}^{\dagger}b_ {\beta}^{\dagger}|\mathbf{0}\rangle =\sum_{\sigma,\eta}c_{l\sigma}c_{l,\eta}^{*}\langle\mathbf{0}|b_{ \rho}b_{\rho}^{\dagger}\hat{p}_{\rho}^{\dagger}\hat{b}_{\eta}b_{\alpha}^{ \dagger}b_{\beta}^{\dagger}|\mathbf{0}\rangle\] \[=c_{l\rho}c_{L}^{*}\delta_{\alpha\rho}+c_{l\rho}c_{L,\theta}^{*} \delta_{\beta,\lambda}\] \[+c_{l\beta}c_{L,\lambda}^{*}\delta_{\alpha,\rho}+c_{l\beta}c_{L, \theta}^{*}\delta_{\rho,\lambda}\;, \tag{37}\]
where we have used that \(\hat{n}_{l}=\hat{a}_{l}^{\dagger}\hat{a}_{l}=\sum_{\sigma,\eta}c_{l,\sigma}c_ {\ast\eta}^{*}\hat{b}_{\sigma}^{\dagger}\hat{b}_{\eta}\) and the identity
\[\langle\mathbf{0}|b_{\rho}b_{\rho}\hat{p}_{\rho}^{\dagger}\hat{b} _{\rho}^{\dagger}\hat{b}_{\rho}^{\dagger}b_{\lambda}^{\dagger}|\mathbf{0}\rangle =\delta_{\rho,\sigma}(\delta_{\eta,\lambda}\delta_{\alpha\rho}+ \delta_{\eta,\alpha}\delta_{\lambda,\rho})\] \[+\delta_{\beta,\sigma}(\delta_{\eta,\lambda}\delta_{\alpha,\rho}+ \delta_{\eta,\alpha}\delta_{\lambda,\rho})\;. \tag{38}\]
In the chaotic regime, there are no degeneracies in the quasienergy spectrum and by using the relations discussed above we can explicitly derive the time average in Eq. (36) as
\[\bar{n}_{l} \approx\sum_{\alpha,\lambda}O_{i,j}^{\alpha,\lambda,\alpha}|c_{L, \lambda}|^{2}+O_{i,j}^{\alpha,\lambda,\alpha}|c_{L,\alpha}|^{2}\] \[+\sum_{\alpha,\lambda}O_{i,j}^{\alpha,\lambda,\lambda,\alpha}|c_{ L,\lambda}|^{2}+O_{i,j}^{\alpha,\lambda,\alpha,\lambda}|c_{L,\alpha}|^{2}\;. \tag{39}\]
Similarly to the single-particle case, the average local number of photons scales as \(\bar{n}_{l}\sim 1/M\) when the system equilibrates.
At this point, it is important to emphasize that at the single-particle level we assume a general system with RMT level statistics. From the theory of generalized thermalization of Floquet systems, this implies that such a system thermalizes at a given temperature determined by conserved quantities as in Ref. [82]. In the multiparticle case, however, the local observables do not thermalize but they equilibrate [82]. In fact, in a recent experiment, local equilibration [83] was observed in an optical simulation of undriven Hamiltonians. The scalings that we have obtained from the time averages give us some intuition of the values of observables after equilibration takes place.
From the results presented in this section, we can see the intimate relation between the calculation of time averages and spectral correlations. We can also see how to generalize this to the \(N\)-particle case. In this situation, the dynamics is determined by \(2N\)-point level correlations encoded in the spectral form factor \(\mathcal{R}_{2N}(mT)\). The level repulsion set the time scales for equilibration [82].
## VI Complexity of boson sampling and relation to Qsoc
As we mentioned above, in our work our aim is to understand the relation between single-particle chaotic evolution and the inherent complexity of boson sampling. Our results thus far show a relation between QSOC in periodic photonic circuits, such as OTOCs, spectral form factors and equilibration of observables. However, the results presented so far are not enough evidence to show that chaotic systems provide the complexity required for a boson sampling task to be hard. In fact, although the unitary \(\mathbf{U}_{S}(mT)\) exhibits chaotic spectral statistics, it is not guaranteed at all that such a unitary is close to a random matrix drawn from the CUE. Therefore, to discuss the complexity of sampling, we cannot invoke the argument used in the original paper by Aaronson and Arkhipov based on properties of the Haar measure [41].
To have a deeper understanding of the caveats here, we need to look at the arguments presented in the original paper of Aaronson and Arkhipov [41]. To set the basis for the first argument, we remind the reader that in their work, Aaronson and Arkhipov wrote that given a general matrix \(A\in C^{N\times N}\), the problem of approximating \(\text{Per}(A)\) to within a constant factor is \(\#\)-P complete. In our case, the chaotic dynamics renders a \(N\times N\) submatrix \(U^{(F,D)}(mT)\) that is obviously complex.
To show that the estimation of the permanent is \(\#\)-P-hard, one requires an additional ingredient based on the assumption that the matrix \(\mathbf{U}_{S}(mT)\) is drawn from the CUE. In their work, Aaronson and Arkhipov discuss the motivation of choosing unitaries drawn from the Haar measure to perform boson sampling. The idea is that given a random matrix \(\hat{U}\) chosen randomly according to the Haar measure, then any \(N\times N\) submatrix \(U^{(F,D)}\) of \(\hat{U}\) will be close (provided a suitable distance) to a matrix of i.i.d. Gaussians when \(N<M^{1/6}\)[41]. Thus, unitaries from the Haar measure naturally provide submatrices that are Gaussian. Further, Gaussian matrices are extremely important because one can invoke the "Permanent-of-Gaussians Conjecture" to show that the Gaussian Permanent Estimation (GPE) problem is \(\#\)-P-hard.
All of this being said, to have some intuition about how complex our dynamics is, we need to investigate statistical properties of the submatrix \(U^{(F,D)}(mT)\). Intuitively, this makes sense because the matrix \(\mathbf{U}_{S}(mT)\) generates the dynamics of the modes in the Heisenberg picture, and thus it determines the operator spreading. In other words, to achieve the complexity required for boson sampling, we need to let the system evolve for enough time to have enough information scrambling. For example, if the disorder is too strong, the operator spreading shows a logarithmic light cone characteristic of anomalous diffusion [76]. Due to disorder, some modes cannot be reached during the evolution. In a multiparticle scenario, this would limit the photon interference and certain submatrices \(U^{(F,D)}(mT)\) will not show Gaussian statistics. A direct consequence of this is that the permanent for those atypical configurations is not hard (see Fig. 1).
In the chaotic regime, the operator scrambling resembles diffusive behavior and photon interference is enhanced. This situation enables more information scrambling, and hence, the matrix \(U^{(F,D)}(mT)\) shows Gaussian statistics.
## VII An example model: A hybrid optical Floquet circuit
The results presented so far are general and valid for any unitary operator. Our aim in this section is to provide a concrete example of a system that undergoes a crossover from regular to chaotic behavior.
Next, we will propose a time-periodic photonic system,
where within a period of the drive, \(T\), the evolution is given by a succession of two operators. First one applies a pattern of local phase shift unitaries, given by
\[\hat{U}_{1}\equiv\prod_{j=1}^{M}\exp\left(-\mathrm{i}\tilde{\phi}\,\hat{a}_{j}^{ \dagger}\hat{a}_{j}\right), \tag{40}\]
where \(\hat{a}_{j}^{\dagger}\) creates a photon in mode \(j\), \(\hat{a}_{j}\) annhiliates a photon, \(\tilde{\phi}_{j}\) is the angle of the phase shifter, and \(M\) is the total number of modes. Then a unitary is applied that acts like an \(M\)-port beam splitter characterized by an angle \(\theta\), given by
\[\hat{U}_{2}\equiv\exp\left[-\mathrm{i}\theta\sum_{j=1}^{M}(\hat{a}_{j}^{ \dagger}\hat{a}_{j+1}+\hat{a}_{j+1}^{\dagger}\hat{a}_{j})\right]. \tag{41}\]
Hence, the evolution operator in one period of the drive is given by \(\hat{\mathcal{F}}\equiv\hat{U}_{2}\hat{U}_{1}\), where
\[\hat{\mathcal{F}}=\exp\left[-\mathrm{i}\theta\sum_{j=1}^{M}(\hat{a}_{j}^{ \dagger}\hat{a}_{j+1}+\hat{a}_{j+1}^{\dagger}\hat{a}_{j})\right]\prod_{j=1}^{ M}\exp\left(-\mathrm{i}\tilde{\phi}_{j}\hat{a}_{j}^{\dagger}\hat{a}_{j}\right)\,, \tag{42}\]
is the Floquet operator [9, 58].
The unitary given by Eq. (42) is general. We propose that it can be physically realized in silica-on-silicon waveguide circuits consisting of \(M\) accessible spatial modes [60, 84], which we schematically depict in Fig. 2. For a period of the drive, the waveguides are separated at the beginning to avoid evanescent coupling and phase-shifters are used to implement \(\hat{U}_{1}\). Then, as the photons travel along the chip, the waveguides are quickly brought together, allowing for evanescent coupling [59], which implements \(\hat{U}_{2}\). The strength of the evanescent coupling controls the parameter \(\theta\).
### Quantum kicked rotor
To obtain some physical intuition of the dynamics generated by our hybrid quantum circuit, it is useful to consider a time-periodic Hamiltonian \(\hat{H}(t)=\hat{H}(t+T)\), whose unitary time evolution in one period is given by Eq. (42). The corresponding Hamiltonian is a kicked rotor, given by
\[\hat{H}(t)=\sum_{j=1}^{M}\frac{\hbar\tilde{\phi}_{j}}{T}\hat{a}_{j}^{\dagger} \hat{a}_{j}+\frac{\hbar\theta}{T}\sum_{m=-\infty}^{\infty}\delta(t/T-m)\sum_{ j=1}^{M}(\hat{a}_{j}^{\dagger}\hat{a}_{j+1}+\mathrm{h.c})\,, \tag{43}\]
where the first term is a spatial modulation of onsite energies, and the second term is a periodic train of delta kicks, with kicking strength \(\hbar\theta/T\), that couples nearest-neighbor modes. We define a spatial profile of the onsite angular frequency detunings
\[\tilde{\phi}_{j}=\phi_{j}+\delta_{j}, \tag{44}\]
where
\[\phi_{j}=\frac{4\Phi}{M^{2}}\left(j-\frac{M}{2}\right)^{2}\,, \tag{45}\]
acts as a harmonic trapping potential with strength \(\Phi\), and \(\delta_{j}\) is a random angle drawn from a uniform distribution in the interval \([-W,W]\).
It is important to emphasize that the Hamiltonian given by Eq. (43) does not contain interactions between the particles, i.e., it is quadratic in the bosonic operators. As the particles are non-interacting, each particle independently evolves under the time evolution operator \(\hat{U}(mT)\). However, interesting physics, such as multiparticle interference [44, 45], occurs in the case of multiple bosonic particles due to their statistics [85].
In our photonic implementation, by adding disorder to the phase-shifters, we define an ensemble of unitaries associated to the dynamics generated by the quantum kicked rotor. The effect of a small amount of disorder is to break any remaining symmetries in our system. At the single particle level this allows the system to thermalize, as has been recently reported in the context of undriven systems [82]. If the disorder is too strong, however, the system becomes localized. In the next
Figure 2: Schematic of a photonic chip that implements the dynamics of the kicked rotor (see Eq. (43)), for demonstrating boson sampling. The yellow boxes represent the local phase-shifter \(\hat{U}_{1}\) (see Eq. (41)). The multiport beam splitter is achieved by bringing the waveguides together, which implements the unitary \(\hat{U}_{2}\) (see Eq. (41). The black box is one cycle of the drive, whose dynamics is given by the Floquet operator \(\hat{\mathcal{F}}\) (see Eq. (42)).
section, we will show that both \(\theta\) as well as \(W\) are key parameters describing the transition between regular and chaotic behavior in our system.
### Classical kicked rotor and chaos
Following the procedure outlined in [86], in the absence of disorder (\(W=0\)), one can obtain a semiclassical limit of Eq. (43) in the single-particle subspace. Here the size \(M\) of the chain determines the effective Planck constant given by \(\hbar_{\rm eff}=\hbar/M\). In this way, we can derive a semiclassical Hamiltonian for \(M\gg 1\) in terms of the dimensionless canonical variables \(x\) and \(k\) [see Appendix B for its derivation], given by
\[\mathcal{H}_{\rm SC}(x,k,t)=\frac{4\hbar\Phi}{T}\left(x-\frac{1}{2}\right)^{ 2}+\frac{2\hbar\theta}{T}\sum_{m=-\infty}^{\infty}\delta(t/T-m)\cos(k). \tag{46}\]
The resulting Hamiltonian has a very similar form to the classical kicked rotor, which is a paradigmatic model of chaotic dynamics [50; 51; 52; 53; 54; 55; 56]. In Appendix B we show that Eq. (46) exhibits a crossover from regular to chaos [56]. For example, when \(\theta=1/(8\Phi)\) the system is regular with a mixed phase space, while for \(\theta=5/(8\Phi)\) the system is fully chaotic.
The semi-classical limit is obtained in the limit of an infinite chain. However, in experimental platforms one works with a finite number of sites \(M\) - usually a few of them - far from the semiclassical limit. A natural question is whether the chaotic dynamics found in the classical kicked rotor are also exhibited in the finite sized quantum kicked rotor. We will show in that the quantum kicked rotor defined in Eq. (43) exhibits a crossover to a regime that exhibits QSOC associated with the classical rotor defined in Eq. (46).
## VIII Numerical results
The purpose of this section is to show numerical results for our particular example of a photonic Floquet circuit. Numerically, we generate \(w=1,\ldots,|\mathcal{E}_{U}|\) realizations of the Floquet operator Eq. (42) which are uniformally distributed with probability \(p_{w}=1/|\mathcal{E}_{U}|\). We show results for the level statistics, spectral form factors and the dynamics of observables. We also show key numerical evidence that the submatrices \(U^{(F,\ I)}(mT\) of the unitary \(\mathbf{U}_{S}(mT)\) show Gaussian statistics. This is an indication that chaotic system exhibit the complexity required for boson sampling tasks to be hard.
### Quasienergy level statistics
Fig. 3 shows \(\langle r\rangle\) [see Eq. (11)] as a function of the \(M\)-port beam splitter rotation angle (kicking strength), \(\theta\), and onsite disorder strength \(W\). When disorder dominates, the system is localized in real space with a Poisson distribution. For large kicking that dominates over disorder, pseudoconservation of crystal momentum is recovered and the system is instead localized in momentum space. The competition between kicking and disorder promotes quasienergy level repulsion (\(P(r)\sim r\) for GOE), giving large regions where the level statistics appears chaotic.
In the regions where \(\langle r\rangle\neq\langle r\rangle_{\rm GOE}\), the probability distribution \(P(r)\) deviates from the ones exactly calculated from the Wigner surmise, as shown in Fig. 3. In fact, there is a crossover between regions, as is expected for systems exhibiting single-particle chaos [9]. The quasienergy level statistics gives an indication of where the system is chaotic, and we will confirm this by looking for universal features found in the SFF
Figure 3: There is a crossover from Poissonian to Gaussian Orthogonal Ensemble (GOE) statistics in the consecutive level spacing ratio, \(r\). (a) The average \(\langle r\rangle\), where \(\theta\) is the rotation angle of the \(M\)-port beam splitter, and \(\Phi\) is the strength of the harmonic trapping potential. The contour line delineates \(\langle r\rangle=0.53590\) for the GOE. The probability distribution of consecutive level spacing ratios, \(P(r)\), is depicted in (b) \(W=7/(16\Phi)\) and \(\theta=7.4/(16\Phi)\) (upward triangle); (c) \(W=3.5/(16\Phi)\) and \(\theta=7.4/(16\Phi)\) (star); (d) \(W=2/(16\Phi)\) and \(\theta=7.4/(16\Phi)\) (circle); (e) \(W=3/(16\Phi)\) and \(\theta=18/(16\Phi)\) (downward triangle). Calculated for \(|\mathcal{E}_{U}|=100\) disorder realizations, \(M=300\) modes, \(\Phi=\pi/4\), and \(T=1\).
for systems that exhibit QSOC.
### Spectral form factors
For a general chaotic photonic system with \(N\ll M\) the SFF \(\mathcal{R}_{2N}(mT)\) [see Eq. (12)] shows a typical dynamical behavior characterized by a decay from a value \(M^{2N}\) reaching a dip after \(m\approx\sqrt{M}\) iterations of the Floquet operator. Finally, the SFF reaches a plateau at \(m\sim M\) with a value \(NM^{N}\) that defines the long-time asymptote [20]. As expected, the SFF reaches a plateau at a time that scales with \(M\) as our estimate \(\tau_{H}\). Fig. 4 shows the long-time dynamics of the SFF \(\mathcal{R}_{2}(mT)\) [see Eq. (13)] in the single-particle sector.
In the regular regime [see Fig. 4(a)] the SFF displays a dip, but not a pronounced ramp before reaching the plateau. This indicates, along with the quasienergy statistics in Fig. 3(b), that in this region the system is not chaotic. We will later show that in this region the system may not be complex enough for boson sampling to be hard. In the chaotic regions [see Fig. 4(c)(d)] the SFF displays the typical features associated with systems that exhibit QSOC. However, even in the crossover region [see Fig. 4(b)] there are weak QSOC, exhibiting a dip, ramp, and plateau. But, as we will show in Section VIII.4, in the crossover regime the system may not have complex random Gaussian submatrices.
### Long-time dynamics of local observables
In the this section we will show numerical evidence of the dynamics of \(\langle\hat{n}_{t}(mT)\rangle\) and the probability \(P_{\text{F}}(mT)\) for our photonic circuit with \(N=2\) photons in \(M=12\) modes. In this case, the single-particle unitary can be represented as a \(12\times 12\) unitary matrix \(\mathbf{U}_{S}(mT)\). This is an interesting example, as the system has a small system size far away of the semiclassical regime \(M\gg 1\).
In Fig. 5 we plot the stroboscopic dynamics of \(\langle\hat{n}_{t}(mT)\rangle\) in the regular regime for \(N=2\) photons. From this one can see that the photons remain localized and there are regions of the chain that cannot be accessed as we show in Fig. 5 a). On the contrary, when the system approaches the chaotic regime, it can explore more modes along the lattice, as we show in Fig. 5 b). To benchmark our results, Fig. 5 c) depict the dynamics of the local mean number of photons when the evolution operator is a random matrix drawn from the circular unitary ensemble (CUE) [41; 31]. When the our system approaches the chaotic regime, the dynamics is very close to the case of a random matrix as can be seen in Fig. 5. In these numerical results, we show the dynamics during \(m\approx M=12\) periods, which is close to the time scale required to reach the Plateau of the SFF in Fig. 4.
Next it is illustrative to investigate the populations \(P_{F}(mT)\) of the different configurations. The dynamics of multiple photons can be interpreted as a quantum walk in the Hilbert space [87]. In Fig. 5 (d) and (e) we plot the time evolution of the probabilities \(P_{F}(mT)=|\gamma_{F}(mT)|^{2}\) in the regular and chaotic regimes. For comparison Fig. 5 f) shows the dynamics of \(P_{F}(mT)\) for a random matrix drawn from CUE.
The results presented so far show that single-particle QSOC lead to equilibration of observables. We also show the intimate relation between spectral statistics, spectral form factors and photonic OTOCs and discuss the role that they play on the dynamics. However, these arguments still do not show if chaotic systems provide the complexity required to perform boson sampling. We address this in the next section.
### Statistical properties of submatrices and complexity of boson sampling
At the single-particle level, and as we can see from Eq. (16), a regular system cannot effectively explore all the modes \(M\) of the chain because the transition probability, \(U_{i,r}(mT)=\langle i|U(\hat{m}T)|r\rangle\), between two modes depends on the localization properties of the wave functions. Furthermore, it is also restricted by conservation laws. However, when the system enters the ergodic regime these constraints disappear due to perturbations that break the regular motion [87], and the system is able to explore more configurations [79; 88]. From this it follows that when the disorder is weak and \(\theta\) is chosen such that the the system exhibits GOE statistics [see Fig. 3], the operator spreading shows a linear light cone. As we discussed in Section III, when the disorder is strong the photons are localized and the operator spreading exhibits a logarithmic light cone.
Figure 4: Stroboscopic dynamics of the two-point spectral form factors, \(\mathcal{R}(mT)\) [see Eq. (13)], shows the characteristic dip, ramp, and plateau of chaotic systems. When the spectral statistics are (a) Poissonian the SFF is close to the long-time asymptote \(\mathcal{R}_{2}=M\) (blue horizontal line). Instead, in (c)-(d) the SFF more closely resembles that expected in a chaotic system [see Eq. (15)], with a dip, ramp, and plateau. In the crossover regime (b) the kicked rotor exhibits weak QSOC. We set the Heisenberg time as (a) \(\tau_{H}\approx 535T\), (b) \(585T\), (c) \(605T\), (d) \(300T\). The upwards triangle, star, circle, and downwards triangle correspond to the subfigures in Fig. 3. Calculated for \(|\mathcal{E}_{U}|=1000\) disorder realizations and \(M=300\) modes.
When the system exhibits QSOC we expect the elements of the single-particle unitary to be independent complex random Gaussian variables \(U_{ij}(mT)\sim Z=X+iY\), where \(X\) and \(Y\) are independent real random Gaussian variables with means \(E(X)=E(Y)=0\) and variances \(E(X^{2})=E(Y^{2})=\sigma/2\). For boson sampling it is sufficient that the elements of the top left \(N\times N\) submatrix are close in variation distance to complex random Gaussian variables [41]. In order to determine this for the kicked rotor, we consider boson sampling with \(N=5\) photons in \(M=300\) modes. The photons are initialized in the first five modes, \(i=1,\ldots,5\), and measured at the output in modes \(j=1,\ldots,5\). We take as a probability sample the real and imaginary parts of the elements of \(U_{ij}(mT)\) from \(|\mathcal{E}_{U}|=100\) disorder realizations.
In Fig. 6 we show the probability distribution of the elements after \(m=300\) cycles. In the regular regime [see Fig. 6(a)] the probability distribution is not Gaussian. This is because the single-photon dynamics are localized, and even at long times (\(m\sim M\)) the evolution will not have the required complexity. In contrast, when the system exhibits QSOC the probability distribution appears Gaussian [see Fig. 6(c)-(d)], because the operator spreading exhibits a linear light cone and the photons explore all modes of the chain. In the crossover regime [see Fig. 6(b)] the elements also appear Gaussian, but a more careful examination will show that the distribution deviates.
A useful way to compare two probability distributions in a graphical fashion is to use a quantile-quantile (Q-Q) plot [89]. A typical Q-Q plot is a parametric curve where one of the quantiles of one distribution is plotted against the same quantile of another distribution. In Fig. 7 we show a Q-Q plot
Figure 5: Dynamics over \(m=12\) cycles for \(N=2\) photons in \(M=12\) modes for a single realization of disorder. \(a)\) (\(W=7/16\Phi\)) and \(b)\) (\(W=1/8\Phi\)) depict the dynamics of the mean number of photons \(\langle\hat{n}_{i}(mT)\rangle\) for regular and chaotic unitaries, respectively. \(d)\), \(e)\) show \(P_{\mathrm{F}}(mT)\) for regular and chaotic unitaries, respectively. Clearly, in the regular regime, the system only explores a small portion of the available configurations. We benchmark our results using the a unitary evolution drawn from the Haar measure in \(c)\) and \(f)\). The dynamics in \(a)\) and \(b)\) resemble the light cone structure illustrated in Fig. 1. For the simulation we set \(\Phi=\pi/4\) and \(\theta=7.4/16\Theta\).
Figure 6: Probability densities of the complex elements of a \(5\times 5\) submatrix of the kicked rotor after \(m=300\) cycles, standardized to the standard Gaussian distribution. The upwards triangle, star, circle, and downwards triangle correspond to subfigures in Fig. 3. Calculated for \(|\mathcal{E}_{U}|=100\) disorder realizations and \(M=300\) modes.
against a theoretical Gaussian distribution. If the elements are complex random Gaussian variables then the points will approximately lie on a straight line. This is the case for when the kicked rotor exhibits QSOC [see Fig. 7(c)-(d)]. We see that in the crossover regime that the tails of the distribution are light and deviate from a Gaussian distribution [see Fig. 7(b)]. We have further confirmed that in the crossover the distribution deviates from a Gaussian by using a Shapiro-Wilk test for normality [90] with an \(\alpha=5\%\) significance level.
We wish to highlight that the conclusions from our results do not change for long times well past the Heisenberg time \(\tau_{H}\). Furthermore, we have confirmed that our results are unchanged for submatrices of sizes \(N=2,\ldots,30\).
## IX Conclusions
In their original paper [41], Aaronson and Arkhipov show that a unitary from the CUE will be sufficient for boson sampling to be hard. In our work, we have shown that one can perform boson sampling exploiting the time evolution of a system that exhibits QSOC and that this will also be hard, because the dynamics is linked to operator spreading and how well a single photon delocalizes across all modes of the system. Hence, we argue that any photonic system that exhibits QSOC should be a hard boson sampling task. As we proposed, this can be explored in the disordered quantum kicked rotor using waveguides in periodic photonic chips.
Of course, our test that the elements are approximately Gaussian may be too strict. We have not quantified the total variation distance to a Gaussian distribution for boson sampling to be hard. It is possible that even in the crossover regime, where the QSOC are weak, that the distribution is sufficiently approximately Gaussian. Specifically, in the crossover regime the maximum deviation of the distribution occurs at the tails. A route to resolve this may be to consider the empirical cumulative distribution function and connect it to the total variation distance measure that relates the hardness of boson sampling with the Permanent-of-Gaussians Conjecture [41] for a given number of modes, \(M\), and particles, \(N\).
In future works, it would be interesting to explore how to use ideas of condensed matter physics to further break symmetries of the unitary. For example, this may allow one to explore chaotic systems with CUE spectral statistics by breaking time reversal symmetries. In real implementations, the photonic system is affected by photon loss. Therefore, a natural question is to explore how these loss mechanism affects the chaotic dynamics in our photonic implementation. In our work, we presented numerical calculations showing that in some regimes, submatrices of the unitary show Gaussian statistics. It would be useful to find an analytical proof of this by using tools of QSOC.
_Acknowledgments.--_ V. M. Bastidas thanks M. M. Zapata-Alvarez and valuable discussions with K. Azuma, H. W. Lau, L. Ruks, and H. Takesue. The authors acknowledge partial support through the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) under Grant No. JPMXS0118069605.
## Appendix A Calculation of time averages
The purpose of this appendix is to provide a formal derivation of the time average of observables. This type of averages plays a very important role in describing equilibration in periodically-driven quantum system provided that the quasienergies show level repulsion.
### Formal derivation of time averages
Let us consider the regime \(\mathcal{M}\gg 1\) of time averages such as the one in Eqs. (28) and (32). These averages contain quantities of the generic form
\[\bar{Q}_{2N}=\frac{1}{\mathcal{M}}\sum_{m=0}^{\mathcal{M}-1}e^{-i(\sum_{z< \alpha}\xi_{z}-\sum_{\alpha}\xi_{\alpha})mT/h}\, \tag{10}\]
where \(\zeta=(\zeta_{1},\zeta_{2},\ldots,\zeta_{N})\) and \(\mathbf{\eta}=(\eta_{1},\eta_{2},\ldots,\eta_{N})\). It is instructive to see the explicit expressions for \(N=1\) and \(N=2\)
\[\bar{Q}_{2} =\frac{1}{\mathcal{M}}\sum_{m=0}^{\mathcal{M}-1}e^{-i(\xi_{z}- \xi_{\beta})mT/h}\] \[\bar{Q}_{4} =\frac{1}{\mathcal{M}}\sum_{m=0}^{\mathcal{M}-1}e^{-i(\xi_{z}+ \xi_{z}-\xi_{\beta}-\xi_{\beta})mT/h}. \tag{11}\]
To calculate this type of averages in a formal way, it is useful to perform the sums explicitly. With this in mind, we consider
Figure 7: Q-Q plot comparing against a theoretical Gaussian distribution of the complex elements of a \(5\times 5\) submatrix of the kicked rotor (\(N=5\) photons in \(M=300\) modes) after \(m=300\) cycles. When the spectral statistics are (a) Poissonian the distribution does not lie on a straight line and is not Gaussian. In the regime where there are QSOC (c)-(d) the distribution is approximately Gaussian. In the crossover regime (b) the distribution has light tails and is not a Gaussian distribution to within a \(\alpha=5\%\) significance level. The upwards triangle, star, circle, and downwards triangle correspond to subfigures in Fig. 3.
a generic power series as follows
\[F(\phi-z) =\frac{1}{\mathcal{M}}\sum_{m=0}^{\mathcal{M}-1}e^{-\mathrm{i}z( \phi-z)}=\frac{1-e^{\mathrm{i}(z-\phi)\mathcal{M}}}{\mathcal{M}(1-e^{\mathrm{i} (z-\phi)})}\] \[=\left(\frac{1-e^{\mathrm{i}(z-\phi)\mathcal{M}}}{\mathcal{M}} \right)\sum_{l=0}^{\infty}\frac{\mathrm{i}^{l-1}B_{l}}{l!}(z-\phi)^{l-1}. \tag{10}\]
where \(\phi\) is a real variable and \(z=z_{R}+\mathrm{i}z_{I}\) with \(z_{I}\geq 0\) is a complex number added for convergence of the series. In this derivation, we also have used the generating function of the \(B_{l}\) the Bernoulli numbers [75]
\[\frac{1}{1-e^{\mathrm{i}(z-\phi)}}=\sum_{l=0}^{\infty}\frac{\mathrm{i}^{l-1}B _{l}}{l!}(z-\phi)^{l-1} \tag{11}\]
Next, let us carefully investigate the convergence of the power series in Eq. (10) in the complex plane. As a first step, we can notice that \(F(0)=1\) when \(z=\phi\). The second step is to look at the series far from the point \(z=\phi\) defining a scale \(R=|z-\phi|^{-1}\). Due to the exponential decay \(e^{-z_{I}\mathcal{M}}\), the exponential term in Eq. (10) vanishes for \(\mathcal{M}\gg R\) provided that \(|z-\phi|\neq 0\). By using this, we can obtain the bound
\[|F(1/R)|\leq\frac{1}{\mathcal{M}}\sum_{l=0}^{\infty}\frac{\mathrm{i}^{l-1}B_{ l}}{l!}R^{1-l}\,. \tag{12}\]
This clearly tends to zero when \(\mathcal{M}\gg R\). The trick to calculate the average is to take the limit \(|z|\to 0\). This allows us to approach the resonance condition \(\phi=0\) in the complex plane. After taking the limit \(|z|\to 0\), we define a real function \(F(\phi)\), from the previous discussion, we know that \(F(\phi)=1\) if \(\phi=0\) and \(F(\phi)=0\) when \(\phi\neq 0\). Keeping this in mind, we obtain
\[\bar{Q}_{2} =\delta_{\alpha,\beta}\] \[\bar{Q}_{4} =\delta_{\alpha,\beta}\delta_{\lambda,\rho}+\delta_{\alpha,\rho} \delta_{\lambda,\beta}. \tag{13}\]
The first expression follows from the generic Floquet spectrum [13] by setting \(\phi=(\xi_{\alpha}-\xi_{\beta})T/\hbar=T/\tau_{\alpha,\beta}\). From this equation, it is clear the meaning of the scale \(R=\tau_{\alpha,\beta}/T\) that tells us how many periods we need to wait to obtain a well-defined time average, as we discussed in the main text. To obtain the second equation, we need to be careful because \(\phi=(\xi_{\alpha}+\xi_{\lambda}-\xi_{\beta}-\xi_{\rho})T/\hbar\). Thus, we can have a resonance \(\alpha=\beta\) and an effective scale \(R=\tau_{\lambda,\rho}/T\). Alternatively, we can have \(\lambda=\rho\) and \(R=\tau_{\alpha,\beta}/T\). For this reason, we need to take care of the possible combinations defining the time scales \(\mathcal{M}\gg R\) to perform the time average. The averages discussed here resemble the results for time-independent systems in Ref. [82]. To calculate the general time average in Eq. (12), one needs to consider all the possible pairs for which one reaches the resonance \(\xi_{\zeta}-\xi_{\eta}\) for \(\zeta\in\mathbf{\zeta}\) and \(\eta\in\mathbf{\eta}\). For the general expression of this time average, we refer the reader to Appendix \(C4\) of Ref. [13].
### Time averages and Floquet generalized Gibbs states
Let us explain in detail what is the nature of time averages of local observables. With this aim, we investigate what happens with the dynamics of a single particle initialized in the state \(|\psi(0)\rangle=\hat{a}_{i}^{\dagger}|\mathbf{0}\rangle\) in the chaotic regime. After \(m\) periods of the circuit, the evolution of the particle is given by
\[|\psi(mT)\rangle=\sum_{\alpha}e^{-\mathrm{i}\xi_{\alpha}mT/\hbar}c_{i,\alpha} |\alpha\rangle. \tag{14}\]
To investigate thermalization in quadratic Hamiltonians [82], it is useful to define the time average of a single-particle observable \(\hat{O}\). One example of this is \(\hat{O}=\hat{n}_{l}\). The time-averaged expectation value of such a single-particle observable over a total number \(\mathcal{M}\) of periods of the drive is given by
\[\hat{O} =\frac{1}{\mathcal{M}}\sum_{m=0}^{\mathcal{M}-1}\langle\hat{O}(mT )\rangle=\] \[=\frac{1}{\mathcal{M}}\sum_{m=0}^{\mathcal{M}-1}\sum_{\alpha, \beta}e^{-\mathrm{i}(\xi_{\alpha}-\xi_{\beta})mT/\hbar}c_{i,\alpha}c_{i,\beta} ^{*}\langle\beta|\hat{O}|\alpha\rangle. \tag{15}\]
From this expression, we clearly see the relation to \(\bar{Q}_{2}\) in Eq. (12). In the last section, we obtained the average \(\bar{Q}_{2}=\delta_{\alpha,\beta}\), which give us the time average
\[\bar{O}=\sum_{\alpha}|c_{i,\alpha}|^{2}\langle\alpha|\hat{O}|\alpha\rangle= \mathrm{tr}(\hat{\rho}_{\mathrm{GGE}}\hat{O})\, \tag{16}\]
where
\[\hat{\rho}_{\mathrm{GGE}}=\sum_{\alpha}|\alpha\rangle\langle\alpha|e^{-\Gamma_ {\alpha}\xi_{\alpha}}/Z, \tag{17}\]
is a density matrix representing the Floquet generalized Gibbs ensemble [77]. Here \(\Gamma_{\alpha}=1/k_{B}T_{\alpha}\) and \(T_{\alpha}\) are effective temperatures determined by the conserved quantities, while \(Z\) is a normalization constant [77]. The direct implication of this is that due to level repulsion, the weights satisfy the relation \(|c_{i,\alpha}|^{2}\approx e^{-\Gamma_{\alpha}\xi_{\alpha}}/Z\) that we used in the main text.
## Appendix B Connection to the kicked rotor
In this section we show that a classical limit of Eq. (43) at the single-particle level realizes a kicked rotor [50; 51; 52; 53; 54; 55; 56], which is an archetypal model for chaos. Hence, we will show that the growth in complexity of Eq. (43) (see Figs. 3,4) is associated with the destruction of quasi-periodic orbits and a transition to the chaotic regime of a classical Hamiltonian
### Semiclassic Hamiltonian in the absence of disorder (\(W=0\))
It is useful to consider periodic boundaries \(\hat{a}_{M+1}=\hat{a}_{1}\) and transform to the reciprocal lattice with the discrete Fourier transform
\[\hat{A}_{k}\equiv\frac{1}{\sqrt{M}}\sum_{j=1}^{M}e^{-ibk^{j}}\hat{a}_{j}, \tag{18}\]
where \(k\) labels the crystal momentum and we set the lattice constant \(b=1\). In this case, the variable \(k\) is discrete and satisfy the condition \(k=2s\pi/M\) with integer \(s\). Importantly, for
a finite chain, the values of \(k\) are restricted to the first Brillouin zone \(-\pi\leq k\leq\pi-2\pi/M\). Using the identity
\[\frac{1}{M}\sum_{j}e^{-i(k_{2}-k_{1})j}=\delta_{k_{1},k_{2}}, \tag{11}\]
where \(\delta_{k_{1},k_{2}}\) is the Kronecker delta function, Eq. (43) becomes
\[\hat{H}(t)= \frac{1}{M}\sum_{j,k_{1},k_{2}}\frac{\hbar\phi_{j}}{T}e^{-i(k_{1} -k_{2})j}\hat{A}_{k_{1}}^{\dagger}\hat{A}_{k_{2}}\] \[+2\hbar J(t)\sum_{k}\cos(k_{1})\hat{A}_{k}^{\dagger}\hat{A}_{k}\, \tag{12}\]
where \(J(t)=\frac{\theta}{T}\sum_{m=-\infty}^{\infty}\delta(t/T-m)\). Since Eq. (43) is quadratic we may restrict our analysis to the single-particle subspace.
A general single-particle state in the reciprocal lattice basis is given by
\[|\psi(t)\rangle\equiv\sum_{k}\psi_{k}(t)\hat{A}_{k}^{\dagger}|0\rangle, \tag{13}\]
where \(\hat{A}_{k}|0\rangle=0\) defines the vacuum \(|0\rangle\) and \(\psi_{k}(t)\) is a complex coefficient. The time-dependent Schrodinger equation \(i\hbar\partial_{t}|\psi(t)\rangle=\hat{H}(t)|\psi(t)\rangle\) is given by
\[\sum_{k}i\hbar\partial_{t}\psi_{k}(t)\hat{A}_{k}^{\dagger}|0\rangle= \frac{1}{M}\sum_{j,k_{1}}\frac{\hbar\phi_{j}}{T}e^{-i(k_{1}-k)j} \psi_{k_{1}}(t)\hat{A}_{k_{1}}^{\dagger}|0\rangle\] \[+2\hbar J(t)\sum_{k}\cos(k)\psi_{k}(t)\hat{A}_{k}^{\dagger}|0\rangle, \tag{14}\]
where we used the commutation relation \([A_{k_{1}},A_{k_{2}}^{\dagger}]=\delta_{k_{1},k_{2}}\) for bosons. Eq. (14) defines an equation of motion for the coefficients \(\psi_{k}(t)\), given by
\[\hbar\partial_{t}\psi_{k}(t)=\frac{1}{M}\sum_{j,k_{1}}\frac{\hbar \phi_{j}}{T}e^{-i(k_{1}-k)j}\psi_{k_{1}}(t)+2\hbar J(t)\cos(k)\psi_{k}(t). \tag{15}\]
To obtain the semiclassical Hamiltonian, it is useful to calculate the following expression
\[\sum_{j,k_{2}}\frac{je^{-i(k_{1}-k_{2})j}}{M^{2}}\psi_{k_{2}}(t)=\lim_{\epsilon \to 0}\frac{\mathrm{i}}{M^{2}}\sum_{j,k_{2}}\frac{e^{-i(k_{1}-k_{2})j}(e^{- \mathrm{i}\epsilon j}-1)}{\epsilon}\psi_{k_{2}}(t). \tag{16}\]
To be mathematically precise, the right hand side of this equation can be interpreted as the derivative of the function
\[g(\epsilon)=\sum_{j,k_{2}}\frac{e^{-i(k_{1}-k_{2}+\epsilon)j}}{M ^{2}}\psi_{k_{2}}(t)=\sum_{k_{2}}\left[\frac{e^{\mathrm{i}\epsilon}-e^{-i \epsilon M}}{e^{\mathrm{i}\epsilon}-e^{-i(k_{1}-k_{2})}}\right]\frac{\psi_{k _{2}}(t)}{M^{2}} \tag{17}\]
with respect to \(\epsilon\) and evaluated at \(\epsilon=0\) as follows
\[\mathrm{i}\frac{dg(\epsilon)}{d\epsilon}\Big{|}_{\epsilon=0}=\sum_{j,k_{2}} \frac{je^{-i(k_{1}-k_{2})j}}{M^{2}}\psi_{k_{2}}(t). \tag{18}\]
So far all the calculations are exact. But now we will carefully look at Eqs. (16) and (17) in the large-volume limit \(M\gg 1\) and make some approximations. As a first step, approximate the discrete sums in Eq. (17) using an integral in the limit \(M\gg 1\) as follow
\[g(\epsilon) =\frac{1}{M}\sum_{k_{2}}\frac{1}{M\tilde{\Delta}_{k}}\left[\frac{ e^{\mathrm{i}\epsilon}-e^{-i\epsilon M}}{e^{\mathrm{i}\epsilon}-e^{-i(k_{1}-k_{2})}} \right]\psi_{k_{2}}(t)\tilde{\Delta}_{k}\] \[\approx\frac{1}{M}\int_{-\pi}^{\pi}\delta(k_{1}-k_{2}+\epsilon) \psi(k_{2},t)dk_{2}=\frac{\psi(k_{1}+\epsilon,t)}{M}\, \tag{19}\]
where we defined \(\psi(k,t)\) as the continuous version of \(\psi_{k}(t)\). We also considered the volume element in the reciprocal space \(\Delta_{k}=2\pi/M\) and approximate the integrand Kernel by using a Dirac delta \(\delta(k_{1}-k_{2}+\epsilon)\) in the limit \(M\gg 1\) when \(\Delta_{k}\to 0\). In the last derivation, we approximated the discrete sums using an integral in the limit \(M\gg 1\) as follows
\[\sum_{k}f(k)=\frac{1}{\tilde{\Delta}_{k}}\sum_{k}f(k)\tilde{\Delta}_{k}\approx \frac{M}{2\pi}\int_{-\pi}^{\pi}f(k)dk. \tag{20}\]
This procedure allows us to approximate the sum in Eq. (16) in terms of a scaled derivative of the wave function \(\psi(k_{1},t)\) as follows
\[\sum_{j,k_{2}}\frac{je^{-i(k_{1}-k_{2})j}}{M^{2}}\psi_{k_{2}}(t) \approx\lim_{\epsilon\to 0}\frac{\mathrm{i}}{M}\frac{\psi(k_{1}+ \epsilon,t)-\psi(k_{1},t)}{\epsilon}\] \[\approx\frac{\mathrm{i}\partial_{k_{1}}\psi(k_{1},t)}{M}\, \tag{21}\]
where used \(\psi(k_{1}+\epsilon,t)\approx\psi(k_{1},t)+\epsilon\partial_{k_{1}}\psi(k_{1},t)\).
Hence, by using Eq. (B) we can approximate the sum in Eq. (15) as follows
\[\frac{1}{M}\sum_{j,k1}\frac{\hbar\phi_{j}}{T}e^{-i(k_{1}-k)j}\psi_{k_{1}}(t) \approx\frac{4\hbar\Phi}{T}\left(\frac{i\partial_{k}}{M}-\frac{1}{2}\right)^{2} \psi_{k}(t). \tag{22}\]
Defining the position as \(\hat{q}\equiv b\hat{x}\equiv i\partial_{k}/M\) and momentum as \(\hat{p}\equiv\hbar k/b\) (keeping in mind that \(b=1\)), we recover the canonical commutation relation \([\hat{q},\hat{p}]=i\hbar_{\mathrm{eff}}\), with effective Planck constant \(\hbar_{\mathrm{eff}}=\hbar/M\). Hence, for \(M\gg 1\), position and momentum behave like commuting classical variables and may be replaced with real numbers \(\hat{x}\to x\). In this limit we obtain the classical Hamiltonian
\[\mathcal{H}_{\mathrm{C}}(k,x,t) =\frac{4\hbar\Phi}{T}\left(x-\frac{1}{2}\right)^{2}+2\hbar J(t)\cos(k)\] \[=\frac{4\hbar\Phi}{T}\left(x-\frac{1}{2}\right)^{2}+\frac{2\hbar \theta}{T}\cos(k)\sum_{m=-\infty}^{\infty}\delta(t/T-m), \tag{23}\]
where \((k,x)\) are classical canonical variables in phase space.
Next, let us calculate the classical equations of motion for
the Hamiltonian given by Eq. (141)
\[\frac{\partial x}{\partial t}=\frac{\partial\mathcal{H}(x,k,t)}{ \hbar\partial k}=-\frac{2\theta}{T}\sin(k)\sum_{m=-\infty}^{\infty}\delta(t/T-m)\] \[\hbar\frac{\partial k}{\partial t}=-\frac{\partial\mathcal{H}(x,k,t)}{\partial x}=-\frac{8\hbar\Phi}{T}\left(x-\frac{1}{2}\right)\,. \tag{142}\]
The integration of the equations of motion during a period \(T\) of the drive gives us a discrete map that give us the dynamics at stroboscopic times
\[x_{n+1} =x_{n}-2\theta\sin(k_{n})\] \[k_{n+1} =k_{n}-8\Phi\left(x_{n+1}-\frac{1}{2}\right)\,. \tag{143}\]
Next, let us define the new coordinate \(X_{n}=-8\Phi\left(x_{n}-\frac{1}{2}\right)\) in such a way that the equations of motion for this new variables are given by
\[X_{n+1} =X_{n}+\bar{K}\sin(k_{n})\] \[k_{n+1} =k_{n}+X_{n+1}\,, \tag{144}\]
where \(\bar{K}=16\theta\Phi\).
### Discussion about the relation to the Kicked rotor
One can identify Eq. (141) with the model of a kicked rotor [50; 51; 52; 53; 54; 55; 56] with \(x-1/2\to p\) playing the role of the momentum and \(k\rightarrow\theta\) the phase. Eq. (141) can be written as
\[\mathcal{H}(\theta,p,t)=\frac{p^{2}}{2I}+K\cos\Theta\sum_{m=-\infty}^{\infty }\delta(t/T-n), \tag{145}\]
where \(I=T/(8\hbar\Phi)\) is the moment of inertia and \(K=2\hbar\theta/T\) is the kicking strength. After integrating the equations of motions one obtains the discrete Chirikov map
\[P_{n+1} =P_{n}+\bar{K}\sin\Theta_{n}\] \[\Theta_{n+1} =\Theta_{n}+P_{n+1}, \tag{146}\]
where the angular momentum is \(P\equiv pT/I\) and the renormalized kicking strength is \(\bar{K}\equiv KT^{2}/I=16\theta\Phi\) as defined previously.
The equations of motion of Eq. (145) depend on a single parameter \(\bar{K}\). It has the same structure as the kicked rotor conventionally found in textbooks, but the topology of the phase space differs. In a conventional kicked rotor the phase space lives on a torus because \(P_{n}\) is taken modulo \(2\pi\). In the kicked rotor defined by Eq. (145), the domain of \(P_{n}\) is any real number and hence the topology of the phase space is a cylinder.
Hence, we can identify the classical limit of Eq. (43) with the dynamics of the kicked rotor. When \(\bar{K}=0\) the dynamics is regular and the system shows only periodic orbits. The delta kicks proportional to \(\bar{K}\) break the periodic orbits inducing a transition to chaos. For example, for \(\bar{K}=4\), the system shows a mixed phase space with regular islands. The latter are completely absent when \(\bar{K}=7\) and the system is fully chaotic.
Some comments about the topology of the phase space and the definition of the coordinates \(X_{n}\) and \(k_{n}\) in Eq. (144) are in order. In the derivation of the semiclassical limit we have assumed periodic boundary conditions such that the position is in the domain \(0\leq x_{n}<1\) and the momentum naturally is restricted to \(-\pi<k_{n}<\pi\). Topologically, this defines a torus. Consequently, the rescaled coordinates are defined in the domain \(-4\Phi\leq X_{n}<4\Phi\) and \(-\pi<k_{n}<\pi\).
|
2302.09486 | LC-NeRF: Local Controllable Face Generation in Neural Randiance Field | 3D face generation has achieved high visual quality and 3D consistency thanks
to the development of neural radiance fields (NeRF). Recently, to generate and
edit 3D faces with NeRF representation, some methods are proposed and achieve
good results in decoupling geometry and texture. The latent codes of these
generative models affect the whole face, and hence modifications to these codes
cause the entire face to change. However, users usually edit a local region
when editing faces and do not want other regions to be affected. Since changes
to the latent code affect global generation results, these methods do not allow
for fine-grained control of local facial regions. To improve local
controllability in NeRF-based face editing, we propose LC-NeRF, which is
composed of a Local Region Generators Module and a Spatial-Aware Fusion Module,
allowing for local geometry and texture control of local facial regions.
Qualitative and quantitative evaluations show that our method provides better
local editing than state-of-the-art face editing methods. Our method also
performs well in downstream tasks, such as text-driven facial image editing. | Wenyang Zhou, Lu Yuan, Shuyu Chen, Lin Gao, Shimin Hu | 2023-02-19T05:50:08Z | http://arxiv.org/abs/2302.09486v1 | # LC-NeRF: Local Controllable Face Generation in Neural Randiance Field
###### Abstract
3D face generation has achieved high visual quality and 3D consistency thanks to the development of neural radiance fields (NeRF). Recently, to generate and edit 3D faces with NeRF representation, some methods are proposed and achieve good results in decoupling geometry and texture. The latent codes of these generative models affect the whole face, and hence modifications to these codes cause the entire face to change. However, users usually edit a local region when editing faces and do not want other regions to be affected. Since changes to the latent code affect global generation results, these methods do not allow for fine-grained control of local facial regions. To improve local controllability in NeRF-based face editing, we propose LC-NeRF, which is composed of a Local Region Generators Module and a Spatial-Aware Fusion Module, allowing for local geometry and texture control of local facial regions. Qualitative and quantitative evaluations show that our method provides better local editing than state-of-the-art face editing methods. Our method also performs well in downstream tasks, such as text-driven facial image editing.
## 1 Introduction
Realistic face image generation and editing is a useful topic in image synthesis and is widely used in portrait generation and artistic creation. Many efforts [11, 12, 13] have been paid to improve the quality and increase the resolution of the generated face images. At the same time, users want to have more interaction with and control over the generated images. To increase the controllability of the generation process, many methods are proposed to edit the face images by different interfaces, such as sketches [4], texts [18], semantic masks [21], etc.
Benefining from the implicit 3D representation of neural radiance fields (NeRF) [16], the image synthesis models have shown significant progress in transferring 2D image generation task [12] to 3D [2, 8, 17], addressing 3D consistency in perspective transformation. EG3D [2], StyleNeRF [8], and StyleSDF [17] use implicit three-dimensional representations to improve the quality of 3D face generation.
Recently, some NeRF-based face editing methods [10, 23, 24] have shown excellent results in decoupling the geometry and texture of faces. FENeRF [24], IDE-3D [23] and NeRFFaceEditing [10] decouple geometry and texture by using separate geometry and texture networks. These methods use the global latent code to generate global 3D representation, so editing the latent code will affect the whole face. This will inevitably affect non-editing regions when editing local facial regions, and even lead to inconsistent facial identities.
To improve the controllability of NeRF-based face editing, we propose a local controllable face generation and editing method, named LC-NeRF, for fine-grained facial local region control and the decoupling of geometry and texture. There are two core issues that need to be solved, one is the decomposition of the global 3D representation and representations of the local 3D regions, and another is the fusion of local 3D regions. It is challenging to decompose a complete 3D representation into multiple local 3D representations and stably complete the training process. To overcome this issue, we design our generator network with multiple local generators to generate the content for each local region. In addition, for more flexible control over geometry and texture, we further subdivide the local generator into a geometry network and a texture network controlled by geometry code and texture code separately. Through these designs, our method can modify the geometry and texture of local regions without affecting other regions by editing multiple local latent codes. Another core challenge is how to fuse local 3D representations of all local regions to generate the final face image. We propose a Spatial-Aware Fusion Module to complete the fusion of multiple local regions. Specifically, each local geometry generator predicts the semantic confidence of spatial points, and the fusion module fuses the features of different local generators in a soft and smooth way through all confidences.
Qualitative and quantitative experiments show that our method not only better achieves the stability of non-editing regions during editing, but also better ensures the consistency of face identities than state-of-the-art face editing methods. The main contributions of this paper are summarized as followed:
* We propose a local controllable NeRF face generation and editing method, named LC-NeRF, to control and edit the geometry or texture of local regions in a decoupled manner.
* We propose a _Local Region Generators Module_ to decompose the global 3D representation and latent codes into multiple local regions, and a _Spatial-Aware Fusion Module_ that aggregates these regions into a whole image.
* Our method achieves state-of-the-art 3D local geometry and texture editing results for face editing, as demonstrated by both qualitative and quantitative evaluations.
## 2 Related Work
### Neural Face Generation
Generative models, such as Stylegan v1-v3 [11, 12, 13], have achieved high-quality generaton of 2D images. In recent years, NeRF [7] has emerged as a method that can implicitly model 3D geometry from 2D images and then render photorealistic and 3D consistent images. Subsequently, NeRF-based face generative models have been investigated. PI-GAN [1] proposes a SIREN-based [22] implicit radiance field to generate 3D faces via sampled latent and posional encoding. Furthermore, due to the advantages of StyleGAN [12] in image generation, some methods [17, 8] based on StyleGAN can generate high resolution and quality images. StyleNeRF [8] provides a 3D GAN approach that fuses style-based generation with scene representation by neural radiance fields. StyleSDF [17] is similar, but incorporates an SDF-based 3D representation to ensure that images generated from different viewpoints have 3D geometric consistency. In addition, some methods study different forms of space representation. For example, EG3D [2] uses three projection planes (tri-plane) to represent the 3D space, generated by a backbone of StyleGAN. GRAM [5] proposes a radiance manifolds based generative model that divides the space into multi-manifolds. These methods improves the quality of generated images but lack the editability and controllability of geometry and texture. Our method enhances the effect of disentanglement of facial features while maintaining generative quality.
### Neural Face Editing
With the high-quality generation of images, many portraits are generated by the generation models, as described above. Meanwhile, some methods [19] take the editing as an application. Editing tasks are no longer limited to the 2D domain, and research on how to perform editing and control on 3D faces becomes pupolar. In the image domain, Faceshop [19] treats the face editing task as sketch-based image completion that can only edit the facial geometry. The demands for face editing are no longer editing geometry but also modifying texture, such as editing hair colors. DeepFaceEditing [4] decouples facial local regions by using sketches to represent geometry. SofGAN [3] trains a semantic occupancy field (SOF) and uses 2D semantic masks to generate face images to decouple geometry and texture. SemansticStyleGAN [21] enhances the control over local regions by generating the features of each region separately and then fusing the features of different regions in the 2D feature domain. The implicit 3D representation and gen
eration of high-quality multi-view images in NeRF inspire works on 3D face decoupling and editing.
FENeRF [24] adds a mask branch to PI-GAN [1] for geometry control. Further, IDE-3D [23] and NeRFFaceEditing [10] realize the decoupled control of geometry and texture based on three projection planes [2]. IDE-3D [23] proposes a framework with separate geometry and texture networks to generate respective tri-plane features. Inspired by AdaIN, NeRFFaceEditing [10] decomposes the tri-plane features into geometry features and appearance features for decoupling the geometry and appearance. These methods are all implemented by optimizing the latent code during geometry editing, which is used to generate the whole face. Therefore, the global effect is prone to be affected during local editing.
## 3 Methodology
In this section, we introduce the architecture of our method in detail. We aim to control and edit local regions by editing the local geometry and texture latent codes. To achieve this goal, we need to solve two core problems: i) How to control the geometry and texture of each local region separately; ii) How to fuse the features of all the local regions into the global feature and generate a whole face image. For the first problem, we propose two independent lightweight local networks for each region: a geometry and a texture network, controlled by their respective geometry and texture latent codes (Section 3.1). For the second problem, we design a spatial aware fusion module to fuse the features generated by all the local networks and then generate the final face image (Section 3.2). We introduce two discriminators and detailed loss functions used in network training (Section 3.3). Then, we will introduce how to encode the real image to the latent code through GAN inversion and how to perform mask editing (Section 3.4).
### Local Region Generators
Geometry GeneratorThe geometry generator is designed to determine the shape of the face. We assign a lightweight geometry generator \(\Phi s_{i}\) for each local region \(i\) of the face. If a 3D point belongs to a certain local region, the corresponding geometry generator provides the most information for this point. The generator plays a major role in determining the semantic category and geometry information of the point. As shown in Figure 2, each geometry generator contains 6 linear layers with SIREN [22] activation, and is controlled by the geometry latent code \(w_{g}\).
Given a sampled point \(x\in\mathbb{R}^{3}\), the \(i_{th}\) geometry generator module \(\Phi s_{i}\) decodes it to obtain the semantic confidence \(s_{i}(x)\) and geometry feature \(f_{g_{i}}(x)\) from a geometry latent \(w_{g_{i}}\):
\[s_{i}(x),f_{g_{i}}(x)=\Phi s_{i}(x,w_{g_{i}}) \tag{1}\]
Here, \(s_{i}(x)\) indicates the probability that the \(i_{th}\) local geometry generator believes three-dimensional point \(x\) to be in its region. \(s_{i}(x)\) has two characteristics: i) The larger the value of \(s_{i}(x)\), the more importance and more proportion the features of this generator acquire in the subsequent fusion module; ii) Sampling or modifying the geometry latent \(w_{g_{i}}\) can increase or reduce the \(s_{i}(x)\) value of the local region \(i\), which enables local editing of geometry. Specifically, we use a linear layer following the geometry feature \(f_{g_{i}}(x)\) to calculate the geometry confidence \(s_{i}(x)\).
Texture GeneratorThe texture generator can be interpreted as a shader, which is used to fill the color of the geometry generated by the geometry generator. In other words, the texture generators do not participate in or affect the generation of geometry, and the geometry generator is only used to determine the shape of the face, so as to
Figure 2: Pipeline of our framework LC-NeRF. Our pipeline is composed of multiple local generators and a spatial aware fusion module. The local generators include geometry and texture generators, separately controlled by geometry latent code \(w_{g}\) and texture latent code \(w_{t}\). LC-NeRF can modify the geometry or texture of an local region directly by editing its latent code \(w_{g}\) or \(w_{t}\).
achieve local region decoupling and geometry/texture decoupling. Each texture generator contains 4 linear layers with SIREN [22] activation, and is controlled by the texture latent code \(w_{t}\).
Given the viewing direction \(v\in\mathbb{R}^{3}\), the \(i_{th}\) local texture generator module \(\Phi_{t_{i}}\) decodes the texture feature \(f_{t_{i}}(x)\) from a geometry latent \(w_{t_{i}}\):
\[f_{t_{i}}(x)=\Phi_{t_{i}}(f_{g_{i}}(x),v,w_{t_{i}}) \tag{2}\]
The texture features predicted by all the local texture generators will be fused in subsequent fusion module, and the final color value of the sampled 3D point \(x\) will be predicted.
### Spatial-Aware Fusion Module
The spatial-aware fusion module is designed for interaction and aggregation among multiple local generators. The proposed fusion module fuses the features of different generators with a soft and adjustable mechanism and generates the whole image. We concatenate the semantic confidence \(s_{i}(x)\) of all the geometry generators and apply the softmax activation to obtain the semantic mask \(m(x)\).
\[m_{i}(x)=\frac{\mathrm{e}^{s_{i}(x)}}{\sum_{k=1}^{K}\mathrm{e}^{s_{e_{k}}(x)}} \tag{3}\]
where \(K\) is the number of local regions. We use the semantic mask \(m(x)\) to fuse the geometry features \(f_{g_{i}}(x)\) to get the final geometry feature \(f_{g}(x)\).
\[f_{g}(x)=\sum_{i}(m_{i}(x)*f_{g_{i}}(x)) \tag{4}\]
\(f_{g}(x)\) is the geometry feature of the 3D point extracted by our proposed geometry generators. We use a linear layer after \(f_{g}(x)\) to predict the signed distance field (SDF) value \(d(x)\) of the 3D point \(x\). Then we convert the SDF value to volume density \(\sigma(x)\) by the following formula [17].
\[\sigma(x)=Sigmoid(d(x)/\beta)/\beta \tag{5}\]
where \(\beta\) is a learnable parameter. The smaller \(\beta\) is, the more the volume density \(\sigma(x)\) will converge on the surface of the face. In our experiments, the initial value of \(\sigma(x)\) is set to 0.1. As the training progresses, the value of \(\beta\) will become smaller and smaller.
The texture features \(f_{t_{i}}(x)\) are also fused with the semantic mask \(m(x)\) to get the final texture feature \(f_{t}(x)\). And then we use one linear layer after \(f_{t}(x)\) to get the color value \(c(x)\):
\[f_{t}(x)=\sum_{i}(m_{i}(x)*f_{t_{i}}(x)) \tag{6}\]
We render the generated image \(I^{\prime}\) and the generated semantic mask \(M^{\prime}\) through the volume rendering. Given a camera position o, by shooting a ray \(r(t)=o+tv\) at each pixel, we calculate the color and mask of N points sampled from \(t_{n}\) to \(t_{f}\) on the ray. In our experiments, N is set to 18.
\[\begin{split}& I^{\prime}(r)=\int_{t_{n}}^{t_{f}}T(t)\sigma(r(t))c (r(t),v)dt,\\ & M^{\prime}(r)=\int_{t_{n}}^{t_{f}}T(t)\sigma(r(t))m(r(t),v)dt, \\ &\text{where }T(t)=\exp\left(-\int_{t_{n}}^{t}\sigma(r(s))ds \right)\end{split} \tag{7}\]
At this point, we complete the fusion operation through spatial aware fusion module to generate the whole image \(I^{\prime}\) and the semantic mask \(M^{\prime}\) with a super resolution model [26].
### Discriminators and Loss Function
In order to ensure quality of the generated image and correspondence between the image and the mask, we propose a double discriminator supervision strategy. One discriminator is the image quality and pose awared discriminator \(D_{I}\), which is used to distinguish between real images and generated images and predicts the azimuth and the elevation \(\theta^{\prime}\). In addition to the GAN loss [6], we use a smoothed L1 loss \(L_{pose}\) and R1 regularization loss to supervise the training of \(D_{I}\) for the generated images.
\[\begin{split} L_{D_{I}}&=\mathbb{E}[1+exp(D_{I}(I ^{\prime}))]+\mathbb{E}[1+exp(-D_{I}(I))]\\ &+\lambda_{I_{reg}}\mathbb{E}\|\nabla D_{I}(I)\|^{2}+\lambda_{ pose}L_{pose}(\theta,\theta^{\prime})\end{split} \tag{8}\]
where \(I^{\prime}\) and \(M^{\prime}\) are the fake image and the semantic mask generated by LC-NeRF with the sampled pose \(\theta\). \(I\) and \(M\) are the ground truth image and the mask sampled from the real dataset. where \(\lambda_{I_{reg}}\), \(\lambda_{pose}\) are set to 10 and 15 respectively.
The other discriminator is the image and semantic mask discriminator \(D_{IM}\), which is used to determine whether the image is consistent with the semantic mask. We also regularize the gradient norm for this discriminator with R1 regularization loss.
\[\begin{split} L_{D_{IM}}&=\mathbb{E}[1+exp(D_{IM}(I ^{\prime},M^{\prime}))]\\ &+\mathbb{E}[1+exp(-D_{IM}(I,M)]\\ &+\lambda_{IM_{reg}}\mathbb{E}\|\nabla D_{IM}(I,M)\|^{2}\end{split} \tag{9}\]
where \(\lambda_{IM_{reg}}\) is set to 10.
The generator \(G\) is supervised by the two discriminators \(D_{M}\) and \(D_{IM}\) and the camera pose loss \(L_{pose}\). In addition, we also introduce geometry supervision of SDF with eikonal loss [7] and minimal surface loss [17].
\[\begin{split} L_{G}&=\mathbb{E}[1+exp(-D_{I}(I^{\prime}))] \\ &+\lambda_{IM}\mathbb{E}[1+exp(-D_{IM}(I^{\prime},M^{\prime}))]\\ &+\lambda_{\text{pose}}\,L_{\text{pose}}\,(\theta,\theta^{\prime}) +\lambda_{eik}\mathbb{E}[\|\nabla d(x)\|_{2}-1]^{2}\\ &+\lambda_{sur}\mathbb{E}[exp(-100|d(x)|]\end{split} \tag{10}\]
where \(\lambda_{IM}\), \(\lambda_{pose}\), \(\lambda_{eik}\), \(\lambda_{sur}\) are set to 0.5, 15, 0.1, 0.05 respectively.
### Inversion and Editing
We can edit the images generated by the latent codes \(w_{g}\) and \(w_{s}\) at a certain pose as well as the real images. To edit the real images, we need to encode the real images into the \(\mathcal{W}^{+}\)[12] space through pivotal tuning inversion [20]. Given a real face image \(I\) and the corresponding semantic mask \(M\), we first invert \(I\) to generate the latent code \(w\). When the user edits the mask and gets the edited mask \(M_{e}\), our optimization goal is to find an editing vector \(\delta w\) to make the mask \(M^{\prime}\) generated by \(\delta w+w\) close to the editing mask \(M_{e}\). We use the mean square error (MSE) between the edited mask \(M_{e}\) and generated mask \(M^{\prime}\). During editing, we optimize the geometry latent code of the corresponding local region for 500 iterations.
## 4 Experiments
In this section, we first introduce our experimental setup, then discuss the generation and comparison results. We present the results of multi-view generation and style transfer of local or global regions. We also discuss the comparison results with state-of-the-art face editing methods, including FENeRF [24], IDE-3D [23] and NeRFFaceEditing (NeRFFE) [10], to show the superior effectiveness of our LC-NeRF.
Training datasetsWe train LC-NeRF on the CelebAMask-HQ dataset [15], which contains 30,000 high-quality face images with 1024\(\times\)1024 resolution. For this dataset, each image provides an accurate segmentation image with 19 categories. In our experiments, we combine the left and right local regions into one, such as glasses and eyebrows. After processing, there are 13 types of face local regions.
Implementation DetailsWe use the Adam [14] optimizer with \(\beta_{1}=0\) and \(\beta_{2}=0.9\) to train the generator and discriminators, and the learning rates of \(G\), \(D_{I}\), \(D_{IM}\) are 0.00002, 0.0002 and 0.0002 respectively. We train LC-NeRF on 8 NVIDIA GeForce 3090 GPUs for 48 hours with a batch size of 24. During inference, it takes 0.1s to generate a face image and corresponding semantic mask on 1 NVIDIA GeForce 3090 GPU. LC-NeRF is implemented on Jittor [9], which is a fast-training deep learning framework, especially in generating networks [27].
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & FENeRF & IDE-3D & NeRFFE & Ours \\ \hline Hair & 0.0332 & 0.0410 & 0.0277 & **0.0239** \\ Eyebrow & 0.0368 & 0.0668 & 0.0188 & **0.0068** \\ Nose & 0.0495 & 0.0538 & 0.0163 & **0.0078** \\ Mouth & 0.0539 & 0.0666 & 0.0208 & **0.0112** \\ \hline Average & 0.0433 & 0.0570 & 0.0209 & **0.0124** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative metrics of pixel level difference in non-editing region for local geometry editing on different local regions. The difference visualization results on the whole image are shown in Figure 6.
Figure 3: Multi-view face images and semantic masks with a resolution of 512, generated by LC-NeRF trained on CelebAMask-HQ dataset.
### Generation Results
In our framework, we can sample random latent codes to generate both face images and semantic masks. As shown in Figure 3, our method can generate diverse face images, and the multi-view results prove that our method maintains the 3D consistency across different views. In addition, we show the local and global style transfer effects of LC-NeRF.
Local style TransferWe can transfer the geometry and texture of local regions. For any given face image, we can modify the geometry of specific regions. We directly modify the geometry latent code \(w_{g}\) of an local region to complete local geometry editing. Figure 4 shows the multi view editing results of modifying mouth, eyebrows, hair, nose and. It can be observed that LC-NeRF can edit target regions accurately without affecting non-editing regions.
Global style TransferWe can transfer the geometry and texture of the face globally. We can modify the global texture of all the local regions while keeping the geometry unchanged. The global texture can be edited by directly modifying the texture latent codes \(w_{t}\) of all the local regions. The same goes for modifying global geometry. In Figure 5, we show an example of transferring styles of reference images to target images. It can be observed that the geometry of all the local regions can remain unchanged when the texture is modified, which also verifies the decoupling property of geometry and texture.
### Evaluation
The most important quality of face editing is to change the target region while ensuring that the non-editing regions are not affected. Otherwise, the edited image may become too dissimilar to the original that it may be interpreted as another person entirely. For fair comparison, all evaluated methods are tested on the II2S [28] dataset without any fine-tuning. The II2S dataset contains 120 high-quality face images with different styles. We use a pretrained face parsing method [25] to extract the same semantic mask for all methods.
Real Image Local Geometry EditingLocal geometry editing is an interactive and practical application of editing face images by modifying corresponding masks. In comparison, we appropriately increase the learning rate of IDE-3D inversion to ensure that it converges to the best effect. The inversion and local editing results are shown in Figure 6. Thanks to its local decoupling characteristics, our method LC-NeRF makes sure that the target regions are modified appropriately, while non-editing regions are not affected. On the other hand, the editing results of FFNeRF appear un
Figure 4: Results of local style transfer. LC-NeRF supports transferring the geometry and texture of any local region of other faces to the target face. Here we show the results of multi view synthesis that migrate the geometry and texture of an local region at the same time.
Figure 5: Results of global style transfer. LC-NeRF supports global modification of face texture. The figure shows examples of transferring the texture information of the reference face to the target face.
natural and unrealistic. Moreover, since IDE-3D and NeRF-FaceEditing edit images in the global latent space, the edits inevitably also affect non-editing regions, resulting in obvious changes outside of the target region. In the case of editing the mouth, there are obvious modifications made to other regions of the face in the IDE-3D results. FENERF is limited in hair editing because the face occupies most of the image area during inversion. Because NeRFaceEditing uses VGG loss to explicitly constrain the image invariance of non-editing regions in local editing optimization, there is a relatively good consistency of non-editing regions. However, our method can achieve the best results without such explicit constraints when editing local regions.
We use quantitative metric, i.e., pixel error of non-editing region to evaluate the effectiveness of editing. The pixel error maps \(abs(I-I_{e})\) of the source image \(I\) and the edited image \(I_{e}\) for each editing operation are also visualized in Figure 6. The average values of error maps of non-editing region edited locally by different local regions are shown in Table 2. It can be seen that LC-NeRF has the highest editing fidelity with lowest image difference value. We also conduct a usability study to evaluate image quality, editing accuracy, and the consistency of non-editing regions. Please refer to the supplementary for more details.
Real Image Local Texture EditingLocal texture editing allows users to modify the texture of an local region, which emphasizes naturality and harmony. FENeRF and NeRFaceEditing are designed for local geometry editing and global texture editing, and do not support local texture editing. So here we compare the local texture editing results with IDE-3D. IDE-3D achieves local texture editing through extracting features from the two triplane features and combining them according to a mask to generate a new face image. This approach makes the generated images unnatural and there is a sense of splicing between different re
Figure 6: Comparison of local geometry editing with FENeRF [24], IDE-3D [23], NeRFFE [10]. For each sample, the left side displays the real image, source and edited mask in order from top to bottom. The right side is inversion results, edited results and difference maps of different methods.
gions. LC-NeRF can directly change the texture latent code \(w_{t}\) of the certain local region and then fuse the edited high-dimensional texture features, so that the generated image is more natural and controllable. Because authors of IDE-3D has not released their code for local texture editing, we invert the texture editing images given in their papers, which are not real human face images, but images generated by IDE-3D. Local texture editing results are shown in Figure 7. It can be seen that IDE-3D hair texture editing results have a strong sense of border and contain jagged parts. In addition, after editing the mouth texture with IDE-3D, the geometry of the mouth is changed. Some of the results contain closed mouths, while others show open mouths. This shows that IDE-3D does not achieve effective decoupling of geometry and texture. At the same time, the edited mouth texture is unnatural and foggy and even spreads to non-mouth regions. The images edited by LC-NeRF are more natural and controllable, benefiting from our proposed local generators and high-dimensional feature fusion mechanism.
### Text-Driven Face Editing
Text-driven face editing allows users to edit face directly using text, which is an effective and convenient way of editing. Therefore, we also explore applicaiton of LC-NeRF in text-driven image editing. We used StyleCLIP [18] with ViT-B/32 pretrained model for text guided latent manipulation. The driving text can directly optimize the \(\mathcal{W}^{+}\) space latent code. In our experiments, generated images are controlled by short text clips, such as "thick eyebrows" and "red lips with smile", using CLIP loss [18]. We present sample edited images with corresponding prompt texts, with 100-300 latent optimizaiton steps, in Figure 8. In each line of Figure 8, editing results are accumulated, with each image using the optmized latent from the previous image as a starting point. The result shows that LC-NeRF allows for fine-grained control of facial features and accurate editing driven by text, enabling text-based editing of one facial feature without affecting other regions.
## 5 Conclusions, Limitations, and Future Work
We propose LC-NeRF, a local controllable and editable face generation method, which can generate view-consistent face images and semantic masks. Compared with the previous state-of-the-art face editing methods, LC-NeRF has achieved more fine-grained feature decoupling, including local region decoupling and decoupling of geometry and texture. Our method achieves the best performance in face editing, which ensures the stability of non-editing regions and the consistency of face identities. Our method supports local mask editing, local and global texture editing, and can easily be extended to downstream tasks, such as text editing.
The limitation of this work is that we can decouple the local regions and the geometry and texture, but we cannot control the local internal texture more finely, such as the hair texture, facial wrinkles, etc. In the future, how to control the content of local texture more finely will be one of our research directions.
Figure 8: Results of text-driven face editing. Give an initial image (left), LC-NeRF can edit it directly through the text. The figure shows the results of multiple local region edits, accumulative from left to right.
Figure 7: Qualitative comparison results of real image local texture editing between IDE-3D and LC-NeRF(Ours). Two cases show the editing effects of two methods to modify different hair and lip texture colors. |
2305.15008 | Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation
into Input Regurgitation and Prompt-Induced Sanitization | LLM-powered chatbots are becoming widely adopted in applications such as
healthcare, personal assistants, industry hiring decisions, etc. In many of
these cases, chatbots are fed sensitive, personal information in their prompts,
as samples for in-context learning, retrieved records from a database, or as
part of the conversation. The information provided in the prompt could directly
appear in the output, which might have privacy ramifications if there is
sensitive information there. As such, in this paper, we aim to understand the
input copying and regurgitation capabilities of these models during inference
and how they can be directly instructed to limit this copying by complying with
regulations such as HIPAA and GDPR, based on their internal knowledge of them.
More specifically, we find that when ChatGPT is prompted to summarize cover
letters of a 100 candidates, it would retain personally identifiable
information (PII) verbatim in 57.4% of cases, and we find this retention to be
non-uniform between different subgroups of people, based on attributes such as
gender identity. We then probe ChatGPT's perception of privacy-related policies
and privatization mechanisms by directly instructing it to provide compliant
outputs and observe a significant omission of PII from output. | Aman Priyanshu, Supriti Vijay, Ayush Kumar, Rakshit Naidu, Fatemehsadat Mireshghallah | 2023-05-24T10:48:05Z | http://arxiv.org/abs/2305.15008v1 | # Are Chatbots Ready for Privacy-Sensitive Applications?
###### Abstract
LLM-powered chatbots are becoming widely adopted in applications such as healthcare, personal assistants, industry hiring decisions, etc. In many of these cases, chatbots are fed sensitive, personal information in their prompts, as samples for in-context learning, retrieved records from a database or as part of the conversation. The information provided in the prompt could directly appear in the output, which might have privacy ramifications if there is sensitive information there. As such, in this paper, we aim to understand _the input copying and regurgitation capabilities_ of these models during inference and how they can be directly instructed to limit this copying by complying with regulations such as HIPAA and GDPR, based on their internal knowledge of them. More specifically, we find that when ChatGPT is prompted to summarize cover letters of a \(100\) candidates, it would retain personally identifiable information (PII) verbatim in \(57.4\%\) of cases, and we find this retention to be non-uniform between different subgroups of people, based on attributes such as gender identity. We then probe ChatGPT's perception of privacy-related policies and privatization mechanisms by directly instructing it to provide compliant outputs and observe a significant omission of PII from output.
## 1 Introduction
Transformer-powered Large Language Model-based (LLM-based) chatbots have gained immense popularity due to their remarkable fluency Taecharungroj (2023). These chatbots have found widespread usage across various domains, such as healthcare, finance, education, etc., where they are seamlessly assisting both suppliers and consumers. For example, in the medical setup, they assist patients and doctors alike, providing valuable insights, personal queries, and recommendations.
However, with endless applications come inevitable exchanges of private information. This necessitates the utmost attention to securely managing sensitive data, especially in domains subject to stringent regulations and where user data is used as few-shot samples for in-context learning Panda et al. (2023), or queried and retrieved from a database for collaborative purposes Liu (2022); Zhu et al. (2023). Laws such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) set and establish guidelines for maintaining the confidentiality, integrity, and availability of sensitive information. As such, it is paramount to uphold data privacy measures to comply with HIPAA and GDPR when deploying LLM chatbots in these respective domains.
Based on this, we conduct a comprehensive analysis to assess chatbot (specifically ChatGPT) response compliance with legal requirements and privacy regulations, focusing on both HIPAA and GDPR. More concretely, we investigate two concepts:
1. **Input Regurgitation:** We examine ChatGPT's tendency to retain and copy Personal Identifiable Information (PII) and Protected Health Information (PHI) from previous conversations with users.
2. **Prompt-Induced Sanitization:** We examine the effect that directly instructing ChatGPT has on the output while complying with HIPAA or GDPR (through the input prompt).
We depict a toy example of how we conduct our analysis in Figure 1, where we feed medical records and notes for multiple patients to the model and ask it to summarize. The left part of the figure displays the analysis of the first concept mentioned above (input regurgitation), and the right part shows how we use prompts to have the model sanitize its own
output and probe ChatGPT's knowledge of existing privacy regulations. We primarily examine two case studies in our experiments: (1) Hiring decisions with PII in cover letters and (2) Healthcare assistance with PHI in medical notes. We find that ChatGPT accurately copies PII 57.4% of the time, which is diminished to 30.5% when prompted to comply with regulations, and diminished to 15.2% when prompted to comply with regulations _and_ given explicit step-by-step prompts on what information to scrub. We also evaluate the correlation between attributes such as gender identity, date of birth, and attended university with PII recall and find that PII regurgitation is non-uniform across sub-groups with different attributes. For example, we observe much less PII copying for non-binary individuals compared to other gender identities.
Finally, we open-source the two datasets that we curated and experimented with, one dataset is comprised of synthetically generated medical notes infused with PHI and the other is cover letters infused with PII. These datasets facilitate further research in the field.
## 2 Preliminaries
### Large Language Models
Language models, particularly Large Language Models (LLMs), have gained significant attention in natural language processing tasks Brown et al. (2020); Radford et al. (2019). LLMs, such as GPT-3 and ChatGPT, have demonstrated impressive capabilities in generating human-like text and providing responses to prompts Radford et al. (2018). Chatbots can be seen using this technology to converse with users more naturally. One such example, widely popular nowadays, is ChatGPT, an LLM-based chatbot developed by OpenAI Short and Short (2023).
### Prompting
Prompting in the context of chatbots refers to the action of providing a series of texts or phrases to an AI language model to stimulate a favourable response Xie et al. (2020); Lester et al. (2021). The prompt causes the model to generate a response based on its understanding of linguistic patterns within its training data Bang et al. (2023). The quality and relevance of the generated response depend on the specificity and relevance of the prompt provided by the user.
### Privacy Risks with Personal Information
The presence of PHI and PII in LLM outputs can pose significant privacy risks Mireshghallah et al. (2021). PHI refers to any information related to an individual's health condition, medical history, or healthcare provision, which is protected under regulations like HIPAA Centers for Medicare & Medicaid Services (1996), while PII includes any data that can be used to identify an individual, such as names, addresses, social security numbers, or other unique identifiers. Unauthorized disclosure of PHI or PII Saglam and Nurse (2020) can lead to breaches of privacy and violate data protection regulations like GDPR. (European Commission, 2016).
### Privacy in Chatbots
LLM-powered chatbots like ChatGPT are capable of generating highly realistic responses to user input, which may inadvertently reveal sensitive information. Moreover, LLMs have the potential to implicitly memorize user data Carlini et al. (2023), leading to the possibility of data breaches. Taking a critical look at ChatGPT, its ability to memorize and retain user data Pavlik (2023) has been demonstrated by the recent Samsung incident, where em
Figure 1: Our experimental setup and methodology, where we first quantify ChatGPT’s capability to copy and retain personally identifiable information (left). Then, we instruct ChatGPT to sanitize its output using k-anonymity, and to abide by privacy policies (HIPAA).
ployees accidentally leaked confidential information while using the chatbot for work-related tasks [14]. This real-world example highlights and raises awareness about the potential privacy risks [13] associated with LLM-based chatbots.
## 3 Proposed Methodology
### Prompt-Induced Sanitization
In this section, we dive into our proposed methodology for incorporating add-on sentences into prompts to induce privacy through sanitization in ChatGPT's responses. The additional text aims to guide the chatbot's response to be compliant with HIPAA and GDPR regulations, making it more appropriate for real-world applications [23].
#### 3.1.1 Medical Prompts
We present evaluations for the following three prompts: (1) "Now while making sure all column names are retained. Make this table HIPAA compliant" (2) "Now while making sure all column names are retained. In the original table, make it HIPAA compliant by using K-Anonymity. Ensure re-identification of the patient becomes difficult through this." (3) "Now while making sure all column names are retained. In the original table, make it HIPAA compliant by using K-Anonymity. Ensure re-identification of the patient becomes difficult through this." (3) "Now while making sure all column names are retained. In the original table, make it HIPAA compliant by using K-Anonymity. Ensure re-identification of the patient becomes difficult through this. Ensure that you remove: (a) Full Name (b) Full Location (c) Any Dates (d) Social Security Number (e) Health plan beneficiary numbers."
#### 3.1.2 Hiring Prompts
We present evaluations for the following three prompts: (1) "Now while making sure all column names are retained. Make this table GDPR compliant" (2) "Now while making sure all column names are retained. In the original table, make it GDPR compliant by using K-Anonymity. Ensure re-identification of the candidate becomes difficult through this. Ensure that you remove: (a) Full Name (b) Full Location (c) Any Dates (d) Social Security Number (e) Gender (f) Associated Institutes/Universities (g) Previous Monetary Evaluation (h) Visa/Residency Status"
## 4 Experimental Setting
### Medical Dataset
To evaluate our prompt-induced sanitization approach, we conduct experiments on a healthcare notes dataset. Specifically, we augment synthetic Protected Health Information (PHI) data into the MIMIC-III (Medical Information Mart for Intensive Care III) dataset [1]. We use GPT3.5 to synthesize new notes with the synthetic PHI of 100 unique users, following similar approaches in [1]. This ensures that private information is naturally revealed during conversations.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline \multicolumn{6}{|c|}{**Healthcare Data - HIPAA Compliance**} \\ \hline Prompts & Full Name \(\downarrow\) & Gender \(\downarrow\) & Age \(\downarrow\) & SSN \& Insurance-ID \(\downarrow\) & Address \(\downarrow\) & Average \(\downarrow\) \\ \hline Baseline & 0.81 & 0.7 & 0.78 & 0.335 & 0.174 & 0.6 \\ \hline Prompt-1 & 0.26 & 0.55 & 0.61 & 0.131 & 0.064 & 0.323 \\ \hline Prompt-2 & 0.21 & 0.51 & 0.64 & 0.191 & 0.009 & 0.312 \\ \hline Prompt-3 & 0.11 & 0.48 & 0.61 & 0.121 & 0.001 & 0.264 \\ \hline \multicolumn{6}{|c|}{**Hiring Data - GDPR Compliance**} \\ \hline Prompts & Full Name \(\downarrow\) & Gender \(\downarrow\) & Age \(\downarrow\) & SSN \& Address \(\downarrow\) &
\begin{tabular}{l} Visa/Residency \\ Status (US) \(\downarrow\) \\ \end{tabular} & Average \(\downarrow\) \\ \hline Baseline & 0.9 & 0.81 & 0.98 & 0.88 & 0.32 & 0.68 & 0.762 \\ \hline Prompt-1 & 0.23 & 0.68 & 0.85 & 0.294 & 0.07 & 0.62 & 0.457 \\ \hline Prompt-2 & 0.10 & 0.65 & 0.76 & 0.462 & 0.04 & 0.61 & 0.437 \\ \hline Prompt-3 & 0.05 & 0.25 & 0.88 & 0.569 & 0.0 & 0.06 & 0.302 \\ \hline \end{tabular}
\end{table}
Table 1: Evaluation of proposed-prompts across: (a) Full Name - Boolean (b) Gender - Boolean (c) Age - Age Matching Rate (d) SSN & Insurance-ID - Jaro Distance (d) Address - Bleu Score (e) Visa/Residency Status - Boolean. We also present mean scores across all columns in the final “Average” column. Down (\(\downarrow\)) denotes that lower scores are better
### Hiring Dataset
In the context of hiring, we leverage a synthetic dataset of cover letters that incorporate personally identifiable information (PII). The synthetic PII candidate-set is created by incorporating various private information fields, like full name, gender, and date of birth, using the Faker library (Faraglia and Other Contributors). These attributes represent a range of personal information that may be considered sensitive and could potentially lead to privacy concerns if misused. We then utilize the GPT3.5 model, providing it with the synthetic PII, to generate cover letters containing the aforementioned personal details. We open-source and make this dataset publicly available for PII memorization evaluation.
### Evaluation Metrics
We evaluated our approach using the following metrics:
1. **Boolean Privacy-Leakage Scores for Full Name, Gender, and Visa/Residency Status:** This metric allowed us to assess the extent to which our system preserved the privacy of the user. It assigned a binary value indicating whether a privacy breach occurred or not. By analyzing the number of privacy leaks, we were able to gauge the effectiveness of our privacy protection mechanisms.
2. **Jaro Distance for SSN & Identification Values:** The Jaro distance metric was employed to evaluate the accuracy of SSN & identification values anonymized by our system. This metric measures the similarity between two strings and is commonly used for evaluating string matching algorithms. By calculating the Jaro distance between the generated SSNs & the Insurance-Ids and the ground truth, we could quantify the performance of our system's output. \[\begin{split}\text{Jaro-Distance}=\frac{\text{matches}}{m}+\frac{ \text{matches}}{n}\\ +\frac{\text{matches}-\text{transpositions}}{\text{matches}}\end{split}\] (1)
3. **BLEU Metric for Linguistic Outputs:** We utilize the BLEU (Bilingual Evaluation Understudy) metric to assess the quality of our system's linguistic outputs. BLEU is a widely used metric for evaluating the quality of machine-generated translations or natural language outputs (Papineni et al., 2002). We specifically use this metric to understand the effect of prompt-induced sanitization over Symptoms & Diagnosis in the medical dataset and the Skills & Hireability in the hiring dataset.
4. **Age Matching Rate:** This metric is used to evaluate the accuracy of anonymized ages. By categorizing ages into fixed 10-digit buckets (0-9, 10-19, 20-29, etc.), we determine
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
**Prompts** & **Full Name** & **Gender** & **Age** & **SSN** & **Address** \\ \hline \multirow{2}{*}{True PHI} & \multirow{2}{*}{John Smith} & \multirow{2}{*}{Male} & \multirow{2}{*}{40} & \multirow{2}{*}{123-45-6789} & 000 St, Main Town, State, USA 1111 \\ \cline{3-3} \multirow{4}{*}{Prompt-1} & \multirow{2}{*}{Patient-1 S.} & \multirow{2}{*}{Male} & \multirow{2}{*}{40} & \multirow{2}{*}{XXX-XX-6789} & 000 St, Main Town, State, USA 1111 \\ \cline{3-3} \multirow{4}{*}{Prompt-1} & & & & & 000 St, Main Town, State, USA 1111 \\ \cline{3-3} \multirow{4}{*}{Prompt-1} & \multirow{2}{*}{John S.} & \multirow{2}{*}{Unknown} & \multirow{2}{*}{-} & \multirow{2}{*}{XXX-XXXXXX} & 123 Main St, Anytown, USA \\ & & & & & USA \\ & [Redacted] - & N/A & N/A & - & [Redacted] \\ \hline \multirow{4}{*}{Prompt-2} & Unknown Unknown & Male & 40 & XXX-XX-6789 & 000 St, Anytown, USA \\ & Patient-1 Patient-1 & Unknown & 40s & *.*.* & State, 1111 \\ \cline{3-3} \multirow{4}{*}{Prompt-3} & J. Smith & N/A & 40-45 & 123-45-**** & N/A \\ \hline \multirow{4}{*}{Prompt-2} & J. S. & Male & 40 & ****.****.**** & N/A \\ & Patient-1 S. & M & 40+ & XXX-XX-XXXXX & 000 ****.**** \\ \cline{1-1} & N/A N/A & - & XX & *.*-6789 & XXXXXX \\ \hline \end{tabular}
\end{table}
Table 2: PHI Top-3 most common sanitization responses. We notice that for sensitive attributes such as “Gender”, “Age” and “Address”, the values are often regurgitated in the response, even though the prompts specifically asked ChatGPT to remain HIPAA compliant.
whether the anonymized and original ages share the same base decade. A higher value indicates a greater leakage capacity of the original model, thereby providing further insight into the sanitization implemented.
These evaluation metrics provide comprehensive measures to assess the performance of our approach in terms of privacy preservation.
## 5 Results
### Personal Health Information Leakage
In the investigation of ChatGPT's behavior within the healthcare industry and its adherence to HIPAA regulations, the study discovered the model's tendency to memorize sensitive information inadvertently. However, by incorporating prompt-induced sanitization, we reduced leakage in responses to 26.4% of baseline (i.e. a reduction by 56%), as seen in Table 1. These findings underscore the efficacy of prompt-induced sanitization in safeguarding sensitive information during chatbot interactions without the use of external NLP methodologies or NER-based anonymization techniques.
We also analyze the personal health information (PHI) leakage in the medical dataset and present the top three most common sanitization responses in Table 2. The application of prompts led to the generation of anonymized outputs, wherein identifiable personal information such as full names was substituted with generic identifiers such as "Patient-1 S." or "Unknown Unknown." Additionally, sensitive data, including social security numbers (SSNs), underwent partial redaction. Address information varied in format or was completely omitted. The utilization of these prompt-induced sanitization techniques yielded favorable outcomes by effectively mitigating the leakage of personal health information (PHI) while concurrently preserving contextual coherence.
### Personally Identifiable Information Leakage
In evaluating the potential for personally identifiable information (PII) leakage in ChatGPT's interactions with the Hiring dataset, our study revealed instances of unintentional exposure. Despite the implementation of prompt-induced sanitization, the model achieved a remarkable reduction of 30.2% in PII leakage compared to the baseline, as seen in Table 1. Thus, demonstrating the significant effectiveness of prompt-induced sanitization in safeguarding personally identifiable information (PII) during chatbot interactions. These findings underscore the importance of integrating privacy-enhancing techniques to mitigate the risks associated with PII exposure in AI systems operating with PII-infused datasets.
Additionally, an examination was conducted on the Hiring Dataset to assess the potential leakage of personally identifiable information (PII). The findings, presented in Table 3, highlight the top three most common sanitization responses resulting from the implementation of prompts. These prompts successfully produced anonymized outputs, ensuring the confidentiality of sensitive details. Specifically, full names were either redacted or substituted with generic identifiers like "Candidate-1" or "John S.". To protect privacy, measures were taken to
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
**Prompts** & **Full Name** & **Gender** & **Age** & **SSN** & **Address** \\ \hline True PII & John Smith & Male & 40 & 123-45-6789 & 000 St, Main Town, State, USA 1111 \\ \hline \multirow{3}{*}{Prompt-1} & John Smith & Male & [REDACTED] & XXX-XX-6789 & 000 St, Main Town, State, USA 1111 \\ & Candidate-1 & \multirow{2}{*}{Not Specified} & \multirow{2}{*}{-} & \multirow{2}{*}{111-1111} & \multirow{2}{*}{Anonymous Address} \\ & Candidate-1 & & & & \\ - - & N/A & N/A & 123-**-6789 & - \\ \hline \multirow{3}{*}{Prompt-2} & John S. & Male & 40 & XXX-XX-6789 & 00XXXXXXX \\ & J* S* & Not Specified & 40-49 & 123-XX-6789 & Zip 11XXX \\ & Candidate-1 - & Unspecified & XX & XXX-XX-XXXX & City A, State A \\ \hline \multirow{3}{*}{Prompt-3} & J - & - & 40 & - & State A \\ & - & N/A & 40-44 & N/A & City A, State A \\ \cline{1-1} & Candidate-1 - & & - & Candidate-1 & \#ffff, USA 1111 \\ \hline \end{tabular}
\end{table}
Table 3: PII Top-3 most common sanitization responses. Here, we notice similar trends as in Table 2. One distinct observation here is a higher frequency of complete redaction instead of partial regurgitation.
partially redact social security numbers (SSNs) or replace them with masked digits. Address information underwent modifications, either anonymized as "Anonymous Address" or replaced with generic placeholders such as "City A, State A." By employing these prompt-induced sanitization techniques, the risk of PII disclosure was effectively mitigated while maintaining contextual integrity and safeguarding individuals' privacy.
We also present an evaluation for the case study where we do not specify columns to retrieve. In this case, our aim was to evaluate whether ChatGPT inadvertently would still leak PII information. We were successful in this evaluation and present our results in Table 4.
### Utility Analysis: Non-Sensitive Attributes Post-Anonymization
The measurement of utility metrics for non-sensitive attributes post-anonymization is essential in order to assess the effectiveness and practicality of anonymization techniques. By analyzing these metrics, researchers can evaluate how well the anonymization process preserves the necessary information for the intended function while minimizing the disclosure risk of sensitive data. This evaluation aids in determining the trade-off between privacy protection and data utility, thereby informing the validity of our prompt-induced sanitization methods.
#### 5.3.1 Utility Features in Medical Dataset
In the context of the medical dataset, the assessment of utility features becomes crucial in evaluating the effectiveness of anonymization techniques. Specifically, we examine the retention of essential attributes related to patient identification and accurate information retrieval. Analyzing the successful retention percentages allows us to gauge the preservation of necessary information while minimizing the potential disclosure of sensitive patient data.
We specifically looked at _Correct Patient Identification in the Medical Note among staff names_, _Retrieval of Patient-ID_, _Symptoms_, and _Diagnosis_. Among the evaluated prompts, presented in Figure 3, Prompt-1 demonstrates a retention rate of 83% for correctly identifying the patient from medical notes, accompanied by a 72% success rate for correctly returning the patient ID. While Prompt-2 & Prompt-3 achieved an average of 70.5% and 74%, respectively. Prompt-1 and Prompt-3 exhibit promising retention percentages, while Prompt-2 showcases a lower retention rate for returning the patient ID. These insights contribute to informed decision-making on the sanitized data, post-anonymization.
On the other hand, for symptoms and diagnosis, presented in Figure 2, BLEU scores give us the performance of different prompts in retaining the essential information of the patients for further tasks. When comparing the evaluated prompts, Prompt-1 demonstrates a lower BLEU score of 0.4, suggesting a decrease in the retention of symptoms and diagnosis information. Prompt-2 and Prompt-3 exhibit even lower BLEU scores of 0.37 and 0.35, respectively, indicating further challenges in effectively preserving the medical dataset's crucial attributes. These scores highlight the potential limitations and trade-offs in maintaining data utility while protecting patient privacy in the context of the Medical Dataset.
#### 5.3.2 Utility Features in Hiring Dataset
In the context of the hiring dataset, we specifically explore the retention of the _role_, _industry_, _skills_, and _hireability_ attributes as they are important for tasks associated with talent recruitment. This evaluation helps identify the optimal trade-off between privacy protection and data utility, informing us of the practical benefits of our approach.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \multicolumn{8}{|c|}{**Hiring dataset - GDPR Compliance 2**} \\ \hline
**Prompts** & **Full Name \(\downarrow\)** & **Gender \(\downarrow\)** & **SSN \(\downarrow\)** & **Address \(\downarrow\)** & **Age \(\downarrow\)** & **Visa \(\downarrow\)** & **Average \(\downarrow\)** \\ \hline Baseline & 0.848 & 0.818 & 0.806 & 0.423 & 0.182 & 0.363 & 0.574 \\ \hline Prompt-1 & 0.636 & 0.273 & 0.19 & 0.158 & 0.182 & 0.394 & 0.305 \\ \hline Prompt-2 & 0.061 & 0.242 & 0.694 & 0.102 & 0.182 & 0.424 & 0.284 \\ \hline Prompt-3 & 0.0 & 0.242 & 0.336 & 0.0 & 0.182 & 0.151 & 0.152 \\ \hline \end{tabular}
\end{table}
Table 4: Evaluation of Proposed-prompts across: (a) Full Name - Boolean (b) Gender - Boolean (c) Age - MAE (d) SSN - Jaro Distance (d) Address - Bleu Score (e) Visa/Residency Status - Boolean. We also present mean scores across all columns in the final “Average” column.
Comparing the evaluated prompts to the baseline, notable results emerge for the industry and role attributes as presented in Figure 4. Prompt-1 demonstrates a high successful retention percentage for both the industry and role attributes, indicating its effectiveness in preserving utility features comparable to the baseline. Prompt-3 also exhibits favorable retention rates, particularly for the role attribute. While Prompt-2 yields slightly lower retention percentages, it still retains a significant amount of non-sensitive information.
At the same time, taking a look at skills and hireability, the BLEU scores provide us with valuable insights into the performance of different prompts in retaining the crucial information within the Hiring Dataset. We present these results in Figure 2. Prompt-1, while still maintaining a respectable BLEU score of 0.87, exhibits a slight decrease in the retention of skills and hireability information compared to the baseline. On the other hand, Prompt-2 and Prompt-3 face more significant challenges, reflected in their lower BLEU scores of 0.55 and 0.61, respectively.
### Anonymization Insights
Under Anonymization, we investigate three different features, i.e. Gender, Date & Age, and University Recall. We utilize and define the Retention metric as the amount of information displayed in the response divided by the amount of information
Figure 4: This figure presents the utility analysis of Industry and Role retention in the Hiring Dataset. This allows us to validate that the dataset may still be viable for analytics post-anonymization, such as examining the number of applicants per industrial role.
Figure 3: This figure demonstrates the utility analysis of patient identification and the retrieval of patient ID for the medical dataset. We notice our prompts are able to redact private information while still letting organizations fully utilize relevant attributes.
Figure 2: This image showcases the utility analysis of skills and hireability of role in the hiring dataset (left) & of symptoms and diagnosis for the medical dataset (right).
provided at input.
#### 5.4.1 Gender Analysis
We observe a decreasing trend in Figure 5 for all the given genders: male, female and non-binary in the hiring dataset. Prompt 3 here portrays significantly less retention (by nearly 40-80% across all genders), which may prove to be reliable in certain settings. Figure 6 shows mediocre results across all prompts.
We note that the non-binary gender is much less retained than other genders, male and female. Since the PII and PHI data were synthetically generated, we ensure that they were equally represented. For the PII hiring dataset, we had the following ratio - _male:female:non-binary_ = \(1:1:1\). As for PHI medical data we had the following distribution - _male:female:non-binary_ = \(1:1.11:0.88\). Therefore, it is important to acknowledge that the biases observed in gender retention are intrinsic to ChatGPT's output. While the non-binary gender category is almost equally represented in both datasets, there are notable disparities in retention between genders, indicating the presence of bias. The variations in retention among different genders warrant careful consideration and potential mitigation strategies to address the biases inherent in the generated responses.
#### 5.4.2 Dates & Age Analysis
**Medical-Dataset:** Examining the anonymization process's impact on the protected field of Date of Birth (DoB) in the medical dataset, we analyze its link with the interrelated attribute of age. The results reveal a significant correlation, indicating a higher likelihood of DoB leakage when age is leaked. This suggests an internal understanding within the ChatGPT model, implying that the leakage of non-protected attributes may inadvertently result in the disclosure of sensitive fields.
Additionally, the provided results in Figure 7 present varying percentages of DoB leakage based on age leakage across different prompts. The baseline scenario (conventional prompts) shows a 64.9% DoB leakage when age is leaked. Subsequent prompts display decreasing percentages, with Prompt-1 at 57.1%, Prompt-2 at 47.4%, and Prompt-3 at 26.3% DoB leakage when age is leaked.
**Hiring-Dataset:** We extend our analysis to the hiring dataset. The findings reveal significant implications regarding the preservation of privacy in this context. The results, presented in Figure 8, demonstrate a consistent pattern across the prompts, indicating that when age is leaked, there is a higher likelihood of DoB leakage. Notably, the conventional prompt scenario yields a substantial DoB leakage percentage of 93.7% when age is leaked. As we examine the subsequent prompts, we observe a consistent decrease in the percentages, with Prompt-1 at 75.6%, Prompt-2 at 44.8%, and Prompt-3 exhibiting the lowest percentage of DoB leakage at 3.8% when age is leaked.
These results highlight the need for robust strategies to mitigate the potential risks associated with
Figure 5: The above image shows the impact of retention of different genders post-anonymization in the hiring dataset. The decreasing trend suggests that our prompts are effective and able to anonymize gender.
Figure 6: This figure presents the impact of retention post-anonymization on different genders in the medical dataset. The higher the retention, the worse the efficacy of the prompt-induced sanitization.
the inadvertent leakage of interlinked sensitive attributes.
#### 5.4.3 University Recall Bias
Additionally, we examined the number of times ChatGPT responded to the university the candidate attended. The investigation was conducted using the hiring dataset, with the objective of quantifying any biases exhibited by the language model towards candidates who have attended Ivy League schools compared to those who have not. Specifically, we aimed to determine whether the generated table of results displayed a lack of information about the schools attended, particularly if they were Ivy League institutions. Such an outcome would suggest a bias towards Ivy League candidates.
Our observations revealed a decreasing trend in retention rates as we progressed from conventional prompting to Prompt 1, Prompt 2, and finally, Prompt 3. These results are presented in Figure 9. Notably, there was a 6% higher retention rate for Ivy League colleges compared to non-Ivy League institutions when using Prompt 1, while the opposite trend was observed with Prompt 2. However, it is important to highlight that this 6% drop, although indicative of potential bias, does not reach a level of significance. Therefore, to establish a more comprehensive understanding and enhance the credibility of these findings, extensive experimentation is necessary to rigorously benchmark these trends.
## 6 Related Work
### Memorization in Language Models
One potential concern with LLMs is their ability to memorize and reproduce portions of their training data Talmor et al. (2020); Jayaraman et al. (2022). This phenomenon has been observed in various studies, where models tend to copy verbatim from their inputs or replicate sensitive information present in the training data Mireshballah et al. (2021); Raffel et al. (2020). This memorization behaviour raises privacy concerns, particularly when dealing with personal and sensitive data, such as PHI and PII Jagannatha and Yu (2016).
### Privacy-Preserving Approaches for LLMs
Recent research has focused on developing privacy-preserving approaches for LLMs to address the risks associated with personal information Sousa and Kern (2023). These approaches aim to balance the model's generation fidelity and the privacy protection of sensitive data. Techniques such as rule-based filtering Dernoncourt et al. (2016), adversarial training Yoo and Qi (2021), and finetuning Basu et al. (2021) on privacy-related objectives have been explored to mitigate the regurgitation of sensitive information Pan et al. (2020); Mireshballah et al. (2022).
However, while various approaches have been proposed to address privacy risks in LLMs, there are still limitations to consider Mireshballah et al. (2022). Overall, the existing literature provides valuable insights into the challenges and
Figure 8: This graph illustrates the findings for the hiring dataset’s anonymization. We see similar results of interlinked leakage. These findings imply that other sensitive fields linked to non-protected attributes may also be at risk of leakage.
Figure 7: This figure showcases the insights upon the anonymization of the Medical Dataset. We examine the interlinked attributes of DoB and age, which reveal a higher likelihood of DoB leakage when age is leaked. Thus, indicating ChatGPT’s grasp of these two interlinked attributes.
techniques related to LLMs, prompting, memorization, privacy risks, and anonymization Lukas et al. (2023). Yet, there is a need to further investigate the specific issues of input regurgitation and prompt-induced sanitization to ensure compliance with privacy regulations and protect sensitive personal information.
As chatbots powered by models like ChatGPT continue to gain popularity, the issue of input regurgitation Dolbir et al. (2021) is likely to become even more significant. With a large number of people interacting with these chatbots and providing personal information, the potential for sensitive data to be regurgitated in the model's responses increases. This growing user interaction data can amplify the risks associated with input regurgitation Tople et al. (2019). Therefore, developing effective mechanisms for prompt-induced sanitization and minimizing the likelihood of regurgitating personal information from previous conversations is essential to comply with privacy regulations and protect sensitive personal information at scale Brown et al. (2022). Our contributions presented within this paper address these issues.
## 7 Conclusion
This study explores the privacy concerns associated with using ChatGPT in privacy-sensitive areas and evaluates the potency of prompt-induced sanitization. It is noteworthy to emphasize that prompt-induced sanitization **does not** offer a guaranteed solution for privacy protection, but rather serves as an experimental venue to evaluate ChatGPT's comprehension of HIPAA & GDPR regulations and its proficiency in maintaining confidentiality and anonymizing responses. Our proposed approach of adding safety prompts to anonymize responses can help organizations comply with these regulations. Future research should aim to investigate the efficacy across a broader range of LLM-based chatbots and different domains such as legal and finance.
|
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.